> Algorithms could also help unravel poisoning or environmental contamination cases.

In fact, Thomas Hartung of Johns Hopkins University spearheaded research that created a predictive model for chemical toxicity. Chemistry World reports:
> The computerised toxicity gauge was on average 87% accurate in reproducing the consensus from animal toxicology tests across nine common tests. Any given animal test has an 81% chance, on average, of obtaining the same result. The nine tests consume 57% of all animals in Europe for toxicology, around 600,000 animals.
Thomas Hartung goes on to explain:
> This [tool] is not only good for replacing these animals during the process of registering a substance. It could be useful for finding the most poisonous substance to kill a spy, or inform a chemist not to synthesise a substance because it is a skin sensitiser and so not useful for the product they want.
The article notes that models are only as good as their training data, and that this algorithm was only possible because of the registration information mandated in Europe under REACH.
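To make the idea concrete, here is a minimal, purely illustrative sketch of a similarity-based ("read-across") toxicity predictor: an unknown chemical is labelled by majority vote of its most structurally similar known chemicals. The fingerprints, labels, and function names below are invented for illustration; Hartung's actual model uses far richer chemical descriptors and hundreds of thousands of REACH records.

```python
# Toy "read-across" style predictor (illustrative only, not Hartung's model).
# Fingerprints are represented as sets of hypothetical structural-feature IDs.

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints (as sets)."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def predict_toxicity(query, training, k=3):
    """Majority vote over the k training chemicals most similar to `query`."""
    ranked = sorted(training, key=lambda rec: tanimoto(query, rec[0]),
                    reverse=True)
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Invented example data: (fingerprint, known animal-test outcome).
training = [
    ({1, 2, 3, 4}, "toxic"),
    ({1, 2, 3, 5}, "toxic"),
    ({6, 7, 8, 9}, "non-toxic"),
    ({6, 7, 8, 10}, "non-toxic"),
]

# A query sharing most features with the "toxic" cluster.
print(predict_toxicity({1, 2, 3, 9}, training))  # prints "toxic"
```

The design choice here mirrors the intuition behind read-across: chemicals with similar structures tend to have similar hazards, so existing test results can stand in for new animal tests.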
Molecular toxicologist Mark Viant, from the University of Birmingham, UK, adds:
> The quality of models can only be as good as the quality of the data used to train the models. Under Reach, there remain grand challenges that Hartung's promising approach cannot yet address, as the relevant toxicity data does not exist.

As an example, Viant points to environmental hazard assessments, which often comprise toxicity studies on only three species, even though the knowledge obtained is meant to protect millions of species in the environment.
The hope is that this research can help reduce the number of animals used in testing and serve as a basis for future chemical toxicology models.
Reality Changing Observations:
Q1. Do you agree with using animals for chemical testing?
Q2. Do you trust science to estimate the hazard of a chemical?
Q3. Would you take a drug that had only been tested by AI?