Our founder, Dr Carol Treasure, attended a workshop held by the European Chemicals Agency (ECHA) in Helsinki last week, and here shares a few thoughts from Day 1 of the event.
When I was invited to attend a workshop on “New Approach Methodologies” (NAMs) at ECHA’s HQ in Helsinki, I was intrigued to go along and hear about the latest regulatory developments “from the source”, and to contribute to the debate on current issues.
ECHA’s strapline is “Working for the safe use of chemicals” and, in setting the scene for the workshop, Dr Norbert Fedtke hit the nail on the head by highlighting the current political paradox:
“We currently face a higher demand for better safety assessment for humans AND increasing pressure to ban ALL animal experiments.”
It seems there has never been a time when accurate and reliable in vitro, in chemico and in silico methods have been in greater demand to plug this gap.
Day 1 of the workshop focussed on three case studies, looking at the value of using read-across data for groups of chemicals with similar structures. The underlying principle is that, by understanding how chemical structural groups relate to toxic effects, it is possible to read across using the safety data generated from animal studies on a small number of reference chemicals. This removes the need for animal studies to be conducted on every chemical for every endpoint, and reduces the total number of animals required. Such an approach is currently at the heart of ECHA’s “last resort principle”, which states that animal testing may only be used as a last resort in the context of REACH registration.
A variety of non-animal methods are used to generate data about the mechanism of action of the toxic effects (toxicodynamics) of structurally similar groups of chemicals, as well as how the body deals with these chemicals (toxicokinetics). Such methods include cell cultures, biochemical methods, genomics, proteomics and metabolomics, and computer modelling. With all these incredible technologies at our disposal, a lot of exciting progress is being made.
Currently, however, there’s a catch – and quite a significant one: ultimately, the results generated from all of these methods are related back to data obtained through animal testing. Some of these are historical studies dating back decades; others are new ones being performed now for REACH compliance. The animal data used as a benchmark shows huge variability between laboratories: a practical example highlighted variation from 2 to 900 mg/kg/day in the Lowest Observed Effect Level (LOEL) in rats for a single chemical. This creates a high level of uncertainty in safety assessment, and significant difficulty when in vitro models and other NAMs are validated against animal data. When “false positives” or “false negatives” are observed, which method is right?
In my view, a paradigm shift is needed, to a place where we are less dependent on unreliable data from animal studies to predict human safety. We must keep in mind that the traditional animal test methods were never validated, whereas today – quite rightly – we expect higher standards. Any new alternatives to animal testing must undergo a rigorous validation process prior to approval for use in hazard identification and safety assessment.
The scientific community now has many sophisticated tools and human-based methodologies for assessing consumer safety, which eliminate concerns over species differences and provide greater predictivity. Yet there is still a dominant tendency to relate everything back to flawed animal data, and to retreat under the comfort blanket of methods we have used for decades.
Do we still need to use animal data as a reference point? Such a comfort blanket provides a very dangerous, false sense of security.
Discussions during the workshop used the analogy of the seasons, debating whether Spring has arrived in terms of the progress of NAMs. In my view, Spring is turning to Summer, and it’s time to throw off that blanket and step out into the sunshine!