We’ve got the In-Vivo Blues

Toxicity testing for chemicals is outdated, unsound and unlikely, precisely or otherwise, to calculate the effects of low doses of chemicals in humans (Bhattacharya et al., 2011), and someone’s going to have to breed tens of thousands more sacrificial lambs, or mice. Susan Kirk reports.

In vivo testing ain’t got no reason to be.
In vivo testing, it’s just depressing to me.
Well we gotta do better. We gotta do right.
Replace, refine, reduce, consider the animals’ plight.
In vivo testing ain’t got no reason to be.

(Harvey Clewell, III)

The science of toxicology was born out of necessity, sometime in the 19th century, out of an acquired, and potentially lethal, knowledge of poisons. In its scope, toxicology “…casts a broad net, encompassing hazardous effects of chemicals (including drugs, industrial chemicals and pesticides), biological agents, also known as toxins (e.g., poisonous plants and venomous animals) and physical agents (e.g., radiation, noise).”

Later, in the 20th century, and some say after the Second World War, new industrial chemicals were being developed. They posed health risks, especially to the workers handling them. The question: how could we protect them?

Chemical testing started in the laboratory, in animals in vivo or in animal cells in vitro, under the premise that the dose–response relationship was monotonic: that is, increasing the dose increases the effect and vice versa. Thus, at some sufficiently low dose, the effect would be minimal.

The process by which chemicals are tested, which hasn’t altered much since its beginnings over 50 years ago, is best visualised (Figure 1).

Figure 1: Toxicity Testing 101 (© the Hamner 2013)

Figure 1 shows an overview of how things work at the moment. In the first column there is a list of endpoints. An endpoint can be thought of “…as an observable outcome in an…organism, such as a clinical sign or pathologic state, that is indicative of a disease state…from exposure to a toxicant.” Take cancer as an observable outcome. Here, regulations throughout most of the world use risk assessment methods in which the expectation is that any amount of the chemical carries some degree of risk. The idea is to try to find doses with a calculated cancer risk of less than one in a million (under US regulations). With other endpoints, the belief is that there are thresholds below which very low doses pose no additional risk.
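For readers who want to see the arithmetic, here is a minimal Python sketch of the linear low-dose extrapolation commonly used for carcinogens; the slope factor and dose units are invented for illustration, not taken from any real assessment.

    # Minimal sketch of linear low-dose extrapolation for a carcinogen.
    # The slope factor and dose units are invented for illustration.
    slope_factor = 0.05        # hypothetical extra cancer risk per mg/kg/day
    target_risk = 1e-6         # the "one in a million" regulatory target

    # Under a linear no-threshold assumption, risk = slope_factor * dose,
    # so the dose corresponding to the target risk is:
    acceptable_dose = target_risk / slope_factor
    print(f"Dose at one-in-a-million risk: {acceptable_dose:.1e} mg/kg/day")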

“We do the approximation of risk at low levels of exposure with a set of rules; but these rules haven’t been shown to be correct,” says Mel Andersen, a toxicologist and one of the authors of the report “Toxicity Testing in the 21st Century: A Vision and A Strategy” (called the 21tox report in this article), written by scientists from the US National Research Council.

Another test may look at reproductive toxicity. Does the chemical induce infertility? Fewer offspring, or offspring smaller in weight? The animals can be dissected and examined (histopathology). Here the organ system is the target, and measured endpoints show whether the chemical changes the functions of that system.

Once the endpoint is chosen, the biological response to the chemical, usually the percentage of subjects affected, is studied at different doses.

If the biological response is mortality, the dose that kills 50 percent of the exposed population is known as the lethal dose 50, or LD50. These results fall along a sigmoid-shaped plot that is approximately linear between about 16 and 84 percent response.
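As a rough illustration of how an LD50 is read off such a curve, here is a short Python sketch that fits a sigmoid to hypothetical mortality data; every number in it is made up.

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical mortality data: dose (mg/kg) vs fraction of animals that died.
    doses = np.array([1, 3, 10, 30, 100, 300], dtype=float)
    fraction_dead = np.array([0.02, 0.10, 0.30, 0.65, 0.90, 0.99])

    # Sigmoid (log-logistic) dose-response model: 50% response at dose = ld50.
    def sigmoid(dose, ld50, slope):
        return 1.0 / (1.0 + (ld50 / dose) ** slope)

    (ld50, slope), _ = curve_fit(sigmoid, doses, fraction_dead, p0=[20.0, 1.0])
    print(f"Estimated LD50: {ld50:.1f} mg/kg (slope {slope:.2f})")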

Andersen explains it like this. “Picture a little plot. The Y axis is the proportion of animals affected; that’s the response. Along the X axis we put dose.

“If you go to a very high dose all of the animals are affected. That’s 100%. Then, as you bring the dose down, fewer and fewer animals are affected.

“The low dose region is actually down where we are getting some effects but not many of the animals or cells are affected.

“That’s the low dose region.”

The toxicity rating indicates the amount of chemical required to produce death – a high toxicity rating means that small amounts are dangerous.

Toxicity tests provide a NOAEL, or no observed adverse effect level: the highest tested dose that does not increase the particular adverse response above the background level of response.

Sometimes, “more often these days,” a different estimate is used, called the benchmark dose (BMD). It comes from a statistical analysis of the dose–response curve and provides a more refined measure of response than a NOAEL alone.

Typically a BMD10 or a BMD05 is used: the dose causing a 10 percent or 5 percent increase in response compared to background.
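Continuing the illustration, a benchmark dose can be read off a fitted curve in much the same way as an LD50; the Python sketch below, with invented parameters, finds the dose giving a 10 percent extra response over background.

    from scipy.optimize import brentq

    # Hypothetical fitted dose-response curve (same log-logistic form as above)
    # plus a background response; all parameter values are invented.
    background = 0.05
    ld50, slope = 50.0, 1.5

    def response(dose):
        extra = 1.0 / (1.0 + (ld50 / dose) ** slope)
        return background + (1.0 - background) * extra

    # Extra risk over background, as used in benchmark-dose analysis.
    def extra_risk(dose):
        return (response(dose) - background) / (1.0 - background)

    # The BMD10 is the dose at which the extra risk reaches 10%.
    bmd10 = brentq(lambda d: extra_risk(d) - 0.10, 1e-6, ld50)
    print(f"BMD10 is about {bmd10:.1f} (same dose units as the fitted curve)")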

Two more steps are taken to go from the NOAEL or BMD to an exposure level for people. These steps may include adjustments for differences in pharmacokinetics (PK) and pharmacodynamics (PD) between animals and people.

More simply put, PK describes how the chemical gets absorbed into the body and where it travels to cause toxicity; PD describes the relative potency of the chemical to cause an effect once it reaches tissues.

Two questions that we ask in risk/safety assessments are PK and PD questions. First, does the same exposure in a human give the same or a different tissue dose as seen in animals? Second, do human tissues respond to a unit of dose in the same way as, or differently from, animal tissues?
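In practice these adjustments are often made by dividing the animal NOAEL or BMD by uncertainty factors. The sketch below shows the arithmetic with placeholder values; the actual factors vary by chemical and regulatory body.

    # Minimal sketch of turning an animal NOAEL into a human exposure guideline.
    # The NOAEL and both uncertainty factors are placeholders for illustration.
    noael_animal = 10.0          # hypothetical NOAEL from a rodent study, mg/kg/day

    uf_animal_to_human = 10.0    # allowance for PK/PD differences between species
    uf_human_variability = 10.0  # allowance for variation among people

    reference_dose = noael_animal / (uf_animal_to_human * uf_human_variability)
    print(f"Illustrative human reference dose: {reference_dose} mg/kg/day")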

Andersen says that more important than gauging toxicity (the intrinsic hazard) is the risk associated with usage, a concept that combines exposure, dosage and hazard, or the adage: the dose makes the poison.

For example, processing of ethanol (alcohol) by the body results in a mildly toxic chemical called acetaldehyde, along with another chemical that slows the release of glucose from the liver, resulting in low blood sugar. You’re then in for the ‘feel like death’ effects of a bad hangover. Abusing (overusing) alcohol can be lethal. Smaller amounts may even be beneficial.

The Changes

The methodology for toxicity testing is still being debated but the fact that the science is at a crossroads appears undeniable.

The 21tox report acknowledges “…major gaps in current toxicity testing.” These gaps are contentious and rest on a presumption that the effects of some chemicals on public health are being missed or, as Andersen believes, are “probably not adequately tested.”

In the US and globally, there are initiatives to change the status quo of testing. According to Andersen, this change has at least three drivers. In the US, he says, it’s mostly a matter of economics and a need for quick answers.

In the EU, change was fuelled by the ethics of animal testing. In 1959 a paper set in motion a movement to minimise the use of animals in labs. The authors, Russell and Burch, called it the three Rs: replacement, reduction and refinement (http://www.animalethics.org.au/three-rs).

Finally, there is a growing group of scientists arguing that new cell-based approaches are an underused benefit of advances in 21st-century biology.

There is debate about completely abandoning animal testing, with naysayers predicting that animals can’t be replaced, as the alternatives are unlikely ever to provide enough information about the complexity of living systems.

Andersen begs to differ, saying a parallelogram approach means that if we see correspondence between cell-based and in-life assays in the rat, then we have some confidence that results with cells predict responses in animals. He accepts that animal testing will have to continue for some time yet.

These are complex messages, and for some toxicologists they are controversial. Andersen and a few mates spread the word by performing at science conferences and workshops. As part of a blues trio, they throttle out inspiration with songs like ‘The In Vivo Blues’, whose lyrics open this article. Here’s another example, called ‘The Dose Response MTD Blues’.

High dose experiments,
I keep on doin’ them ‘cause my mind is small.
If it wasn’t for high dose, I might not see nothing at all.
Well I test at high doses, called the MTD.
Human relevance is a mystery to me.
Yeah, I’m a high dose man,
Been doin’ high doses all of my days, yes.
(Harvey J. Clewell, III)

“[The lyrics] resonate with people,” says Met’l Plate Mel (Andersen).

It’s easier to leave a point with someone through a song or lyrics than it is to argue.

“There’s a fellow at EPA who feels differently.

“After we finished playing he came over, looked me in the eye and said, ‘Well you boys play a lot better than I thought but you could do with a new lyricist.’”

“Probably if we had argued he never would have heard what we had to say.”

So let’s call all this the vision of a bunch of quirky, futuristic science nerds. How will it change?

Figure 2

It will start to incorporate more intelligence from other sciences. The science of bio-monitoring, initiated in the 1890s to monitor blood lead, will provide population and exposure data for a range of chemicals.

Expanded studies since the introduction of the National Health and Nutrition Examination Survey (NHANES), together with more international bio-monitoring efforts, mean regulators have an improved understanding of how widespread some chemical exposures are in the population.

New technologies in bio-monitoring (Human Biomonitoring for Environmental Chemicals, National Academies Press, 2006, citing Litt et al., 2004) have the potential to transform the nation’s capacity to track exposure to pollutants and understand their impacts on health.

These data will provide the context for the type of testing undertaken, whether that is toxicity pathway testing or targeted testing.

For example, one familiar cellular-response network is signalling by oestrogens, in which initial exposure results in enhanced cell proliferation and tissue growth.

The term ‘endocrine disruption’ has been coined for this: “An endocrine disrupter is an exogenous substance that causes adverse health effects in an intact organism, or its progeny, secondary to changes in endocrine function” (European Workshop on the Impact of Endocrine Disrupters on Human Health and Wildlife, Weybridge, UK, 1996; European Union Report EUR17459).

“It is important to realise that endocrine disruption is not a toxicological endpoint per se, as is cancer or allergy, but that it is a descriptor for a functional change that may lead to adverse health effects,” notes the European Union Scientific Committee on Toxicity, Eco-toxicity and the Environment (CSTEE). Andersen agrees, saying, “Rather, endocrine disruption should be seen in the context of well-established endpoints, primarily reproductive toxicity and impaired development.”

In the US, the EPA already has plans to put into effect new methods to assess endocrine-disrupting chemicals, using a tiered approach that starts by identifying the chemicals thought to have an effect and then tests those that may affect the endocrine system.

A Hamner project on oestrogen differs from the US EPA’s endocrine-disruption approaches. It focuses on testing specific chemicals against a human uterine cancer cell line, looking at how these cells respond to oestrogen-like compounds.

The oestrogen pathway is evaluated in detail, especially the way that the organisation of the pathway controls response at high and at low doses. A level of response is predicted for different doses of oestrogen in the cells. This can then give an indication of what levels should be safe in people. These in vitro tests should eventually be sufficient for risk assessments without resorting to animal tests.
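As a purely conceptual sketch (not the Hamner group’s actual model), a concentration-response relationship of this kind is often summarised with a Hill-type curve, from which a low-response level can be read off; all parameter values below are invented.

    import numpy as np

    # Illustrative Hill-type model of an in vitro oestrogen-pathway response.
    # The EC50, Hill coefficient and the chosen response cut-off are invented.
    ec50 = 1.0        # concentration giving half-maximal response (nM)
    hill = 1.8        # steepness of the concentration-response curve

    def response_fraction(conc_nM):
        return conc_nM**hill / (ec50**hill + conc_nM**hill)

    # Predict responses across low concentrations and find the highest
    # concentration that keeps the response below a chosen small level.
    concentrations = np.logspace(-3, 1, 200)   # 0.001 to 10 nM
    acceptable = concentrations[response_fraction(concentrations) < 0.01].max()
    print(f"Highest concentration below 1% of maximal response: {acceptable:.3f} nM")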

In fact, the test results to be generated in the new testing depart from the traditional data used by regulatory agencies to set health advisories and guidelines. The new testing would include high throughput testing and is outlined in the figures in Appendix 1.

Dose-response

The low dose anomaly

Andersen says the term ‘low dose’ is a little misleading.

“What we are really talking about is the low response region.”

For example, a chemical that is very potent, like dioxin, has a low dose region at really low concentrations, but less potent chemicals have a low dose region at higher doses.

Low dose really means where we see a low percentage of animals affected, or a low level of response to the chemical. Most of the debate in toxicology is about how well scientists understand what goes on in these low-response regions.

“How do you know there are not subtle changes going on at the low levels and if you maintain them for a long period of time will it show toxicity?”

“How do you go from understanding of high exposure effects to predict what will happen at low exposures? Usually the effects being studied have some background level. Trying to see whether there are any effects at low doses becomes a statistical question.

“We really can’t distinguish between no response and some small level of response that is not statistically significant.

“If we had the tools to understand how the cell or animal responds then we can understand the shape and can say OK this is how the data fits with the understanding of dose response.”

Andersen also says that any method of testing has limitations, but he believes the use of animals may not necessarily be giving good estimates of risk for human exposures.

“…in this day and age, with our improved knowledge of human biology, we should be using human cell systems, not animals, and doing testing that’s targeted on modes of action or pathways (toxicity pathways) relevant for people.”

“We want to be able to tell the public that there is no risk, which we can’t do.

“Or to be able to say the chances of risk are much less than some level, say one in a million. Then we can say: it’s safe.

“That’s the region we can’t study.

“You will need ten million animals to do the study to look at the one in a million effect.

“That’s difficult!”
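A back-of-the-envelope calculation shows why; the ten percent background tumour rate below is assumed purely for illustration.

    import math

    # Back-of-the-envelope look at detecting a one-in-a-million excess risk.
    # The background tumour rate is assumed purely for illustration.
    n_animals = 10_000_000
    excess_risk = 1e-6
    background_rate = 0.10

    expected_excess_cases = n_animals * excess_risk              # ~10 extra cases
    background_sd = math.sqrt(n_animals * background_rate * (1 - background_rate))

    print(f"Expected excess cases: {expected_excess_cases:.0f}")
    print(f"Random swing in background cases (1 SD): about {background_sd:.0f}")
    # Roughly 10 extra cases against random swings of about 950: hard to see.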

The cells

Cells are the crux of chemical testing, and a move away from animals raises questions about which cells to use and how to acquire them.

Three types of human cells might be useful. Disease-free primary cells can sometimes come from accident victims or tissues removed during surgery.

Andersen says there’s too much testing to be done to rely on this source and, since these cells usually die, the only way to get new cells is from other donors.

Human cancers can provide long-living, freezable cells. The concern here is that cancer cells are already changed, leaving a question mark over their validity for predicting responses in normal tissues.

More contentious is the use of pluripotent stem cells that come from fetal tissues or amniotic fluid. They can be coaxed to form any kind of cell, such as liver or lung.

The crème de la crème may be another rapidly developing technology in which mature primary cells are transformed into stem cells. These are called induced pluripotent stem (iPS) cells.

Where to from here

No one knows what this new test methodology will look like right now, admits Andersen.

“There are glimpses here and there and the field is moving along.”

Andersen’s vision includes a group of assays done in primary human or animal cells in petri dishes, with the cell types chosen based on knowledge of how chemicals affect the target systems.

Addressing the feasibility of moving to testing in human cells, Andersen and others believe that there are two key considerations for testing using cells in petri dishes. The first is having good cellular assays.

Many organisations, including the Hamner Institutes for Health Sciences where Andersen works, are looking at creating more stable cell cultures that survive long enough to enable repeated dosing over weeks rather than days.

Challenges in establishing the validity of the new toxicity assays can be formidable: expensive, time-consuming, and logistically and technically demanding.

First, the test results to be generated depart from the traditional data used by regulatory agencies to set health advisories and guidelines.

Because virtually all environmental agents will perturb signalling pathways to some degree, a key challenge will be to determine when such perturbations are likely to lead to toxic effects and when they are not.

If we can predict safe levels from testing with our uterine cells, how will we know whether anyone should believe our predictions? Well, one way is to check how well predictions from studies in cells match responses in living rats or mice.

The parallelogram approach means that if we see correspondence between cell-based and in-life assays in the rat, then we have some confidence that results with human cells predict responses in people.

“Testing for predicting safe exposures with human cells is really just getting started.

“Many groups throughout the world are working on these ideas.

“It’s an exciting time for those of us trying to improve testing methods and get a better understanding of the safety of chemicals and drugs,” says Andersen.

Toxicologist Dr Kim Boekelheide, from Brown University, a friend and colleague of Andersen, says, “Andersen considers his work in developing this new paradigm to be the last great challenge of his career, to whose solution he will have dedicated over a decade.”

 


About Susan Kirk

Susan Kirk is a freelance science journalist. She has written for many different publications and extensively for Fairfax Media under their horticultural and agricultural mastheads. Her interests lie in the fields of biology, both plant and human. She has TAFE qualifications in horticulture and is currently studying an advanced certificate in nutritional counselling. She is a member of the Media Alliance, the Australian Science Communicators and the Australian Medical Writers Association. She writes from Kureelpa in the Sunshine Coast hinterland (Australia).