Retiring the Mouse Model Gold Standard

    10/09/2016

    Sometimes I stumble on a great story while driving around listening to the radio, and it makes me ponder the science that I do. Case in point: last month, while listening to a replay of Freakonomics with Stephen Dubner, I heard a program featuring a dozen people commenting on scientific ideas that are ready for retirement. It was based on a 2014 Edge question, “This Idea Must Die”, in which John Brockman invited dozens of world-renowned scholars and experts to voice their opinions on outdated scientific ideas - ideas that have hardened into practices the public accepts without question, yet have not proven beneficial to anyone. One interview in particular struck me: Azra Raza’s segment on why mouse models should no longer be used.

    Ever since Charles Darwin’s On the Origin of Species was published, the idea of using animal models to study human disease has played well with biologists. We do, after all, share a common ancestor with all animals, and we can trace certain characteristics back to the same lineages of genes. Browse PubMed for any biological problem or disease and you will find the usual menagerie of model organisms: nematode worms, fruit flies, zebrafish, rats, yeast and other creatures great and small. Mouse models have been the gold standard of biomedical research for years. They are powerful because they can be bred in great numbers quickly, producing large amounts of data for scientists to analyze statistically and publish. Mice are easy to manipulate genetically, inexpensive to house in universities, and ethics committees have no trouble approving their use in protocols.

    But here is the rub. According to Raza, a physician and research scientist at Columbia University who works on myelodysplastic syndrome (a bone marrow disorder that often progresses to acute leukemia), the use of mouse models to study cancer treatments is outdated. Raza traces the practice back to a success in the 1970s, when a toxin that worked against mouse leukemia was translated into a clinical treatment. That chemotherapy has since proven extremely damaging, but the lasting repercussions of its initial success have spurred many other research disciplines to adopt the same animal model. She cites a paper in which roughly 150 different drugs that successfully treated sepsis in mice all proved useless against the human form of the disease.

    Part of the reason is that laboratories often use “xenografts”, transplanting a human patient’s tissue into an immunosuppressed mouse to test a drug candidate’s efficacy. Because the host animal lacks a functioning immune system, the drug’s effect on the mouse can differ radically from its effect on a human. Furthermore, while drug therapies in cancer have seen rising FDA approval rates in recent years (up to 20% of NDAs are approved), 90% of all candidate molecules still fail toxicity studies, a failure rate attributed to the reliance on mice. Raza goes on to indict the current academic environment as a chief culprit in the overuse of mouse models. Research scientists are goaded into writing grant applications around murine models because the archaic NIH funding system favors these models over all others. Indeed, many eminent scientists have built entire careers on mice, and publication bias in the literature (p-hacking aside) has historically favored murine models. Try publishing a paper in a high-impact-factor journal based on cell or tissue culture studies, and sooner or later a reviewer will ask for an in vivo mouse model.

    A similar problem exists in neuroscience, in the discovery and translation of drugs for neurological disorders. A recent paper in the ILAR Journal, published by the Institute for Laboratory Animal Research, casts doubt on the use of mice to study diseases of the brain. Of the roughly 200 potential interventions published using a mouse model of Alzheimer’s disease (the APP mouse), not one has been reported effective in humans, and up to 96% of those Alzheimer’s drugs have been lost to attrition. Another scientist reported that of 100 promising candidate drugs tested for amyotrophic lateral sclerosis (Lou Gehrig’s disease), not one eventually succeeded in human clinical trials.

    It seems there are four common misconceptions that scientists have overlooked in using mouse models:

    Getting rid of your mouse problem:

    The upshot is that researchers are beginning to realize the risks of relying on traditional mouse models for a holistic interpretation of disease. Here are a few solutions suggested by Joseph Garner in the ILAR Journal:

    1. Reverse translating human biomarkers back into animals

    Pharmaceutical and biotech companies like to invoke the magic word “biomarkers” - clinical measurements that tell you how far a disease has progressed. Pinpointing the right biomarker, often based on a biochemical change, is essential to determining a course of treatment and, ultimately, to designing the drug. If we find and validate a predictive biomarker in humans, we must ensure it can be tracked in the same way, using scalable methods, in mice before adopting mice as a standard model.
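    As a toy illustration of that validation step, here is a minimal Python sketch (assuming Python 3.10+ for statistics.correlation): it asks whether a candidate biomarker tracks a disease-progression score similarly in patients and in a mouse model, using correlation as a crude first-pass check. All numbers are hypothetical placeholders, not data from any study cited here.

        from statistics import correlation  # available in Python 3.10+

        # Hypothetical longitudinal data: biomarker level vs. a clinical
        # disease-progression score, measured the same way in both species.
        human_biomarker   = [1.1, 1.4, 2.0, 2.7, 3.5, 4.1]
        human_progression = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
        mouse_biomarker   = [0.9, 1.3, 1.8, 2.5, 3.2, 4.0]
        mouse_progression = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]

        # A biomarker is only worth reverse-translating into the animal model
        # if it tracks progression comparably in both species.
        r_human = correlation(human_biomarker, human_progression)
        r_mouse = correlation(mouse_biomarker, mouse_progression)
        print(f"human r = {r_human:.2f}, mouse r = {r_mouse:.2f}")

    A real validation would of course need longitudinal cohorts and far more than a correlation coefficient, but the point stands: the measurement itself must carry over between species before the model is trusted.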

    2. Adopting human clinical trial designs in animal models

    The best human clinical trials are standardized by giving a placebo (sugar pills) or the test drug to a randomized group of up to thousands of patients in a blinded fashion: the physician does not know which pill they are giving to which patient. It is therefore good practice to administer placebo and treatment regimens to mice blindly as well, mimicking the human design. In human trials, the dose and schedule are scaled up gradually as long as test patients do not suffer serious adverse events - damaging side effects. Patients, however, vary in sex, height, weight, age and medical history. People take these drugs at home under different conditions, perhaps under the stress of looking after family, or in the office while at work. Dropping out or lagging behind on medication is common. In clinical trials these variations have to be built into the statistical analysis, and the same variation must be designed into preclinical mouse testing - spanning different demographics of mice by sex, age, weight and genetic strain (see the sketch below). Even genetically identical mice can vary drastically in behavioral anxiety and stress, sometimes depending simply on how high their cage sits above the animal room floor.
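    To make this concrete, here is a minimal Python sketch of that kind of preclinical design: stratified randomization of a mouse cohort by sex and genetic strain, with opaque codes standing in for the sealed allocation list a third party would hold to keep the experimenter blinded. The cohort, strain names and function are illustrative assumptions, not taken from any study cited here.

        import random
        from collections import defaultdict

        def assign_blinded_groups(mice, arms=("treatment", "placebo"), seed=42):
            """Randomize mice to study arms within strata (sex x strain) and
            return opaque codes so the experimenter handling the animals
            cannot tell which arm a given mouse belongs to."""
            rng = random.Random(seed)

            # Stratify so every sex/strain combination is split across arms,
            # mirroring how human trials balance demographics.
            strata = defaultdict(list)
            for mouse in mice:
                strata[(mouse["sex"], mouse["strain"])].append(mouse)

            assignments = {}  # mouse id -> arm; held by a third party
            for members in strata.values():
                rng.shuffle(members)
                for i, mouse in enumerate(members):
                    assignments[mouse["id"]] = arms[i % len(arms)]

            # Blind labels: one shuffled code per mouse, revealing nothing
            # about the arm it was assigned to.
            mouse_ids = sorted(assignments)
            rng.shuffle(mouse_ids)
            blind_codes = {mid: f"blind-{k:03d}" for k, mid in enumerate(mouse_ids)}
            return assignments, blind_codes

        # Hypothetical cohort varying by sex and genetic strain.
        mice = [
            {"id": f"m{i:02d}", "sex": sex, "strain": strain}
            for i, (sex, strain) in enumerate(
                [(s, st) for s in ("F", "M") for st in ("C57BL/6", "BALB/c")] * 5
            )
        ]
        arm_key, codes = assign_blinded_groups(mice)

    In a real study the arm assignments would stay with someone outside the experiment until unblinding, and the experimenter would see only the blind codes.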

    3. Enriching mouse cages and reducing animal handling stress

    Housing animals in comfortable, clean, dry cages with food and water is a basic requirement of every experiment and common sense in every laboratory; handling them so as to prevent stress is another matter. The difference between a student who nervously handles a mouse, stressing it just before a behavioral test, and an experienced scientist handling a mouse already trained on the task is enormous. It is increasingly common practice to have multiple experimenters of varying experience train the animals, and animals should be trained on and familiar with a task before any measurements are made. This familiarizes the handler with managing mice and familiarizes the mice with being touched and then performing the task, reducing environmental bias.

    4. Validating changes in the protocol against success and failure

    As with cancer drugs, many neurological drugs on the market treat the symptoms of a disorder without treating the underlying disease mechanism, and some of these compounds reveal damaging side effects years later. The problem lies in not understanding the disease mechanism well enough before rushing to manufacture the drug. If, instead, scientists focused on an empirical biomarker that translates between humans and mice - insulin resistance in diabetes, for example - we could directly measure disease progression from birth to death, and drug design would be, as the FDA demands, safer and more efficacious.


    The goal, I ultimately hope, is that we shift from an inefficient, burdensome framework of screening drugs in tired old animal models - drugs that end up working for only a minority of patients - to one that develops treatments that work for the majority of people at a personalized level. The emergence of new clinical trial technologies such as “mouse avatars”, in which a patient’s own tumor is grown in a mouse line to test treatments, is a step in the right direction. But we can do more in the future by eliminating the animal model altogether: growing a patient’s own stem cells or tumor grafts in a culture dish - a purely in vitro paradigm - and then using computer modeling to test environmental factors that could cause epigenetic changes. This way we might come closer to solving the biggest medical problems.


    References

    Edge article by Azra Raza - https://www.edge.org/response-detail/25429

    Freakonomics - http://freakonomics.com/2015/03/05/this-idea-must-die-full-transcript/

    ILAR paper - http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4342719/

    Misleading Mouse ALS studies - http://www.nature.com/news/misleading-mouse-studies-waste-medical-resources-1.14938

    Preclinical Research - http://www.nature.com/news/preclinical-research-make-mouse-studies-work-1.14913

    Mouse Avatars - http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4092874/