It’s been an open secret in medicine for some time: A central part of the drug-discovery pipeline is broken.
For decades, scientists have relied on animal models of human disease to test potential new treatments. Yet it has become increasingly clear that most drugs found to be effective in, say, lab mice ultimately fail in human beings. Pharmaceutical companies have raised alarms. By one recent estimate, only 11 percent of drugs that enter human trials are ever approved for use. Researchers run down one dead end after another, and billions of dollars are wasted.
Is there any way out of this animal-model maze?
Many scientists have pointed their ire at the gap between human diseases and their animal simulacra. (For example, older models of Parkinson’s disease relied on injecting neurotoxins into a rat’s brain, causing it to walk in endless circles; the rats do not, however, exhibit the disease’s progressive, age-dependent hallmarks.) But while biological plausibility remains a problem for some animal models, researchers have recently found that it’s not the only reason for their failures.
There may be animal instincts to blame, but the animals are us.
In a new study, published on Tuesday in the journal PLoS Biology, researchers led by John P.A. Ioannidis, a professor at the Stanford University School of Medicine, systematically studied 4,445 experimental results drawn from animal models of neurological disease, like stroke or brain inflammation. They found rampant evidence of bias, Dr. Ioannidis said.
“What we found, really, is that the results are too good to be true,” he said.
Just One Problem
The team showed that 1,719 of their data sets claimed statistically significant results, nearly twice the expected number. (They estimate “expected” by treating the most precise study in each cluster as the best gauge of the true effect; if a field reports more positive results than that effect could plausibly produce, it’s flagged. It’s a complex, sometimes controversial method.) Over all, only 30 percent of the study clusters they surveyed found positive results without suffering from a small sample size or “excess” significance; of those, only eight had a sample larger than 500 animals.
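The logic of that flagging step can be sketched in a few lines. This is a hedged illustration, not the study's actual code: the power values, study counts, and function names below are invented for the example, and the real excess-significance test handles unequal powers and multiple comparisons more carefully.

```python
# Toy illustration of the "excess significance" idea. Assumption: each
# study's power to detect the reference effect (anchored to the most
# precise study in the cluster) is known; we then ask whether the
# observed count of significant results exceeds what those powers predict.
import math

def expected_significant(powers):
    """Expected number of significant results: the sum of each
    study's power to detect the reference effect."""
    return sum(powers)

def excess_significance_p(observed, powers):
    """One-sided binomial p-value for seeing at least `observed`
    significant results, using the mean power as a common success
    probability -- a simplification of the real test."""
    n = len(powers)
    p = sum(powers) / n
    return sum(math.comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
               for k in range(observed, n + 1))

# Hypothetical cluster: 20 studies, each with roughly 40 percent power,
# yet 15 report significant results -- nearly twice the expected count.
powers = [0.40] * 20
print(expected_significant(powers))       # expected count, about 8
print(excess_significance_p(15, powers))  # small p-value: excess flagged
```

A small p-value here means the cluster reports more positive results than its collective power can explain, which is the pattern the study found across the neurological literature.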
To Dr. Ioannidis, the results imply that bias must be widespread. And others agree.
“It means there must be strong biases out there to account for this,” said Ian Roberts, a professor of epidemiology and public health at the London School of Hygiene & Tropical Medicine, who was not involved in the study and who, a decade ago, pointed out the problems in animal research. This study should help accelerate the search for solutions, he said. “It should lead to a shake-up in the way people do animal experiments.”
Several biases could explain the evidence; none of them is new to science. There’s bias against publishing negative results, and researchers may massage their data until inconclusive studies turn statistically significant. There may also, rarely, be outright fraud.
It’s convincing work, but there’s at least one problem with Dr. Ioannidis’s study, said Jonathan Kimmelman, an associate professor of biomedical ethics at McGill University who studies the animal-model system. Like much of the field, it lumps together animal studies meant to discover treatments and trials meant to test those treatments.
“I have a lot less concern of problems with bias in discovery,” he said. For example, Mr. Kimmelman spent seven years in graduate school and published only one discovery paper; putting out all his negative results would have been an exercise in tedium. “It’s not that interesting to publish every piece of hay in search of a needle,” he said.
A Hopeful Message
Dr. Ioannidis’s study does not spring from a void. This year he pointed to similar issues in neuroscience; he’s also famous for a 2005 paper, “Why Most Published Research Findings Are False.”
Many of his co-authors are from a network of neurological researchers, called Camarades, who have been studying why animal models struggle. Spurred especially by Malcolm R. Macleod, a neurologist at the University of Edinburgh and a co-author of Dr. Ioannidis’s study, the group has published multiple papers noting those flaws.
Mostly, researchers say, animal studies would do well to follow best practices from medical trials. There should be rigorous randomization, and scientists must be blinded to their experimental and control subjects. The field should move away from small individual studies, prone to statistical flaws, and toward large consortia. Replication must be common. Systematic reviews of evidence should be the rule, not the exception.
In some ways, Dr. Ioannidis bears a hopeful message. “If anything, I’m not a pessimist on animal research,” he said. If the problems can be corrected, the field will remain an essential tool in drug discovery, he said. “I feel that it has an opportunity to improve on many fronts.”
Not everyone is paying attention, however. “It’s not clear to me yet that we’ve reached a tipping point in terms of taking this seriously,” Mr. Kimmelman said. There are flaws throughout the life cycle of animal research, he said, in conception, in conduct, and in reporting. There’s much work to do. And beyond the research money squandered, it’s also an ethical problem.
Patients are put at risk with each new ineffective drug derived from animal studies. And the animals suffer too. If society is going to accept this work, scientists must do it better, Mr. Roberts said.
“There’s no excuse,” he said.