
Biomedical Research Rewards Novelty At The Expense Of Replication

Is there enough funding to replicate results? In basic biomedical research, the accurate answer is little to none. A perverse incentive structure informs every critical entity involved in basic biomedical research, from employers to grant-funding agencies to journal publishers, namely a focus on novelty at the expense of replication. Two recent, highly cited studies of third-party replication efforts reveal the scale and scope of the problem.

  • In 2011, Bayer HealthCare researchers could successfully reproduce only ~16 out of 67 seminal studies, i.e., ~25% (1). Deeper analysis only deepened the cause for dismay: a similar rate of discrepancies ensued regardless of
    • Whether the exact experimental conditions were repeated or not.
    • The research field (oncology, women’s health, cardiovascular).
    • Whether the publication was prestigious or not, i.e., journal impact factor was irrelevant.
    • The number of publications on a particular target.
    • The number of independent groups authoring the publications.
    In other words, the result was the very textbook definition of consternation, because these are just the sort of factors researchers implicitly believe lend credence to peer-reviewed data.
  • In 2012, Amgen researchers published another replication attempt (2). They tried to replicate 53 landmark cancer studies and succeeded with only 6, i.e., a dismal ~11%. Even more shocking? These researchers worked closely with the original researchers to make sure they mimicked the original experimental methods as closely as possible.

What could be the reason(s)? One likely culprit animating the basic biomedical research replication crisis is improper, even ignorant, use of statistics. No one person has done more to highlight this pernicious problem than John Ioannidis, starting with his modestly titled salvo, ‘Why most published research findings are false’ (3). More than ten years have passed and, predictably, little has changed across the scientific research landscape. Why predictable? For one, slow-moving and obdurately averse to change, science is a highly conservative enterprise. For another, the firmly ensconced perverse incentive structure actively resists replication. At every stage, this system rewards novelty, not replication, be it for getting or renewing grant funding, for tenure, or for peer-reviewed publication, the big three that embody the necessary and sufficient foundation for a successful research career. By rewarding frequent publication, i.e., the publish-or-perish mandate, and by exhibiting a strong publication bias, i.e., positive results being more likely to get published than negative ones, the system in place has a strong, built-in self-fulfilling prophecy geared towards irreproducible results.
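Ioannidis’ core argument is arithmetic, and worth making concrete. The probability that a ‘statistically significant’ finding is actually true, the positive predictive value (PPV), depends on the prior odds that the tested hypothesis is true, the study’s power, and the significance threshold. Here is a minimal sketch of that calculation in Python; the input numbers are illustrative assumptions, not measurements:

```python
# Positive predictive value (PPV) of a significant finding, after Ioannidis (3).
# All inputs are illustrative assumptions, not measured values.

def ppv(prior, power, alpha=0.05):
    """Probability that a statistically significant result is actually true.

    prior: fraction of tested hypotheses that are true
    power: probability of detecting a true effect (1 - beta)
    alpha: false-positive rate (significance threshold)
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# Exploratory basic research: long-shot hypotheses, modest power.
print(round(ppv(prior=0.10, power=0.40), 2))  # 0.47: most "findings" are false
# Well-grounded hypotheses tested with high power fare far better.
print(round(ppv(prior=0.50, power=0.80), 2))  # 0.94
```

Even before any bias enters the picture, a field that mostly tests long-shot hypotheses at conventional thresholds produces a literature in which a significant result is about as likely to be false as true; publication and confirmation bias only worsen the ratio.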

What’s the Future for Replication In Biomedical Research?

Quite unlike human clinical trials, basic research labs are free to design their experimental systems as they please, with scant regard for sound statistical practice. Typically, statistics are applied only at the back end, on data already generated from such experiments. Such a structure only exacerbates the propensity to misuse or even abuse statistical tools. The way the system currently operates, confirmation bias is inevitable: it steers data analysis according to how the data align with the starting hypothesis. Hypotheses, by the way, don’t operate in a bias-free world either, not even in science. Some hypotheses are more important than others, their importance largely determined by whether the most influential researchers and journals in a given field favor them at a given moment. Enter the related demon of publication bias.
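A toy simulation shows why back-end-only statistics are so dangerous. Suppose a lab measures several outcomes in an experiment with no true effect at all and, steered by confirmation bias, reports whichever outcome clears p < 0.05. In the sketch below, with an arbitrary, assumed outcome count and group size, the effective false-positive rate balloons well past the nominal 5%:

```python
# How flexible, after-the-fact analysis inflates false positives.
# There is NO true effect here; outcome count and group size are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000  # simulated experiments
n_outcomes = 5          # outcomes measured per experiment
n_per_group = 20        # samples per group

false_positives = 0
for _ in range(n_experiments):
    # Every outcome: treated vs. control drawn from the SAME distribution.
    treated = rng.normal(size=(n_outcomes, n_per_group))
    control = rng.normal(size=(n_outcomes, n_per_group))
    pvalues = stats.ttest_ind(treated, control, axis=1).pvalue
    if pvalues.min() < 0.05:  # report whichever outcome "worked"
        false_positives += 1

# ~23% of null experiments yield a publishable "effect", not the nominal 5%:
print(false_positives / n_experiments)
```

The fix is not more statistics after the fact but fewer degrees of freedom before it: outcomes and analyses specified in advance.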

Rather than being driven by malevolence, much of this behavior is merely a predictable response to the pressures and rewards of the prevailing biomedical research enterprise. For the system to improve, proper statistical tools need to be applied before any experiments begin, i.e., in the experimental design itself. This is likelier to happen henceforth with the recent publication of the first multi-center preclinical randomized controlled trial (pRCT) in mice (4). However, before we get too carried away: on the one hand, at the time of writing, more than four months after its publication on August 5, 2015, it has been cited only 6 times. On the other hand, arguably hundreds of thousands of basic research studies are published every year. Thus, randomized controlled designs will clearly take years to become the norm in mouse and other animal model studies. The silver lining is that, ten years after Ioannidis’ shot across the bow, this multi-center study has at least established a precedent. Whether a flood or a trickle follows in its wake depends on how much and how quickly the centers of power, namely employers, funders and science publishers, reform the current perverse incentive structure.
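What does applying statistics ‘in the experimental design itself’ look like at its most basic? A prospective power calculation that fixes the sample size before the first animal is enrolled. Here is a minimal sketch using the standard normal-approximation formula for a two-sample comparison; the effect size, significance level, and power target are illustrative design assumptions:

```python
# Prospective sample-size calculation: decided BEFORE the experiment, not after.
# Effect size, alpha, and power below are illustrative design choices.
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison.

    effect_size: expected group difference in standard-deviation units (Cohen's d)
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value of the two-sided test
    z_beta = norm.ppf(power)           # quantile needed to hit the power target
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a large effect (d = 0.8) needs ~25 animals per group;
# a moderate effect (d = 0.4) needs ~99.
print(n_per_group(0.8), n_per_group(0.4))
```

Pre-specifying n this way removes the temptation to stop collecting data the moment a test turns significant, one of the commonest back-end abuses.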

In the field of psychology, the University of Virginia psychologist Brian Nosek established the Center for Open Science in 2013. Its first venture, the Reproducibility Project: Psychology, recently repeated 100 published psychology experiments and found that only about a third yielded statistically significant results similar to the originals. To make matters worse, effect sizes were roughly half those of the originals. Published in August 2015 (5), this study clearly shows how deep the rot of irreproducible data runs in psychology. However, psychology is a social science, not biomedical research. And apart from the sporadic individual and group efforts cited above (1, 2), no such concerted effort or institution to replicate basic biomedical research exists as yet.

Large-scale efforts to establish replication as a foundational pillar of biomedical research require structural change, which in turn requires an about-turn on the part of those holding the reins: employers, funders, publishers. Until this triumvirate gets its act together to make replication and novelty equally important foundational mainstays, the status quo will continue to spew out a flood of potentially irreproducible and, therefore, irrelevant results. Structural change may necessitate establishing institutes with the sole goal of replication, tenure rewards for scholars who focus on replication efforts, and multi-center replication efforts published prominently in high-profile journals. In short, a culture change, which we know is the most difficult change of all.

Bibliography

1. Prinz, Florian, Thomas Schlange, and Khusru Asadullah. “Believe it or not: how much can we rely on published data on potential drug targets?” Nature Reviews Drug Discovery 10.9 (2011): 712. http://www.nature.com/nrd/journa…

2. Begley, C. Glenn, and Lee M. Ellis. “Drug development: Raise standards for preclinical cancer research.” Nature 483.7391 (2012): 531-533. http://www.mckeonreview.org.au/s…

3. Ioannidis, John P. A. “Why most published research findings are false.” PLoS Medicine 2.8 (2005): e124. http://www.plosmedicine.org/arti…

4. Llovera, Gemma, et al. “Results of a preclinical randomized controlled multicenter trial (pRCT): Anti-CD49d treatment for acute brain ischemia.” Science Translational Medicine 7.299 (2015): 299ra121.

5. Open Science Collaboration. “Estimating the reproducibility of psychological science.” Science 349.6251 (2015): aac4716. http://etiennelebel.com/document…

Further Reading

In Peer-reviewed Scientific Literature

1. Vasilevsky, Nicole A., et al. “On the reproducibility of science: unique identification of research resources in the biomedical literature.” PeerJ 1 (2013): e148. https://peerj.com/articles/148.pdf. Cited 58 times as of Dec 31, 2015.

2. Mobley, Aaron, et al. “A survey on data reproducibility in cancer research provides insights into our limited ability to translate findings from the laboratory to the clinic.” PLoS ONE 8.5 (2013): e63221. http://www.plosone.org/article/f…. Cited 47 times as of Dec 31, 2015.

3. Bustin, Stephen A. “The reproducibility of biomedical research: Sleepers awake!” Biomolecular Detection and Quantification 2 (2014): 35-42. http://ac.els-cdn.com/S221475351…. Cited 5 times as of Dec 31, 2015.

4. Corey, David R., et al. “Breakthrough Articles: Putting science first.” Nucleic Acids Research 42.18 (2014): 11273-11274. Cited once as of Dec 31, 2015.

5. Li, Fei, et al. “Authentication of experimental materials: A remedy for the reproducibility crisis?.” Genes & Diseases 2.4 (2015): 283. Uncited as of Dec 31, 2015.

Note the extremely tepid citation records of these papers, even the ones authored by heavyweights like Stephen Bustin, an indication that in the current cultural landscape of basic biomedical research, calls for replication efforts definitely swim against the tide. Something rotten in the state of Denmark? Very much so.

In Prominent News Media

1. Ed Yong, National Geographic, March 5, 2013. New Center Aims to Make Science More Open and Reliable

2. The Economist, October 19, 2013. How science goes wrong

3. The Economist, October 19, 2013. Trouble at the lab

4. Philip Ball, Nautilus, May 14, 2015. The Trouble With Scientists

5. Chris Chambers, The Guardian, May 20, 2015. Psychology’s ‘registration revolution’

6. Christie Aschwanden, FiveThirtyEight.com, August 19, 2015. Science Isn’t Broken

7. Ian Sample, The Guardian, August 27, 2015. Study delivers bleak verdict on validity of psychology experiment results

8. Ed Yong, The Atlantic, August 27, 2015. How Reliable Are Psychology Studies?

9. Akshat Rathi, Quartz, August 28, 2015. This is how science can finally start to fix itself

10. John Ioannidis, The Guardian, August 28, 2015. Psychology experiments are failing the replication test – how is this surprising?

11. Bourree Lam, The Atlantic, September 2015. What Science Can Tell Us About Bad Science

Other answers that elaborate on the various entrenched obstacles to replication in biomedical research:

Tirumalai Kamala’s answer to How can we create truly large bio repositories to aid medical research?

Tirumalai Kamala’s answer to Are all HeLa cells from a commercial source equivalent to numbers or replications?

Tirumalai Kamala’s answer to Which people have considerable potential to become the great scientists of the future?

Tirumalai Kamala’s answer to Does analysing and organising statistics count as research?

https://www.quora.com/Is-there-enough-funding-for-science-experiment-results-to-be-replicated-by-third-parties/answer/Tirumalai-Kamala
