- Too many exemptions. Legal requirements for reporting results of completed clinical trials contain so many exemptions that most trials entered into ClinicalTrials.gov are exempt.
- Enforcement too lax. The US FDA and NIH are tasked with enforcing these reporting rules, but not only do they themselves flout them, they also do little to ensure compliance. For example, a 2015 investigation (1) found that not a single researcher or trial sponsor had been fined or penalized since the law came into effect in 2008.
Clinical Trial Results Under-reporting: A Chronic Problem Of Epidemic Proportions
Under-reporting of clinical trial results has been chronic and widespread for decades (2).
A 2016 analysis (3) of the fate of 5918 abstracts presented at the American Society of Anesthesiologists annual meetings from 2001 to 2004 found
- 1052 presented results of human RCTs.
- Only 54% (568 of 1052) were published within 10 years of the initial presentation.
- RCTs with positive data (defined as one showing a statistically significant result in favor of the experimental group) were 42% more likely to be published compared to those with negative data.
- Positive or negative data notwithstanding, most of these studies were small, with a median of 40 to 50 participants, which only compounds the problem. Not only is the scope for false inference already considerable, such inferences would be drawn from tiny, utterly unrepresentative slices of the whole, greatly increasing the scope for false positives (4).
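The false-positive arithmetic behind this point can be sketched with a quick simulation: when trials are small (and so underpowered) and only a minority of tested effects are real, a surprising share of 'statistically significant' findings are false positives. The trial size, effect size and proportion of real effects below are illustrative assumptions, not figures from the cited studies.

```python
import math
import random

def simulate(n_trials=20000, n_per_arm=25, p_true=0.2, effect=0.5):
    """Simulate many small two-arm trials and count how many 'significant'
    findings come from truly null hypotheses (false positives).

    Assumptions (hypothetical): outcome sd = 1 per arm, a fraction p_true of
    tested effects are real with standardized size `effect`, two-sided test
    at alpha = 0.05.
    """
    random.seed(0)
    sig_true, sig_false = 0, 0
    se = math.sqrt(2.0 / n_per_arm)            # SE of the mean difference
    for _ in range(n_trials):
        is_real = random.random() < p_true     # only some tested effects are real
        d = effect if is_real else 0.0
        diff = random.gauss(d, se)             # observed mean difference
        if abs(diff / se) > 1.96:              # 'statistically significant'
            if is_real:
                sig_true += 1
            else:
                sig_false += 1
    return sig_true, sig_false

sig_true, sig_false = simulate()
print(sig_false / (sig_true + sig_false))      # share of significant results that are false
```

With these assumed inputs, roughly a third of the significant results are false positives, even though the per-test error rate is only 5%.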
A 2016 analysis (5) of drugs and biologics tested in pivotal trials from 1998 to 2008 found
- 54% of them failed.
- Trial results were published for only 40% of the ones that failed.
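Combining those two percentages gives a back-of-envelope sense of scale: failed trials for roughly a third of all tested products never reached the literature. A quick check of that arithmetic (my own calculation from the figures above):

```python
# Back-of-envelope arithmetic from the percentages reported above.
failed = 0.54                     # 54% of drugs/biologics failed in pivotal trials
published_if_failed = 0.40        # results published for only 40% of failures
unpublished_failures = failed * (1 - published_if_failed)
print(round(unpublished_failures, 3))  # about 0.32 of all products tested
```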
Eventually the epidemic scale of the problem prompted the US government to enact the 2007 US FDA Amendments Act (FDAAA), which (emphasis mine)
‘requires that the results from clinical trials of Food and Drug Administration–approved drugs and devices conducted in the United States must be made publicly available at ClinicalTrials.gov within 1 y of the completion of the trial, *whether the results are published or not*.’
So Much Promised, So Little Achieved: Laws Mean Nothing If They Aren’t Enforced
A 2016 analysis (7) found that of 13327 clinical trials completed or terminated between January 1, 2008, and August 31, 2012,
- The 51 top academic and non-profit institutions studied had posted clinical trial results on ClinicalTrials.gov only 13% of the time, even two years after trial completion, when the legal requirement is to do so within one year.
- Researchers published their findings in medical journals within two years only 29% of the time.
A 2017 analysis (8) of breast cancer trials registered at ClinicalTrials.gov from 2000 to 2012 found no improvement in reporting. Rather, it found
- Trials with statistically significant outcomes were more likely to be published.
- Clinical trial results continued to be under-reported.
- Publication of statistically non-significant results was delayed.
The situation is not much different even for posting trial results on the ClinicalTrials.gov webpage. One study (9) examined 17536 studies with results posted at ClinicalTrials.gov, 2823 of which were completed randomized phase II or III trials.
- 1400 of 2823 completed trials (~50%) reported the treatment effect estimate and/or p value. Of these, 844 (60%) had statistically significant results.
- 1423 trials only posted data without reporting results, which could, however, be calculated, at least in theory. Calculation was possible for 929 (65%) but not for 494 (35%), due to insufficient reporting, data censoring or repeat measurements. Of these 929, only 342 (37%) had statistically significant results.
- The key comparison, then, is the large difference in statistically significant results between trials that posted their treatment effect estimate and/or p value (844 of 1400, 60%) and those that didn't (342 of 929, 37%).
- Thus, positive-result bias is prevalent not just in publications but even in merely reporting results to ClinicalTrials.gov.
- The irony is that posting results to ClinicalTrials.gov is likely far more valuable to the public.
- The site is freely accessible worldwide, unlike scientific journals, which are more often hidden behind exorbitantly priced paywalls.
- A 2013 PLoS Medicine analysis (6) found that posted results for 212 published studies were far more complete, especially in reporting potential adverse events, information of vital importance to patients.
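The gap between those two proportions (60% significant among trials that reported an effect estimate and/or p value, versus 37% among those that didn't) is far too large to be chance. A quick two-proportion z test on the counts reported above makes that concrete; this is my own calculation, not one from the cited study:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Normal-approximation z test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1 - p2, (p1 - p2) / se

# 844/1400 significant among reporters vs 342/929 among non-reporters
diff, z = two_proportion_z(844, 1400, 342, 929)
print(round(diff, 3), round(z, 1))   # difference ~0.235, z ~11
```

A z statistic near 11 corresponds to a vanishingly small p value, consistent with systematic selective reporting rather than noise.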
The American health news website, STAT, published results of a 2015 investigation (1) that showed
- The top flouters weren't merely prestigious institutions like Memorial Sloan Kettering, Stanford, Eli Lilly and GlaxoSmithKline but, ironically, the enforcers themselves. Yes, even the NIH routinely flouted the rule it's supposed to enforce (see below).
- Such high-profile exposure caught the attention of the US Senate, with one senator chiding the NIH for failing to do its job.
- Even then-US Vice-President Joe Biden called for defunding federal grant recipients who didn't comply with the law.
- In the short term, reporting of clinical trial results jumped 25% compared with the same period in previous years.
Later in 2016, the US government posted new regulations requiring public reporting of many more clinical trials, including some for drugs and devices that never reach the market. The problem is these rules don't go far enough: there are too many exemptions.
- Trials entered into ClinicalTrials.gov before 2008 are exempt.
- So are ‘privately funded studies — including small trials examining just the safety of a new drug, small feasibility studies of medical devices, and behavioral intervention studies’.
- Thus the rules apply to only a fraction of registered trials.
- The 2015 STAT news investigation (1) found they applied to a mere 4.5%, ~9000 of ~200000 trials.
Publication bias, defined as ‘published research which is systematically unrepresentative of the population of completed studies’ (12) and originally described in 1979 as the file drawer problem (13), remains rampant. The usual suspects, namely employers (universities, academic institutes, biotech/pharma, hospitals, medical centers, etc.), journal editors, peer reviewers, funding agencies, even mass media, seem to continue to strongly prefer positive and novel results. The cost of basing treatments not on the whole picture but on a small, biased slice of it is harm to patients (14, 15, 16, 17). Treatments based on a single RCT could later turn out to be useless or even dangerous (18).
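Rosenthal's original file-drawer paper (13) comes with a simple diagnostic, the 'fail-safe N': how many unreported null results would have to be sitting in file drawers to drag a set of significant findings below significance. A minimal sketch of that formula follows; the z scores in the example are made up for illustration, not drawn from any study above.

```python
import math

def failsafe_n(z_scores, alpha_z=1.645):
    """Rosenthal's fail-safe N: the number of unpublished null studies (z = 0)
    needed to pull the Stouffer combined z below the one-tailed 0.05 cutoff.

    Combined z for k studies plus N nulls is sum(z) / sqrt(k + N); solving
    for the N that brings this down to alpha_z gives the formula below.
    """
    s = sum(z_scores)
    return max(0, math.floor(s * s / alpha_z ** 2 - len(z_scores)))

# Four modestly significant (hypothetical) studies:
print(failsafe_n([2.0, 2.5, 1.8, 2.2]))  # → 22 hidden nulls would erase the effect
```

A small fail-safe N, as here, means a literature's apparent consensus is fragile: a couple of dozen unpublished negative trials, entirely plausible given the reporting rates documented above, would overturn it.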
1. STAT, Charles Piller, December 13, 2015.
2. Chalmers, Iain. “Underreporting research is scientific misconduct.” JAMA 263 (1990): 1405-1408; Antes, Gerd, and Iain Chalmers. “Under-reporting of clinical trials is unethical.” The Lancet 361.9362 (2003): 978-979.
3. Chong, Simon W., et al. “The relationship between study findings and publication outcome in anesthesia research: a retrospective observational study examining publication bias.” Canadian Journal of Anesthesia/Journal canadien d’anesthésie 63.6 (2016): 682-690.
4. Dumas-Mallet, Estelle, et al. “Low statistical power in biomedical science: a review of three human research domains.” Royal Society Open Science 4.2 (2017): 160254.
5. Hwang, Thomas J., et al. “Failure of investigational drugs in late-stage clinical development and publication of trial results.” JAMA Internal Medicine 176.12 (2016): 1826-1833.
6. Riveros, Carolina, et al. “Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals.” PLoS Medicine 10.12 (2013): e1001566.
7. Chen, Ruijun, et al. “Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers.” BMJ 352 (2016): i637.
8. Song, Seung Yeon, et al. “The significance of the trial outcome was associated with publication rate and time to publication.” Journal of Clinical Epidemiology (2017).
9. Dechartres, Agnes, et al. “Reporting of statistically significant results at ClinicalTrials.gov for completed superiority randomized controlled trials.” BMC Medicine 14.1 (2016): 192.
10. STAT, Charles Piller, February 19, 2016.
12. Rothstein HR, Sutton AJ, Borenstein M. Publication bias in meta-analysis. In: Rothstein HR, Sutton AJ, Borenstein M (Eds). Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. John Wiley & Sons, Inc. 2006.
13. Rosenthal, Robert. “The file drawer problem and tolerance for null results.” Psychological bulletin 86.3 (1979): 638.
14. Horton, Richard. “Offline: What is medicine’s 5 sigma?.” The Lancet 385.9976 (2015): 1380.
15. Smaldino, Paul E., and Richard McElreath. “The natural selection of bad science.” Royal Society Open Science 3.9 (2016): 160384.
16. Higginson, Andrew D., and Marcus R. Munafò. “Current incentives for scientists lead to underpowered studies with erroneous conclusions.” PLoS Biology 14.11 (2016): e2000995.
17. Nissen, Silas Boye, et al. “Publication bias and the canonization of false facts.” Elife 5 (2016): e21451.
18. Prasad, Vinay, et al. “A decade of reversal: an analysis of 146 contradicted medical practices.” Mayo Clinic Proceedings. Vol. 88. No. 8. Elsevier, 2013.