Viewpoints in This Issue of JAMA


Mega-Trials for Blockbusters

John P. A. Ioannidis, M.D., D.Sc., of the Stanford University School of Medicine, Stanford, Calif., writes that mega-trials (very large, simple trials) should be conducted for licensed interventions with annual sales exceeding $1 billion (i.e., blockbusters). These blockbusters “should have at least 1 trial performed with at least 10,000 patients randomized to the intervention of interest and as many randomized either to placebo (if deemed to be a reasonable choice) or to another active intervention that is the least expensive effective intervention available.”

“Blockbuster drugs are eventually used by millions of patients. Typically there is evidence from randomized trials suggesting that these drugs are effective—at least for some end points (not necessarily the most serious ones), for some follow-up (not necessarily long enough), and in some specific circumstances (not necessarily representing what happens in real life). The supporting randomized trials typically include only a few hundred participants or, in the best case, a few thousand participants, often with relatively short-term follow-up and are conducted among populations selected to avoid patients with co-morbid conditions and those who take some other drugs.”

“Eventually many blockbusters may prove to be fully worth their cost. Conversely, if some of these widely used products fail to demonstrate benefit in mega-trials, this could mean saving tens of billions of dollars per year and perhaps thousands of lives.”

(JAMA. 2013;309[3]:239-240. Available pre-embargo to the media.)


Prespecified Falsification End Points – Can They Validate True Observational Associations?

Vinay Prasad, M.D., of the National Cancer Institute, Bethesda, Md., and Anupam B. Jena, M.D., Ph.D., of Harvard Medical School and Massachusetts General Hospital, Boston, write that “as observational studies have increased in number—fueled by a boom in electronic recordkeeping and the ease with which observational analyses of large databases can be performed—so too have failures to confirm initial research findings.” The authors suggest that prespecified falsification hypotheses can help identify whether the associations found in an observational study are true rather than spurious correlations. A falsification hypothesis is a claim, distinct from the one being tested, that researchers believe is highly unlikely to be causally related to the intervention in question. “Prespecified falsification hypotheses may provide an intuitive and useful safeguard when observational data are used to find rare harms.”

“Prespecified falsification hypotheses can improve the validity of studies finding rare harms when researchers cannot determine answers to these questions from randomized controlled trials, either because of limited sample sizes or limited follow-up. However, falsification analysis is not a perfect tool for validating the associations in observational studies, nor is it intended to be. The absence of implausible falsification hypotheses does not imply that the primary association of interest is causal, nor does their presence guarantee that real relations do not exist. However, when many false relationships are present, caution is warranted in the interpretation of study findings.”

(JAMA. 2013;309[3]:241-242. Available pre-embargo to the media.)

Editor’s Note: Please see the articles for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

# # #