Essential 10

6. Outcome measures

6b For hypothesis-testing studies, specify the primary outcome measure, i.e. the outcome measure that was used to determine the sample size.

In a hypothesis-testing experiment, the primary outcome measure answers the main biological question. It is the outcome of greatest importance, identified in the planning stages of the experiment and used as the basis for the sample size calculation (see item 2 - Sample size). For exploratory studies it is not necessary to identify a single primary outcome and often multiple outcomes are assessed (see item 13 – Objectives).

In a hypothesis-testing study powered to detect an effect on the primary outcome measure, data on secondary outcomes are used to evaluate additional effects of the intervention; however, statistical analyses of secondary outcome measures may be underpowered, making their results and interpretation less reliable [1,2]. Studies that claim to test a hypothesis but do not specify a pre-defined primary outcome measure, or that change the primary outcome measure after the data were collected (known as primary outcome switching), are liable to selectively report only statistically significant results, favouring more positive findings [3].
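The power argument can be made concrete with a small calculation. The sketch below is illustrative only and not part of the guideline: it uses a standard normal-approximation formula for a two-sample comparison, and the standardised effect sizes (d = 0.8 for the primary outcome, d = 0.4 for a secondary outcome) are assumed values chosen to show how a study sized for the primary outcome can be badly underpowered for a smaller secondary effect.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Per-group n for a two-sample comparison (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value, two-sided test
    z_beta = z.inv_cdf(power)            # quantile for the target power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

def achieved_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sample comparison at a fixed n."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    return z.cdf(effect_size * sqrt(n_per_group / 2) - z_alpha)

# Size the study for the primary outcome (assumed large effect, d = 0.8):
n = sample_size_per_group(0.8)   # 25 per group under this approximation

# Apply the same n to a secondary outcome with a smaller assumed effect
# (d = 0.4): power falls well below the planned 80%.
secondary_power = achieved_power(0.4, n)
```

With these assumed effect sizes, the secondary analysis retains only around a third of the planned power, which is why conclusions drawn from secondary outcomes need to be interpreted with caution.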

Registering a protocol in advance protects the researcher against concerns about selective outcome reporting (also known as data dredging or p-hacking) and provides evidence that the primary outcome reported in the manuscript accurately reflects what was planned [4] (see item 19 – Protocol registration).

In studies using inferential statistics to test a hypothesis (e.g. t-test, ANOVA), if more than one outcome was assessed, explicitly identify the primary outcome measure and state whether it was defined as such before data collection and whether it was used in the sample size calculation. If no primary outcome measure was defined, state this explicitly.



  1. John LK, Loewenstein G and Prelec D (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science. doi: 10.1177/0956797611430953
  2. Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, Crystal RG, Darnell RB, Ferrante RJ, Fillit H, Finkelstein R, Fisher M, Gendelman HE, Golub RM, Goudreau JL, Gross RA, Gubitz AK, Hesterlee SE, Howells DW, Huguenard J, Kelner K, Koroshetz W, Krainc D, Lazic SE, Levine MS, Macleod MR, McCall JM, Moxley RT, 3rd, Narasimhan K, Noble LJ, et al. (2012). A call for transparent reporting to optimize the predictive value of preclinical research. Nature. doi: 10.1038/nature11556
  3. Head ML, Holman L, Lanfear R, Kahn AT and Jennions MD (2015). The Extent and Consequences of P-Hacking in Science. PLOS Biology. doi: 10.1371/journal.pbio.1002106
  4. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, Simonsohn U, Wagenmakers E-J, Ware JJ and Ioannidis JPA (2017). A manifesto for reproducible science. Nature Human Behaviour. doi: 10.1038/s41562-016-0021