BMJ 2015;350:h85 doi: 10.1136/bmj.h85 (Published 26 January 2015)


Editorials

A core set of trial outcomes for every medical discipline? An important step towards truly meaningful “big data”

Walter Koroshetz, acting director, National Institute of Neurological Disorders and Stroke, Bethesda, MD 20892, USA

The linked article by Ioannidis and colleagues (doi:10.1136/bmj.h72) asks two important questions: should each medical discipline reach a consensus on whether certain outcomes are important regardless of which intervention is being tested? And, once these are established, should there be a requirement that trials routinely collect information and report this set of core outcomes in a standardized fashion?1

The authors examined all available systematic reviews of treatments in premature infants and found what they consider to be “errors of omission” of respiratory outcomes in a large percentage of systematic reviews and clinical studies. To simplify, the authors argue that even though there are a host of clinical research questions that need answers to improve the care of premature infants, respiratory outcomes are so important that they should be included in all studies, whether the focus is on nutritional, endocrine, neurological, hematological, or other body systems.

The generic issue of establishing and requiring core outcomes to enable the combination of data from multiple studies is critical. The message is similar to the one fashioned by the Core Outcome Measures in Effectiveness Trials (COMET) initiative (www.comet-initiative.org/about/overview). Of course, the benefit to society is that the value of the research effort is multiplied if a study designed to answer question A can help to answer questions B through Z. Some questions require such large sample sizes that they cannot be answered unless primary data from multiple studies are combined. This certainly proved to be the case in identifying the association of disease with common genetic variants.

The idea of investigators using a common data language to facilitate data sharing among studies is extremely attractive to funders interested in optimizing the scientific value of their investment in clinical research. To this end the National Institute of Neurological Disorders and Stroke has been working to establish common data elements for various neurological disorders (www.commondataelements.ninds.nih.gov). The common data elements include defined outcome measures. Various disorder specific working groups within this national institute classify common data elements as “core” if they advise inclusion in all clinical research in the disorder or “supplemental” if use depends on the nature of the question to be answered. The idea is that individual studies that are designed, conducted, and reported using a common language will have greater scientific value because the datasets can be truthfully combined.

As a next step in facilitating research on combined datasets in traumatic brain injury, in the United States the National Institutes of Health and the Department of Defense Health Affairs have established a data repository for traumatic brain injury research that is constructed on the common data elements (https://fitbir.nih.gov). A formal commitment to use the common data elements and deposit data in the repository is included in grant awards for relevant research.

What are the downsides of establishing and then requiring disease specific core outcome measures in clinical trials? It turns out to be more complicated in practice than one might assume. Most importantly, consensus does not validate an outcome measure. Validation requires testing outcomes for clinical relevance, inter-rater reliability, robustness among different cultural and age groups, and examination for ceiling and floor effects, among other things.

Reaching a consensus to choose an already validated common outcome measure requires a champion, a certain level of cohesiveness in the specific investigator community, and a sustained effort to monitor and update how the outcome measure is used. Often strong opinions are voiced in the process of choosing one among many possible outcome measures. Some argue for simplicity and sacrifice accuracy or depth of information; others advocate more precise and data rich outcome measures that require greater effort or resources to collect. There might not be a right answer, and the best choice needs to be moderated by a neutral party.

Once an outcome measure is chosen there can be continued concern that the choice does not fit the needs of a particular study. Investigators can then alter how they collect the data, or they can develop new versions of the specified outcome measure. Combining different data under the same name is especially destructive to the overall goal of combining like with like; therefore annotating how an outcome measure was actually used in each trial becomes important. Who keeps track of changing use over time? Perhaps most problematic is the continued use of inadequate outcome measures simply because they were chosen by a consensus process decades ago. This problem is especially prevalent in trials aimed at obtaining regulatory approval for a treatment, where precedents are difficult to change.

These are not insurmountable problems, but they need to be anticipated and dealt with in the process of optimizing the value of clinical data. It is doubtful that funders will continue to make investments in clinical research without the promise that the value of their investments will be greater than the sum of the reports generated by the individual studies. Indeed, it is most difficult to justify to patients that their participation in clinical research is marginalized by the lack of the organizational will needed to combine data among studies. Electronic data sharing infrastructure is now more accessible, and the concept of “big data” has become popular. The investigator community can advance its science to another level by embracing this culture change and ensuring that big data are good data. Establishing and using common outcome measures is the first step.

Competing interests: I have read and understood the BMJ policy on declaration of interests and declare the following interests: none.

Provenance and peer review: Commissioned; not externally peer reviewed.

1 Ioannidis JPA, Horbar JD, Ovelman CM, Brosseau Y, Thorlund K, Buus-Frank ME, et al. Completeness of main outcomes across randomized trials in entire discipline: survey of chronic lung disease outcomes in preterm infants. BMJ 2015;350:h72.

Cite this as: BMJ 2015;350:h85 © BMJ Publishing Group Ltd 2015

