RESEARCH ARTICLE

Principles for planning and conducting comparative effectiveness research

Aims: To develop principles for planning and conducting comparative effectiveness research (CER). Methods: Beginning with a modified existing list of health technology assessment principles, we developed a set of CER principles using literature review, engagement of multiple experts and broad stakeholder feedback. Results & conclusion: Thirteen principles, and actions to fulfill their intent, are proposed. Principles include clarity of objectives, transparency, engagement of stakeholders, consideration of relevant perspectives, use of relevant comparators, and evaluation of relevant outcomes and treatment heterogeneity. Should these principles be found appropriate and useful, CER studies should be audited for adherence to them and monitored for their impact on care management, patient-relevant outcomes and clinical guidelines.

Keywords: comparative effectiveness research; comparative effectiveness research guidelines; conduct; outcomes research; patient-centeredness; planning; principles

Funding for comparative effectiveness research (CER) will grow substantially in the coming years, especially through the Patient-Centered Outcomes Research Institute (PCORI). To succeed in guiding healthcare decisions, CER should be planned and conducted with rigor and transparency. To increase the likelihood that the forthcoming research will yield the desired results, we developed a set of 13 best research practice principles for the planning and conduct of CER (Box 1). These principles build upon efforts of others. Whereas previous initiatives focused on health technology assessments (HTAs) [1,2], more broadly on outcomes research [3] or methodological practices [4,101,102], our recommendations comprise a more general set of principles that include the process of planning and conducting CER studies, and the potential for CER to improve healthcare and health.

Bryan R Luce*1,2, Michael F Drummond3, Robert W Dubois4, Peter J Neumann5, Bengt Jönsson6, Uwe Siebert7,8,9 & J Sanford Schwartz10,11

1 United BioSource Corporation, Science Policy, Bethesda, MD, USA
2 University of Washington, Seattle, WA, USA
3 University of York, Health Economics, York, UK
4 National Pharmaceutical Council, Washington, DC, USA
5 Institute for Clinical Research & Health Policy Studies, Tufts Medical Center & Tufts University School of Medicine, Boston, MA, USA
6 Stockholm School of Economics, Department of Economics, Stockholm, Sweden
7 University for Health Sciences, Medical Informatics & Technology, Hall i.T., Austria
8 Oncotyrol – Center for Personalized Cancer Medicine, Innsbruck, Austria
9 Harvard University, Boston, MA, USA
10 Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
11 Wharton School of Business, Medicine & Health Management & Economics, University of Pennsylvania, Philadelphia, PA, USA
*Author for correspondence: [email protected]

10.2217/CER.12.41 © 2012 Future Medicine Ltd
J. Compar. Effect. Res. (2012) 1(5), 431–440
ISSN 2042-6305

Methods

To guide our effort, we adopted the Institute of Medicine's definition [5]: 'CER is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policymakers to make informed decisions that will improve health care at both the individual and population levels.'

We employed an informal, iterative process in developing these principles. Since a review of the literature uncovered no principles specific to our aims, we began our task by modifying a set of recently published HTA principles in light of CER policy objectives, as described in various legislative efforts [6], by Federal agencies [103] and by the Institute of Medicine [5]. The resulting draft list was first shared informally with a small convenience sample of stakeholders and researchers with whom the authors were personally familiar. A senior academic CER researcher was then engaged to recruit (independently from the investigators/authors) a diverse panel of five senior experts in CER, who reviewed and critiqued the draft set of principles in detail. Every expert who was approached agreed to participate. A revised set of principles, now accompanied by recommended actions for adherence, was then reviewed in person by seven individuals (recruited informally via various means) representing five stakeholder sectors (patients, device and pharmaceutical manufacturers, medicine and HTA). Finally, our senior academic consultant independently recruited four individuals representing insurers and health plans, who reviewed a revised document and submitted written comments. All expert/stakeholder reviewers were familiar with CER policy and/or methods (see Acknowledgments) and their input to the process was considered advisory. Consensus was not sought, since different stakeholders were expected to hold different positions. Thus, the final set of principles and recommended actions is solely that of the investigators.

For each principle, we state its rationale, recommend specific actions to fulfill its intent and, in some cases, provide an example of the principle being exercised or recommended. We recognize that, at times, the spirit of a principle may conflict with the ability to fully adhere to it, in which case we argue that a 'rule of reasonableness' should apply. We address this by using qualified phrases such as 'to the extent feasible', 'relevant' or 'reasonable'.

Box 1. Principles for planning and conducting comparative effectiveness research.
Principle 1 ■■ Study objective: the objective of a CER study should be meaningful, explicitly stated and relevant for informing important clinical or healthcare decisions.
Principle 2 ■■ Stakeholders: all relevant stakeholders should, to the extent feasible, be actively engaged, or at least consulted and informed, during key stages of a CER study.
Principle 3 ■■ Perspective: CER studies should address the perspectives of affected decision-makers.
Principle 4 ■■ Relevance: from planning to conduct, study relevance should be evaluated in light of decision-maker needs.
Principle 5 ■■ Bias and transparency: attempts should be made to minimize potential bias in CER studies and to conduct them in a transparent manner.
Principle 6 ■■ Broad consideration of alternatives: CER studies should consider and make all reasonable efforts to include the full range of relevant intervention, prevention, delivery and organizational strategies.
Principle 7 ■■ Outcomes: CER studies should evaluate those clinical, other health-related and system outcomes most relevant to decision-makers.
Principle 8 ■■ Data: CER studies should take advantage of all relevant, available data, including information that becomes available during the course of the study.
Principle 9 ■■ Methods: CER studies should incorporate appropriate methods for assessing relevant outcomes of alternative interventions and intervention strategies.
Principle 10 ■■ Heterogeneity: CER studies should identify and endeavor to evaluate intervention/treatment/prevention effects across patients, subpopulations and systems.
Principle 11 ■■ Uncertainty: CER studies should explicitly characterize the uncertainty in key study parameters and outcomes.
Principle 12 ■■ Generalizability: CER studies should consider the generalizability and transferability of study findings across patients, settings, geography and systems of care.
Principle 13 ■■ Follow through: CER studies should include a plan for dissemination, implementation and evaluation.
CER: Comparative effectiveness research.

Results

The planning and conduct of CER should address the interests of a wide range of stakeholders and decision-makers. The term 'stakeholder' (e.g., patient, advocacy group, provider, insurer, employer, manufacturer or policymaker) includes any entity with an interest in a study's outcome. 'Decision-maker' refers to the individual or party who will make decisions based on study results. Whereas all decision-makers are stakeholders, the reverse does not apply. For example, a sponsoring manufacturer is a stakeholder but not normally considered a decision-maker.

■■ Principle 1: study objective

The CER objective is to guide decision-making in actual clinical and public health practice for general populations, relevant subgroups and individuals. Accomplishing this requires clearly and precisely specifying study objective(s) and linking it/them to the policy or clinical question, data, outcomes, and populations and settings in which results will be applied. Stating the objective clearly and explicitly is essential for stakeholders to evaluate the intent and value of the proposed effort. To address this principle, those planning and conducting a CER study should first identify the full range of stakeholders who will or might be affected by, or have an interest in, the evidence expected to be developed. They should then determine and specify the question(s) that relevant decision-makers have (or should have) and the decisions likely to be informed by the evidence developed.

■■ Principle 2: stakeholders

Relevant stakeholders should be meaningfully involved in key aspects of a CER study, including: choice of study questions and methods, outcomes, and comparators of interest; review, comment and interpretation of draft findings; and development of strategies for dissemination, implementation and evaluation of results. As an example, PCORI recently announced its pilot project grants, which emphasized the importance of 'including stakeholders in all stages of a multi-stakeholder research process, from the generation and prioritization of research questions to the conduct and analysis of a study to dissemination of study results – including methods for training participants in participatory research and the potential use of new technologies to facilitate engagement' [104]. The funding announcement added that unless the applicant can demonstrate why stakeholder involvement is not feasible, stakeholders must be included as co-investigators with significant involvement at all appropriate stages of the project.

■■ Principle 3: perspective

For any particular CER topic, decision-makers will have varied and sometimes competing perspectives and interests. For example, a new therapy may be preferred by patients because of convenience, but not by health plans due to cost. Incorporating patient perspectives may pose special challenges to researchers, requiring efforts to understand preferences, beliefs and intentions, and the range of such attitudes and values across individuals and groups. Nevertheless, those conducting CER should endeavor to ensure that all relevant perspectives are considered and addressed. Whereas the societal perspective is broadest, requiring measurement of all benefits, harms and costs no matter to whom or where they may fall, individual studies may sometimes necessarily focus on more limited perspectives. Such limitations should be made explicit, with an explanation of why any relevant perspective is not addressed and how that may affect interpretation of results.

■■ Principle 4: relevance

Since CER studies seek to inform decisions about healthcare interventions in clinical and community practice, it is important to make sure that studies are relevant to decision-maker needs. It is also important to take account of real-world conditions that may change over time. Active consideration of study relevance will affect the CER methods to be used and whether to embark on, modify or continue a study. Timeliness matters because relevant evidence should be available when decisions are to be made (e.g., listing a drug on formulary). Therefore, the time required to conduct and complete a given CER study should be considered when assessing study feasibility and methods. For example, Medicare’s process for national coverage determinations imposes a 6-month limit from the time a technology assessment is undertaken to the initial draft of decisions [7].


When planning studies, investigators should assess current and future relevance, optimally undertaking a risk assessment of the likelihood that future events may undermine relevance. This requires considering the time needed for design, conduct and reporting of results to inform decisions, and selecting an appropriate method to accomplish the objective. Lengthy prospective studies should be re-evaluated during their course, and care taken to guard against being 'locked in' should study relevance change before their conclusion.

■■ Principle 5: bias & transparency

Conflicts of interest (COI) and bias – financial, intellectual or other – are inherent and inevitable in all applied research and are especially important in CER, since stakeholders often are directly or indirectly involved in key aspects of the process. For example, COI and potential/perceived bias cannot be avoided in a manufacturer-funded CER study. Similarly, stakeholder organizations, clinicians, patients and policymakers have inherent biases towards preferred interventions. Left unchecked, these actual and perceived biases can affect the choice of study objectives, questions and methods, the comprehensiveness of reporting data and findings, the interpretation of results, the likelihood of publication, and the acceptance of findings. Thus, all parties associated with sponsorship, funding, conduct, interpretation or oversight of CER should provide full public disclosure of all perceived and potential COI. There are numerous examples of acceptable COI disclosure policies and forms, such as those of most major peer-reviewed medical scientific journals, medical professional societies and agencies of the US Department of Health and Human Services. As far as possible, methodologies should adhere to accepted standards, such as those for systematic evidence reviews developed by the Cochrane Collaboration [105] or the Good Research Practices Task Force reports for economic evaluation and outcomes research [106]. CER study protocols and amendments should be publicly registered, posted online and available for stakeholder review and comment, with the final protocol posted on an open-access online registry similar to ClinicalTrials.gov [107]. While protecting legitimate intellectual property and the privacy of study subjects, investigators should take all reasonable efforts to make study details (e.g., study design characteristics, statistical protocols, model/simulation specifications and even appropriate limited access to data files) available to independent researchers and stakeholders with a legitimate public interest. Investigators should also adhere to appropriate reporting standards (e.g., CONSORT procedures for clinical trials) [8]. Journals should adopt policies requiring protocols to be posted and studies registered prior to study initiation as a means to minimize publication bias.

■■ Principle 6: broad consideration of alternatives

Any healthcare services or strategies that can accomplish a stated objective are appropriate for CER studies. These include preventive, diagnostic and therapeutic interventions, alternative financing and delivery systems, management strategies and behavioral interventions [5]. No relevant option should be excluded from consideration without adequate explanation. The PCORI working definition of patient-centered outcomes research applies to this principle [108]. To assess the value of options to inform decisions, PCORI will sponsor research to answer questions such as, 'What are my options and what are the potential benefits and harms of those options?' This research will 'assess the benefits and harms of preventive, diagnostic, therapeutic, palliative or health delivery system interventions to inform decision-making, highlighting comparisons and outcomes that matter to people' and will incorporate 'a wide variety of settings and diversity of participants' [108]. For example, CER on obesity interventions in different subgroups should consider interventions such as behavioral lifestyle changes, appetite-suppressive drugs, gastric surgery and multidisciplinary weight-loss programs [109]. To adhere to this principle, investigators should consider evaluating any reasonable intervention or strategy to achieve a CER objective. For example, preventive measures, clinical and systems management strategies, or sequences and combinations of interventions or strategies are all relevant options for CER, as is a single clinical intervention.

■■ Principle 7: outcomes

CER outcomes of interest are those that healthcare decision-makers value, and these outcomes may differ from those regulators require. Patients value quality-of-life improvements, the ability to function in daily activities, convenience, even school or work performance, as well as survival or prevention of clinical events; payers may be interested in comparative efficiency; employers in productivity; global organizations in societal disease burden. Although cost considerations are controversial for CER [9–11], health interventions compete with other programs for scarce resources. Determining relative value for money spent may be required to inform some decisions. Relevant costs may include direct medical costs (including patient out-of-pocket expenses), productivity loss or gain and even costs outside the healthcare system (e.g., social care, education and criminal justice). Therefore, investigators should identify and endeavor to generate all key outcomes important to the relevant decision-makers, prioritizing as necessary while explaining why any germane outcomes will not be captured. For example, long-term mortality/morbidity outcomes may be judged most relevant to patients but cannot be generated in a timely manner, thus conflicting with the relevance principle (Principle 4). In such cases, validated intermediate/surrogate end points may need to be combined analytically with retrospective databases [12] and decision-analytic modeling [13] to approximate outcomes of interest. When costs and cost–effectiveness are clearly relevant, CER studies should endeavor to assess them. Since costs vary from setting to setting, this will require assessing the resources consumed separately from how those resources are valued. Sensitivity analyses will be required to evaluate costs from different decision-maker perspectives, enabling decision-makers to better interpret comparative value in their own settings.

■■ Principle 8: data

Relevant data can come from multiple clinical and administrative sources, either existing or generated in the course of a study. Coordination of data generation and linkage from both private and public healthcare systems is important, and includes expanding the nature of clinical trials (e.g., 'pragmatic' clinical trials that minimize protocol-induced distortions) to a broad range of care settings, along with a range of clinical research networks and databases with high data integrity and validity that represent the full spectrum of patients, populations, settings and systems of care. Researchers should take full advantage of the massive US health information technology investment, including present and pending national and regional data-linking activities [14]. The full potential of these data will require development and adoption of common clinical data standards. Elements of this principle are currently being applied by the US government's Multi-Payer Claims Database Project [110], which will merge databases containing hospital, outpatient and drug-utilization data. CER investigators should develop a comprehensive data 'wish' list to satisfy study objectives; when necessary data are missing or not ideal, explicitly note that fact; and, consistent with privacy principles, seek to gain access for evaluation and possibly merging with other relevant data. Lengthy prospective studies should seek to synthesize relevant external data that become available during the study's course. Bayesian adaptive designs in CER will be useful in this regard [15].

■■ Principle 9: methods

There are many useful CER methods, including experiments, prospective and retrospective observational studies, decision-analytic modeling, meta-analysis and other analytical syntheses of existing data. The strengths and weaknesses of traditional randomized controlled trials (RCTs; e.g., high internal, low external validity) and observational methods (high external, low internal validity) are well understood. While observational and other nonexperimental methods are generally faster and less costly, they are subject to selection bias and incomplete adjustment for unmeasured confounders, and may not be sufficiently sensitive when comparative effectiveness is modest [16–18]. Fortunately, a number of promising innovative design and analytical methods exist and are being developed to minimize relative weaknesses without overly sacrificing strengths for CER. RCTs can be designed to be more pragmatic to improve external validity to real-world settings [19], meta-analytic methods can generate comparisons across multiple trials using indirect techniques [20], a growing array of analytical techniques is available to adjust for confounding in observational and experimental designs [21], and decision-analytic modeling can be used to systematically synthesize evidence from short-term RCTs and long-term epidemiological studies and to evaluate uncertainty. However, even the most sophisticated statistical methods cannot compensate for inadequate data. Thus, to the degree that a clinical trial or other data source does not represent clinical practice, the required data are not collected or available for analysis, or the validity of the data is poor, analyses and estimation of comparative effectiveness will be limited. Consistent with this principle, Rawlins argued persuasively that the appropriate criterion for evidence to inform decision-making is that it be 'fit for purpose' [22]. CER investigators should thus select the methods that appropriately balance internal and external validity, feasibility, timeliness and efficiency to meet stated objectives.

■■ Principle 10: heterogeneity

Whereas clinical CER studies typically assess an average effect of therapies across the population studied, the applicability of these estimates across different patient subgroups and to particular patients inevitably varies. Because of this substantial heterogeneity in treatment benefit, harm and preferences for outcomes across patients and populations, a major goal of the US National CER Initiative is to design and evaluate studies to learn what works best, for whom and under what conditions. Indeed, this is the major goal of personalized medicine. Heterogeneity in treatment effects is particularly important when effects are large but concentrated in relatively few patients. The PCORI's statement that the research it sponsors '…shall … take into account … differences in effectiveness … of treatments [and] services … [in] subpopulations [e.g.] … minorities, women, age and groups … with different comorbidities, genetic or quality of life preferences…' [108] exemplifies this principle. Therefore, CER studies ideally should be designed to inform benefits and harms among identifiable individuals and subpopulations that reflect the diversity of potential patients, including their susceptibility, preferences and values. These studies should analyze heterogeneous patient populations to identify individuals and populations that may respond differently, including individuals and groups that might benefit or be harmed most (e.g., based on age, comorbidity, sociodemographics, biomarkers, genetic variation, sex, race or ethnicity). To the extent that it is reasonable and possible, CER studies should also be designed to identify explanatory factors associated with differences in susceptibility, effectiveness and clinical outcomes.

■■ Principle 11: uncertainty

There are two conceptually different research questions to be answered in healthcare decision-making: given the available information, should the new technology be adopted? And should more information be obtained to confirm or change this decision in the future? The answer to the second question depends strongly on the uncertainty of the study results. All study parameters and outcomes are subject to uncertainty due to random variation, sampling error, measurement error, and heterogeneity of treatment effects across patients and populations. Understanding the magnitude and sources of uncertainty not only indicates the confidence that can be placed in the study results and enhances their interpretation, but also informs the necessity, type and required sample size of further studies. Formal value-of-information analysis can guide the prioritization, planning and conduct of future studies [23]. To address this principle, investigators should identify the factors underlying variations and assess their potential impact, helping decision-makers both to assess their confidence in study results and to interpret and apply study findings. Specifically, the extent of parameter uncertainty in study input variables and its potential impact on results should be characterized and quantified (e.g., using probabilistic sensitivity analysis [24,25]). If the study involves key assumptions that may lead to structural (e.g., modeling) uncertainty, these assumptions should be clearly identified and justified [111].

■■ Principle 12: generalizability

This principle is closely associated with but distinct from the heterogeneity principle. A key CER tenet is to develop useful evidence for decision-making in patients treated in routine clinical and community practice settings. However, much existing clinical evidence is generated within highly structured research settings based on selected patient populations that often are not representative of broader patient populations and settings. Therefore, using clinical and epidemiologic databases, investigators should evaluate variation patterns of the condition of interest and its management among patients, populations, providers, settings and systems of care. When conducting prospective CER trials and registries, analysts should endeavor to recruit sites and obtain data from representative patients and community clinical and healthcare settings and, to the extent necessary and feasible, augment or interpret findings using more representative datasets.
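One common way to operationalize this kind of representativeness check is to compute standardized differences between the covariate distributions of a study sample and a reference population, flagging covariates whose absolute standardized difference exceeds roughly 0.1. The sketch below illustrates the idea; the covariate names and summary statistics are hypothetical, and the article does not prescribe this (or any other) specific method.

```python
import math

def standardized_difference(mean_a, sd_a, mean_b, sd_b):
    """Standardized difference for a continuous covariate:
    (mean_a - mean_b) divided by the pooled standard deviation.
    Absolute values above ~0.1 are often flagged as meaningful imbalance."""
    pooled_sd = math.sqrt((sd_a ** 2 + sd_b ** 2) / 2.0)
    return (mean_a - mean_b) / pooled_sd

# Hypothetical covariate summaries: trial sample vs. target community population
covariates = {
    # name: (sample_mean, sample_sd, population_mean, population_sd)
    "age_years": (58.0, 9.0, 63.0, 12.0),
    "comorbidity_index": (1.1, 0.8, 1.2, 0.9),
}

for name, (m_s, sd_s, m_p, sd_p) in covariates.items():
    d = standardized_difference(m_s, sd_s, m_p, sd_p)
    flag = "check generalizability" if abs(d) > 0.1 else "comparable"
    print(f"{name}: standardized difference = {d:+.3f} ({flag})")
```

A diagnostic like this can help decide whether findings should be reweighted toward, or interpreted against, a more representative dataset.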


Given the challenges of evaluating treatment outcomes within heterogeneous patient populations, state-of-the-art methods in study design and analysis should be used to adjust for potential confounding by indication and other threats to validity that may occur more frequently in real-world settings.

■■ Principle 13: follow through

A CER study is only of value if it informs decisions that improve patient-centered care and patient-relevant outcomes. Therefore, the CER study plan and the final report should explicitly include plans and recommendations for dissemination, implementation and evaluation. The Agency for Healthcare Research and Quality's commitment and policy to develop communications to the general public about the results of the studies it commissions partially exemplifies this principle [112]. CER investigators should include proposed plans prior to finalization of the study design so that relevant stakeholders have the opportunity to review and comment. Furthermore, final reports should contain sections updating these plans such that, if subsequently carried out, they will generate information on the impact of results on decision-making, patient and/or provider behavior, patient safety and outcomes, and even population health or healthcare efficiency.

Discussion

The iterative process we employed and the extensive input received from experts and stakeholders were critical to selecting and refining the principles and to formulating actions for adherence; the latter, for instance, resulted from strong recommendations of several CER experts. We also emphasized the nuanced 'rule of reason' concept due to concerns expressed by CER sponsors/funders (e.g., manufacturer stakeholders), who argued cogently that full compliance with all aspects of all principles would often prove unrealistic.

These principles build on other efforts but uniquely focus on CER planning and conduct. We believe they are timely and raise several important issues.

First, the need to address a broad range of objectives and perspectives and to satisfy different decision-makers and stakeholders raises challenges and associated tensions. Studies in medicine and healthcare typically address a limited range of perspectives. For example, clinical studies address clinical end points (although they sometimes include patient-reported outcomes); economic evaluations address payer interests (although some take a broader societal perspective); health authorities focus on societal disease burden. Whereas CER studies should accommodate multiple perspectives and measure multiple end points, a given study cannot be all things to all people. Accordingly, the first principle suggests that researchers need to consider carefully the objectives of each study, in particular the decisions and decision-maker(s) the study is intended to inform. To the extent this conflict exists in a particular case, we suggest that full adherence to the principle be acknowledged as the ideal but, from a practical standpoint, be treated as an aspirational guideline. If it is not possible to fully adhere to a particular principle, the limitation or constraint should be identified and addressed, explaining the reasons for the limitation and its potential implications for study conclusions and implementation of findings. For example, when relevant stakeholders are omitted from the process, they should still be identified and their exclusion explained and justified.

Second, while the proposed principles focus on the planning and conduct of CER, there are important aspects of creating the right environment that transcend the responsibilities of an investigator. Their full realization may require a number of societal initiatives, for instance:

■■ Establishing a registry of CER studies;

■■ Ensuring the development and adoption of common clinical data standards consistent with the national investment in health information technology;

■■ Journals requiring protocols be posted and studies registered prior to study initiation;

■■ Ensuring that resulting recommendations are reviewed within a set period of time;

■■ Ensuring that resulting preventive, diagnostic and therapeutic strategies are subjected to evaluation;

■■ Creating multipayer databases covering healthcare across a wide spectrum of patients and range of practice settings.

Third, diverse stakeholders have diverse objectives, and these will inevitably conflict from time to time. For example, limited resources inherently pit the interests of individuals or subgroups against each other or against those of the general population. While the proposed principles cannot fully resolve these tensions, they can be helpful in devising strategies to identify, clarify and minimize them. In so doing, they can help protect the 'medical commons' (i.e., helping the healthcare system and the nation to maximize health across all patients and society).

Consideration of these principles underscores the core focus of CER and its distinction from studies designed to meet regulatory or commercial requirements. Whereas studies conducted for registration focus on safety and efficacy and often use short-to-moderate-term (e.g., surrogate) end points, CER seeks to evaluate patient-relevant health outcomes over a longer time horizon. A balance will be required between the comprehensiveness and timeframe necessary for the outcomes of interest to be generated and the timeliness of results for decision-makers. In some cases, prospective studies combined with retrospective databases [12] and decision-analytic modeling [13] can help achieve this balance, although likely at the cost of increased uncertainty. In addition, conducting CER in representative clinical practice settings is often key to achieving the goal of informing clinical decision-making.

While we hope this document will assist PCORI's mission, the principles herein are intended to apply to all CER efforts. We restrict ourselves neither to PCORI's authorizing legislation nor to its definition of patient-centered outcomes research. Further, while no one study will be able to fully meet all the principles, a coordinated CER program or set of programs should fulfill these principles in the aggregate. Achieving CER objectives will require an iterative process and will benefit from experimenting with a number of less conventional policies, such as an expanded application of 'coverage with evidence development' [26] and Bayesian adaptive applications in real-world settings [15].
Should these 13 principles be found appropriate and useful, it will be important to audit current and future CER studies to evaluate adherence to them. In support of their validity, adherence to the principles may correlate with a study's influence on care or with the adoption of CER study results into guidelines.

Conclusion

The 13 principles presented in this paper represent an attempt to establish standards of good practice for the planning and conduct of an important new national initiative. We do not regard them as the last word on this matter and welcome further discussion and debate.

Acknowledgements

Various forms of this report have been reviewed in draft form by individuals chosen for their diverse perspectives, technical expertise and, especially, their roles within stakeholder organizations. The purpose of their independent reviews was to provide candid and critical comments to help make the report as sound as possible, to ensure that it meets the needs of healthcare stakeholders, especially decision-makers, and to ensure that we keep to standards of objectivity and responsiveness to our goal. The review comments remain confidential to protect the integrity of the individuals and the process. We thank the following members of an expert review panel, each of whom was engaged and compensated to review an early draft report: Mitchell Higashi, Steve Pearson, Harold Sox, Sean D Sullivan and Sean Tunis; and the following individuals for their reviews and comments: Jeff Allen, Nancy Dreyer, M Haim Erder, Nancy Hughes, Dell Mather, Newell McElwee, Chad Murphy, Angela Ostrom, Eleanor Perfetto, Catherine Piech, Donald Rucker, Kenneth Schaecher, Mitch Schnall, Karen Schoelles, Jeffrey White, Milton Weinstein and Don Yin. However, the manuscript represents the thoughts and opinions of the authors only. We also thank Jennifer Graff, Emily Sargent, Anne Samit, Nancy Brady, Chidinma Okparanta and Jill Javier for their research management and administrative support, and the National Pharmaceutical Council for providing unrestricted funding (via contract) for this effort.

Financial & competing interests disclosure

The authors have no conflicts of interest to declare with respect to this manuscript. Unrestricted funding (via contract) for this research and manuscript was provided by the National Pharmaceutical Council, which exercised no role in the development or review of either the study or the manuscript. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed. No writing assistance was utilized in the production of this manuscript.

Ethical conduct of research

The authors state that they have obtained appropriate institutional review board approval or have followed the principles outlined in the Declaration of Helsinki for all human or animal experimental investigations. In addition, for investigations involving human subjects, informed consent has been obtained from the participants involved.

Executive summary

■■ These principles build on other efforts but uniquely focus on comparative effectiveness research (CER) planning and conduct.

■■ We identify a set of actions that may be necessary to fulfill the intent of each principle.

■■ Whereas studies in medicine and healthcare typically address a limited range of objectives and perspectives, CER should seek to address a broad range to satisfy different and multiple decision-makers and stakeholders.

■■ A 'rule of reason' may be necessary in recognition that a given study cannot be all things to all people.

■■ Should full adherence to a principle be deemed unreasonable following a good-faith effort, its spirit should be acknowledged (e.g., no relevant option should be excluded without explanation).

■■ While no one study will be able to fully meet all the principles, a coordinated CER program or set of programs should endeavor to fulfill these principles in the aggregate.

■■ Achieving CER objectives will require an iterative process benefiting from experimenting with various less conventional policies (e.g., 'coverage with evidence development' and Bayesian adaptive applications).

■■ For the proposed principles to be fully operational, various external policies will need to be adopted (e.g., a CER registry and universal data standards).

■■ It will be important to evaluate the adherence of CER studies to these principles and to determine their impact (e.g., a study's influence on care or on the adoption of CER study results into guidelines).

References

Papers of special note have been highlighted as: ▪ of interest; ▪▪ of considerable interest.

1 Busse R, Orvain J, Velasco M et al. Best practice in undertaking and reporting health technology assessments. Int. J. Technol. Assess. Health Care 18, 361–422 (2002).
2 Drummond MF, Schwartz JS, Jönsson B et al. Key principles for the improved conduct of health technology assessments for resource allocation decisions. Int. J. Technol. Assess. Health Care 24, 244–258 (2008).
▪ Presents a list of principles that are relevant to the comparative effectiveness research (CER) principles proposed in the present manuscript.
3 Emanuel EJ, Fuchs VR, Garber AM. Essential elements of a technology and outcomes assessment initiative. JAMA 298(11), 1323–1325 (2007).
4 Willke RJ, Mullins CD. 'Ten commandments' for conducting comparative effectiveness research using 'real-world data'. J. Manag. Care Pharm. 17(9 Suppl. A), S10–S15 (2011).
▪▪ The recommendations contained in this paper overlap somewhat with those in the present paper but are more narrowly focused on methodological practices specific to using 'real-world' data.
5 Institute of Medicine. Initial National Priorities for Comparative Effectiveness Research. National Academies Press, Washington, DC, USA (2009).
▪▪ Considered highly influential in both defining CER and proposing a national agenda for CER studies.
6 111th United States Congress. The Patient Protection and Affordable Care Act. PL 111–148 (2010).
7 Neumann PJ, Kamae MS, Palmer JA. Medicare's national coverage decisions for technologies, 1999–2007. Health Aff. (Millwood) 27(6), 1620–1631 (2008).
8 Moher D, Hopewell S, Schulz KF et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 340, c869 (2010).
9 Garber AM. A menu without prices. Ann. Intern. Med. 148(12), 964–966 (2008).
10 Neumann PJ, Weinstein MC. Legislating against use of cost–effectiveness information. N. Engl. J. Med. 363(16), 1495–1497 (2010).
11 Wilensky GR. Cost–effectiveness information: yes, it's important, but keep it separate, please! Ann. Intern. Med. 148(12), 967–968 (2008).
12 Berger ML, Mamdani M, Atkins D, Johnson ML. Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report – Part I. Value Health 12(8), 1044–1052 (2009).
13 Siebert U. When should decision-analytic modeling be used in the economic evaluation of health care? Eur. J. Health Econ. 4(3), 143–150 (2003).
14 Blumenthal D. Stimulating the adoption of health information technology. N. Engl. J. Med. 360(15), 1477–1479 (2009).
15 Luce BR, Kramer JM, Goodman SN et al. Rethinking randomized clinical trials for comparative effectiveness research: the need for transformational change. Ann. Intern. Med. 151(3), 206–209 (2009).
16 Fleurence RL, Naci H, Jansen JP. The critical role of observational evidence in comparative effectiveness research. Health Aff. (Millwood) 29(10), 1826–1833 (2010).
17 Rosenbaum PR. Dilemmas and craftsmanship. In: Design of Observational Studies (Springer Series in Statistics). Rosenbaum PR (Ed.). Springer, NY, USA, 3–18 (2009).
18 Strom BL. Methodologic challenges to studying patient safety and comparative effectiveness. Med. Care 45(10 Suppl. 2), S13–S15 (2007).
19 Thorpe KE, Zwarenstein M, Oxman AD et al. A pragmatic–explanatory continuum indicator summary (PRECIS): a tool to help trial designers. J. Clin. Epidemiol. 62(5), 464–475 (2009).
20 Ioannidis JP. Integration of evidence from multiple meta-analyses: a primer on umbrella reviews, treatment networks and multiple treatments meta-analyses. CMAJ 181(8), 488–493 (2009).
21 Cox E, Martin BC, Van Staa T, Garbe E, Siebert U, Johnson ML. Good research practices for comparative effectiveness research: approaches to mitigate bias and confounding in the design of nonrandomized studies of treatment effects using secondary data sources: the International Society for Pharmacoeconomics and Outcomes Research Good Research Practices for Retrospective Database Analysis Task Force Report – Part II. Value Health 12(8), 1053–1061 (2009).
▪ Useful reference on controlling for potential bias and confounding in nonrandomized studies.
22 Rawlins M. De testimonio: on the evidence for decisions about the use of therapeutic interventions. Lancet 372(9656), 2152–2161 (2008).
23 Claxton K. Bayesian approaches to the value of information: implications for the regulation of new pharmaceuticals. Health Econ. 8(3), 269–274 (1999).
24 Doubilet P, Begg CB, Weinstein MC, Braun P, McNeil BJ. Probabilistic sensitivity analysis using Monte Carlo simulation. A practical approach. Med. Decis. Making 5(2), 157–177 (1985).
25 Oakley JE, O'Hagan A. Probabilistic sensitivity analysis of complex models: a Bayesian approach. J. R. Stat. Soc. Ser. B 66(3), 751–769 (2004).
26 Tunis SR, Pearson SD. Coverage options for promising technologies: Medicare's 'coverage with evidence development'. Health Aff. (Millwood) 25(5), 1218–1230 (2006).

■■ Websites

101 Agency for Healthcare Research and Quality. Methods Guide for Effectiveness and Comparative Effectiveness Reviews (2008). www.ncbi.nlm.nih.gov/books/NBK47095 (Accessed 11 July 2012).
102 Patient-Centered Outcomes Research Institute. Preliminary Draft Methodology Report. 'Our Questions, Our Decisions: Standards for Patient-Centered Outcomes Research' (2012). www.pcori.org/assets/Preliminary-Draft-Methodology-Report.pdf (Accessed 6 July 2012).
103 Federal Coordinating Council on Comparative Effectiveness Research. Report to the President and the Congress on Comparative Effectiveness Research: Executive Summary. US Department of Health and Human Services (2009). www.hhs.gov/recovery/programs/cer/execsummary.html (Accessed 11 July 2012).
104 Patient-Centered Outcomes Research Institute. PCORI Funding Announcement: Pilot Projects Grants (2011). www.pcori.org/assets/PCORI-Pilot-Projects-Funding-Announcement-Amendment-1-_v2_-09302011.pdf (Accessed 14 August 2012).
▪ Provides a clear indication of the importance that the Patient-Centered Outcomes Research Institute places on the inclusion of stakeholders when planning and conducting patient-centered CER.
105 Cochrane Handbook for Systematic Reviews of Interventions (2011). www.cochrane-handbook.org (Accessed 29 December 2011).
106 International Society for Pharmacoeconomics and Outcomes Research. ISPOR good outcomes research practices index (2011). www.ispor.org/workpaper/practices_index.asp (Accessed 29 December 2011).
107 ClinicalTrials.gov. National Library of Medicine (2011). http://clinicaltrials.gov (Accessed 29 December 2011).
108 Patient-Centered Outcomes Research Institute (2011). www.pcori.org/assets/PCORI-National-Priorities-and-Research-Agenda-2012-05-21-FINAL.pdf (Accessed 14 August 2012).
▪▪ A 21-page document detailing the Patient-Centered Outcomes Research Institute's national priorities and research agenda in considerable detail, as of 21 May 2012.
109 WHO. Obesity: Preventing and Managing the Global Epidemic. Report of a WHO Consultation on Obesity (1998). http://whqlibdoc.who.int/hq/1998/WHO_NUT_NCD_98.1_(p1-158).pdf (Accessed 8 August 2012).
110 Chappel A. Multi-Payer Claims Database (MPCD) for comparative effectiveness research. National Committee on Vital and Health Statistics, US Department of Health & Human Services (2011). www.ncvhs.hhs.gov/110616p1.pdf (Accessed 29 December 2011).
111 Bojke L, Claxton K, Palmer S, Sculpher M. Defining and characterizing structural uncertainty in decision analytic models (2006). www.york.ac.uk/che/pdf/rp9.pdf (Accessed 29 December 2011).
112 Mission statement: Office of Communications and Knowledge Transfer. Agency for Healthcare Research and Quality (2008). www.ahrq.gov/about/ockt/ocktmiss.htm (Accessed 29 December 2011).
