
FROM METHODS TO POLICY

Key questions remain unanswered

Robert W Dubois*

The From Methods to Policy series began 2 years ago, and much has transpired since then. The Patient-Centered Outcomes Research Institute (PCORI) has been established and has approved 197 comparative effectiveness research (CER) grants in excess of US$303 million [101]. It is broadly understood that the portfolio of CER questions extends far beyond comparing one surgical approach or one drug with another. Patient centricity has taken hold, not only in PCORI-related endeavors, but more widely. Additionally, tremendous excitement surrounds 'big data' and the ability to understand real-world care.

Despite progress in some areas, critical methodologic questions remain unanswered. These questions have important CER-related implications, and until consensus answers become available, payer policies and decisions by providers and patients will be incomplete at best and misguided at worst.

The first question relates to the choice between experimental and nonexperimental study designs. Some believe that randomized trials provide the only valid approach to comparing interventions. Others disagree, believing that real-world performance differs importantly from trial-based efficacy and that evidence from nonexperimental studies must be obtained. Will the field find common ground on the proper blend of the two? Can there be consensus that randomization is required when expected effect sizes are small and confounding is likely, but that observational approaches may be preferred when treatment compliance, surgical skill or other real-world characteristics vary substantially? As PCORI establishes its clinical data research networks, will pragmatic studies with randomization in routine care become an efficient and accepted approach?

The next methodologic question relates to the uncertainty surrounding the definition of a 'good' nonexperimental study.
In the September issue of the Journal of Comparative Effectiveness Research, Nancy Dreyer discussed the GRACE principles and the GRACE checklist [1]. Complicating the topic, a variety of other standards or guidelines exist, including those developed by PCORI [102], the International Society for Pharmacoeconomics and Outcomes Research [2] and the Agency for Healthcare Research and Quality [103]. Unless these guidelines and standards for good practice coalesce over time, researchers might conduct a study based upon one set of guidelines only to have it deemed insufficient when a journal or a decision-maker applies a different standard. Furthermore, the US FDA uses a 'substantial evidence' threshold for including results in drug labels, which typically entails data from adequate and well-controlled investigations. Until the field can agree upon what good or 'adequate and well-controlled' investigations look like, it is quite unlikely that the FDA will accept nonexperimental outcome studies. For researchers and consumers of that research to have clear direction, we need to harmonize these diverse assessments of what is good or adequate for each type of decision-making.

Finally, will the clinically rich databases built around electronic health records (EHRs) provide the hoped-for answers to many CER questions? As McElwee and I discussed in the September issue, there is substantial interest in using EHR information [3]. However, systems developed to capture clinical information about individual patients may or may not have the same utility for research. Valid CER requires careful delineation of a study patient population, the intervention, the comparator and the outcomes to assess. Unfortunately, how providers enter data varies substantially (e.g., free text, International Classification of Diseases coding or laboratory-specified end points; 'heart failure' vs 'congestive heart failure' vs 'CHF' vs 'HF' vs 'low ejection fraction'), which makes it challenging to identify clinically similar patients. While meaningful use of EHRs is increasing, these systems do not enforce data standards in the way that Amazon (WA, USA) and other internet-based services do [104]. Those websites routinely reject telephone numbers that are not entered in a particular format (xxx-xxx-xxxx), accept only valid two-digit state codes and determine in real time whether a credit card number is active. Without similar data-entry guidelines, edits or wider adoption of interoperability standards, the utility of these databases for CER will remain uncertain. Additionally, these complexities grow as databases receive clinical information from multiple institutions or from disparate EHR systems.

Work remains to address the three questions raised in this article. It is our hope that future From Methods to Policy articles will explore these questions more fully.

Financial & competing interests disclosure
RW Dubois is employed by the National Pharmaceutical Council, a policy research organization supported by the nation's major research-based pharmaceutical companies. The author has no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed. No writing assistance was utilized in the production of this manuscript.

*National Pharmaceutical Council, 1717 Pennsylvania Avenue, Northwest, Suite 800, Washington, DC 20006, USA. Tel.: +1 202 827 2079; [email protected]

10.2217/CER.13.79 © 2014 Future Medicine Ltd; J. Comp. Eff. Res. 3(1), 9–10 (2014); ISSN 2042-6305

References
1. Dreyer NA. Using observational studies for comparative effectiveness: finding quality with GRACE. J. Comp. Eff. Res. 2(5), 413–418 (2013).
2. Berger ML, Mamdani M, Atkins D, Johnson ML. Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report – Part 1. Value Health 12(8), 1044–1052 (2009).
3. McElwee NE, Dubois RW. Enthusiasm for rapid-learning health systems exceeds the

current standards for conducting it. J. Comp. Eff. Res. 2(5), 425–427 (2013).

Websites
101. Patient-Centered Outcomes Research Institute. PCORI Board approves $114 million for patient-centered outcomes research (2013). www.pcori.org/2013/pcori-board-approves114-million-for-pcor (Accessed 12 November 2013).
102. Patient-Centered Outcomes Research Institute. Public comment draft report of the Patient-Centered Outcomes Research Institute (PCORI) Methodology Committee (2012). http://pcori.org/assets/MethodologyReportComment.pdf (Accessed 26 September 2013).
103. Methods Guide for Effectiveness and Comparative Effectiveness Reviews. AHRQ Publication No. 10(13)-EHC063-EF (2013). http://effectivehealthcare.ahrq.gov/ehc/products/60/318/CER-methodsguide-130916.pdf (Accessed 26 September 2013).
104. Amazon. www.amazon.com (Accessed 20 November 2013).
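To make the data-standards point in the text concrete, the kind of entry-level edits the editorial describes can be sketched in a few lines. This is a purely hypothetical illustration: the synonym map, function names and field rules below are invented for the sketch and are not drawn from any EHR standard or real validation system.

```python
import re

# Hypothetical synonym map for one diagnosis; a real system would use a
# controlled terminology rather than this hand-built set.
HEART_FAILURE_SYNONYMS = {
    "heart failure", "congestive heart failure", "chf", "hf",
    "low ejection fraction",
}

def normalize_diagnosis(free_text: str) -> str:
    """Map free-text variants of a diagnosis onto one canonical label."""
    term = free_text.strip().lower()
    if term in HEART_FAILURE_SYNONYMS:
        return "heart failure"
    return term  # unrecognized terms pass through unchanged

# The xxx-xxx-xxxx telephone format mentioned in the text, enforced at entry.
PHONE_PATTERN = re.compile(r"^\d{3}-\d{3}-\d{4}$")

def valid_phone(entry: str) -> bool:
    """Reject telephone numbers not entered as xxx-xxx-xxxx."""
    return bool(PHONE_PATTERN.match(entry))
```

With edits of this kind applied at data entry, 'CHF', 'HF' and 'congestive heart failure' would all land in a database as the same analyzable label, which is the interoperability gap the editorial highlights.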

