Comparative Effectiveness Research in Practice and Policy for Radiation Oncology

William F. Lawrence, MD, MS

Interest in comparative effectiveness research (CER) has increased dramatically over the past decade, yet perceptions about what comprises CER vary. CER has several attributes relevant to practice and policy: (1) the goal of CER is to inform decisions about health care; (2) literature synthesis is used in addition to primary research; (3) CER evaluates not only overall outcomes for the population but also subgroups that may have heterogeneous outcomes; (4) research places an emphasis on outcomes in "real-world" settings; and (5) the outcomes studied should be relevant to patients. In radiation oncology, where many of the traditional clinical trials are comparative in nature, the line between CER and "traditional" research may be blurred, but an increased emphasis on CER can help to bridge the research enterprise and clinical practice, helping to inform decision making at the patient, clinician, and policy levels.

Semin Radiat Oncol 24:54-60. Published by Elsevier Inc.

Center for Outcomes and Evidence, Agency for Healthcare Research and Quality, Rockville, MD.
Disclaimer: The opinions expressed here are those of the author and do not represent official policy of the Agency for Healthcare Research and Quality or the Department of Health and Human Services. The author declares no conflict of interest.
Address reprint requests to William Lawrence, MD, MS, Agency for Healthcare Research and Quality, 540 Gaither Rd, Rockville, MD 20850. E-mail: [email protected]

Section 1013 of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 directed the Agency for Healthcare Research and Quality (AHRQ) to conduct and support research with a focus on outcomes; comparative clinical effectiveness; and appropriateness of pharmaceuticals, devices, and health care services.1 With this act, and the subsequent billion-dollar Federal investment in comparative effectiveness research (CER) appropriated through the American Recovery and Reinvestment Act of 2009, interest in CER has grown tremendously. Yet perceptions about what CER is and how it is defined vary.2 The Wikipedia page on CER is illuminating: although the page discusses the issues around comparing outcomes of interventions, much of the discussion addresses health care resource allocation.3 So before discussing its effect, we need to briefly discuss what we mean by CER. The Federal Coordinating Council for CER, established under the American Recovery and Reinvestment Act of 2009, defines CER as follows:

Comparative effectiveness research is the conduct and synthesis of research comparing the benefits and harms of different interventions and strategies to prevent, diagnose, treat and monitor health conditions in "real world" settings. The purpose of this research is to improve health outcomes by developing and disseminating evidence-based information to patients, clinicians, and other decision-makers, responding to their expressed needs, about which interventions are most effective for which patients under specific circumstances.4

The Institute of Medicine (IOM) has a similar definition5 and enumerates 6 characteristics of CER (Table 1). These definitions are closely aligned with the definition of patient-centered outcomes research developed by the Patient-Centered Outcomes Research Institute.6 CER can be considered to be closely linked to patient-centered outcomes research, in that both are focused on assessing benefits and harms that matter to patients and helping individual patients make the best choices about their care.

The IOM's and Federal Coordinating Council's definitions have several implications that are relevant to CER's use in practice and policy. (1) The goal of CER is to inform decisions about care. Whether from an individual or population perspective, the purpose of the research is to provide relevant outcomes for alternative options. (2) CER information comes not only from primary research, but also from syntheses of the research literature. Literature syntheses ensure a balanced and comprehensive approach to evaluating the literature, not just picking a few favorable studies.



Table 1 Institute of Medicine Criteria for Comparative Effectiveness Research

1. CER has the objective of directly informing a specific clinical decision from the patient perspective or a health policy decision from the population perspective.
2. CER compares at least 2 alternative interventions, each with the potential to be "best practice."
3. CER describes results at the population and subgroup levels.
4. CER measures outcomes—both benefits and harms—that are important to patients.
5. CER employs methods and data sources appropriate for the decision of interest.
6. CER is conducted in settings that are similar to those in which the intervention will be used in practice.

From Initial National Priorities for Comparative Effectiveness Research4

(3) The information should be as applicable as possible to a specific patient to most accurately inform the decision. Not all patients are the same, so the ability to account for patient heterogeneity through the study of specific subgroups ensures the most accurate information for a specific patient. (4) The research should reflect "real-world" clinical practice settings. Efficacy research may happen under very different conditions than routine clinical care; in contrast, CER focuses on outcomes achieved in routine clinical practice settings. (5) The information should include meaningful outcomes. Although study outcomes, such as dose distributions or partial responses, can be valuable in evaluating a technology, in CER the focus is on whether interventions improve outcomes that are meaningful to the patient.

In summary, CER is designed to ask not only whether an intervention can work, but also whether the intervention would work, for whom, and in what settings. An important part of these questions is how we define "work." Often, patients need to balance the potential benefits and harms of options, and different patients may place very different emphasis on possible outcomes; what "works" for 1 patient may be completely unacceptable for another.

Informing Clinical Practice

What We Know

A major role of CER is to inform clinicians in their practice, both for the purposes of their own decision making and for informing their patients so that they are better able to participate in shared decision making regarding their health care options. One of the primary ways that CER can inform physicians is to help them understand what is currently known, through systematic reviews that provide thorough syntheses of the available research literature on health care interventions. In our rapidly evolving health care system, with an enormous number of publications contributing to potential information overload, it has become increasingly difficult for clinicians to keep up with the literature that keeps them informed, and increasingly easy to miss relevant literature, potentially providing a biased view of the available research.7

Systematic review has developed into a field of scientific expertise, and organizations such as the AHRQ Evidence-based Practice Centers (EPC) Program, the Cochrane Collaboration, and the IOM have developed standards and methods around best practices in performing systematic reviews.7-9 The AHRQ EPC Program specializes in the conduct of systematic reviews across a wide range of health care interventions; it evaluates the strength of evidence of a body of literature, considering issues such as the risk of bias in studies, the consistency of findings across studies, the precision around the measures of effect, and the directness of the relationship between interventions and outcomes.10 EPC centers have conducted systematic reviews on radiation therapies in head and neck cancer,11 prostate cancer,12,13 and lung cancer.14

Systematic reviews not only help to point out what we know; the uncertainties they identify also help to illuminate where more research is needed. Although medical and radiation oncology research generally tends to be comparative, sometimes comparative information is lacking; for example, in nonoperable patients with stage I non–small cell lung cancer, a review found that all studies were single-arm studies, preventing statements on head-to-head comparisons.14 Sometimes limited study duration, limited numbers of participants, or both limit our ability to evaluate longer-term outcomes. A systematic review of intensity-modulated radiation therapy (IMRT) for head and neck cancer found that IMRT was associated with less xerostomia and with better xerostomia-related quality of life than either 2-dimensional (2D) radiation therapy or 3D conformal radiation therapy (3D-CRT); although these are highly patient-relevant outcomes, the trials provided insufficient information to evaluate the relative effect of these therapies on tumor control and overall survival.11 Similarly, higher-dose external beam radiation appears to reduce biochemical progression compared with lower dose for men treated for clinically localized prostate cancer, but the effect on overall survival is uncertain.12,13 For prostate cancer, a disease treatable by multiple modalities, one of the largest gaps is cross-modality comparisons, for example, comparing surgery with external beam radiation.12 The Cochrane Collaboration (www.cochrane.org) also has a number of radiotherapy-related reviews.
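A brief illustration of the quantitative side of such a synthesis may help. The sketch below is not from the article; it uses made-up hazard-ratio estimates to show the fixed-effect inverse-variance pooling and the I² heterogeneity statistic that reviewers commonly use when judging the precision and consistency of a body of evidence.

```python
import math

# Hypothetical study results: log hazard ratios and their standard errors.
# (Illustrative numbers only; not drawn from any real review.)
studies = [
    {"name": "Trial A", "log_hr": math.log(0.80), "se": 0.15},
    {"name": "Trial B", "log_hr": math.log(0.90), "se": 0.20},
    {"name": "Trial C", "log_hr": math.log(0.70), "se": 0.25},
]

# Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2.
weights = [1.0 / s["se"] ** 2 for s in studies]
pooled_log_hr = sum(w * s["log_hr"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval on the hazard-ratio scale (precision of the pooled estimate).
lo = math.exp(pooled_log_hr - 1.96 * pooled_se)
hi = math.exp(pooled_log_hr + 1.96 * pooled_se)

# Cochran's Q and I^2 summarize consistency (heterogeneity) across studies.
q = sum(w * (s["log_hr"] - pooled_log_hr) ** 2 for w, s in zip(weights, studies))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled HR {math.exp(pooled_log_hr):.2f} "
      f"(95% CI {lo:.2f}-{hi:.2f}), I^2 = {i_squared:.0f}%")
```

Real reviews layer the qualitative strength-of-evidence domains described above (risk of bias, directness, and so on) on top of any pooled estimate; the arithmetic is only one input.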



Figure 1 Analytical framework for synthesizing research evaluating the outcomes of therapies for nonoperative treatment of non–small cell lung cancer (NSCLC). The left-hand column identifies the specific population to be studied. The top arrow represents research demonstrating a direct link between interventions and final health outcomes of interest. The middle arrow represents research linking interventions to the intermediate outcome of local control, with an implied relationship between the intermediate and final health outcomes. The lower curved arrows represent the research demonstrating an association between interventions and adverse events. KQ, key question. (Reprinted from Ratko et al.14)

Systematic reviews typically lay out an analytical framework for how an intervention could improve outcomes, to better understand what parts of the framework have sufficient evidence and what parts do not (Fig. 1). These frameworks are useful not only for understanding the approach reviewers used to direct searches of the relevant body of literature, but also for pointing out which parts of the framework have sufficient evidence to draw strong conclusions and where the evidence is insufficient and future research is therefore needed. Ideally, the research should demonstrate a direct link between the use of an intervention, compared with the next-best alternative, and improved patient-relevant outcomes, such as survival and quality of life. Often, however, we have to piece together the evidence linking interventions to health outcomes, such as demonstrating a link between the intervention and short-term outcomes, with the assumption that short-term outcomes result in long-term gains. These assumptions need to be tested, and reviews can point out where the evidence may or may not support such inferences.

These frameworks can also help ensure that the review focuses on issues that are important for practice. The reports focus on patient-relevant outcomes as the ultimate end points, even if important surrogate or intermediate outcomes are included in the review. Although surrogate end points are an important part of research, they are only useful to the extent that they have a valid association with the end points of interest for the treatment options involved. Two recent, prominent examples demonstrate that this association does not always hold. The Action to Control Cardiovascular Risk in Diabetes trial showed that intensive therapy reduced levels of glycated hemoglobin (a common measure of diabetes control) compared with standard therapy; however, although the surrogate measure improved, overall mortality was higher in the intensive therapy group.15 In the cancer field, a randomized trial of sipuleucel-T for metastatic castration-resistant prostate cancer showed similar time to disease progression in the intervention group and a placebo-control group; despite the lack of difference in this commonly used intermediate outcome, the sipuleucel-T group was found to have prolonged overall survival.16 A wide variety of outcomes can be included in a review, but the analytical framework helps to highlight the most relevant ones.

New Research

New CER is typically aimed at filling these gaps in our knowledge. As described in more detail in the article by Meyer et al17 and also in this issue, this research may be observational or interventional, and interventional approaches could comprise either traditional randomized clinical trials (RCTs) or "pragmatic" clinical trials18; the fundamental aim is to demonstrate effectiveness in real-world settings. For example, a systematic review of therapies for localized prostate cancer12 demonstrated a lack of evidence about patient-centered outcomes for men treated with different external beam radiation modalities, for example, 3D-CRT, IMRT, and proton beam therapy (PBT), despite heavy adoption of IMRT and increasing adoption of PBT compared with the older 3D-CRT.

Sheets et al19 addressed this gap by evaluating clinical outcomes of elderly men treated with these therapies, using Surveillance, Epidemiology, and End Results (SEER) cancer registry data linked to Medicare claims data. They found that external beam radiation therapy for men with prostate cancer has shifted almost completely from 3D-CRT to IMRT, and that men treated with IMRT, compared with 3D-CRT, tended to have better clinical outcomes, including fewer diagnosed gastrointestinal morbidities and hip fractures, albeit with a mildly increased rate of diagnosis of erectile dysfunction, and had fewer treatments for recurrences. However, the same study did not show a benefit of PBT over IMRT, although the sample was more limited.

What types of study design should count as "comparative effectiveness research"? All types of designs are needed, and the specific approach depends on how much we already know, how close the benefits and harms of different interventions are, and how well we understand causal relationships in the research area. A framework for thinking about data needs for cancer comparative effectiveness research has been well described.20 Observational registry studies, such as the study by Sheets et al,19 can make a valuable addition to the literature, particularly when RCTs are not feasible or simply have not been done. Cancer is a field that is particularly rich with registry data from the SEER registry (http://www.seer.cancer.gov) and the National Program of Cancer Registries (www.cdc.gov/cancer/npcr/). For prostate cancer, as for many other cancers, balancing the adverse effects of treatment on quality of life against reducing the chance of recurrence and progression is a major issue, raising the need for patient-reported quality-of-life outcomes; registries that include a prospective longitudinal cohort component focusing on patient-reported outcomes, such as the population-based Comparative Effectiveness Analysis of Surgery and Radiation study21 and the North Carolina Prostate Cancer Comparative Effectiveness and Survivorship Study,22 would help clarify the relationship between radiation-based and nonradiation-based treatment modalities and overall quality of life in men with localized prostate cancer. The observational approaches do not minimize the need for traditional RCTs; for example, whereas Sheets and colleagues compared IMRT with PBT observationally, a multicenter RCT is currently in progress (http://clinicaltrials.gov/ct2/show/NCT01617161). It is rare for 1 study to provide all the answers we need to choose between interventions; rather, it is better to think about a portfolio of research using a variety of designs, specifically aimed at using the best designs feasible to provide the answers that clinicians and patients need to choose the best intervention for a particular person.
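To make the registry-based approach concrete, the following sketch, which is not the analysis of Sheets et al and uses only synthetic data, shows the general pattern that many observational CER studies follow: model each patient's probability of receiving one modality rather than another from observed covariates, then reweight the cohort by the inverse of that probability so that the outcome comparison is less confounded by treatment selection. The covariates, effect sizes, and use of scikit-learn here are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic covariates that drive both treatment choice and outcome
# (e.g., age and comorbidity burden in a registry-linked claims cohort).
age = rng.normal(72, 6, n)
comorbidity = rng.poisson(1.5, n)

# Treatment assignment depends on covariates (confounding by indication).
logit_treat = -0.5 + 0.04 * (age - 72) + 0.3 * comorbidity
treated = rng.random(n) < 1 / (1 + np.exp(-logit_treat))  # 1 = newer modality

# Outcome (e.g., a morbidity event) depends on covariates and, protectively, on treatment.
logit_event = -2.0 + 0.05 * (age - 72) + 0.4 * comorbidity - 0.3 * treated
event = rng.random(n) < 1 / (1 + np.exp(-logit_event))

# Step 1: estimate the propensity score from observed covariates.
X = np.column_stack([age, comorbidity])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: inverse-probability-of-treatment weights.
w = np.where(treated, 1 / ps, 1 / (1 - ps))

# Step 3: compare crude and weighted event rates between treatment groups.
def rate(mask, weights):
    return np.average(event[mask], weights=weights[mask])

crude = event[treated].mean() - event[~treated].mean()
adjusted = rate(treated, w) - rate(~treated, w)
print(f"Crude risk difference:    {crude:+.3f}")
print(f"Weighted risk difference: {adjusted:+.3f}  (confounding largely removed)")
```

Published registry analyses add many more covariates, balance diagnostics, and sensitivity analyses for unmeasured confounding; the point of the sketch is only the logic of adjusting for nonrandom treatment selection.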

Radiation oncology–related CER is not only important to radiation oncologists. Other specialties, including primary care, medical oncology, and surgical specialties, need to be aware of this information so that they can help make patients aware of all treatment options, including those in the field of radiation oncology. For example, Jang et al23 found that for Medicare beneficiaries in the SEER registry with localized prostate cancer, treatment choice was correlated with whether men had appointments with primary care providers, medical oncologists, or radiation oncologists before initial treatment, in addition to seeing urologists. Moreover, only approximately 50% of these men were seen by radiation oncologists before initiating treatment. Clinicians and patients need to be aware of all of their treatment options.

Informing Patients

Information from CER is aimed at informing patient decision making as well as clinician decision making. Shared decision making involves at least the clinician and the patient,24,25 and it may include others, such as family members. A critical part of this process involves informing patients about the potential benefits and harms of their health care options. CER has helped support patient decision making by providing outcomes data to inform patients about relevant outcomes. If the benefits of CER are to be realized for patient decision making, research information needs to be translated into information that is accessible to patients. Although one of the roles of clinicians is to educate patients about their options, tools aimed at directly informing patients have been increasingly adopted.

One important approach for translating CER into information usable by patients is the patient decision aid, which can incorporate CER into useful tools that help patients arrive at informed choices. Patient decision aids, whether paper-based booklets or interactive computer programs, are designed to support patients in their decision-making process and aim to supplement patient-clinician decision making.26 Patient decision aids have 3 basic functions.27

The first function is to provide information. For a particular condition, the decision aid should provide up-to-date information about the condition, the possible diagnostic or therapeutic options, and the likely outcomes for each option. These data should include the level of uncertainty around the outcomes and ideally be tailored as closely as possible to the specific demographic or other relevant characteristics of the patient. CER, with its emphasis on comparison across options, patient-relevant outcomes, and subgroup information, can support, and has supported, the information-provision sections of patient decision aids.

The second function of decision aids is to clarify a patient's values about the outcomes of treatment options.27 Many decisions do not have 1 clear "best" option. Scientific uncertainty may make it unclear whether the benefits of a particular option outweigh the harms. Even when the potential benefits and harms are well described, not everyone values the potential outcomes the same way. Decisions may involve trade-offs; for example, active treatment of prostate cancer, such as surgery or radiation, may decrease the risk of tumor progression but increase the risk of urinary, bowel, and sexual dysfunction. Some men may be willing to trade off risk of progression and shortened survival to maintain sexual function.28 Research can clarify what outcomes are relevant to patients, whether these outcomes are clinical, such as the risk of development of metastatic disease; health related, such as the effect on health-related quality of life and function; or even process related, such as time spent in treatment and recovery. Patient decision aids can then describe what it is like to experience the outcomes and engage patients to reveal their values about the potential outcomes of their available options.
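A deliberately oversimplified calculation can illustrate why value clarification matters. In the sketch below, which is not from the article and uses entirely hypothetical probabilities, option names, and importance weights, two patients weigh progression and sexual dysfunction differently, and the option with the lowest expected burden flips between them.

```python
# Hypothetical 5-year outcome probabilities for two management options.
# (Illustrative numbers only; real decision aids draw these from CER evidence.)
options = {
    "active treatment":    {"progression": 0.10, "sexual_dysfunction": 0.50},
    "active surveillance": {"progression": 0.30, "sexual_dysfunction": 0.10},
}

# Each patient weights how much each outcome matters to them (0 = indifferent,
# 1 = as bad as an outcome can be). These weights come from the patient, not the data.
patients = {
    "patient A": {"progression": 1.0, "sexual_dysfunction": 0.3},
    "patient B": {"progression": 0.6, "sexual_dysfunction": 1.0},
}

def expected_burden(outcome_probs, weights):
    """Weighted sum of outcome probabilities; lower is better for this patient."""
    return sum(weights[o] * p for o, p in outcome_probs.items())

for patient, weights in patients.items():
    scores = {name: expected_burden(probs, weights) for name, probs in options.items()}
    best = min(scores, key=scores.get)
    print(f"{patient}: prefers {best} ({scores})")
```

The same evidence base yields different "best" choices once patient values enter the calculation, which is exactly the step decision aids are designed to support.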

Finally, patient decision aids can provide coaching and guide patients in deliberation.27 Although CER can provide information, patients must consider the implications of this information for their own decision making. These tools can guide patients through the steps of decision making and encourage them to communicate with their families and clinicians. These steps can help a patient become more actively involved with their clinician in shared decision making. In trials, patient decision aids have been shown to improve patient knowledge, involvement in treatment decision making, and informed value-based choices.29 Despite their efficacy, implementation in practice has been met with a number of barriers.30,31 Still, some health systems are implementing decision aids,32 and the Patient Protection and Affordable Care Act of 2010 encourages the development and implementation of patient decision aids in practice.

If CER is to inform decision making, then translating CER findings into actionable information for the patient is essential. AHRQ's Effective Health Care Program includes the John M. Eisenberg Center for Clinical Decisions and Communications Science, which is responsible for translating EPC systematic reviews into materials useful for patients and clinicians. The Effective Health Care website (http://www.effectivehealthcare.ahrq.gov) provides access to patient guides for the EPC head and neck cancer and prostate cancer reviews, as well as an interactive decision aid on treatment options for men with clinically localized prostate cancer based on the EPC prostate cancer review. Groups such as Healthwise (www.healthwise.org) and the Informed Medical Decisions Foundation (http://www.informedmedicaldecisions.org) offer a variety of patient decision aids, including decision aids on the treatment of localized prostate cancer and the treatment of early breast cancer. These decision aids may not be available for public use, but some of them are available for evaluation. Although different decision aids may take different approaches, they all aim to inform patients about their treatment options and help them explore what is known about the benefits and harms of each. The Ottawa Hospital Research Institute (http://decisionaid.ohri.ca/index.html) maintains a registry of patient decision aids for a wide variety of conditions.

Informing Policy

Although CER has the potential to be applicable to a wide range of policy applications—for example, pharmacy formulary decision making, acquiring a facility to add PBT to current institutional capacity, insurance coverage, and reimbursement—clear evidence of the use of research to inform these decision-making processes is not always available in the published literature. The conceptual approach is similar, however: the goal of CER for policy applications is to help inform decision making through an understanding of population and population-subgroup outcomes of different alternatives.

As with patient and clinician decision making, although CER may inform policy decisions, it does not make them; other considerations in addition to CER evidence must often be taken into account.33

Systematic reviews may provide data for coverage decisions. The Centers for Medicare & Medicaid Services, for example, can request a systematic review of comparative or noncomparative findings as part of an external technology assessment used to inform a national coverage determination; other inputs into decision making may include internal evaluations and input from the Medicare Evidence Development and Coverage Advisory Committee and from the public. Other payers also use systematic reviews to help inform coverage policies. For example, the Blue Cross Blue Shield Association's Technology Evaluation Center (http://www.bcbs.com/blueresources/tec/) provides systematic technology assessments aimed at evaluating clinical effectiveness to inform coverage decisions.

A more transparent and potentially highly effective application of CER to policy is informing the development of clinical practice guidelines. In 2011, the IOM established standards for developing trustworthy clinical practice guidelines.34 They defined clinical practice guidelines as "statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options." This definition aligns with the core CER principle of informing care through comprehensive information on the outcomes of alternative options. These standards emphasize that a multidisciplinary panel of experts and representatives from affected groups should be involved in developing guidelines and that the development process should be based on a thorough systematic review of the evidence. This approach not only allows a role for the experience of experts in formulating clinical policy, but also explicitly recognizes the need to transparently delineate the strength of evidence underlying specific guideline recommendations.

In the United States, the National Guideline Clearinghouse (NGC) acts as a repository for evidence-based guidelines (http://www.guideline.gov). Developers can submit guidelines for inclusion in the NGC if they meet the inclusion criteria, including having a systematic development process and being based on a systematic review of the evidence. These guidelines cover a wide variety of practice areas, including radiation oncology. The NGC currently lists 208 guidelines as being relevant to radiation oncologists,35 including guidelines developed by the American Society for Radiation Oncology and appropriateness criteria developed by the American College of Radiology, among other guideline developers. In June 2014, the NGC will adopt a revised definition of a clinical practice guideline to reflect the IOM definition, along with revised inclusion criteria aligned with this definition. The adoption of evidence-based guidelines is an important way to influence clinical policy with research; adoption of the IOM definition of clinical practice guideline development would help to ensure a more rigorous and open evidence-based approach that includes CER principles.



Conclusions

As the explicit goal of CER is to inform decisions about health care, this research helps to bridge the traditional research enterprise to clinical practice and to health care policy. Inclusion of literature syntheses in CER helps to provide a comprehensive view of what we know from existing research and a clear understanding of research gaps. Focusing new research on these gaps will address important areas of uncertainty that may hinder optimal decision making about health care options. CER emphasizes outcomes that are important to patients—knowledge of these outcomes can assist clinicians in communicating about options with their patients and can encourage shared decision making between patient and provider. Attention to heterogeneity of outcomes across patient or population subgroups can help tailor information for decision making at the individual level and clarify the variation in outcomes across patient subgroups at the policy level. If data on these outcomes are not available, they are explicitly identified as gaps in the research. The dividing line between "traditional" clinical research and CER is blurry, particularly in a field where clinical trials tend to be comparative by nature. Despite this overlap in research approaches, increased attention to CER principles, with an emphasis on outcomes in real-world settings, a focus on patient-centered outcomes, and increased emphasis on dissemination of research findings to relevant stakeholders, can help to make research more relevant to patients, clinicians, and policy makers.

References
1. Slutsky JR, Clancy CM: Patient-centered comparative effectiveness research: Essential for high quality care. Arch Intern Med 170:403-404, 2010
2. Ashton CR, Wray NP: Comparative Effectiveness Research: Evidence, Medicine, and Policy. Oxford: Oxford University Press, 2013
3. http://en.wikipedia.org/wiki/Comparative_effectiveness_research. Accessed August 23, 2013
4. U.S. Department of Health and Human Services: Federal Coordinating Council for Comparative Effectiveness Research: Report to the President and Congress, June 30, 2009
5. IOM (Institute of Medicine): Initial National Priorities for Comparative Effectiveness Research. Washington, DC: The National Academies Press, 2009
6. http://pcori.org/research-we-support/pcor/. Accessed August 23, 2013
7. IOM (Institute of Medicine): Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC: The National Academies Press, 2011
8. Slutsky J, Atkins D, Chang S, et al: AHRQ series paper 1: Comparing medical interventions: AHRQ and the Effective Health Care Program. J Clin Epidemiol 63:481-483, 2010
9. Higgins JPT, Green S (eds): Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available at: http://www.cochrane-handbook.org
10. Owens DK, Lohr KN, Atkins D, et al: AHRQ series paper 5: Grading the strength of a body of evidence when comparing medical interventions—Agency for Healthcare Research and Quality and the Effective Health Care Program. J Clin Epidemiol 63:513-523, 2010
11. Samson DA, Ratko TA, Rothenberg BM, et al: Comparative Effectiveness and Safety of Radiotherapy for Head and Neck Cancer. Comparative Effectiveness Review No. 20. Rockville, MD: Agency for Healthcare Research and Quality, 2010
12. Wilt TJ, Shamiliyan T, Taylor B, et al: Comparative Effectiveness of Therapies for Clinically Localized Prostate Cancer. Comparative Effectiveness Review No. 13. Rockville, MD: Agency for Healthcare Research and Quality, 2008
13. Ip S, Dvorak T, Yu WW, et al: Comparative Evaluation of Radiation Treatments for Clinically Localized Prostate Cancer: An Update. Rockville, MD: Agency for Healthcare Research and Quality, 2010
14. Ratko TA, Vats V, Brock J, et al: Local Nonsurgical Therapies for Stage I and Symptomatic Obstructive Non-Small-Cell Lung Cancer. Rockville, MD: Agency for Healthcare Research and Quality, 2013
15. ACCORD Study Group: Long-term effects of intensive glucose lowering on cardiovascular outcomes. N Engl J Med 364:818-828, 2011
16. Kantoff PW, Higano CS, Shore ND, et al; IMPACT Study Investigators: Sipuleucel-T immunotherapy for castration-resistant prostate cancer. N Engl J Med 363:411-422, 2010
17. Meyer AM, Carpenter WR, Abernethy AP, et al: Data for cancer comparative effectiveness research: Past, present, and future potential. Cancer 118:5186-5197, 2012
18. Tunis SR, Stryer DB, Clancy CM: Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. J Am Med Assoc 290:1624-1632, 2003
19. Sheets NC, Goldin GH, Meyer AM, et al: Intensity-modulated radiation therapy, proton therapy, or conformal radiation therapy and morbidity and disease control in localized prostate cancer. J Am Med Assoc 307:1611-1620, 2012
20. Carpenter WR, Meyer AM, Abernethy AP, et al: A framework for understanding cancer comparative effectiveness research data needs. J Clin Epidemiol 65:1150-1158, 2012
21. Barocas DA, Chen V, Cooperberg M, et al: Using a population-based observational cohort study to address difficult comparative effectiveness research questions: The CEASAR study. J Comp Effectiveness Res 2:367-370, 2013
22. Chen RC, Nielsen E, Reeve BB, et al: Perceptions regarding prostate cancer (CaP) treatment options: Results from the North Carolina Prostate Cancer Comparative Effectiveness and Survivorship Study (NC ProCESS). J Clin Oncol 31(suppl), 2013 [abstr 6530]
23. Jang TL, Bekelman JE, Liu Y, et al: Physician visits prior to treatment for clinically localized prostate cancer. Arch Intern Med 170:440-450, 2010
24. Charles C, Gafni A, Whelan T: Shared decision-making in the medical encounter: What does it mean? (or it takes at least two to tango). Soc Sci Med 44:681-692, 1997
25. Elwyn G, Edwards A, Kinnersley P, et al: Shared decision making and the concept of equipoise: The competencies of involving patients in healthcare choices. Br J Gen Pract 50:892-897, 2000
26. Elwyn G, O'Connor A, Stacey D, et al; International Patient Decision Aids Standards (IPDAS) Collaboration: Developing a quality criteria framework for patient decision aids: Online international Delphi consensus process. Br Med J 333:417, 2006
27. O'Connor AM, Llewellyn-Thomas HA, Flood AB: Modifying unwarranted variations in health care: Shared decision making using patient decision aids. Health Aff (Millwood) Suppl Variation:VAR63-72, 2004
28. Singer PA, Tasch ES, Stocking C, et al: Sex or survival: Trade-offs between quality and quantity of life. J Clin Oncol 9:328-334, 1991
29. Stacey D, Bennett CL, Barry MJ, et al: Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev 10:CD001431, 2011
30. Caldon LJ, Collins KA, Reed MW, et al: Clinicians' concerns about decision support interventions for patients facing breast cancer surgery options: Understanding the challenge of implementing shared decision making. Health Expect 14:133-146, 2011
31. Elwyn G, Dipl-Psych IS, Tietbohl C, et al: The implementation of patient decision support interventions into routine clinical practice: A systematic review. Accessed from the International Patient Decision Aids Standards (IPDAS) Collaboration: http://ipdas.ohri.ca/IPDAS-Implementation.pdf. Accessed August 23, 2013
32. Hsu C, Liss DT, Westbrook EO, et al: Incorporating patient decision aids into standard clinical practice in an integrated delivery system. Med Decis Making 33:85-97, 2013
33. Randhawa G: Moving to a user-driven research paradigm. eGEMs (Generating Evidence & Methods to improve patient outcomes) 1(2):Article 2, 2013
34. IOM (Institute of Medicine): Clinical Practice Guidelines We Can Trust. Washington, DC: The National Academies Press, 2011
35. National Guideline Clearinghouse. http://www.guideline.gov. Accessed August 23, 2013
