An Implementation Science Perspective on Psychological Science and Cancer: What Is Known and Opportunities for Research, Policy, and Practice


Borsika Rabin, Kaiser Permanente Colorado, Denver, Colorado
Russell E. Glasgow, National Cancer Institute, Rockville, Maryland, and University of Colorado School of Medicine

We discuss the role of implementation science in cancer and summarize the need for this perspective. Following a summary of key implementation science principles and lessons learned, we review the literature on implementation of cancer prevention and control activities across the continuum from prevention to palliative care. We identified 10 unique relevant reviews, four of which were specific to cancer. Multicomponent implementation strategies were found to be superior to single-component interventions, but it was not possible to draw conclusions about specific strategies or the range of conditions across which strategies were effective. Particular gaps identified include the need for more studies of health policies and reports of cost, cost-effectiveness, and resources required. Following this review, we summarize the types of evidence needed to make research findings more actionable and discuss emerging implementation science opportunities for psychological research on cancer prevention and control. These include innovative study designs (i.e., rapid learning designs, simulation modeling, comparative effectiveness, pragmatic studies, mixed-methods research) and measurement science (i.e., development of context-relevant measures; practical, longitudinal measures to gauge improvement; cost-effectiveness data; and harmonized patient report data). We conclude by identifying a few grand challenges for psychologists that, if successfully addressed, would accelerate integration of evidence into cancer practice and policy more consistently and rapidly.

Keywords: implementation, cancer, review, recommendations, psychological science

Supplemental materials: http://dx.doi.org/10.1037/a0036107.supp

The ultimate goal of cancer research is to reduce the prevalence and impact of cancer. The Institute of Medicine has conclusively identified the "chasm" between research and practice (Institute of Medicine, 2003a) and the need for "whole person care" for cancer patients (Institute of Medicine, 2007). Expenses associated with new cancer diagnostic and treatment procedures are major reasons for the increase in health care expenditures (Hillman & Goldsmith, 2011). Reviews by McGlynn and others indicate that, as with other conditions, only approximately 50% of the evidence-based recommendations for cancer prevention and treatment are implemented in practice (McGlynn, 2003). Thus, there is an urgent need to translate psychological science into cancer policy and practice.

Collectively, the above findings make a compelling case for implementation research to help integrate research into both practice and policy. Implementation science, in turn, needs the contributions of psychology to address the complex, multilevel contextual and interpersonal issues involved in cancer prevention and control. Other articles in this special issue address advances in several areas of psychological research related to cancer. Here, we discuss implementation science, which studies how these advances can more rapidly and consistently be integrated into policy and practice, something that has not happened with sufficient frequency or speed to influence population health (Khoury, Gwinn, Glasgow, & Kramer, 2012).

Examples of implementation science have been around for several decades (Rogers, 2003; Steckler et al., 1994), but the field has only recently coalesced as a defined area of research (Brownson, Colditz, & Proctor, 2012; Green, Ottoson, Garcia, & Hiatt, 2009). Variously referred to as dissemination and implementation (see http://grants.nih.gov/grants/guide/pa-files/PAR-10-038.html), diffusion research (Dearing, 2004; Rogers, 2003), and knowledge translation by Canadian and European colleagues (Graham & Tetroe, 2009), this burgeoning field has nevertheless produced several key findings and lessons learned. In this article, we use the term implementation science to refer to the entire field of integrating research into practice and/or policy, including "scale-up" or large-scale dissemination efforts. Implementation science has been defined as the area of research that "seeks to understand the processes and factors that are associated with successful integration of evidence-based interventions within a particular setting (e.g., a worksite, clinic or school)" (Rabin & Brownson, 2012, p. 32; National Institutes of Health, 2010), and there is a leading international journal, Implementation Science (http://www.implementationscience.com), devoted exclusively to it.

We will discuss both cancer-specific prevention and control strategies and crosscutting implementation science conclusions and lessons learned. These more general strategies and programs are more likely to be adopted by health systems and communities, which cannot afford to implement different prevention and management approaches for every problem. The purposes of this article are (a) to summarize what is known about implementation of evidence and implementation/dissemination strategies in general that may be relevant for cancer; (b) to review implementation research across the cancer care continuum; and (c) to discuss emerging opportunities, grand challenges, and recommendations for future implementation research in psychology and cancer (including emerging study designs and measurement recommendations).

Editor's note. This article is one of 13 in the "Cancer and Psychology" special issue of the American Psychologist (February–March 2015). Paige Green McDonald, Jerry Suls, and Russell Glasgow provided the scholarly lead for the special issue.

Authors' note. Borsika Rabin, CRN Cancer Communication Research Center, Institute of Health Research, Kaiser Permanente Colorado, Denver, Colorado; Russell E. Glasgow, Division of Cancer Control and Population Sciences, National Cancer Institute, Rockville, Maryland, and Department of Family Medicine, University of Colorado School of Medicine. Funding for the preparation of this manuscript was partially provided through the National Cancer Institute Centers of Excellence in Cancer Communication Research (Award P20CA137219). The opinions expressed in this article are those of the authors and do not necessarily represent those of the National Cancer Institute. We would like to acknowledge Michelle Henton for support with the conduct of the review and assembly of the evidence tables in the supplemental materials and Barbara McCray for help with the overall preparation of the manuscript. Correspondence concerning this article should be addressed to Russell E. Glasgow, Colorado Health Outcomes Program, University of Colorado School of Medicine, 13199 East Montview Boulevard, Suite 300, Mail Stop F443, Room 323, Aurora, CO 80045. E-mail: [email protected]

Lessons Learned

Many evidence-based interventions do not reach a broad range of representative, high-risk participants, are difficult to implement with fidelity in busy real-world settings, and are infrequently taken to scale or sustained (Glasgow, Klesges, Dzewaltowski, Bull, & Estabrooks, 2004; see also http://cancercontrol.cancer.gov/is/reaim). Much has been learned about barriers to implementation and scale-up, and evidence is accumulating about the types of implementation strategies that are most successful and generalizable (Brownson et al., 2012). We think the following general conclusions and lessons learned from implementation science are relevant for cancer researchers, practitioners, and policymakers interested in moving the evidence reviewed in other articles in this special issue into practice (Glasgow, Lichtenstein, & Marcus, 2003).

1. Effectiveness evidence alone is not sufficient. Many evidence-based cancer prevention and control recommendations, such as colorectal cancer (CRC) screening, are not widely adopted, whereas other much less evidence-based and more controversial procedures, such as PSA (prostate-specific antigen) screening and expensive new forms of diagnosis and radiation therapy (Hillman & Goldsmith, 2011), are in widespread use. The evidence integration triangle is a recent implementation science model created to help focus attention on two factors in addition to the research evidence (Glasgow, Green, Taylor, & Stange, 2012). These factors are (a) practical, longitudinal measures or metrics to evaluate implementation progress in real-world settings and (b) participatory implementation processes that engage stakeholders throughout the implementation process. Each of these three interrelated factors (i.e., evidence, measures, participatory approaches) is seen as necessary but not sufficient; it is the interaction among or alignment of the factors, done in a contextually sensitive manner, that is key to successful implementation. Cancer application opportunities include various guidelines, such as those for cancer screening (e.g., CRC recommendations) and evidence-based treatments (e.g., tobacco cessation), for which there is strong evidence of effectiveness but large room for improvement in their application in real-world settings.

2. There is a need for more rapid, relevant evidence that includes costs and health equity impacts. Health research has focused predominantly on establishing efficacy and much less on the population impact of assessments and interventions in real-world settings or on their broader consequences (Khoury et al., 2012; Glasgow et al., 2013). Key implementation factors from a public health perspective include the level of involvement or participation among low-resource and high-risk settings, organizations, staff, and citizens; the consistency of implementation across these contextual factors, including adaptations made; and program or policy costs and health disparity effects (Gaglio & Glasgow, 2012; Glasgow, Fisher, Haire-Joshu, & Goldstein, 2007; Khoury et al., 2012). Data on broader outcomes, including factors central to policymakers and organizational decision makers, such as resources required, unanticipated consequences, and patient-centered impacts, are also needed (Zwarenstein et al., 2008).

3. Context is critical, and we need to consider contextual factors and answer "realist" questions. Most research, systematic reviews, and evidence syntheses in cancer have focused on average effects across participants, staff, and settings, with little information being gathered on the organizational, community, or intervention contexts or on implementation process factors. More relevant than average effects to decision makers, including clinicians, practitioners, and patients/family members, are answers to questions such as "Which intervention components, delivered by whom, under what conditions, and in which settings, are most effective (and cost-effective) for which patients?" (Pawson, Greenhalgh, Harvey, & Walshe, 2005; Pawson & Tilley, 1997; Rothwell, 2005). Such questions, termed "realist issues," are also more compatible with the increasing emphasis in cancer research on precision medicine, targeted treatments, and comparative effectiveness research (Glasgow et al., 2013; Simonds et al., 2013).

4. Implementation, dissemination, and sustainability need to be planned for from the outset, not left to the end of a project. Such planning begins with ongoing, meaningful engagement of stakeholders—those who will be impacted by or make decisions about the adoption and sustainability of a program or policy after the research is completed—as discussed in the evidence integration triangle model mentioned earlier. Below we review specific methods for "designing for dissemination," but the key lesson learned is that it does not work to conduct research as usual without consideration of real-world issues such as resource demands and the fit between an intervention and potential settings and target populations. Leviton, Khan, Rog, Dawkins, and Cotton (2010) have used the term evaluability to refer to the process of explicitly considering, up front, the chances of a project's working in real-world settings, and they have applied it as part of their selection process for some grants funded by the Robert Wood Johnson Foundation.

Narrative Overview Methods

We conducted a narrative overview of the literature on implementation and dissemination of interventions and summarized findings relevant to cancer control across the cancer continuum from prevention to survivorship and end of life (Glasgow et al., 2013). To avoid duplication, we relied on existing reviews of the literature, as described by Smith, Devane, Begley, and Clarke (2011). We used a combination of snowball sampling (i.e., reviews of the bibliographies of relevant publications with which we were familiar) and searches of electronic databases (i.e., we searched the OVID and PsycNET databases using the search terms cancer, intervention, implementation, dissemination, and review) to identify systematic reviews and reviews of systematic reviews of implementation research relevant to cancer control. The search was considered complete when saturation was reached (i.e., when additional searches produced only already-included articles).

We included any systematic or nonsystematic (i.e., narrative) review, and any review of systematic or nonsystematic reviews, published in English between 2002 and 2012 and focused on dissemination and/or implementation studies of interventions with relevance for cancer control across the cancer continuum (i.e., primary and secondary prevention, treatment, survivorship, and end-of-life care). We also included reviews that did not focus solely on cancer but summarized information on implementation strategies that could be relevant for this area (e.g., implementation of clinical guidelines and research information into clinical practice).

For all reviews, the data extracted included (a) scope of the review; (b) review type (systematic review of primary studies or review of reviews); (c) years for studies included in the review; (d) inclusion criteria for the review; (e) number of included primary studies and/or reviews; (f) target setting of included studies; (g) target audience for included studies; (h) whether the review was cancer specific; (i) what stage(s) of the cancer control continuum were included; (j) overall quality of included studies and quality criteria used; (k) geographical area for included studies; (l) study design for included studies; (m) relevant outcomes in included studies; (n) theories, frameworks, or models used to inform the design or execution of the intervention; (o) implementation strategies identified; and (p) key findings. A senior research assistant abstracted information from all articles, and the first author (B. R.) conducted a secondary review of all articles. Based on this information, an evidence table was created, and key issues and patterns relevant to implementation science were identified and summarized.
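To illustrate one way such an extraction scheme can be organized, here is a minimal sketch of a single evidence-table record as a Python data structure. The field names paraphrase items (a) through (p) above and are ours for illustration only; the authoritative coding appears in the supplemental tables.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReviewRecord:
    """One row of the evidence table: a single included review."""
    scope: str
    review_type: str                   # "systematic review" or "review of reviews"
    years_covered: str                 # years for studies included in the review
    inclusion_criteria: str
    n_included_studies: Optional[int]  # None when a review did not report this
    target_setting: str
    target_audience: str
    cancer_specific: bool
    continuum_stages: List[str] = field(default_factory=list)  # e.g., ["prevention"]
    quality_assessment: str = ""       # overall quality and criteria used
    geographic_area: str = ""
    study_designs: List[str] = field(default_factory=list)
    outcomes: List[str] = field(default_factory=list)
    theories_used: List[str] = field(default_factory=list)
    implementation_strategies: List[str] = field(default_factory=list)
    key_findings: str = ""
```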


Findings from the Narrative Overview

We identified 15 eligible publications describing 11 unique reviews, including four reviews of systematic reviews, four systematic reviews, and one review that used both approaches. In addition, two narrative reviews were included. The characteristics of the included reviews are summarized in Tables S1 and S2 in the supplemental materials. Of the four systematic reviews, only one limited primary studies to randomized controlled trials; the rest were more inclusive, with multiple reviews considering any type of design. Four of the reviews were cancer specific; the other seven assessed related topics and specialties more broadly (e.g., health care guideline implementation). The four cancer-specific reviews focused on different aspects of the cancer control continuum; two covered multiple aspects (e.g., prevention through end of life and diagnosis through end of life), one focused exclusively on prevention, and one focused exclusively on screening. Most of the evidence was gathered in the health care setting context (n = 5), with interventions focused on health care providers and/or patients (n = 7).

Nine of the unique reviews together included 46 systematic reviews and 444 primary studies. In addition, the review by Greenhalgh, Robert, Macfarlane, Bate, and Kyriakidou (2004) used 495 sources (213 empirical studies and 282 nonempirical studies), and the study by Hack and colleagues (2011) did not specify the number of included sources. Some duplication of individual studies across reviews is not accounted for in these numbers.

All reviews of systematic reviews assessed quality using the AMSTAR (assessment of multiple systematic reviews; Shea et al., 2009) or other quality assessment approaches and reported variation in quality across reviews. All except two systematic reviews reported on the quality of included primary studies. The systematic reviews used different criteria to assess the quality of included studies, most of which included the categories of design, execution, and validity of conclusions. While there was some variation in reported quality, most studies found that the overall quality and reporting of primary studies were poor. There was variation in study designs, but randomized controlled trials (RCTs) were the most commonly identified designs. Most reviews found that provider behavior and patient outcomes (often clinical and behavioral) were most commonly reported, followed by changes in the process of care. Few studies reported on true process outcomes (e.g., reach, adoption, or implementation) or economic outcomes, and the quality of reporting for the latter was often weak.

Few reviews discussed use of theory or theoretical frameworks for the design or evaluation of studied interventions. Those that did report on theory (Davies, Walker, & Grimshaw, 2010; Grimshaw et al., 2006) found (a) that systematic reviews infrequently report on the theory/framework use of the underlying primary studies and (b) that there were mixed findings regarding the use of theory/framework in primary studies, with one review reporting that only a minority of primary studies explicitly used a theory/framework (Davies et al., 2010) and another indicating that most of the studies applied one or multiple theories/frameworks (Rabin, Glasgow, Kerner, Klump, & Brownson, 2010).

The implementation strategies identified could be categorized as patient/consumer/client-directed strategies, provider-focused strategies, and organizational/community strategies. The most commonly tested strategies were provider-focused distribution of educational materials through outreach, audit and feedback approaches, use of opinion leaders, and reminder systems. Multifaceted (or multicomponent) interventions that combined the use of multiple strategies were also common. Patient/client-directed strategies included dissemination of educational materials, use of decision aids and client reminders, mass and small media campaigns, workshops, websites, and CD-ROMs. Organizational/community-level strategies involved structural changes in processes (e.g., change in consultation length), health information technology (e.g., an electronic medical record system), and use of multidisciplinary teams. Overall, reviews were not able to provide definitive conclusions regarding use of specific strategies. Certain strategies were identified as promising (e.g., educational outreach, audit and feedback) but as requiring additional studies in order to understand the generalizability of the findings. Passive approaches to disseminating evidence-based interventions (e.g., mailing) were consistently found to be ineffective. Multiple reviews concluded that multifaceted interventions were more effective than single-component strategies.

Summary of Narrative Review

Our focused review found heterogeneity in the included primary studies and systematic reviews in terms of quality, design, strategies utilized, and outcomes reported. Most studies focused on implementation strategies targeted to health care providers and/or patients in the health care setting. We found no studies focused on policy except for one study that used organizational-level strategies (Brouwers, Garcia, Makarski, & Daraz, 2011). The quality of available evidence was, overall, limited. Major concerns include lack of specification of the extent to which the interventions studied were evidence-based and the mixed quality of the primary studies assessed. While the inclusion criteria for most reviews allowed a broad range of study designs, the majority of primary studies were RCTs. In terms of opportunities and needs for future research, few studies reported on cost and cost-effectiveness, and few assessed use of theory for the planning, development, or evaluation of implementation. Because of the heterogeneity in design, strategies, and outcomes, there is a lack of evidence to support recommending any single strategy as robust and effective across settings and target groups. It can be concluded that multifaceted and active implementation strategies are more effective than single strategies and passive strategies. Several promising approaches require further study.


Emerging Issues and Approaches Needed

Given the results above and the key lessons learned from implementation science research in general (Brownson et al., 2012; Glasgow & Chambers, 2012; Glasgow & Steiner, 2012), we offer the following recommendations for the types of designs and approaches needed to more consistently and rapidly move psychological cancer research into policy and practice. Implementation science includes a broad range of methodologies relevant for investigating implementation questions. These emerging methods are listed in Table 1 and briefly summarized below. The decision about what type of design to use should be made on a case-by-case basis after considering the question, the state of the science, the practical constraints, and the input of key stakeholders (Glasgow et al., 2013; Mercer, DeVinney, Fine, & Green, 2007).

Table 1
Emerging Areas of Implementation Science Research Relevant to Cancer

Type of research | Examples
Pragmatic trials | Zwarenstein et al., 2008; Glasgow, Gaglio, et al., 2012
Comparative effectiveness research | Glasgow & Steiner, 2012; Institute of Medicine, 2011a; Glasgow et al., 2013
Cost and economic analyses | Ritzwoller et al., 2011
Evaluability | Leviton, Khan, Rog, Dawkins, & Cotton, 2010
Simulation modeling | Mabry, Marcus, Clark, Leischow, & Mendez, 2010
Natural experiments | Solberg et al., 2010
Observational studies | Etheredge, 2007
Qualitative and mixed methods | Crabtree & Miller, 1999; Creswell, Klassen, Plano Clark, & Smith, 2011

True experimental studies are usually considered the strongest type of evidence from an internal validity perspective (CONSORT, 2013). Two types of experimental designs are widely applicable in implementation science. Pragmatic trials (Glasgow, Gaglio, et al., 2012; Zwarenstein et al., 2008) are controlled experimental designs conducted from the stakeholder's, rather than the researcher's, perspective; they consider questions, outcomes, settings, samples, and comparison groups relevant to decision makers. Comparative effectiveness research has many of the same characteristics (Glasgow & Steiner, 2012) and explicitly compares two or more real-world alternative programs, procedures, or policies. Issues of cost and cost-effectiveness are central to implementation science, can be combined with most of the other methods in Table 1, and are often the first questions asked by those interested in potentially adopting a program. There are many economic issues and approaches (Gold, Siegel, Russell, & Weinstein, 2003), but those related to implementation and replication costs from the perspective of an adopting setting are often most relevant to implementation science (Ritzwoller et al., 2011).

It is critical to plan for dissemination from the beginning of a study, but we found almost no examples of this in the relevant cancer literature. Two ways of doing this are to employ evaluability or simulation modeling methods. Evaluability (Leviton et al., 2010) involves an initial set of questions about the feasibility of a program or procedure and the chances that such a procedure would succeed in typical settings. Evaluability is assessed before large amounts of time, money, and effort are spent on research that is extremely unlikely to have practical application. Simulation modeling relies on mathematical models of different types to estimate the results of programs under different sets of assumptions (Mabry, Marcus, Clark, Leischow, & Mendez, 2010). Both evaluability and simulation modeling have the advantages of being rapid and of identifying potential unintended consequences.

In implementation science it is often not possible, or there is insufficient time, to conduct randomized studies (Kessler & Glasgow, 2011; Mercer et al., 2007). Two alternative approaches that can produce rapid answers, and that are especially applicable to issues such as health policies, are natural experiments and observational studies. These approaches have limitations and pose potential threats to validity, but they also have the strength of using data from real-world settings, often on complete "populations" or sets of persons, such as all members of a community or a health care plan, and in some cases they contain hundreds of thousands of observations. A strength of both of these designs is that they focus on replication: The more different types of conditions and settings across which an effect is observed, the more confident one can be that results can be attributed to the program/policy being studied rather than to extraneous factors.

Finally, implementation science makes extensive use of qualitative and mixed-methods research to help understand why and how results are produced. Often, it is not enough to know just that a certain subgroup did not participate or that a program did not work in a given setting; we need to understand the reasons behind the findings, which are usually best addressed through qualitative methods (Crabtree & Miller, 1999; Creswell, Klassen, Plano Clark, & Smith, 2011).
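To make two of the methods in Table 1 concrete, the sketch below pairs a simple simulation model with an implementation-cost calculation for a hypothetical screening-outreach program. It is purely illustrative: the population size, reach, effect, and cost values are assumptions invented for this example, not estimates from any study cited above.

```python
import random
import statistics

def simulate_outreach(population: int = 100_000, n_runs: int = 2_000) -> None:
    """Monte Carlo sketch: extra screenings completed and cost per extra
    screening for a hypothetical outreach program, under uncertain inputs."""
    extra_screens, cost_per_screen = [], []
    for _ in range(n_runs):
        # All three inputs are hypothetical; triangular(low, high, mode)
        # draws represent uncertainty about each one.
        reach = random.triangular(0.20, 0.60, 0.40)      # fraction of population reached
        effect = random.triangular(0.05, 0.20, 0.10)     # absolute increase in uptake
        unit_cost = random.triangular(15.0, 60.0, 30.0)  # delivery cost per person reached

        reached = population * reach
        additional = reached * effect                    # extra screenings completed
        extra_screens.append(additional)
        cost_per_screen.append(reached * unit_cost / additional)

    ratios = sorted(cost_per_screen)
    lo, hi = ratios[int(0.025 * n_runs)], ratios[int(0.975 * n_runs)]
    print(f"Median extra screenings: {statistics.median(extra_screens):,.0f}")
    print(f"Cost per extra screening: ${statistics.median(ratios):,.2f} "
          f"(95% interval ${lo:,.2f} to ${hi:,.2f})")

if __name__ == "__main__":
    simulate_outreach()
```

Even a toy model like this makes otherwise hidden arithmetic visible; for example, cost per additional screening here depends on the unit cost and the effect size but cancels out reach entirely, which is exactly the kind of quick insight evaluability and simulation methods are meant to surface before a full trial is mounted.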

Pressing Needs

Our review also revealed key gaps between the types of information produced and designs that were employed and those needed to produce answers to core implementation questions in cancer. Table 2 summarizes characteristics of the contextually relevant evidence needed to translate research into practice. Most of these qualities can be incorporated into almost all of the research methods listed in Table 1. These characteristics, rather than the specific type of design, are what most often distinguish implementation science from more basic research. Above all, implementation science attempts to produce research that is both rigorous and relevant: All research focuses on rigor, but implementation science places equal emphasis on relevance to key settings, samples, staff, and stakeholders for whom a research question is important. Often, implementation scientists are called upon to produce rapid evidence, for example, when there are pressing societal concerns such as those involving HIV/AIDS or the obesity epidemic (Institute of Medicine, 2011b; Thigpen, Puddy, Singer, & Hall, 2012). Implementation research is pragmatic: It attempts to produce answers that are feasible and can be practically applied. It also attempts to produce results that are robust and can be broadly applied in typical or representative settings. Cancer implementation topics for which psychological expertise can be especially helpful include understanding of risk, especially genetic-based risks; issues of implicit bias related to health disparities; and cost-effectiveness of different approaches to survivorship care plans (Parry, Kent, Mariotto, Alfano, & Rowland, 2011). Since at-risk and disadvantaged populations (and the settings that serve such persons) are often likely to be excluded from traditional research (Chin et al., 2007), special efforts should be made to include such settings and persons and to report information related to health inequities. Research that has the characteristics above also tends to be more transparent, that is, to be clear regarding the conditions under which results do and do not apply (Pawson et al., 2005).

Grand Challenges

There are several opportunities to apply psychological evidence and principles to address important implementation science questions in cancer. Some of the most important challenges are summarized in Table 3. They fall into two categories: (a) issues related to framing and engaging target populations and (b) identification of cost-effective cancer prevention and control interventions that reach those most in need.

Health Messages and Engagement

The first challenge involves framing and communicating population health issues, both to the general public and to patients faced with medical decisions. The predominant "just the numerical facts" approach used by most scientists has not been effective, as evidenced by the outcry over recent decisions around mammography and PSA screening (Qaseem, Barry, Denberg, Owens, & Shekelle, 2013; U.S. Preventive Services Task Force, 2009). Challenges exist in communicating complex information about cancer risks, including harms, given low health literacy and numeracy (Institute of Medicine, 2004) as well as misleading communications from vested interests. Social and cognitive psychologists with expertise in framing health messages and connecting decisions to patient values can contribute to decision aids (Fagerlin et al., 2007), communications to legislators and other decision makers (Brownson & Jones, 2009), and mass communication messages. Community, clinical, organizational, and liaison psychologists can help advance partnership approaches. Almost all practitioners view themselves as "patient-centered," and almost all researchers state that they employ community-based participatory procedures (Minkler, Wallerstein, & Wilson, 2008).

Table 2
Design Characteristics Needed for Integrating Research Into Practice and Policy

Characteristic | Examples
Relevant | Glasgow & Chambers, 2012
Rapid | Etheredge, 2007
Pragmatic—efficient and cost-informative | Zwarenstein et al., 2008
Robust—generalizable | Green & Glasgow, 2006
Addresses disparities | Chin et al., 2007; Lavizzo-Mourey & Jung, 2005; Institute of Medicine, 2003b
Transparent | Thorpe et al., 2009

Table 3
Cancer Implementation Challenges and Opportunities for Psychological Science

Area and challenge/opportunity | Role for psychological science
1. Framing of health issues from population perspective—complexity of data, low health numeracy and literacy | Health communication, prevention, message framing
2. Community engagement—only lip service | Measures to identify and processes to facilitate meaningful engagement of partners and patients
3. EHR and integrated health care—lack of standard patient-report measures | Information technology—contributing valid, practical, patient-report measures for harmonization
4. Social media and Web 2.0 cancer communities (e.g., Susan Love's Army of Women)—confidentiality, trust issues | Social psychology, self-disclosure, group process, self-help psychology
5. Cancer survivorship and care plans—multiple comorbidities | Shared decision making, risk communication
6. Care coordination and comorbidity—complex patients with complex challenges | Risk communication, social skills, assertion and self-management training
7. Global health issues—low resources, lack of infrastructure | Cross-cultural psychology—"disruptive innovations"

Note. EHR = electronic health records.

Observation of patient–clinician encounters, or of the level of input and decision making from community partners in many projects, reveals a vastly different picture, however. Experimental and measurement psychologists can contribute by developing measures to differentiate procedures that are truly patient-centered and community-oriented from those that merely give lip service to these principles. Community and clinical psychologists can also contribute by developing standards and criteria for patient centeredness and community engagement. One concrete example of the need for measures of patient input concerns practical, valid patient-report items that can be broadly used in health care and eventually become part of the electronic health record. Patient-report items are one key component missing from current attempts to standardize medical records, and the patient perspective on issues including health behaviors, psychosocial issues, and preferences is essential for patient-centered cancer care (Estabrooks et al., 2012; Glasgow, Kaplan, Ockene, Fisher, & Emmons, 2012). Another example is that organizational and community psychologists can contribute to understanding of the complex multilevel issues associated with implementation of cancer prevention and control interventions in complex health systems.
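As one concrete, entirely hypothetical illustration of what a harmonized patient-report element might look like as structured data, consider the sketch below; actual element definitions and identifiers would come from harmonization efforts such as those described by Estabrooks et al. (2012), not from this example.

```python
# A hypothetical harmonized patient-report item expressed as a plain
# Python dictionary; real EHR integration would follow an agreed standard.
smoking_status_item = {
    "element_id": "PRO-TOBACCO-01",  # invented identifier for illustration
    "item_text": "Have you smoked a cigarette, even a puff, in the last 30 days?",
    "response_options": ["yes", "no", "prefer not to answer"],
    "domain": "health behavior",
    "intended_use": "population surveillance and point-of-care counseling",
}
```

The point of such a structure is that the same item, response scale, and coding travel with the data, so responses collected in one clinic remain comparable across settings and over time.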

Resources for Robust, Cost-Effective Interventions

Psychologists have made important contributions to eHealth (Bennett & Glasgow, 2009; Strecher, 2007) and cancer communication (Lee, Cappella, Lerman, & Strasser, 2013) and are well positioned to make further contributions to the new Web 2.0 world that involves consumers co-creating content for eHealth interventions (Noar & Harrington, 2013). Particular opportunities exist for social psychologists and others with expertise in self-disclosure, group process, and related issues to study online cancer survivor groups and Wikipedia-like processes. Issues of risk communication are paramount in helping patients to develop cancer care plans, especially given that the majority of cancer survivors are older adults. This relates to the broader issue of care coordination and comorbid conditions faced by the more than 13 million cancer survivors in the United States alone (Mackay, Eriksen, & Shafey, 2006; Parry et al., 2011). The final grand challenge for psychologists concerns implementation and evaluation of cost-effective, feasible interventions internationally, especially in low- and middle-income countries (Seffrin, 2011). Cross-cultural psychologists in particular can help identify, design, and evaluate programs that take into account local context and culture. With the majority of new cases of cancer and other chronic diseases expected to come from the developing world, there is an urgent need to identify cost-effective programs that will be both feasible and relevant in global settings. There is also great opportunity to learn from interventions in low- and middle-income countries about "task shifting" (i.e., effective sharing of tasks between specialists and lay staff; Patel et al., 2011) and disruptive innovations that use delivery channels and processes that could potentially save the United States huge amounts in health care costs.

Conclusions

Psychology has made important contributions to cancer prevention and treatment (see this issue; Holland, 2003). With the recent advances reviewed in other articles in this special issue, along with the new challenges and opportunities provided by contextual factors such as the patient-centered medical home, the Health Information Technology for Economic and Clinical Health Act (HITECH Act, with its implications for the meaningful use of electronic health records), and the Patient Protection and Affordable Care Act, there is an even more urgent need to translate psychological science into policy and practice. We have summarized the literature to date on cancer research implementation and identified both implementation science methods that can produce rapid, generalizable results and key opportunities for psychologists. Implementation science needs the contributions of psychology to address the complex, multilevel issues discussed in this article, and the experience psychologists can gain from becoming involved in cancer-related implementation science should benefit them in other pursuits as well.

References

Bennett, G. G., & Glasgow, R. E. (2009). The delivery of public health interventions via the Internet: Actualizing their potential. Annual Review of Public Health, 30, 273–292. doi:10.1146/annurev.publhealth.031308.100235

Brouwers, M. C., Garcia, K., Makarski, J., & Daraz, L. (2011). The landscape of knowledge translation interventions in cancer control: What do we know and where to next? A review of systematic reviews. Implementation Science, 6, Article 130. doi:10.1186/1748-5908-6-130

Brownson, R. C., Colditz, G. A., & Proctor, E. K. (2012). Dissemination and implementation research in health: Translating science to practice (1st ed.). New York, NY: Oxford University Press. doi:10.1093/acprof:oso/9780199751877.001.0001

Brownson, R. C., & Jones, E. (2009). Bridging the gap: Translating research into policy and practice. Preventive Medicine, 49(4), 313–315. doi:10.1016/j.ypmed.2009.06.008

Chin, M. H., Drum, M. L., Guillen, M., Rimington, A., Levie, J. R., Kirchhoff, A., . . . Schaefer, C. T. (2007). Improving and sustaining diabetes care in community health centers with the health disparities collaboratives. Medical Care, 45(12), 1135–1143. doi:10.1097/MLR.0b013e31812da80e

CONSORT. (2013). The CONSORT statement. Retrieved from http://www.consort-statement.org/consort-statement/

Crabtree, B. F., & Miller, W. L. (1999). Doing qualitative research. Thousand Oaks, CA: Sage.

Creswell, J. W., Klassen, A. C., Plano Clark, V. L., & Smith, K. C. for the Office of Behavioral and Social Sciences Research. (2011, August). Best practices for mixed methods research in the health sciences. Retrieved from http://obssr.od.nih.gov/mixed_methods_research

Davies, P., Walker, A. E., & Grimshaw, J. M. (2010). A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations. Implementation Science, 5, Article 14. doi:10.1186/1748-5908-5-14

Dearing, J. W. (2004). Improving the state of health programming by using diffusion theory. Journal of Health Communication, 9(Suppl. 1), 21–36. doi:10.1080/10810730490271502

Estabrooks, P. A., Boyle, M., Emmons, K. M., Glasgow, R. E., Hesse, B. W., Kaplan, R. N., . . . Taylor, M. V. (2012). Harmonized patient-reported data elements in the electronic health record: Supporting meaningful use by primary care action on health behaviors and key psychosocial factors. Journal of the American Medical Informatics Association, 19(4), 575–582. doi:10.1136/amiajnl-2011-000576

Etheredge, L. M. (2007). A rapid-learning health system. Health Affairs (Millwood), 26(2), w107–w118. doi:10.1377/hlthaff.26.2.w107

Fagerlin, A., Zikmund-Fisher, B. J., Ubel, P. A., Jankovic, A., Derry, H. A., & Smith, D. M. (2007). Measuring numeracy without a math test: Development of the Subjective Numeracy Scale. Medical Decision Making, 27(5), 672–680. doi:10.1177/0272989X07304449

Gaglio, B., & Glasgow, R. E. (2012). Evaluation approaches for dissemination and implementation research. In R. Brownson, G. Colditz, & E. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (1st ed., pp. 327–356). New York, NY: Oxford University Press. doi:10.1093/acprof:oso/9780199751877.003.0016

Glasgow, R. E., & Chambers, D. (2012). Developing robust, sustainable, implementation systems using rigorous, rapid and relevant science. Clinical and Translational Science, 5(1), 48–55. doi:10.1111/j.1752-8062.2011.00383.x

Glasgow, R. E., Doria-Rose, V. P., Khoury, M. J., Elzarrad, M., Brown, M. L., & Stange, K. C. (2013). Comparative effectiveness research in cancer: What has been funded and what knowledge gaps remain? Journal of the National Cancer Institute, 105(11), 766–773. doi:10.1093/jnci/djt066

Glasgow, R. E., Fisher, E. B., Haire-Joshu, D., & Goldstein, M. G. (2007). National Institutes of Health science agenda: A public health perspective. American Journal of Public Health, 97(11), 1936–1938. doi:10.2105/AJPH.2007.118356

Glasgow, R. E., Gaglio, B., Bennett, G., Jerome, G. J., Yeh, H. C., Sarwer, D. B., . . . Wells, B. (2012). Applying the PRECIS criteria to describe three effectiveness trials of weight loss in obese patients with comorbid conditions. Health Services Research, 47(3, Pt. 1), 1051–1067. doi:10.1111/j.1475-6773.2011.01347.x

Glasgow, R. E., Green, L. W., Taylor, M. V., & Stange, K. C. (2012). An evidence integration triangle for aligning science with policy and practice. American Journal of Preventive Medicine, 42(6), 646–654. doi:10.1016/j.amepre.2012.02.016

Glasgow, R. E., Kaplan, R. M., Ockene, J. K., Fisher, E. B., & Emmons, K. M. (2012). Patient-reported measures of psychosocial issues and health behavior should be added to electronic health records. Health Affairs (Millwood), 31(3), 497–504. doi:10.1377/hlthaff.2010.1295

Glasgow, R. E., Klesges, L. M., Dzewaltowski, D. A., Bull, S. S., & Estabrooks, P. (2004). The future of health behavior change research: What is needed to improve translation of research into health promotion practice? Annals of Behavioral Medicine, 27(1), 3–12. doi:10.1207/s15324796abm2701_2

Glasgow, R. E., Lichtenstein, E., & Marcus, A. C. (2003). Why don't we see more translation of health promotion research to practice? Rethinking the efficacy to effectiveness transition. American Journal of Public Health, 93(8), 1261–1267. doi:10.2105/AJPH.93.8.1261

Glasgow, R. E., & Steiner, J. F. (2012). Comparative effectiveness research to accelerate translation: Recommendations for an emerging field of science. In R. C. Brownson, G. Colditz, & E. Proctor (Eds.), Dissemination and implementation research in health: Translating science and practice (pp. 72–93). New York, NY: Oxford University Press. doi:10.1093/acprof:oso/9780199751877.003.0004

Gold, M. R., Siegel, J. E., Russell, L. B., & Weinstein, M. C. (2003). Cost-effectiveness in health and medicine. New York, NY: Oxford University Press.

Graham, I. D., & Tetroe, J. (2009). Learning from the U.S. Department of Veterans Affairs Quality Enhancement Research Initiative: QUERI Series. Implementation Science, 4, Article 13. doi:10.1186/1748-5908-4-13

Green, L. W., & Glasgow, R. E. (2006). Evaluating the relevance, generalization, and applicability of research: Issues in external validity and translation methodology. Evaluation and the Health Professions, 29(1), 126–153. doi:10.1177/0163278705284445

Green, L. W., Ottoson, J. M., Garcia, C., & Hiatt, R. A. (2009). Diffusion theory and knowledge dissemination, utilization, and integration in public health. Annual Review of Public Health, 30, 151–174. doi:10.1146/annurev.publhealth.031308.100049

Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Quarterly, 82(4), 581–629. doi:10.1111/j.0887-378X.2004.00325.x

Grimshaw, J., Eccles, M., Thomas, R., MacLennan, G., Ramsay, C., Fraser, C., & Vale, L. (2006). Toward evidence-based quality improvement: Evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies 1966–1998. Journal of General Internal Medicine, 21(Suppl. 2), S14–S20. doi:10.1007/s11606-006-0269-7



Hack, T. F., Carlson, L., Butler, L., Degner, L. F., Jakulj, F., Pickles, T., . . . Weir, L. (2011). Facilitating the implementation of empirically valid interventions in psychosocial oncology and supportive care. Supportive Care in Cancer, 19, 1097–1105.

Hillman, B. J., & Goldsmith, B. J. (2011). The sorcerer's apprentice: How medical imaging is changing health care. New York, NY: Oxford University Press.

Holland, J. C. (2003). American Cancer Society Award Lecture. Psychological care of patients: Psycho-oncology's contribution. Journal of Clinical Oncology, 21(23, Suppl.), 253s–265s. doi:10.1200/JCO.2003.09.133

Institute of Medicine. (2003a). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academies Press.

Institute of Medicine. (2003b). Unequal treatment: Confronting racial and ethnic disparities in health care. Washington, DC: National Academies Press.

Institute of Medicine. (2004). Health literacy: A prescription to end confusion. Washington, DC: National Academies Press.

Institute of Medicine. (2007). Cancer care for the whole patient: Meeting psychosocial health needs. Washington, DC: National Academies Press.

Institute of Medicine. (2011a). Learning what works: Infrastructure required for comparative effectiveness research: Workshop summary. Retrieved from http://www.iom.edu/Reports/2011/Learning-What-Works-Infrastructure-Required-for-Comparative-Effectiveness-Research.aspx

Institute of Medicine. (2011b). The learning healthcare system in America. Retrieved from http://www8.nationalacademies.org/cp/projectview.aspx?key=IOM-EO-10-06

Kessler, R., & Glasgow, R. E. (2011). A proposal to speed translation of healthcare research into practice: Dramatic change is needed. American Journal of Preventive Medicine, 40(6), 637–644. doi:10.1016/j.amepre.2011.02.023

Khoury, M. J., Gwinn, M. L., Glasgow, R. E., & Kramer, B. S. (2012). A population approach to precision medicine. American Journal of Preventive Medicine, 42(6), 639–645. doi:10.1016/j.amepre.2012.02.012

Lavizzo-Mourey, R., & Jung, M. (2005). Fighting unequal treatment: The Robert Wood Johnson Foundation and a quality-improvement approach to disparities. Circulation, 111, 1208–1209. doi:10.1161/01.CIR.0000157739.93631.EB

Lee, S., Cappella, J. N., Lerman, C., & Strasser, A. A. (2013). Effects of smoking cues and argument strength of antismoking advertisements on former smokers' self-efficacy, attitude, and intention to refrain from smoking. Nicotine & Tobacco Research, 15(2), 527–533. doi:10.1093/ntr/nts171

Leviton, L. C., Khan, L. K., Rog, D., Dawkins, N., & Cotton, D. (2010). Evaluability assessment to improve public health policies, programs, and practices. Annual Review of Public Health, 31, 213–233. doi:10.1146/annurev.publhealth.012809.103625

Mabry, P. L., Marcus, S. E., Clark, P. I., Leischow, S. J., & Mendez, D. (2010). Systems science: A revolution in public health policy research. American Journal of Public Health, 100(7), 1161–1163. doi:10.2105/AJPH.2010.198176

Mackay, J., Eriksen, M., & Shafey, O. (2006). The cancer atlas. Retrieved from http://www.cancer.org/aboutus/globalhealth/cancerandtobaccocontrolresources/the-cancer-and-tobacco-atlases

McGlynn, E. A. (2003). An evidence-based national quality measurement and reporting system. Medical Care, 41(1, Suppl.), I8–I15.

Mercer, S. L., DeVinney, B. J., Fine, L. J., & Green, L. W. (2007). Study designs for effectiveness and translation research: Identifying tradeoffs. American Journal of Preventive Medicine, 33(2), 139–154. doi:10.1016/j.amepre.2007.04.005

Minkler, M., Wallerstein, N., & Wilson, N. (2008). Improving health through community organization and community building. In K. Glanz, B. K. Rimer, & K. Viswanath (Eds.), Health behavior and health education (4th ed., pp. 287–312). San Francisco, CA: Wiley.

National Institutes of Health. (2010). Dissemination and implementation research in health (R21) (PAR-10-040). Retrieved from http://grants.nih.gov/grants/guide/pa-files/PAR-10-040.html


Noar, S. M., & Harrington, N. G. (2013). Interactive health communication technologies: Promising strategies for health behavior change (1st ed.). New York, NY: Routledge.

Parry, C., Kent, E. E., Mariotto, A. B., Alfano, C. M., & Rowland, J. H. (2011). Cancer survivors: A booming population. Cancer Epidemiology, Biomarkers & Prevention, 20(10), 1996–2005. doi:10.1158/1055-9965.EPI-11-0729

Patel, V., Weiss, H. A., Chowdhary, N., Naik, S., Pednekar, S., Chatterjee, S., . . . Kirkwood, B. R. (2011). Lay health worker led intervention for depressive and anxiety disorders in India: Impact on clinical and disability outcomes over 12 months. The British Journal of Psychiatry, 199(6), 459–466. doi:10.1192/bjp.bp.111.092155

Pawson, R., Greenhalgh, T., Harvey, G., & Walshe, K. (2005). Realist review: A new method of systematic review designed for complex policy interventions. Journal of Health Services Research and Policy, 10(Suppl. 1), 21–34. doi:10.1258/1355819054308530

Pawson, R., & Tilley, N. (1997). Realistic evaluation. Thousand Oaks, CA: Sage.

Qaseem, A., Barry, M. J., Denberg, T. D., Owens, D. K., & Shekelle, P. (2013). Screening for prostate cancer: A guidance statement from the Clinical Guidelines Committee of the American College of Physicians. Annals of Internal Medicine, 158(10), 761–769. doi:10.7326/0003-4819-158-10-201305210-00633

Rabin, B. A., & Brownson, R. C. (2012). Developing the terminology for dissemination and implementation research in health. In R. C. Brownson, G. A. Colditz, & E. K. Proctor (Eds.), Dissemination and implementation research in health: Translating science to practice (pp. 23–51). New York, NY: Oxford University Press. doi:10.1093/acprof:oso/9780199751877.003.0002

Rabin, B. A., Glasgow, R. E., Kerner, J. F., Klump, M. P., & Brownson, R. C. (2010). Dissemination and implementation research on community-based cancer prevention: A systematic review. American Journal of Preventive Medicine, 38(4), 443–456. doi:10.1016/j.amepre.2009.12.035

Ritzwoller, D. P., Sukhanova, A. S., Glasgow, R. E., Toobert, D. J., Strycker, L. A., Gaglio, B., & King, D. K. (2011). Intervention costs and cost-effectiveness for a multiple-risk-factor diabetes self-management trial for Latinas: Economic analysis of ¡Viva Bien! Translational Behavioral Medicine, 1(3), 427–435. doi:10.1007/s13142-011-0037-z

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York, NY: Free Press.

Rothwell, P. M. (2005). External validity of randomised controlled trials: To whom do the results of this trial apply? Lancet, 365, 82–93. doi:10.1016/S0140-6736(04)17670-8

Seffrin, J. R. (2011). Conquering cancer in the 21st century: Leading a movement to save more lives worldwide. Health Education and Behavior, 38(2), 111–115. doi:10.1177/1090198111404836

Shea, B. J., Hamel, C., Wells, G. A., Bouter, L. M., Kristjansson, E., Grimshaw, J., . . . Boers, M. (2009). AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology, 62(10), 1013–1020. doi:10.1016/j.jclinepi.2008.10.009

Simonds, N. I., Khoury, M. J., Schully, S. D., Armstrong, K., Cohn, W. F., Fenstermacher, D. A., . . . Freedman, A. N. (2013). Comparative effectiveness research in cancer genomics and precision medicine: Current landscape and future prospects. Journal of the National Cancer Institute, 105(13), 929–936. doi:10.1093/jnci/djt108

Smith, V., Devane, D., Begley, C. M., & Clarke, M. (2011). Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology, 11(1), Article 15. doi:10.1186/1471-2288-11-15

Solberg, L. I., Glasgow, R. E., Unützer, J., Jaeckels, N., Oftedahl, G., Beck, A., . . . Crain, A. L. (2010). Partnership research: A practical trial design for evaluation of a natural experiment to improve depression care. Medical Care, 48(7), 576–582. doi:10.1097/MLR.0b013e3181dbea62

Steckler, A., Allegrante, J. P., Altman, D., Brown, R., Burdine, J. N., Goodman, R. M., & Jorgensen, C. (1994). Health education intervention strategies: Recommendations for future research. Health Education Quarterly, 22, 307–328. doi:10.1177/109019819402200305

Strecher, V. (2007). Internet methods for delivering behavioral and health-related interventions (eHealth). Annual Review of Clinical Psychology, 3, 53–76. doi:10.1146/annurev.clinpsy.3.022806.091428

Thigpen, S., Puddy, R. W., Singer, H. H., & Hall, D. M. (2012). Moving knowledge into action: Developing the Rapid Synthesis and Translation Process within the Interactive Systems Framework. American Journal of Community Psychology, 50(3–4), 285–294. doi:10.1007/s10464-012-9537-3

Thorpe, K. E., Zwarenstein, M., Oxman, A. D., Treweek, S., Furberg, C. D., Altman, D. G., . . . Chalkidou, K. (2009). A pragmatic-explanatory continuum indicator summary (PRECIS): A tool to help trial designers. Canadian Medical Association Journal, 180(10), E47–E57. doi:10.1503/cmaj.090523

U.S. Preventive Services Task Force. (2009). Screening for breast cancer: U.S. Preventive Services Task Force recommendation statement. Annals of Internal Medicine, 151(10), 716–726. doi:10.7326/0003-4819-151-10-200911170-00008

Zwarenstein, M., Treweek, S., Gagnier, J. J., Altman, D. G., Tunis, S., Haynes, B., . . . Moher, D. (2008). Improving the reporting of pragmatic trials: An extension of the CONSORT statement. British Medical Journal, 337, Article a2390. doi:10.1136/bmj.a2390


