J Canc Educ DOI 10.1007/s13187-014-0644-8

Designing and Operationalizing a Customized Internal Evaluation Model for Cancer Treatment Support Programs

Heather K. Moore & Jaime Preussler & Ellen M. Denzen & Tammy J. Payton & Viengneesee Thao & Elizabeth A. Murphy & Eileen Harwood

© Springer Science+Business Media New York 2014

Abstract Be The Match® Patient and Health Professional Services (PHPS) supports patients undergoing hematopoietic cell transplant (HCT) and their caregivers by providing educational programs and resources. HCT is a potentially curative therapy for blood cancers such as leukemia and lymphoma. To help meet the increasing demand for support services, PHPS implemented a multipronged plan to build and sustain the organization’s capacity to conduct evaluation of its programs and resources. To do so, PHPS created and operationalized an internal evaluation model, developed customized resources to help stakeholders incorporate evaluation into program planning, and implemented utilization-focused evaluation for quality improvement. Formal mentorship was also critical in developing an evidence-based, customized model and in navigating inherent challenges throughout the process. Our model can serve as a guide for evaluators on establishing and operationalizing an internal evaluation program. Ultimately, we seek to improve support and education services from the time of diagnosis through survivorship.

Keywords Program evaluation · Evaluation of cancer treatment support programs · Quality improvement · Hematopoietic cell transplant (HCT)

Electronic supplementary material The online version of this article (doi:10.1007/s13187-014-0644-8) contains supplementary material, which is available to authorized users.

H. K. Moore (*) · J. Preussler · E. M. Denzen · T. J. Payton · V. Thao · E. A. Murphy
Be The Match® Patient and Health Professional Services, 3001 Broadway Street NE, Minneapolis, MN 55413, USA
e-mail: [email protected]

E. Harwood
Division of Epidemiology and Community Health, School of Public Health, University of Minnesota—Twin Cities Campus, 1300 South Second Street, Suite 300, Minneapolis, MN 55454, USA

Introduction

“Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future program development” [11]. Pressure to evaluate often drives nonprofit organizations’ willingness to talk about, if not fully embrace, program evaluation [12]. In theory, program decision makers commit to being accountable to stakeholders and to improving program effectiveness for participants. However, most decision makers struggle to balance competing priorities and a lack of skills, data, and finances with an earnest desire to implement and sustain evaluations. Increasingly, nonprofit organizations are turning to internal evaluators as a solution—a trend discussed in depth in the Winter 2011 issue of New Directions for Evaluation [4, 11, 18]. Important to the success of conducting evaluations internally is a commitment to building capacity for that work [12].

It has long been understood that internal evaluators play many complex roles within their organizations, and evaluation responsibilities are often added on to tasks assigned to coordinator, assistant, and other staff positions. As internal evaluators, these employees serve as observers, interpreters, consultants, technical experts, supporters of organizational learning and management decision processes, and change agents [17]. Understandably, many are concerned about the ethical conflicts that these employees face in carrying out their multiple duties to the organization [7, 11]. One way to mitigate these concerns is to seek external evaluation help and feedback. External evaluation expertise for the primary purpose of building evaluation capacity within an organization can be obtained through an evaluation steering committee approach, support for professional staff training and certification, or partnerships with external evaluators who agree to mentor designated staff [18]. Coaching or mentorship was recently found to play “a crucial role in developing evaluation competencies” and has been underutilized in formal training [6, 12]. Real-world application is where evaluation competencies are most effectively developed, and real-world context is where the complexities of evaluation are most revealed. In these circumstances, internal evaluators and their mentors should be cautioned that the best practice for internal evaluations is to respond to the organization’s unique situation and most pressing problems by using the most feasible and appropriate methodologies available [18].

The goal of this paper is to describe how one organization, Be The Match®, developed and implemented a multipronged plan to build and sustain the capacity to conduct timely and useful program evaluation for cancer treatment support and education services.

Be The Match® Patient and Health Professional Services

In the USA, 20,000 hematopoietic cell transplants (HCTs) are performed annually [10] to treat blood cancers (e.g., leukemia, lymphoma, and multiple myeloma) and immune system disorders. With the expected rise in HCTs, the number of survivors will more than double by 2020. This growth is due to advances in HCT techniques, new disease indications, and the use of umbilical cord blood, which continue to improve survival for HCT recipients [9].

Be The Match® operates the Office of Patient Advocacy/Single Point of Access for the C. W. Bill Young Transplantation Program. The Be The Match® Patient and Health Professional Services (PHPS) department provides tailored information and support to HCT patients, families, caregivers, and health professionals to promote informed decision-making and optimal well-being from the time of diagnosis with a blood cancer through survivorship. Examples of PHPS programs include educational resources on topics including, but not limited to, specific diseases (e.g., leukemia and lymphoma), treatment options, questions to ask the medical care team, clinical trials, and what to expect before, during, and after HCT. In addition, PHPS provides one-to-one support and financial assistance for out-of-pocket costs and conducts health services research (HSR) to improve access to and outcomes of HCT. Program evaluation plays a critical role in PHPS strategic resource allocation and program planning, improvement, and dissemination efforts.

Designing an Internal Evaluation Program

With the anticipated growth in the number of HCTs performed each year and the subsequent increased need for education and support, PHPS leadership recognized that a formal evaluation program was necessary to (1) measure the effectiveness of its programs and resources with more certainty and (2) practice quality improvement by applying evaluation results to program development and revisions. It was also determined that an internal (vs external) evaluator was required, given that insider knowledge of the complex process of HCT, organizational objectives and workflow processes, and internal and external stakeholder needs was critical for long-term impact. A major challenge to designing an internal evaluation program was the lack of information in the literature on how to structure the evaluation program and navigate the inherent issues that come with the development and implementation processes.

To begin, PHPS leadership designated an evaluation specialist; candidate qualifications included knowledge of the HCT education needs of the target audiences and formal training in health program planning and evaluation. The position is housed within the HSR program, which has expertise in survey research, data management and analysis, measurement, and dissemination. This team is hereafter referred to as the “evaluation team.”

The evaluation team identified six major steps in designing the evaluation program: (1) establish a formal mentorship with an expert evaluator; (2) identify and customize an evidence-based, theory-driven evaluation framework appropriate for the context; (3) conduct an evaluation need assessment; (4) define goals and objectives for the evaluation program; (5) address identified gaps; and (6) assess evaluation program performance. This paper discusses these steps in more detail and provides a case study of how the program was operationalized.

Mentorship

Formal mentorship for the evaluation specialist was identified as a critical component to ensure that the program met both industry standards and expert recommendations and engaged internal stakeholders [3, 8, 19]. Specifically, we sought doctoral-level mentorship on how to align evaluation planning with internal workflow processes and generate “buy-in” from program managers for evaluation planning.

An Evidence-Based, Customized Evaluation Model

Based on a review of the literature and with the help of the mentor, the evaluation team identified and adopted the evidence-based Centers for Disease Control and Prevention’s (CDC’s) Framework for Program Evaluation in Public Health as the foundation for the customized evaluation model [15]. To customize the CDC’s Evaluation Framework, internal program planning and workflow processes were aligned with the major steps of evaluation (Fig. 1). For example, the initial program planning steps, Ideation and Kickoff, should involve the first step of evaluation, Engage Stakeholders. Our customized model provides a visual aid and training resource for PHPS program managers who may be unfamiliar with how to appropriately incorporate evaluation into the program development process.
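To make the alignment idea concrete, the minimal sketch below pairs internal program planning steps with the CDC Framework's six evaluation steps. Only the Ideation/Kickoff pairing with Engage Stakeholders comes from the model described here; the remaining planning step names and pairings are hypothetical placeholders, not the actual PHPS workflow.

```python
# Sketch of the alignment behind the customized model: each internal
# program-planning step is paired with the CDC Framework evaluation step(s)
# that should occur alongside it. Only the Ideation/Kickoff pairing with
# "Engage Stakeholders" comes from the text; the rest are hypothetical.
alignment = {
    "Ideation": ["Engage Stakeholders"],
    "Kickoff": ["Engage Stakeholders", "Describe the Program"],
    "Program Design": ["Focus the Evaluation Design"],              # hypothetical
    "Implementation": ["Gather Credible Evidence"],                 # hypothetical
    "Review and Improvement": ["Justify Conclusions",
                               "Ensure Use and Share Lessons Learned"],  # hypothetical
}

for planning_step, evaluation_steps in alignment.items():
    print(f"{planning_step}: {', '.join(evaluation_steps)}")
```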


Operationalizing a Customized Evaluation Model

The customized evaluation model was operationalized by (1) conducting a baseline assessment of the PHPS evaluation program; (2) establishing evaluation program goals and objectives; (3) implementing process changes to address gaps identified in the assessment; and (4) reassessing the evaluation program after two years.

Evaluation Program Need Assessment

We conducted a baseline assessment of the PHPS evaluation program using the Public Health Foundation’s Plan-Do-Check-Act (PDCA) self-assessment tool, which includes 17 criteria rated on a 4-point scale of effectiveness [13]. To ensure relevance of the measures to our program, we changed “community health improvement plan (CHIP)” to “health education and advocacy plan (HEAP)” to describe the program. The results of the need assessment were used to inform evaluation program goals and objectives.

PHPS has historically set strong performance standards and goals, measured the progress and outcomes of its programs, and reported these data to its organizational leadership and external funders. However, the self-assessment showed that departmental gaps existed with regard to setting program-specific objectives, performance measures, and targets; considering external evidence in our performance analyses; and implementing a formal process for using analysis results to achieve program improvement. Specifically, the department lacked a standard process for applying evaluation data to guide program changes.
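To illustrate how such a self-assessment can be summarized, the sketch below aggregates criterion ratings by PDCA phase to flag the lowest-scoring areas. The criterion names, the number of criteria, and the gap threshold are illustrative assumptions and do not reproduce the Public Health Foundation's actual instrument.

```python
# Hypothetical sketch: summarize self-assessment ratings (1 = not effective,
# 4 = fully effective) by PDCA phase to flag candidate gap areas.
# The criteria shown are illustrative placeholders only.
from statistics import mean

ratings = {
    "Plan": {"Set program-specific objectives": 2,
             "Define performance measures": 2,
             "Set performance targets": 1},
    "Do": {"Collect performance data": 4,
           "Report data to leadership and funders": 4},
    "Check": {"Analyze progress against targets": 3,
              "Consider external evidence": 2},
    "Act": {"Apply results to program improvement": 1,
            "Document process changes": 2},
}

GAP_THRESHOLD = 2.5  # assumed cutoff for flagging a phase as a gap area

for phase, criteria in ratings.items():
    avg = mean(criteria.values())
    flag = "GAP" if avg < GAP_THRESHOLD else "ok"
    print(f"{phase:<5} mean rating = {avg:.1f} [{flag}]")
    # List the criteria for the phase, lowest-rated first.
    for name, score in sorted(criteria.items(), key=lambda kv: kv[1]):
        print(f"  {score}  {name}")
```

A summary of this kind simply points to where attention is needed; as described above, the actual gap analysis was used to inform the evaluation program's goals and objectives.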

Establishing Evaluation Program Goals and Objectives

The overarching goal of the evaluation program is to conduct timely and useful evaluations in order to improve programs and resources for HCT patient and health professional audiences. To accomplish this goal, we identified five goal areas with specific, measurable, achievable, relevant, and time-bound (SMART) objectives [16] (Table 1) that guide current evaluation efforts (Goal Areas 1–3). There is also a plan to increase capacity over the next three years (Goal Areas 4–5).

Fig. 1 Patient and Health Professional Services Evaluation Model. The figure aligns the department’s internal program planning steps with the Centers for Disease Control and Prevention (CDC) evaluation steps and the four evaluation standards (utility, feasibility, propriety, and accuracy). Adapted by H. Moore and E. Denzen, November 2012, from U.S. Department of Health and Human Services, Introduction to program evaluation for public health programs: A self-study guide. Atlanta, GA: Centers for Disease Control and Prevention.


Table 1 Patient and Health Professional Services evaluation program goals and objectives

Overarching goal: To conduct timely and useful program evaluation

Goal Area 1: Demonstrate impact and effectiveness of services and programs
- Objective 1.1: Within 2 years, demonstrate validity of all evaluation plans/methods
- Objective 1.2: Within 2 years, increase survey response rates to ≥33 %
- Objective 1.3: Within 4 years, reduce disparities in outreach by 15 %
- Objective 1.4: Within 1 year, document that objectives have been met for each resource/service

Goal Area 2: Streamline services by showing what does or does not work
- Objective 2.1: Provide information on service use and delivery to program staff biannually
- Objective 2.2: Within 2 years, implement a process to improve usability of evaluation results among program staff and policy makers

Goal Area 3: Promote staff engagement and development
- Objective 3.1: Within 1 year, instill a basic level of knowledge of evaluation planning among all program managers and lead staff
- Objective 3.2: Within 2 years, increase application of evaluation planning methods among program managers and lead staff by 50 %
- Objective 3.3: Inform annual staff development opportunities using evaluation results and/or performance measures

Goal Area 4: Provide evidence of service use, effectiveness, and demand
- Objective 4.1: See Objective 1.4
- Objective 4.2: Within 2 years, implement and report on process evaluation for appropriate services and programs
- Objective 4.3: Within 2 years, conduct systematic need assessments as the first step in program planning for all new programs and major program revisions
- Objective 4.4: Within 4 years, diagnose factors which contribute to successful or unsuccessful outcomes

Goal Area 5: Strengthen the evaluation program’s capacity to contribute evidence-based knowledge to the field
- Objective 5.1: Within 3 years, increase rigor of evaluation results (see Goal Area 1) to meet evaluation and research industry standards
- Objective 5.2: Within 3 years, improve reliability of evaluation results to meet evaluation and research industry standards for population sampling
- Objective 5.3: Increase the number of evaluation-related dissemination activities (oral presentation at a national conference or peer-reviewed publication) to ≥5 annually

Addressing Identified Gaps

Goal Area 1—Demonstrate Impact and Effectiveness of Programs and Resources

Objective 1.1 The evaluation team revised measures to more closely align with program objectives using concept indicator mapping and standardized measures across resource-specific evaluations. Standardized measures include a patient satisfaction index [14], balanced 5-point Likert response scales, demographic questions, and qualitative inquiries about information needs.

Objective 1.2 Previously, survey response rates were low (as low as 3 %), and respondents did not fully represent the population; therefore, results lacked validity. To ensure the accuracy of evaluation efforts, increase response rates, and reduce nonresponse bias, the evaluation team adopted the Tailored Design Method (TDM) of survey administration [5]. Specifically, we developed a process to send confidential surveys directly to HCT patient audiences using personalized correspondence and to follow up with nonrespondents as recommended in the TDM.

Objective 1.3 We included six demographic questions, recommended by the US Office of Management and Budget and the Department of Health and Human Services Office of Minority Health, in every evaluation instrument. These questions characterize the respondents’ HCT role (i.e., patient, main caregiver, family member, friend, or other), sex, age, ethnicity, race, and highest level of education. These revisions help identify disparities in outreach and increase the capacity for comparisons of responses across programs and resources.

Objective 1.4 We developed two comprehensive evaluation plans, including logic models, for PHPS’ annual strategic plan. The first evaluation plan focused on activities geared toward patient audiences. The other plan focused on activities targeting health professionals, including disease- and transplant-specific organizations and professional societies. The logic models were especially helpful in (1) determining how well proposed activities met the organization’s mission as well as the department’s strategic objectives and (2) identifying any redundancy or gaps in our activities, based on the identified needs of our audiences.

Goal Area 2—Show What Does or Does Not Work

Objective 2.1 Data on use and delivery of services are reported to program managers annually and in the interim upon request.

Objective 2.2 The evaluation team provides accessible and user-friendly Utilization-focused Evaluation (Uf-E) reports to program managers [11, 15]. The first page of the Uf-E report template includes an executive summary and three to five major takeaways from the evaluation. The remainder of the report is formatted to clearly show (1) the degree to which evaluation objectives are being met; (2) the supporting measure(s); (3) a succinct description of the results with meaningful visualization of data; and (4) a narrative thoroughly explaining the data and recommending strategies for applying the data to program development and improvement. The report template concludes with possible strategies for implementing the recommendations. The evaluation specialist meets one-to-one with the program manager(s) to review the report and reaffirm how the intended use of the results can be incorporated into program planning and improvement efforts.
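Several of the objectives above reduce to simple quantitative checks. As one illustration covering Objectives 1.2 and 1.3, the minimal sketch below computes a survey's response rate against the ≥33 % target, compares the role mix of respondents with the invited sample, and lists nonrespondents for TDM-style follow-up. The data, field names, and roles are invented for illustration and are not PHPS data.

```python
# Hypothetical sketch: track a survey's response rate against the >=33 % target
# (Objective 1.2) and compare respondent demographics with the invited sample
# to check representativeness (Objective 1.3). All records are invented.
from collections import Counter

invited = [
    {"id": 1, "role": "patient",   "responded": True},
    {"id": 2, "role": "caregiver", "responded": False},
    {"id": 3, "role": "patient",   "responded": True},
    {"id": 4, "role": "caregiver", "responded": False},
    {"id": 5, "role": "patient",   "responded": False},
    {"id": 6, "role": "caregiver", "responded": True},
]

TARGET_RATE = 0.33  # department target: response rate of at least 33 %

respondents = [p for p in invited if p["responded"]]
rate = len(respondents) / len(invited)
print(f"Response rate: {rate:.0%} (target met: {rate >= TARGET_RATE})")

# Compare the role mix of respondents with the invited sample; a large
# difference suggests nonresponse bias and the need for targeted follow-up.
invited_mix = Counter(p["role"] for p in invited)
respondent_mix = Counter(p["role"] for p in respondents)
for role in invited_mix:
    print(f"{role}: {invited_mix[role]} invited, {respondent_mix.get(role, 0)} responded")

# Nonrespondents queued for the personalized follow-up contacts
# recommended by the Tailored Design Method.
follow_up = [p["id"] for p in invited if not p["responded"]]
print("Follow-up list:", follow_up)
```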

Goal Area 3—Promote Staff Engagement and Development

Objective 3.1 The evaluation specialist designed and conducted two in-service trainings to engage program managers in the PHPS evaluation process and help them better understand its benefits. The learning objectives of the first in-service were to (1) define program evaluation; (2) discuss its purpose and benefits; (3) describe SMART objectives; and (4) practice developing SMART objectives. The second in-service focused on demonstrating the intended use of the customized evaluation model, the evaluation process, and the program evaluation worksheet (Appendix). The worksheet was created as a tool to collect the information about a new program or resource that is required to develop an evaluation plan, including background on the need for the program and how it supports department objectives. Attendees completed a posttest to reinforce the learning objectives and an evaluation to provide feedback on the helpfulness of the in-services. Participants indicated that they were very satisfied with both trainings (88 %, N=7, and 100 %, N=8, respectively).

Objective 3.2 In addition to the in-service trainings, the evaluation specialist coached program managers one-to-one on the worksheet and the benefits of implementing the TDM for surveys. After multiple discussions to confirm objectives and measures, program-specific evaluation plans were created that aligned with industry standards. The plans ultimately included a protocol, logic model, evaluation instruments, and, if needed, a survey administration process. The number of completed worksheets was tracked as evidence of applied evaluation planning among program managers.

Objective 3.3 Evaluation results from in-service trainings and needs assessments of program evaluation topics for PHPS staff were used to inform educational and professional development planning.

Evaluation Case Study: Caregiver Companion Program

The Caregiver Companion Program (CCP) was the first large-scale PHPS program to which the new evaluation program standards and processes were applied. With the guidance of the mentor and use of the worksheet, the evaluation specialist and program managers collaboratively developed and implemented a comprehensive evaluation plan.

A cancer caregiver is someone who provides medical, physical, and emotional care to the patient on a daily basis [20, 21]. A designated caregiver is critical for HCT recipients, whose recovery typically lasts a year or longer [22]. Potential caregiver participants are referred to the CCP by health professionals (usually a social worker) at selected US HCT centers. Trained coaches provide one-to-one telephone support to participants by discussing challenges experienced by the caregivers and identifying coping strategies. The coaches aim to complete six 1-hour sessions with each participant. The CCP also includes a self-care toolkit composed of a self-care book, journal, water bottle, and pedometer.

The overarching goal of the CCP is to improve the emotional health, self-care behaviors, and coping skills of caregivers. Specific objectives are to (1) achieve overall satisfaction among major stakeholders (i.e., caregiver participants and referring transplant center staff); (2) provide a convenient and easy-to-access support program for caregivers; (3) decrease caregiver stress levels; and (4) increase caregiver coping skills.

Summative and formative evaluation data are collected from participants and referring HCT center staff via interviews and validated assessments [1, 2]. Examples of process measures include ease of registration and perceived program effectiveness as reported by referring HCT center staff. Outcome measures include change in participants’ coping and stress levels and satisfaction with the overall program, toolkit, and coaching. To determine program impact, participants’ use of self-care skills and the toolkit is assessed three months after program completion. The evaluation team provides periodic summaries of evaluation results to program managers and a detailed Uf-E report annually. To promote utilization of findings, meetings with program managers are held biannually, focusing on interpreting evaluation results and identifying program quality improvement initiatives.
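As a minimal illustration of the pre/post outcome measures described above, the sketch below computes the average change in caregiver stress and coping scores between program entry and completion. The participant records, score ranges, and scoring are hypothetical and do not reproduce the validated assessments cited in the text [1, 2].

```python
# Hypothetical sketch: summarize change in caregiver stress and coping scores
# between program entry and completion. Scores and scales are invented for
# illustration; they are not the cited validated instruments.
participants = [
    {"id": "C01", "stress_pre": 8, "stress_post": 5, "coping_pre": 2, "coping_post": 4},
    {"id": "C02", "stress_pre": 6, "stress_post": 6, "coping_pre": 3, "coping_post": 3},
    {"id": "C03", "stress_pre": 9, "stress_post": 4, "coping_pre": 1, "coping_post": 4},
]

def mean_change(records, pre_key, post_key):
    """Average post-minus-pre change across participants with both scores."""
    diffs = [r[post_key] - r[pre_key] for r in records]
    return sum(diffs) / len(diffs)

# A negative stress change and a positive coping change would be consistent
# with the program's objectives of reducing stress and increasing coping skills.
print(f"Mean change in stress: {mean_change(participants, 'stress_pre', 'stress_post'):+.1f}")
print(f"Mean change in coping: {mean_change(participants, 'coping_pre', 'coping_post'):+.1f}")
```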

Assessing Evaluation Program Performance

The PDCA self-assessment tool will be completed again at the two-year mark to determine the extent to which the program has been successfully implemented. The results will be used to identify gaps and make any necessary improvements to the program.

Conclusions

In one year, PHPS designed and operationalized a customized internal evaluation model grounded in theory and modeled on industry best practices. By applying quality data to program development processes, the evaluation program helps improve support and education services for HCT patients from diagnosis through survivorship.


A major challenge to this work was gaining buy-in from stakeholders. Initially, staff questioned the benefits of the evaluation program and expressed concern over potential implications of negative results. These issues presented barriers when trying to incorporate evaluation into program planning. However, the trainings and tools helped cultivate shared ownership of evaluation as part of program planning. The most critical step was the involvement of a mentor, who was especially helpful in providing effective communication strategies for successful collaboration with program managers. Given the inherent issues that often arise for internal evaluators, particularly with a new program, we recommend:

(1) an evidence-based, customized model; (2) customized tools and ongoing in-service trainings to facilitate program evaluation planning among program managers; and (3) a formal mentorship with an evaluation expert to guide the inception of the program and help navigate challenges.

Acknowledgments The authors would like to thank Navneet S. Majhail, M.D., M.S., for his contributions to the evaluation program and resulting manuscript.

Appendix A: Patient and Health Professional Services Program Evaluation Worksheet


References

1. American Medical Association (1995) Caregiver self-assessment. Retrieved February 2012 at http://www.ama-assn.org/resources/doc/public-health/caregiver_english.pdf
2. Carver C (2006) Measure of Current Status (MOCS). Retrieved February 2012 at http://www.psy.miami.edu/faculty/ccarver/sclMOCS.html
3. Clutterbuck D (2004) Everyone needs a mentor: fostering talent in your organisation. CIPD Publishing, London
4. Conley-Tyler M (2005) A fundamental choice: internal or external evaluation? Eval J Australas 4(1 and 2):3–11
5. Dillman D (2007) Mail and internet surveys: the tailored design method, 2nd edn. Wiley, Hoboken, NJ
6. Dillman L (2013) Evaluator skill acquisition: linking educational experiences to competencies. Am J Eval 34(2):270–285
7. House ER (1986) Internal evaluation. Am J Eval 7:63–64
8. MacMillan P (2001) The performance factor: unlocking the secrets of teamwork. B&H Publishing Group, Nashville, TN
9. Majhail NS, Tao L, Bredeson C, Davies S, Dehn J, Gajewski JL, Hahn T, Jakubowski A, Joffe S, Lazarus HM, Parsons SK, Robien K, Lee SJ, Kuntz KM (2013) Prevalence of hematopoietic cell transplant survivors in the United States. Biol Blood Marrow Transplant 19(10):1498–1501. doi:10.1016/j.bbmt.2013.07.020
10. Pasquini MC, Wang Z (2011) Current use and outcome of hematopoietic stem cell transplantation: CIBMTR summary slides, 2011. Available at http://www.cibmtr.org
11. Patton MQ (2008) Utilization-focused evaluation, 4th edn. SAGE Publications, Thousand Oaks, CA
12. Preskill H, Boyle S (2008) A multidisciplinary model of evaluation capacity building. Am J Eval 29(4):443–459
13. Public Health Foundation (n.d.) PDCA self-assessment tool. Accessed June 2012 at http://www.phf.org/resourcestools/Pages/PM_System_PDCA_Self_Assessment_Tool.aspx
14. Reichheld F (2003) The one number you need to grow. Harvard Business Review. Accessed May 2012 at http://hbr.org/2003/12/the-one-number-you-need-to-grow/ar/1
15. U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, Office of the Director, Office of Strategy and Innovation (2005) Introduction to program evaluation for public health programs: a self-study guide. Centers for Disease Control and Prevention, Atlanta, GA
16. U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion (n.d.) Evaluation guide: writing SMART objectives. Accessed July 2012 at http://www.cdc.gov/dhdsp/programs/nhdsp_program/evaluation_guides/docs/smart_objectives.pdf
17. Volkov BB (2011) Beyond being an evaluator: the multiplicity of roles of the internal evaluator. New Dir Eval 132:25–42
18. Volkov BB, Baron ME (2011) Issues in internal evaluation: implications for practice, training, and research. New Dir Eval 132:101–111
19. Yukl G (2010) Leadership in organizations, 7th edn. Prentice Hall, Upper Saddle River, NJ
20. Cooke L, Grant M, Eldredge DH, Maziarz RT, Nail LM (2011) Informal caregiving in hematopoietic blood and marrow transplant patients. Eur J Oncol Nurs 15(5):500–507
21. Bishop MM, Beaumont JL, Hahn EA, Cella D, Andrykowski MA, Brady MJ, Horowitz MM, Sobocinski KA, Rizzo JD, Wingard JR (2007) Late effects of cancer and hematopoietic stem-cell transplantation on spouses or partners compared with survivors and survivor-matched controls. J Clin Oncol 25:1403–1411
22. Be The Match (n.d.) Role of the transplant caregiver. Accessed July 2013 at http://bethematch.org/For-Patients-and-Families/Caregivers-and-transplant/Role-of-the-transplant-caregiver/
