Evaluation and Program Planning 47 (2014) 18–25


Process evaluation of a statewide abusive head trauma prevention program

Meghan Shanahan a,*, Phyllis Fleming a, Maryalice Nocera a, Kelly Sullivan b, Robert Murphy b, Adam Zolotor c

a The Injury Prevention Research Center, The University of North Carolina at Chapel Hill, 137 East Franklin Street, CB#7505, Chapel Hill, NC, United States
b Duke University School of Medicine, Department of Psychiatry & Behavioral Sciences, 411 West Chapel Hill Street, 10th Floor, Durham, NC 27701, United States
c Department of Family Medicine, The University of North Carolina at Chapel Hill, 590 Manning Drive, Chapel Hill, NC 27599, United States

* Corresponding author. Tel.: +1 919 843 1673. E-mail address: [email protected] (M. Shanahan).
http://dx.doi.org/10.1016/j.evalprogplan.2014.07.002

ARTICLE INFO

Article history:
Received 2 January 2014
Received in revised form 26 June 2014
Accepted 9 July 2014
Available online 16 July 2014

Keywords:
Process evaluation
RE-AIM framework
Abusive head trauma

ABSTRACT

The current study used four dimensions of the RE-AIM framework (Reach, Adoption, Implementation, and Maintenance) to evaluate the implementation of a statewide abusive head trauma prevention program. Numerous methods, including telephone surveys, paper and pencil questionnaires, site visits, and program administrative data, were used to conduct the process evaluation. Results indicate that the intervention was successfully implemented in all birthing hospitals (n = 86) across the state with a high degree of fidelity. Furthermore, the majority of the hospitals reported incorporating the program into unit procedures and employee training. More than three-fourths indicated that they plan to continue the program after the study ends. The RE-AIM framework served as a useful guide for the process evaluation of a multifaceted, multi-system, universal public health intervention.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction


The success of a large-scale public health intervention should not be assumed once positive efficacy outcomes have been found in highly controlled environments. When interventions are taken into real-world settings, implementation is inherently more complex, and the practitioners implementing the program typically have more competing work priorities than those who serve on the research team. If a program is to be of high quality, implemented within a reasonable timeframe, and yield effects similar to those reported in the empirical literature, the process of implementation needs to be evaluated, not just the program's effectiveness. Furthermore, if a program is ineffective, process evaluation can help researchers understand whether the failure is due to the program itself or to its implementation. The goal of this article is to improve the practice of process evaluation in large-scale public health interventions by illustrating the usefulness of the RE-AIM framework for the evaluation of public health programs.

1.1. RE-AIM model

The RE-AIM framework, a well-established framework of multiple evaluation components (Glasgow, Vogt, & Boles, 1999), provided the conceptual basis for the process evaluation of a study of the universal implementation of The Period of PURPLE Crying program as part of a statewide initiative to reduce the incidence of infant abusive head trauma. RE-AIM is an acronym for Reach, Efficacy or Effectiveness, Adoption, Implementation, and Maintenance. The framework was designed to assist in the planning, conduct, evaluation, and reporting of studies with a goal of translating research into practice (Glasgow et al., 1999). The RE-AIM framework has been used extensively to evaluate programs with diverse content areas (e.g., health promotion, chronic disease prevention, education) that have been applied in a variety of settings, including workplaces, hospitals, schools, communities, and health care practices (Glasgow, Klesges, Dzewaltowski, Bull, & Estabrooks, 2004). Consistent with other studies that apply select domains of the model (see Gaglio, Shoup, & Glasgow, 2013 for a review), the current study uses four components of the model relevant to process evaluation. First, Reach addresses issues of participation; participation rates among those targeted by the intervention represent an important aspect of overall sample representativeness. Second, Adoption relates to organizational participation in the program; that is, the proportion and representativeness of settings and organizational decision makers who are willing to start the program. Third, Implementation focuses primarily on the integrity of the intervention, including adherence to protocols and the quality and consistency of its delivery. Fourth, Maintenance examines whether changes made by individuals and organizations continue after contact with the program has ended. It also refers to the extent to which the program is integrated into the adopting organization's structure and culture (Glasgow et al., 1999). The other domain, Effectiveness, will be addressed elsewhere as a program outcome; this article focuses on the Reach, Adoption, Implementation, and Maintenance phases.

1.2. Intervention description

The Period of PURPLE Crying® (Barr, 2004) is a proprietary program developed by the National Center on Shaken Baby Syndrome to reduce abusive head trauma (AHT) in infants. It is based on three premises: (1) infant crying acts as a trigger to shaking; (2) frustration with inconsolable crying will decrease when infant caregivers know that infant crying is normal and are familiar with appropriate responses to it; and (3) decreased frustration will prevent abusive head trauma (purplecryinginfo.org). The program uses an educational package, consisting of a 10-minute DVD and an eleven-page booklet, that is designed to be given to postpartum mothers, partners, and other family members prior to hospital discharge. The program includes six key teaching messages: (1) infant crying is normal; (2) crying peaks in the second month of life; (3) shaking a baby is dangerous; (4) watch the DVD at home; (5) read the booklet at home; and (6) share this information with others who will care for your baby.

AHT often results in devastating outcomes for children, including physical disabilities, cognitive disabilities, and death. Estimates of national rates of AHT range from 27.5 to 39.8 per 100,000 children less than 1 year of age (Ellingson, Leventhal, & Weiss, 2008; Niederkrotenthaler, Xu, Parks, & Sugerman, 2013; Parks, Sugerman, Xu, & Coronado, 2012; Shanahan, Zolotor, Parrish, Barr, & Runyan, 2013), and estimates of rates in North Carolina range from 29.7 to 34.2 per 100,000 infants (Keenan et al., 2003; Shanahan et al., 2013). A previous evaluation of a hospital-based AHT prevention program conducted in the western region of New York determined that the program resulted in a 47% reduction in the incidence of AHT for that region (Dias et al., 2005). This study in New York was the impetus for more widespread AHT prevention efforts.

The PURPLE program was implemented in North Carolina, beginning in 2007, as a five-year statewide initiative called The Period of PURPLE Crying®: Keeping Babies Safe in North Carolina. The primary goal was to deliver the program to all new parents in all maternity hospitals in the state to reduce the incidence of AHT in infants by 50%. The PURPLE program in North Carolina was implemented in three doses. The first dose was hospital-based delivery of the PURPLE program. The second dose was a reinforcement of the key messages delivered in community settings, primarily during well-baby and sick visits in physicians' offices and public health clinics up to one month of age.
The third dose was a media and public relations campaign that used paid and earned media to reinforce the PURPLE program's messages as well as educate the general public about the normalcy of infant crying. This article focuses on the first dose, universal bedside teaching following the birth of an infant.

The postpartum hospital stay immediately following delivery was the primary venue for implementation in North Carolina because it offered universal access to mothers and the opportunity for program delivery by nurses, who are regarded by the American public as trusted and ethical sources of information (Gallup, 2013). Postpartum hospital stays have been used to educate new mothers on a variety of topics, including umbilical cord care, sleep positioning, infant feeding, car seat installation, and maternal care and recovery. Methods used to address these topics have included pamphlets, booklets, and/or videos, sometimes accompanied by a verbal presentation. Although education during postpartum hospital stays is convenient for program practitioners, it is not clear that it is the optimal learning time for new mothers (Buchko, Gutshall, & Jordan, 2012).

Program implementation began in each hospital when at least 80% of the maternity staff who would provide the bedside education completed training. This training was initially provided in person by program staff and later was made available by webinar and as online education modules. Staff were trained to discuss the program's six key messages with all new parents, separate from all other pre-discharge patient education. Teaching actions involved showing the DVD, preferably watching it with the parents; discussing the program and answering questions; and providing the parents with individual copies of the DVD and booklet to take home.

1.3. Implementation and evaluation

The process evaluation of the PURPLE program focused on the quality of implementation of the intervention in a real-world setting; that is, when delivered to a demographically heterogeneous population in a birth hospital setting by nurses and other staff with a variety of perspectives, experiences, and levels of enthusiasm. A primary focus of process evaluation in this context is the observation and documentation of the intervention that is actually delivered, compared with how the protocol specifies implementation (Glasgow, Lichtenstein, & Marcus, 2003). We anticipated that actual implementation would differ from the implementation protocols for several reasons. Given the extensive scope of the project (86 hospitals or birthing centers that differed from one another in size, organizational structure and culture, population demographics of the areas served, and availability of equipment and technology), we anticipated that the implementation protocol specified by the National Center on Shaken Baby Syndrome would be modified by hospitals to suit their existing nursing practices. In other words, there existed the potential for program delivery in widely different ways. Finally, restrictions were likely to be placed on the delivery of the intervention by the workload of nurses and the limited availability of new mothers, whose hospitalizations, for the most part, would be brief.

It was apparent when designing the process evaluation that a single data collection strategy would not be adequate to address these concerns, because it was impossible to observe the bedside delivery of the intervention. Instead, a triangulation strategy was adopted (Bickman & Rogers, 2009). Similar process evaluation questions were addressed by collecting data from more than one source using more than one strategy, to compensate for the inherent limitations of any single data collection strategy. Results from the multiple sources and strategies were compared to develop a more complete picture of the delivery of the intervention. Our measures were limited to administrative data and self-report from nurses and mothers.

The evaluation team worked closely with the implementation team throughout the project.
As results of the process evaluation became available, they were shared with the implementation team so that the team could make necessary improvements to the program's implementation. To maintain the integrity of the data, as well as the evaluation team's relationships with the hospitals, the results were always shared in aggregate, so it was not possible for the implementation team to identify the hospitals reflected in the results. The implementation team would tailor the technical assistance it provided to all of the hospitals based on these process evaluation results.

1.4. Adaptation of the RE-AIM framework to the Period of PURPLE Crying

This project had two Reach targets. The first sought to recruit all 86 North Carolina hospitals or birthing centers to implement the intervention. Three of the 86 hospitals were pilot sites, and therefore recruitment occurred differently in these sites; our analysis of this Reach target will focus on the 83 sites we actively enrolled. The second Reach target sought to provide the intervention to all mothers who delivered babies in these hospitals from April 2008 through September 2012. The number of births per year in North Carolina ranged from approximately 121,000 to 131,000 over this period.

Since hospitals provided the setting for this intervention, the definition of Adoption was expanded to include not only willingness to implement the program but also modifications and adjustments in hospital systems, including procedures and policies related primarily to training and medical records, to support the program. These adjustments and modifications served as indicators of the extent to which the PURPLE program was integrated into the routine operations of the nursing staff, unit procedures, or organization as a whole.

Implementation in this evaluation was concerned primarily with fidelity, that is, the level of adherence to or compliance with implementation protocols. Fidelity was assessed from the perspectives of both nurses and mothers. The former focused on nurse feedback and knowledge acquisition following their training, as well as descriptions of their intervention delivery during a brief, semi-structured interview. The latter was reflected in mothers' self-reported behaviors as expressed through their behavioral intentions with regard to the program's key messages.

Maintenance for this project was defined in terms of sustainability at the hospital level, by examining the capacity and willingness of the hospital to continue to provide the intervention beyond the funded project period of this study. Maintenance over an extended timeframe was beyond the scope of this study.

2. Methods

2.1. Operationalizing reach

Reach in this process evaluation was operationalized in three ways: (1) the number and percent of hospitals that were recruited successfully over time, (2) the length of time it took for hospitals to complete each of six enrollment steps, and (3) the number and percent of postpartum mothers who received the PURPLE program education.

2.1.1. Number of hospitals enrolled over time

The enrollment of hospitals typically took place in six steps: (1) an initial information meeting with hospital decision makers and project implementation staff; (2) signing of a Memorandum of Agreement by hospital decision makers; (3) onsite training of hospital staff; (4) completion of training by at least 80% of the hospital staff; (5) placement of the first order for program materials; and (6) the beginning of program delivery at the bedside. While the six steps were intended to be separate and sequential, at some hospitals they occurred concurrently or in different sequences. Enrollment of a hospital was considered complete when the hospital had completed all six steps.

2.1.2. Length of time to complete each enrollment step

The dates of completion of each step in the enrollment process were tracked. The mean time it took to enroll hospitals, as well as the time between each step, was calculated. When the steps occurred concurrently or out of sequence, the number of days between the steps was coded, respectively, as zero or negative, and all negative numbers were converted to scores of zero. Three hospitals, considered to be pilot sites with different enrollment and recruitment processes, were not included in this analysis.

2.1.3. Postpartum mothers receiving PURPLE program education

Two data sources were considered in order to determine how many mothers received PURPLE program education. The first was a paper/pencil questionnaire administered to hospital unit staff during annual site visits. Nurses involved in delivering the PURPLE program who attended the hospitals' yearly PURPLE program site visits were asked to estimate the percentage of postpartum mothers in their unit who received the PURPLE program materials and teaching before discharge from the unit (answer categories were ordered percentage ranges, e.g., >90%, 50–90%, 25–49%; see Table 3). The second data source was the number of program material packages distributed per month compared with the number of monthly births on that unit. Hospitals were asked to report, on a monthly basis, the number of births in their unit and the number of program materials remaining in the unit's inventory. Monthly data were aggregated into six-month sets, with completion of enrollment as the start date. The numerator was the known inventory (the number of PURPLE program packages shipped to the unit over the six-month period plus the number of packages remaining in stock at its start) minus the count of materials remaining at the end of that six-month period. The denominator was the number of births during the six-month period. These numbers were used to calculate six-month percentages. If a hospital did not provide a monthly report, birth numbers obtained from the North Carolina State Center for Health Statistics were used for that month. If a hospital did not provide a report for the sixth month, it was not possible to calculate that six-month percentage for that hospital because the number of remaining program materials was not known.

2.2. Operationalizing adoption

Measures of Adoption entailed documentation of the delivery of the program and of the integration of program implementation into hospitals' existing systems. Integration of the program into existing systems was operationalized by whether the delivery of the PURPLE program education to the mother was formalized as a routine process or protocol (e.g., reflected in medical record charting, included in new staff orientation). These data were collected during phone interviews with nurses at 6, 18, and 30 months post-implementation; the survey also addressed implementation and is described more fully in the next section.
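Before turning to the Implementation measures, the calculations described in Sections 2.1.2 and 2.1.3 can be restated in a minimal Python sketch. This is our illustration rather than the study's analysis code; all function names and sample values are hypothetical.

```python
from datetime import date

def step_gaps(step_dates):
    """Days between consecutive enrollment steps (Section 2.1.2).
    Concurrent or out-of-sequence steps yield zero or negative gaps,
    which are coded as 0, as described in the text."""
    return [max((later - earlier).days, 0)
            for earlier, later in zip(step_dates, step_dates[1:])]

def distribution_percentage(shipped, stock_start, stock_end, births):
    """Six-month distribution percentage (Section 2.1.3).
    Numerator: known inventory (packages shipped over the period plus
    packages in stock at its start) minus packages remaining at the end.
    Denominator: births in the period. Returns None when the sixth
    monthly report (and hence the ending stock count) is missing."""
    if stock_end is None:
        return None
    distributed = (shipped + stock_start) - stock_end
    return 100.0 * distributed / births

# Hypothetical example: the MOU is signed one day after the information
# meeting, and the first training occurred before the MOU (negative gap).
print(step_gaps([date(2008, 3, 1), date(2008, 3, 2), date(2008, 2, 28)]))
# -> [1, 0]

# Hypothetical example: 520 packages shipped, 40 in stock at the start,
# 35 remaining at the end, and 580 births -> about 90.5%.
print(distribution_percentage(520, 40, 35, 580))
```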
2.3. Operationalizing implementation

Assessing Implementation was essential to the project's evaluation to determine whether the program was implemented as intended. Fidelity required that each teaching episode include three components: (1) discussion of the PURPLE program's six key messages by a nurse with the mother; (2) the mother's viewing of the DVD while in the hospital; and (3) receipt by the mother of the DVD and booklet to take home with her. The implementation of these components was measured using multiple methods to increase overall accuracy, as each measure had its own strengths and weaknesses. The methods included telephone interviews at six-month intervals with unit nurses who delivered the education, annual unit site visits by evaluation staff, and maternal report.

2.3.1. Telephone interviews

Six months after a unit began implementing the PURPLE program, 10-min phone interviews with nurses were conducted to determine how the unit and the particular nurse provided the program to patients, as reflected in the nurse's discussion of their most recent education session. A research assistant called the unit, asked to speak with a nurse who taught the PURPLE program, and then conducted the interview with whoever was available to participate at that time. Nurses could participate in the interview multiple times, but were not asked to report on the same teaching session more than once. Nurses were asked if they discussed each of the six key teaching points during the teaching session. Additionally, nurses were asked if they completed two key teaching actions: directly handing the mother the PURPLE program materials and having the mother watch the video while in the hospital. These interviews took place yearly, corresponding to 6, 18, and 30 months after program implementation.

2.3.2. Annual site visit discussions

Two site visits, one within two months of the unit's first-year implementation anniversary and the second a year later, were conducted by evaluation staff. Nurses who delivered the PURPLE program and the hospital's PURPLE program coordinator typically participated in the site visits. During the first site visit, nurses were asked to describe how their unit provides the education to families. They were asked who was responsible for providing the teaching at their hospital, how they knew if a family had received the teaching and program materials, and if all parents were expected to watch the video while in the hospital. During the second site visit, the description of implementation collected during the first site visit was provided to participants, who were asked to describe any program delivery changes that may have occurred.

2.3.3. Maternal education

Fidelity to the program model was also measured by querying mothers about their experiences in the hospital. Two methods were used to collect these data: a paper and pencil survey and a telephone survey. The first was a hard copy survey provided to every mother who delivered a baby in a hospital's unit during a two-week period that corresponded to an annual site visit to the unit by project evaluation staff. This survey asked whether the mothers had received, and retained, the six key messages of the program. In an effort to reach all parents, not just those who received the PURPLE program, hospital staff were instructed to hand the survey to parents at a time separate from program delivery. One-third of hospitals did not contribute parent survey data at the first site visit. In order to increase participation in the second year, we altered the protocol by providing posters and Post-It notes to remind nurses to hand out the surveys and by incentivizing the return of the questionnaires (if a unit returned surveys from at least 50% of their discharges, they received a gift card to a local restaurant). These changes increased participation, with only 5.8% of hospitals that participated in the second site visit failing to provide parent survey data.

A one-time telephone survey was conducted in 2010–2011 with a randomly selected sample of North Carolina parents of children under the age of nine months (identified by back-matching publicly available phone numbers to birth certificate data). This survey included questions about whether parents recalled receiving the PURPLE program education and its six messages during their hospital stay, whether the family had shared the information with anyone else, and whether they had used the program materials at home.

2.4. Operationalizing maintenance

Maintenance was measured through a web-based questionnaire conducted in the fall of 2012 with the nurse manager or administrator in each hospital who had decision-making authority for the day-to-day operation of the unit that was providing the program. The purpose of the questionnaire was to learn about a hospital's intent to continue program implementation, factors that influenced their decision whether to continue the program, and modifications or adjustments they anticipated to accommodate their organization's needs. The RE-AIM constructs and how they were operationalized are summarized in Table 1.

This study was approved by the University of North Carolina at Chapel Hill Institutional Review Board.

3. Results

3.1. Reach

3.1.1. Number of hospitals enrolled over time

Fig. 1 tracks the enrollment completion of 83 hospitals over time, as three hospitals served as pilot implementation sites and used different recruitment and enrollment processes. Enrollment began in December of 2007, and all hospitals were enrolled by June 2009. The small number of hospitals enrolled during the first eight months reflected the time needed for program start-up and engagement with hospitals that ultimately were early adopters.
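The cumulative counts plotted in Fig. 1 can be tallied directly from the hospital-level enrollment-completion dates in the administrative data. A minimal sketch follows; it is our illustration with hypothetical dates, not the project's code.

```python
from collections import Counter
from datetime import date
from itertools import accumulate

# Hypothetical enrollment-completion dates; the study tracked one such
# date for each of the 83 actively enrolled hospitals.
completion_dates = [
    date(2007, 12, 14), date(2008, 8, 5),
    date(2008, 8, 21), date(2009, 5, 30),
]

# Count completions per calendar month, then accumulate them into the
# running total that Fig. 1 plots.
per_month = Counter((d.year, d.month) for d in completion_dates)
months = sorted(per_month)
totals = accumulate(per_month[m] for m in months)
for (year, month), total in zip(months, totals):
    print(f"{year}-{month:02d}: {total} hospitals enrolled")
```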

Table 1
Summary of key aspects of constructs measured by the process evaluation.

Reach
- # and % of hospitals recruited. Data source: administrative data. Time point: enrollment.
- Length of time to complete enrollment. Data source: administrative data. Time point: enrollment.
- # and % of postpartum mothers who received education. Data sources: (1) nurse survey; (2) administrative data. Time points: (1) Year 1 and Year 2; (2) from "Go Live" onward.

Adoption
- Charting program in medical record. Data source: nurse phone interview. Time points: 6, 18, and 30 months.
- Training included in new staff orientation. Data source: nurse phone interview. Time points: 6, 18, and 30 months.

Implementation
- Discussion of six key messages. Data sources: nurse phone interview; maternal survey; maternal phone interview. Time points: 6, 18, and 30 months; 1 and 2 years post "Go Live"; 2010–2011.
- View DVD in hospital. Data sources: nurse phone interview; site visit. Time points: 6, 18, and 30 months; 1 and 2 years post "Go Live".
- Mother received program materials. Data sources: nurse phone interview; site visit; maternal phone interview. Time points: 6, 18, and 30 months; 1 and 2 years post "Go Live"; 2010–2011.

Maintenance
- Intent to continue program. Data source: nurse manager/administrator survey. Time point: Fall 2012.

Fig. 1. Enrollment of 83 hospitals from October 2007 to June 2009. (Graph; y-axis: number of hospitals, 0 to 80; x-axis: months, October 2007 through June 2009.)

The number of enrolled hospitals increased during the next eleven months. All 83 hospitals or birthing centers completed enrollment within an 18-month period.

3.1.2. Time to complete each enrollment step

Table 2 describes the mean number of days required to move hospitals from one step in the enrollment process to the next. The lengthiest part of the process, approximately one and a half months, was moving from the first information meeting about PURPLE to the signing of the Memorandum of Agreement (MOU). This step involved identifying, scheduling meetings with, and explaining the program to decision makers within the hospital with authority to accept the program. Scheduling the first training after the MOU was signed and completing training of 80% of the hospital staff each took approximately one month. The complexity of scheduling trainings to accommodate over 80% of the unit staff who would deliver PURPLE extended the time that the process took. On average, 135.2 days elapsed from first contact to program implementation at each hospital.

3.1.3. Postpartum mothers receiving the PURPLE program education

During Year 1 site visits, 795 responses were collected; Year 2 site visits resulted in 628 responses. Nurses who responded to the paper–pencil questionnaire administered during annual site visits to hospitals estimated that over 80% of the eligible mothers received the teaching during their postpartum hospital stay. A greater percentage of eligible mothers (>90%) received the program DVD and booklet (Table 3). These estimated percentages remained constant over the two years of annual site visits. The data drawn from questionnaires were supported by the face-to-face discussions that were part of the site visits.

Similarly, the second strategy for estimating the number of mothers receiving the program materials indicated that the majority of families received them in the hospital. The ratio of program materials distributed to births ranged from a high of 91% during the first period to a low of 85% during the third period. During the second six-month period, program material distribution corresponded to 87.1% of the births. Nearly one fifth (18.2%) of the hospitals were missing at least one six-month period, and 2.3% were missing all data on material distribution versus birth numbers for this estimation strategy. Site visit discussions revealed that staff occasionally provided materials to friends and family. Nevertheless, nurses' estimates of program material distribution were slightly higher than those based on program material counts. During site visit discussions, nurses often asserted, "We give the materials to everyone," and appeared surprised if the program material data suggested otherwise.

3.2. Adoption

All 86 hospitals or birthing centers in North Carolina enrolled and participated, at varying levels, in the program. Implementation of the program was recorded in the patient medical record at most

Table 2
Enrollment time between stages.

Time between stages                    N     Mean # days (SD)
Information meeting to MOU signing     77    47.4 (65.3)
Signing MOU to 1st training session    75    27.7 (27.3)
1st training session to 80% trained    80    35.1 (49.5)
80% trained to order DVDs              81    12.2 (16.4)
Order DVD to PURPLE begins             81    19.1 (16.0)
Enrollment to PURPLE begins            82    135.2 (79.5)

Table 3
Estimate of eligible mothers who received program materials.

Percent received materials    Year 1 (n = 795), N (%)    Year 2 (n = 628), N (%)
>90%
50–90%
25–49%
