Adm Policy Ment Health DOI 10.1007/s10488-015-0663-8

ORIGINAL PAPER

Developing a Quality Assurance System for Multiple Evidence Based Practices in a Statewide Service Improvement Initiative

Georganna Sedlar1 • Andrea Negrete1 • Eric J. Bruns1 • Sarah C. Walker1 • Suzanne E. U. Kerns1

© Springer Science+Business Media New York 2015

Abstract Efforts to implement evidence based practices (EBP) are increasingly common in child-serving systems. However, public systems undertaking comprehensive improvement efforts that aim to increase availability of multiple practices at the same time may struggle to build comprehensive and user-friendly strategies to develop the workforce and encourage adoption, faithful implementation, and sustainability of selected EBPs. Given that research shows model adherence predicts positive outcomes, one critical EBP implementation support is systematic quality, fidelity, and compliance monitoring. This paper describes the development and initial implementation of a quality assurance framework for a statewide EBP initiative within child welfare. This initiative aimed to improve provider practice and monitor provider competence and compliance across four different EBPs, and to inform funding and policy decisions. The paper presents preliminary data as an illustration of lessons learned during the quality monitoring process and concludes with a discussion of the promise and challenges of developing and applying a multi-EBP quality assurance framework for use in public systems.

Keywords Evidence-based practice · Policy · Implementation science · Fidelity · Quality indicators · Academic-state partnership

Correspondence: Georganna Sedlar, [email protected]

1 Division of Public Behavioral Health and Justice Policy, Department of Psychiatry and Behavioral Sciences, University of Washington, 2815 Eastlake Ave E., #200, Seattle, WA 98102, USA

Health and social service systems are under increasing pressure to implement treatments that have been demonstrated through research to be safe and effective (Aarons and Palinkas 2007; Barth et al. 2011; McHugh and Barlow 2010; Weisz et al. 2006). In response, the ‘‘evidence based practice (EBP) movement’’ continues to progress and mature. Research on interventions continues to grow, which increases programmatic options; federal initiatives and state legislation focused on EBP continue to proliferate; and administrators and field practitioners are now largely familiar with the case for using approaches that reflect the best available science on ‘‘what works’’. For those who believe it is paramount to base public investment decisions on research evidence, the establishment of the EBP movement is encouraging. It is also widely acknowledged, however, that building an ‘‘evidence based system’’ is far more complex than supporting implementation of a single manualized treatment (Chorpita et al. 2011; Chorpita and Daleiden 2014). Although the advent of ‘‘implementation science’’ as a discipline has spawned multiple implementation support frameworks, over 60 by one account (Damschroder et al. 2009), most provide guidance for implementing only a single practice or program. Meanwhile, federal initiatives are encouraging and funding system-level strategies that promote the use of multiple EBPs (US Department of Health and Human Services 2013) in order to ensure adequate behavioral and mental health services for the diverse needs of youth and families. This creates a conundrum because the field lacks a framework for monitoring the progress of multiple programs in a standardized way that communicates consistency, facilitates cross-program comparisons, and allows public systems to identify which providers are functioning well within the specific requirements of each program. To accommodate the management


of multiple programs, supportive technologies are needed to design and carry out quality assurance and intervention fidelity monitoring plans. These plans need to be adequately comprehensive as well as feasible and coordinated. Further, they must also balance local information needs with widely varying requirements for documentation of model adherence outlined by the individual program purveyors. The current paper describes a State Child Welfare-University collaboration (the Children’s Administration-University of Washington EBP Partnership) to develop and implement a standardized, cross-intervention quality assurance strategy capable of providing information on the multiple types of quality indicators needed to manage a comprehensive implementation of EBPs for a statewide child welfare system. We describe the policy- and theory-based rationale for developing a cross-EBP, multi-component quality monitoring system, describe the system that was developed, and conclude with lessons learned, recommendations, and implications for other public systems attempting to address similar implementation challenges.

Washington State’s Child Welfare EBP Initiative

The Washington State Children’s Administration (CA) is the child welfare arm of the Washington Department of Social and Health Services (DSHS). Beginning in 2012, the Children’s Administration collaborated with the University of Washington Evidence Based Practice Institute (within the Division of Public Behavioral Health and Justice Policy) to oversee and administer trainings, intervention fidelity monitoring, and quality assurance for practitioners contracted to provide a set of four behavioral health EBPs in Washington State that were identified as relevant to achieving child welfare outcomes, such as reductions in child abuse and neglect and out-of-home placement, and improvements in child safety, placement stability, and family preservation. These four programs are the focus of this paper and are described in more detail below.

Public child welfare systems may be particularly amenable to the application of EBP (Barth 2008; Chaffin and Friedrich 2004) for several reasons. First, the outcomes of interest—such as reduced rates of future abuse, placement stability, improved home environments, and improved child functioning—are relatively unambiguous and explicitly stated by funding entities to which systems are accountable, such as states and the federal government. Such clarity facilitates consistent communication of expectations from funders and administrators, aids the design of effectiveness research studies (and thus the identification of new EBPs), and guides the development of


federal initiatives to promote relevant EBPs (US Department of Health and Human Services 2013). The way in which child welfare systems structure and provide service delivery in these areas may further be conducive to EBP. Child welfare service systems often implement large-scale programmatic initiatives, through which they purchase defined and consistent service types (such as those listed above) that do not vary widely across individual practitioners or provider agencies (Barth 2008; Chaffin and Friedrich 2004). In such initiatives, requests for qualifications and contract language may identify the specific evidence-based interventions to be delivered and/ or stipulate that services used must be based on evidence for effectiveness. Programmatic initiatives are also conducive to the use of manuals, broad-based training and coaching, consistent intervention fidelity and quality monitoring, and development of other program- or systemwide implementation supports. As another example of how the EBP movement has matured and become established, states are increasingly investing in centers of excellence, often located within state-university partnerships, to manage the complex process of facilitating the above mentioned implementation drivers and effectively move science into large-scale practice (Bruns and Hoagwood 2008; Bumbarger and Campbell 2012). In Washington State, the Evidence-Based Practices Institute is one such center. Housed at the University of Washington, this Institute actively partners with State agencies to support a variety of EBP-related efforts. The Children’s Administration-University of Washington EBP Partnership (hereafter referred to as the CA-UW Partnership) was guided by a framework that is based on the conceptual model of implementation research developed by Proctor et al. (2009). As shown in Fig. 1, the model consolidates a range of perspectives on how to conduct uptake and quality improvement activities, and distinguishes but links key implementation processes and outcomes. Among the most basic but important assumptions presented in this theory is that there are two required ‘‘core technologies’’. The first is the evidence-based prevention and intervention strategies themselves; the second are the separate strategies used to support implementation of those interventions in a usual care context such as a public child welfare system (Proctor et al. 2009). The theory proposes that, when the activities within these two technologies are both undertaken in a coordinated and synergistic manner, positive outcomes will occur. These include system and organizational outcomes (e.g., motivated and ready providers with clear incentives to refer to EBPs); service outcomes (e.g., high-fidelity implementation of accessible and effective services); and child and family outcomes (e.g., enhanced safety, improved functioning, reduced out-of-


Fig. 1 Implementation Framework for the Children’s Administration-University of Washington Evidence Based Practice Partnership

home placements). Figure 1 presents examples of system, organizational, and individual levels of action for the CA-UW Partnership.

Monitoring Quality in a Statewide EBP Initiative

For a large-scale child welfare EBP initiative to succeed and achieve broad public health impact, scale-up efforts are needed that align and direct the efforts of multiple stakeholders (e.g., policy makers, funders, managers, providers, and consumers) and facilitate a range of ‘‘implementation drivers’’ (Fixsen et al. 2009). These include provider selection (at an organizational and individual level), provider training, consultation and coaching, staff evaluation and fidelity monitoring, and program evaluation (Fixsen et al. 2005). Among these tasks, collecting and managing quality and fidelity data is a central aspect of any comprehensive implementation strategy. The growing field of implementation science (Aarons et al. 2011; McHugh and Barlow 2010; Proctor et al. 2009) recognizes that a range of system, organizational, and individual factors contribute to faithful implementation of evidence based programs, or treatment fidelity. There is increasing recognition that fidelity needs to extend beyond what happens in the therapy room. As indicated in two important papers, addressing a more comprehensive picture

of the components that are associated with high-quality implementation is warranted (McLeod et al. 2013; Schoenwald et al. 2011). In addition to therapist adherence to treatment protocols (stated simply, ‘‘are they doing what they are supposed to be doing’’), therapist competence (‘‘are they doing it well’’) and treatment differentiation (‘‘are they not doing what they are not supposed to be doing’’) are important to address. As described above, if collecting and reporting on relevant quality indicators for a single EBP is a complicated undertaking (Schoenwald 2011), doing so consistently for multiple EBPs becomes vexingly complex. Moreover, there is limited research, theory, or published examples of effective methods. In their prominent review of implementation research, Fixsen et al. (2005) found hundreds of articles describing provider- and organizational-level fidelity measures as well as many studies of the relationship between fidelity and client outcomes. However, they found no studies evaluating methods of standardizing fidelity information across multiple programs. While no known precedent exists for a cross-intervention fidelity or quality monitoring platform, the National Implementation Research Network did identify the following core components of fidelity monitoring as outlined in the implementation literature (Fixsen et al. 2005): context, compliance, and competence. Thus, the CA-UW Partnership sought to develop an effective monitoring



framework that simultaneously preserved important features of multiple individual program models while also allowing for straightforward reporting of a range of indicators, such as pre-service training, provider compliance, and provider competence, across all models.

Benefits of Quality and Fidelity Monitoring

Intervention fidelity refers to the degree to which a treatment as delivered adheres to techniques prescribed by a particular model (Bond et al. 2009; Kaye and Osteen 2011; Schoenwald 2011; Waltz et al. 1993). As described above, fidelity infrastructure, monitoring, and feedback are considered key drivers of successful and sustained EBP implementation (Aarons et al. 2009; Fixsen et al. 2005). Furthermore, both quality assurance and intervention fidelity monitoring have been linked to positive client outcomes (Barnoski 2009; Huey et al. 2000; Smith-Boydston et al. 2014), thus highlighting the importance of monitoring and supporting both concepts in any successful EBP implementation effort. Intervention fidelity measurement arose as a method to answer research questions such as how various therapeutic orientations (e.g., psychodynamic, client-centered, behavioral) actually differed in practice and to distinguish treatment effects from general effects in controlled trials of psychiatric treatments (Schoenwald et al. 2011). However, as specified treatment models have increasingly become integrated into real-world practice, these same methods were applied to quality monitoring. Consequently, many programs integrate intervention fidelity measurement into their suite of program services not solely or even primarily for the purpose of research or evaluation but for the specific purpose of ongoing quality assurance. For example, treatment supervisors can use such information to provide needed coaching and skills development (Bond 2000; Kaye and Osteen 2011). Some research even suggests that the activity of fidelity measurement itself serves to improve treatment and decrease employee turnover (Aarons et al. 2009). In addition to uses for research and quality assurance, fidelity measurement is also increasingly being used as an accountability tool as many states move towards mandating the use of evidence-based practices for mental and behavioral health services. Through these mandates, state governments often require that providers demonstrate adherence to approved practices. Lack of adherence or other measured shortcomings may result in the reallocation of contracts, making such measurement a potentially ‘‘high stakes’’ function within accountability-based public systems. As fidelity and quality monitoring protocols and practices can vary widely across EBPs, challenges exist when there is a need to collect and use data across multiple empirically based practices, such as for the current EBP


initiative. While creation of a single quality indicator (i.e., intervention fidelity) tracking tool to use across programs within a similar category of treatment (e.g., parent management training) may seem like a potential solution, such efforts have proven challenging. For example, Schoenwald et al. (2011) found such differences among 11 different adherence measures for a single class of treatments for disruptive behavior disorders that they were unable to identify common terminologies or a common frame of reference across all of these measures. While recent efforts to design measures to study ‘‘services as usual’’ in diverse settings (Garland et al. 2010) and develop frameworks of ‘‘common elements’’ of evidence-based treatments (Barth et al. 2011) provide potential routes to cross-program fidelity measurement, the development of such approaches for a statewide system is clearly in a nascent stage. While the current acceptance of the importance of fidelity and quality monitoring represents an advance over traditional service delivery in which quality assurance activities are nearly nonexistent (Fixsen et al. 2005), the range of fidelity domains and approaches creates two potential difficulties for large scale implementation initiatives involving multiple EBPs that go beyond the perennial challenge of finding financial resources to monitor program quality. First, programs vary in their specification of rigorous and tested fidelity measurement protocols. Some programs, like Multisystemic Therapy (MST), are far along in the development and validation of tools and specification of a fidelity monitoring process. Other programs may have a manual and/or ongoing consultation but no developed tool for observation of practitioner skills in vivo such as through review of audio or video recordings. The second challenge of an implementation effort involving multiple programs is the lack of consistency in fidelity domains and requirements, even among those programs with fairly sophisticated quality assurance systems. Consequently, work must be done to align various quality assurance systems into a digestible framework that also preserves important measurement parameters that may illuminate specific areas in need of improvement.

Designing a Quality Monitoring Framework

Evidence Based Interventions

The QA framework was applied specifically to four child welfare-relevant parenting programs focused on child wellbeing and safety: The Incredible Years (IY), Parent–Child Interaction Therapy (PCIT), Triple P Positive Parenting Program (Triple P), and SafeCare (see descriptions and citations below). In addition to fidelity monitoring, the CA-UW Partnership administered training and coaching for


these four programs. At the initiation of the CA-UW Partnership, four additional program services were included in the array of EBPs in child welfare: Homebuilders (Kinney et al. 1991), Project KEEP (Price et al. 2008), Multidimensional Treatment Foster Care (Leve et al. 2009), and Functional Family Therapy (Alexander et al. 1998). These programs were not included in the CA-UW Partnership’s quality assurance strategy due to reasons such as the existence of a longstanding separate monitoring system operated by a local purveyor (Homebuilders) or relatively low levels of utilization in the state (KEEP, MTFC, FFT). The following section provides a brief overview of each EBP’s monitoring strategy.

Incredible Years

The IY consists of three separate training curricula targeted for parents, teachers, or children to prevent and treat early onset of oppositional defiant disorder and conduct disorder in children by addressing parenting, child, and school risk factors (for further program description, see Webster-Stratton 2005; Marcynyszyn et al. 2011). Providers in the present project were trained only in the parent-level intervention, which is typically delivered in a group setting by one to two group leaders. Prospective IY providers participate in 2 days of training for the IY Parents and Baby Program and a 3-day combined training for the IY Toddler Basic Program and Preschool Basic Program. During training, providers must complete a role-play and live skill rehearsals prior to being certified. IY developers require that ongoing fidelity monitoring consist of peer group supervision (including videotape review) and expert consultation through a certification process (Webster-Stratton 2012). IY requires review of video/audiotapes to ensure an 85 % fidelity rate per national standards.

Parent–Child Interaction Therapy

PCIT is a dyadic intervention designed to address the parenting needs of children who have moderate to significant behavior problems. PCIT consists of a child-directed interaction phase and a parent-directed interaction phase in which parents learn a new set of parenting skills aimed at disrupting coercive parent behaviors and improving parent–child interactions (for further descriptions of PCIT, see Chaffin et al. 2011; Urquiza and McNeil 1996). Providers must attend 40 h of training and receive ongoing training for up to 1 year to actively practice skills and demonstrate proficiency. Fidelity monitoring for PCIT providers involves demonstrating skill mastery at the conclusion of a workshop training, followed by twice-monthly consultation with an expert PCIT provider or Master Trainer. Four videotapes per year are required to be submitted and

monitored for fidelity to protocol. Providers are also required to attend an annual booster training.

Triple P Positive Parenting Program

The Triple P Positive Parenting Program (typically termed Triple P) is a multi-level, public health-oriented parent consultation program designed to support parents in effective management of child emotional and behavioral concerns. As a Triple P system, there are five varying service intensity levels available to meet diverse needs of families (for a detailed description, see Sanders et al. 2003). In this project, providers were trained in Level 4 Standard and Teen, and Level 5 Pathways (a program variant for parents at risk for child maltreatment). To be trained in this suite of interventions, providers attended an initial 5-day training event. Triple P has an ‘‘accreditation process’’ in which providers are required to role-play specific competencies associated with program delivery. An expert trainer determines if the competencies meet minimum requirements. Currently Triple P does not have requirements around ongoing fidelity monitoring. Provider agencies are encouraged to develop a peer support strategy to encourage adherence to the model and provider self-initiated growth in using the Triple P model (Mazzucchelli and Sanders 2010).

SafeCare

SafeCare is designed to support parents with children from birth through age 5 who are at risk for or who have a history of child maltreatment, especially neglect. The home-based program addresses the various ecological factors that place many parents at risk for child maltreatment via behavioral skills training delivered in three modules: (1) child health; (2) safety in the home; and (3) parent–child or parent–infant interactions and basic parenting skills (Chaffin and Friedrich 2004; Chaffin et al. 2012; Edwards and Lutzker 2008). Trained home visitors deliver services weekly in the parent’s home for 18–20 weeks. Due to the program’s focus on neglect, SafeCare serves a particularly important role in child welfare settings. SafeCare requires providers to attend a 4-day training to become a home visitor and to perform live in-training role-plays and quizzes. To reach certification, providers must deliver SafeCare with fidelity in a minimum of three sessions across the three modules. SafeCare employs a coaching model to support practitioner fidelity. One provider from each agency attends an additional day of training and completes required role-plays to become a coach in order to provide on-site coaching to home visitors. Post-training, coach certification requires proficiency in fidelity ratings of Home Visitors, leading team meetings, and being



successfully observed (live or via video) providing coaching to Home Visitors. Following certification, practitioners submit monthly audio or videotapes to a SafeCare coach, who monitors fidelity. They are also required to attend an annual booster training.

Common Constructs of the Cross-EBP Quality Assurance Framework

Given the goals of the CA-UW Partnership and the important links between quality assurance, intervention fidelity, and positive outcomes, we devised a framework that aimed to satisfy the need for consistent and interpretable fidelity and quality information across EBPs yet also met the unique fidelity requirements within each. Although the identified EBPs share common goals (e.g., improving family relationships, reducing problematic behaviors), as shown in Table 1, individual adherence requirements vary widely. Creating a structured yet individualized system presented unique considerations. First, the system had to strike a balance between the relevant information needs of the various stakeholders, including a large public system such as child welfare, in some cases the treatment purveyors, and the individual consultants working with providers. Second, the system had to be efficient and not overburden its users. Finally, the system had to be flexible enough to accommodate changes (e.g., adding another EBP, a change in programmatic requirements), able to be implemented at a large scale (i.e., statewide, hundreds of providers), and sustainable over time. The resulting framework includes three areas of quality that could be measured across multiple EBPs: context, compliance, and competence.

Context pertains to factors that must exist in order for the program to be implemented, such as sufficient provider training and organizational climate/support. For the purposes of the quality monitoring framework developed for this project, context refers primarily to whether a provider: (a) receives the necessary pre-service training and ongoing training (e.g., ‘‘booster’’ sessions); and (b) has a sufficient number of cases for the Trainer/Consultant to make a reasonable and reliable assessment of quality and fidelity. We believe that it was important to capture case information as a separate contextual category, so as to not unfairly ‘‘penalize’’ providers for failing to meet quality criteria when the primary shortcoming was an inadequate number of referrals. In addition, assigning ‘‘number of cases’’ as a distinct monitoring criterion across EBPs yielded unique, critical, and actionable information available to the child welfare system to use in implementation efforts. As an aside, this strategy has the potential to enhance understanding of regional differences in EBP utilization, increase focus on outreach and education efforts, and guide policy and funding decisions around investment in specific EBPs.

Compliance refers to whether a provider followed through with the necessary activities to establish fidelity. For example, if the fidelity protocol requires an expert rater to review session videotapes, the provider needs to supply the tapes to be coded. If the fidelity protocol includes an expert consultant verifying that a provider discusses their therapeutic interventions in a model-consistent way, the provider needs to attend the consultation calls on which these discussions occur. Without participation in these activities, the extent to which a given provider is operating with fidelity to the model is unknown. The different compliance requirements in our model are indicated in Table 1.

Competence refers to the requisite skills demonstrated by the provider in using the intervention, or how well they perform the intervention. Within this broad category of Competence, subcategories were established across all EBPs that consultants considered when rating providers: (a) dosage (whether the provider used an adequate amount of the intervention), (b) quality (whether they demonstrated adequate skills in using the intervention), and (c) adherence (whether they used the essential protocol elements of the EBP that are not subject to adaptation).

Table 1 Intervention fidelity requirements for the statewide service improvement initiative

                                        IY    SafeCare   PCIT   Triple P   Data collection strategy
Context
  Pre-service Training                  X     X          X      X          Purveyor documents attendance
  Competency criteria to start          X     X          X      X          Purveyor documents status
  Required booster sessions             X     X          X      ?          Purveyor or T/C documents attendance
Compliance
  Expert consult/coaching required      X     X          X      ?          T/C documents attendance
  Peer supervision required             X     O          O      X          Providers ‘attest’ to attendance
  Tape review required                  X     X          X      O          Coaches or T/C document compliance
Competence
  Ongoing competencies                  X     X          X      ?          T/C documents status

X required element, ? added requirement for CA-UW EBP initiative, O not required and not tracked



Implied within this definition of competence is that, by using the essential EBP protocol elements, providers are also not providing elements that are not considered essential (i.e., differentiation). These competency areas were selected because they have been demonstrated in the literature to pertain to relevant aspects of intervention fidelity (Carroll et al. 2007). Items that captured these competency areas were then developed by project faculty and staff. For three of the four EBPs being tracked, provider competence was assessed by audio- or video-taped review of sessions in accordance with pre-established competency benchmarks set forth in clinical research trials and/or by the treatment developers. One EBP, the Triple P Positive Parenting Program (Triple P), does not currently have a standardized intervention fidelity tool. Therefore, the Triple P consultant, who was a certified trainer in Triple P, developed a brief provider report survey (13 questions) that assessed competency areas in accordance with the Triple P priorities of a self-regulatory framework. For example, this survey prompted providers to do a critical self-assessment, identifying areas of strength and areas for improvement, and then develop personalized goals for improvement. The overall quality assurance framework provided a consistent structure for documenting competency across practices while recognizing that specific skills and active ingredients across these practices will vary.
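To make the shape of a cross-EBP record concrete, the sketch below shows one way a single provider-month entry could be represented, with context treated as a gating condition and compliance and the three competence subcategories captured as yes/no indicators. It is an illustrative simplification written in Python; the class and field names are hypothetical rather than the Partnership's actual data model.

from dataclasses import dataclass
from typing import Optional


@dataclass
class CompetenceRating:
    """Trainer/Consultant judgment of how well a provider delivers the EBP."""
    dosage_adequate: bool        # adequate amount of the intervention delivered
    quality_adequate: bool       # requisite skills demonstrated
    adherent_to_protocol: bool   # essential, non-adaptable elements used

    def meets_criteria(self) -> bool:
        return self.dosage_adequate and self.quality_adequate and self.adherent_to_protocol


@dataclass
class ProviderMonthRecord:
    """One provider's quality indicators for one EBP in one month (hypothetical schema)."""
    provider_id: str
    ebp: str                                           # e.g., "IY", "PCIT", "Triple P", "SafeCare"
    sufficient_cases: bool                             # context gate: enough cases to rate reliably
    compliant_with_monitoring: Optional[bool] = None   # attended calls, submitted tapes, etc.
    competence: Optional[CompetenceRating] = None

    def needs_improvement_plan(self) -> bool:
        """A 'no' on any compliance or competence criterion triggers a TA Support Plan."""
        if not self.sufficient_cases:
            # Lack of referrals is reported separately, not treated as a quality failure.
            return False
        if self.compliant_with_monitoring is False:
            return True
        return self.competence is not None and not self.competence.meets_criteria()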

Cross-EBP Data Collection

Having developed a cross-EBP quality assurance framework, we next developed a mechanism by which to capture quality data collected by EBP consultants and consolidate information in a way that rendered information across the common categories. This mechanism took the form of a web-based platform termed the ‘‘Toolkit’’, through which consultants accessed and entered information for each individual provider on a monthly basis. This system employed a series of steps that aligned with the framework. First, Trainer/Consultants documented whether a provider had enough cases on which to answer questions about their use of the EBP for that month (i.e., the context in which an EBP is delivered, as described earlier). If not, the Trainer/Consultant did not continue to enter the remaining quality indicator data because there was insufficient basis on which to make a reliable assessment. As described earlier, such information is of unique interest to CA leadership, allowing them to address barriers to referrals. Assuming sufficient cases, the Trainer/Consultant documented each provider’s compliance with fidelity monitoring requirements and perceived competence in delivering the model. Compliance was measured as a yes/no, indicating if a provider participated in the necessary monitoring activities. Competence was measured through Trainer/Consultant assessment (yes/no) that the provider (a) delivered the intervention with fidelity, (b) demonstrated the requisite skills, and (c) applied a reasonable dosage of the intervention. A ‘‘no’’ response to any of the compliance or competency criteria triggered an additional step of developing a quality improvement plan (as described in the next section). In order to maximize efficiency of the system and reduce time demands on the Trainers/Consultants, answers were pre-populated with responses from the previous month. Thus, the Trainer/Consultant would only need to confirm these responses if the clinician’s fidelity status remained unchanged from 1 month to the next. Client-related information was not entered into the ‘‘Toolkit’’, reducing concerns about data security.

Each month, information from the ‘‘Toolkit’’ was provided to CA in a user-friendly report that listed individual provider intervention fidelity status (i.e., information on whether they had enough cases, compliance quality indicators, and competence quality indicators). If necessary, information was also provided if a quality improvement plan was started for a provider, which is described in more detail in the following section. A breakdown of the number of providers and Trainer/Consultants (identified by letters) across the four EBPs is shown in Table 2.
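A minimal sketch of this monthly entry sequence follows: the context gate comes first, answers default to the previous month's values, and any ‘‘no’’ on compliance or competence raises a flag for a quality improvement plan. The function and field names are hypothetical, and the logic is simplified relative to the actual Toolkit.

from typing import Optional

def enter_monthly_status(previous: Optional[dict],
                         sufficient_cases: bool,
                         compliant: Optional[bool] = None,
                         competent: Optional[bool] = None) -> dict:
    """Record one provider's monthly quality indicators (hypothetical Toolkit logic).

    The context gate comes first: without sufficient cases, no compliance or
    competence ratings are entered. Otherwise, answers are pre-populated from
    last month so the Trainer/Consultant only confirms or updates them.
    """
    record = {"sufficient_cases": sufficient_cases,
              "compliant": None, "competent": None,
              "improvement_plan_needed": False}

    if not sufficient_cases:
        # Reported to CA leadership as a referral barrier, not a quality failure.
        return record

    # Pre-populate from the previous month, then apply any changes.
    if previous is not None:
        record["compliant"] = previous.get("compliant")
        record["competent"] = previous.get("competent")
    if compliant is not None:
        record["compliant"] = compliant
    if competent is not None:
        record["competent"] = competent

    # Any "no" on compliance or competence triggers a TA Support Plan.
    record["improvement_plan_needed"] = (record["compliant"] is False
                                         or record["competent"] is False)
    return record


# Example: a provider who met criteria last month but missed a consultation call this month.
last_month = {"sufficient_cases": True, "compliant": True, "competent": True}
this_month = enter_monthly_status(last_month, sufficient_cases=True, compliant=False)
assert this_month["improvement_plan_needed"] is True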

Using the Monitoring Framework and Toolkit to Address Quality and Fidelity Concerns

As described by Schoenwald (2011), ‘‘fidelity monitoring in the real world’’ has multiple applications and potentially weighty implications, not the least of which are ensuring provider accountability and providing feedback to individual practitioners. In order to ensure a consistent and transparent approach to achieving these aims, we worked with CA and several EBP developers to design a uniform ‘‘EBP Quality Assurance Plan’’ through which to apply data to remedy any deficits in performance or quality indicators. As shown in Fig. 2, the response process consisted of two phases: (1) development of a quality improvement plan (referred to as a TA Support Plan) and (2) a Formal Improvement Plan.

Table 2 Count of EBP treatment providers monitored by consultant 2013–2014

EBP         Consultant   Count
PCIT        A            21
PCIT        B            30
Triple P    C            80
IY          D            69
SafeCare    D            47
Total                    247

IY and SafeCare shared the same consultant



Fig. 2 Cross Program Quality Indicator Reporting and Quality Improvement Planning

The TA Support Plan was the first step in the quality improvement process. If a provider did not meet compliance or competence requirements at the monthly review, the Trainer/Consultant initiated and developed a TA Support Plan to provide implementation support and identify solutions to the specific fidelity and quality concerns. The Trainer/Consultant outlined in narrative form the specific actions a provider would take to address any concerns raised by the data collected. For example, a TA Support Plan was developed for one provider whose ratings by the Trainer/Consultant did not meet criteria for quality and adherence to essential EBP elements under the Competence domain. This plan consisted of a targeted exercise to identify the provider’s skill level and provision of a short ‘‘booster’’ session that consisted of reviewing the treatment protocol and conducting role-plays. A practitioner could be on a TA Support Plan for up to 3 months. If concerns remained unresolved at the end of the 3-month period, a Formal Improvement Plan was initiated, at which point CA would concurrently initiate its own internal action plan with the provider. The Formal Improvement Plan could last up to another 3 months, for a total of six consecutive months during which a provider could be engaged in remedial technical assistance support. If at any point during this 6-month period a practitioner resolved quality concerns, the plan was marked as ‘‘resolved’’ and quality assurance as usual resumed. If a provider failed to resolve concerns after six consecutive


months, CA leadership engaged in decision making about the provider’s status as a contracted provider.
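Read as a protocol, this escalation path resembles a small state machine: routine monitoring, up to 3 months on a TA Support Plan, up to 3 further months on a Formal Improvement Plan, and then a contract-level decision. The sketch below paraphrases those transition rules; the state names and function are illustrative rather than part of the actual system, and the Trainer/Consultant discretion to extend timelines is omitted for brevity.

from enum import Enum

class QAState(Enum):
    ROUTINE_QA = "quality assurance as usual"
    TA_SUPPORT_PLAN = "TA Support Plan (months 1-3)"
    FORMAL_IMPROVEMENT_PLAN = "Formal Improvement Plan (months 4-6)"
    CONTRACT_REVIEW = "CA decision making about contracted status"

def next_state(state: QAState, months_on_plan: int, concerns_resolved: bool) -> QAState:
    """Advance one month through the quality improvement process (illustrative only)."""
    if concerns_resolved:
        return QAState.ROUTINE_QA          # resolved at any point: resume QA as usual
    if state is QAState.ROUTINE_QA:
        return QAState.TA_SUPPORT_PLAN     # unmet compliance/competence criteria at monthly review
    if state is QAState.TA_SUPPORT_PLAN:
        return (QAState.TA_SUPPORT_PLAN if months_on_plan < 3
                else QAState.FORMAL_IMPROVEMENT_PLAN)
    if state is QAState.FORMAL_IMPROVEMENT_PLAN:
        return (QAState.FORMAL_IMPROVEMENT_PLAN if months_on_plan < 6
                else QAState.CONTRACT_REVIEW)
    return state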

Application of the Framework

Refining the Framework and the Data System

A key step in developing the CA-UW Partnership quality assurance framework involved engagement and communication with relevant stakeholders, including EBP developers, consultants, child welfare leadership, and direct providers. First, we recognized the importance of reaching out to the EBP developers to ensure that our approach to designing a cross-EBP monitoring framework aligned with recognized intervention fidelity standards used in the initial clinical trials/research studies. This activity was particularly important for Triple P, given that this EBP did not have pre-existing operationalized fidelity measures. Second, university staff communicated regularly with an identified ‘‘point person’’ within CA leadership to report on progress, seek feedback, and troubleshoot anticipated challenges and obstacles. In addition, Trainers/Consultants provided feedback on the usability and features of the web-based system, and this feedback was incorporated into further refining the web-based platform. Once the system was finalized, it was introduced on a broad scale to EBP providers. Providers were allowed a 3-month ‘‘grace period’’ to become familiar with the new process. A ‘‘cheat sheet’’ was eventually developed to aid the


Trainers/Consultants in how to complete the monthly fidelity status reports via the Toolkit.

Benefits and Lessons Learned

Although the QA framework focus has shifted to sustainability of EBPs, and the CA-UW Partnership has undergone significant evolution over time, we have seen many benefits and learned valuable lessons that we anticipate will improve future implementation efforts and be instructive for other states and sites. Salient benefits and lessons learned thus far are described below.

Data Guides Policies and Promotes Actions

Figure 3 displays information in each of the three primary quality assurance categories aggregated over 7 months for a total of 186 providers across the four EBPs. The actual names of the EBPs have been removed and replaced with random letters given that the purpose of the data is to illustrate certain points rather than evaluate specific EBPs. Although preliminary, these examples of data reporting yielded actionable items to strategize around and informed new policies and procedures. One such item highlighted by the quality assurance system was the lack of sufficient EBP referrals. Only 57 % of EBP F practitioners and 47 % of EBP X practitioners had adequate client referrals to permit further evaluation. Even the EBP with the highest number of providers with sufficient cases (EBP R, at 76 %) still demonstrated a substantial rate of insufficient cases upon which to assess

their competence (see Fig. 3). Such data underscored the need to examine potential barriers in the referral process (e.g., enhancing social worker knowledge of EBPs) that could be addressed and remedied. By separating out caseload from an assessment of a provider’s skills, providers and/or agencies that were struggling with having sufficient referrals for a particular EBP could be identified and the reason for low referrals could be problem solved directly. This documentation of caseload resulted in increased interaction between the child welfare leadership and the provider agencies and subsequent development of a communication protocol for handling insufficient cases. In addition, this documentation of the lack of cases for some programs resulted in improved outreach efforts. The data presented in Fig. 3 also illustrate differential rates of compliance across the EBPs. Despite most practitioners receiving adequate referrals, EBP R had a considerably lower compliance rate (47 %) compared to the other three practices. These data prompted us to initiate questions regarding what might account for this difference, such as whether specific fidelity requirements for this EBP presented challenges, or whether a certain agency accounted for most of the low compliance, which would suggest organizational factors (e.g., poor support for EBP use, competing time demands) were involved. Finally, although competency ratings of providers remained quite high across all EBPs, a meaningful minority of practitioners providing two EBPs showed insufficient competence that pointed to the need for corrective action. We were also able to examine changes in provider status for a specific EBP. Figure 4 presents fidelity information for a single EBP over a 7-month time period. EBP cases

Fig. 3 Sample report showing mean percent of providers meeting criteria by quality indicator category and EBP over 7 months



increased in June and July, which was expected following the initial training in May. Interestingly, compliance dropped off in January 2014. We were able to use this information to explore reasons for the decline. We learned that certain providers were requiring many reminders to complete the monthly requirements in order to maintain compliance with delivering the EBP. This lapse in compliance could be addressed at the Trainer/Consultant level, as well as at the regional and agency level.

Recognizing and Appreciating Tension Between State Partners’ Goals and Clinical Perspectives

Child welfare officials face unique constraints and demands at federal, state, and local levels, which often contrast with the more circumscribed concerns of a clinical provider or Trainer/Consultant. While child welfare and mental health providers share the common goal of achieving positive outcomes for youth and families (e.g., improved well-being), their approaches and perspectives on achieving these goals often diverge. Whereas child welfare often has an eye toward provider performance in order to inform decision making around contracts and funding, Trainers/Consultants focus on provider performance in large part because of the known fidelity-outcome connection. The information obtained from the quality assurance framework allowed child welfare leadership to use providers’ status as a factor in decision making about renewing or discontinuing contracts. However, Trainers/Consultants were concerned about the implications of their role in this decision making process, as they did not want to inadvertently penalize clinicians, especially so early in training. Thus, quality monitoring had the potential to be a punitive practice or a supportive one. This challenge was addressed by keeping contract concerns separate from the improvement plan within the quality assurance framework. We developed a two-pronged, differentiated

notification system that informed CA about a lack of cases immediately but allowed Trainers/Consultants to work with clinicians on compliance before reporting concerns. If a provider moved from receiving technical assistance support to a formal improvement plan, child welfare initiated its own concurrent internal action process, upon which contract decisions would be made. This plan also allowed Trainers/Consultants discretion to extend a provider’s technical assistance period based on individualized circumstances (e.g., if a clinician experienced a personal emergency, went on leave, etc.). When divergent goals such as those noted above exist, it is important for both sides to learn about and appreciate the constraints and needs of the other, while recognizing that solutions will likely involve compromise and flexibility from both sides. Inviting all partners to the table early and often yields multiple opportunities for engagement and buy-in, which ultimately leads to a system that has relevance and utility.

Quality Data Illuminated Actual Costs of Sustainment of Quality EBP Programming

Developing this quality assurance framework highlighted the ongoing costs of EBP sustainment. These costs included not only the expected fiscal expenditures but also more hidden costs in terms of effort and time from Trainers/Consultants. A certain level of accountability was built into monthly quality assurance reporting, as the Trainers/Consultants were regularly aware of those clinicians who needed support on an ongoing basis. Provision of this support required development of a technical assistance support plan and follow-up, which took additional time from the Trainers/Consultants and prompted ongoing discussions about the time needed to provide this support. In an attempt to minimize the burden of these unexpected costs, we included Trainers/Consultants in the development of the fidelity system in order to

Fig. 4 Sample report showing mean percentage of providers meeting criteria by quality indicator category for one specific EBP over 9 months



enhance buy-in and to build a system that had the highest degree of relevance to their efforts. More broadly, as state systems (such as child welfare) adopt EBPs on a large scale, it is essential that all possible upfront project costs and the ongoing resources (e.g., financial, time) needed to sustain them are considered. This calculation may prove particularly challenging for state systems whose funding allocations are limited or fluctuate from year to year. It is important for state systems and partners to think systematically, flexibly, and creatively when developing a long-term plan for EBP implementation, including projecting costs at each phase of system development, exploring different ways to systematically collect information on quality indicators in a way that is sustainable, and anticipating the need for additional resources and preparing for unexpected costs.

Future Directions

State-academic partnerships continue to yield mutual benefits beyond the inherent challenges of developing and fostering them, including promoting dissemination and implementation of EBP on a large scale, answering questions relevant to a range of stakeholders, and providing a rich naturalistic environment in which to examine key processes within the field of translational research (Bruns and Hoagwood 2008). Partnerships such as the CA-UW Partnership also provide opportunities to monitor and ensure ongoing fidelity to EBP, which is central to successful sustainment. As noted earlier, the primary objective of developing the aforementioned cross-EBP quality assurance framework was to capture and provide key quality and fidelity information across different EBPs within a centralized source. The utility of this framework lies in its ability to yield information that is relevant to multiple stakeholders, including child welfare leadership, Trainers/Consultants, organizational leadership, and providers. The regular and consistent collection and distribution of pertinent intervention fidelity and quality assurance information can be used to inform decision making and improve efforts at various levels of implementation (e.g., state level, organization level). The framework’s utility is demonstrated in the ability to examine patterns within and across EBPs, provider agencies, and individual practitioners. Development of the cross-EBP quality assurance framework and monitoring system described in this paper has proven to be an iterative process, with identified next steps to improve upon existing implementation efforts. A next step for the team is to evaluate the external validity of the framework by examining relationships between quality indicators as measured by this framework and outcomes of interest for

child welfare, such as permanency, safety, and well-being. Also, it would be helpful to further examine what factors might account for particular providers’ struggles in achieving a certain level of adherence and quality, and to develop strategies for overcoming them. While initially this framework was viewed mainly as a mechanism by which to help child welfare leadership guide decision making related to funding and policy matters, it was recognized that there is ‘‘value added’’ for the consultants in having a centralized place to keep track of quality data for individual providers. Future efforts might examine ways to utilize this framework to improve the training and consultation process, such as via ongoing reminders or prompts for consultants and providers, and a centralized repository where providers could ideally upload and share information that would enhance feedback during the consultation process. In general, it will be critical to evaluate the feasibility and usefulness of the quality assurance framework for end users, given that its ultimate value is predicated on its ability to meet the information needs of a diverse set of stakeholders invested in using EBP to improve child outcomes.

References

Aarons, G. A., Hurlburt, M., & Horwitz, S. M. (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 4–23. doi:10.1007/s10488-010-0327-7.
Aarons, G. A., & Palinkas, L. A. (2007). Implementation of evidence-based practice in child welfare: Service provider perspectives. Administration and Policy in Mental Health, 34(4), 411–419.
Aarons, G. A., Sommerfeld, D. H., Hecht, D. B., Silovsky, J. F., & Chaffin, M. J. (2009). The impact of evidence-based practice implementation and fidelity monitoring on staff turnover: Evidence for a protective effect. Journal of Consulting and Clinical Psychology, 77(2), 270–280. doi:10.1037/a0013223.
Alexander, J., Barton, C., Gordon, D., Grotpeter, J., Hansson, K., Harrison, R., & Sexton, T. (1998). Functional Family Therapy: Blueprints for violence prevention, Book Three. Blueprints for Violence Prevention Series (D. S. Elliott, Ed.). Boulder: Center for the Study and Prevention of Violence, Institute of Behavioral Science, University of Colorado.
Barnoski, R. P. (2009). Providing evidence-based programs with fidelity in Washington State juvenile courts: Cost analysis. Olympia: Washington State Institute for Public Policy.
Barth, R. P. (2008). The move to evidence-based practice: How well does it fit child welfare services? Journal of Public Child Welfare, 2(2), 145–171. doi:10.1080/15548730802312537.
Barth, R. P., Lee, B. R., Lindsey, M. A., Collins, K. S., Strieder, F., Chorpita, B. F., & Sparks, J. A. (2011). Evidence-based practice at a crossroads: The timely emergence of common elements and common factors. Research on Social Work Practice, 22(1), 108–119. doi:10.1177/1049731511408440.
Bond, G. R. (2000). Introduction to a special section: Measurement of fidelity in psychiatric rehabilitation research. Mental Health Services Research, 2(2), 73.
Bond, G. R., Drake, R. E., McHugo, G. J., Rapp, C. A., & Whitley, R. (2009). Strategies for improving fidelity in the national evidence-based practices project. Research on Social Work Practice, 19(5), 569–581.
Bruns, E. J., & Hoagwood, K. E. (2008). State implementation of evidence-based practice for youths, Part I: Responses to the state of the evidence. Journal of the American Academy of Child and Adolescent Psychiatry, 47(4), 369–373.
Bumbarger, B. K., & Campbell, E. M. (2012). A state agency-university partnership for translational research and the dissemination of evidence-based prevention and intervention. Administration and Policy in Mental Health and Mental Health Services Research, 39(4), 268–277. doi:10.1007/s10488-011-0372-x.
Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2(1), 40.
Chaffin, M. J., & Friedrich, B. (2004). Evidence-based treatments in child abuse and neglect. Children and Youth Services Review, 26(11), 1097–1113. doi:10.1016/j.childyouth.2004.08.008.
Chaffin, M. J., Funderburk, B., Bard, D., & Valle, L. A. (2011). A combined motivation and parent–child interaction therapy package reduces child welfare recidivism in a randomized dismantling field trial. Journal of Consulting and Clinical Psychology, 79(1), 84–95. doi:10.1037/a0021227.
Chaffin, M. J., Hecht, D. B., Bard, D., Silovsky, J. F., & Beasley, W. H. (2012). A statewide trial of the SafeCare home-based services model with parents in Child Protective Services. Pediatrics, 129(3), 509–515.
Chorpita, B. F., Bernstein, A., & Daleiden, E. L. (2011). Empirically guided coordination of multiple evidence-based treatments: An illustration of relevance mapping in children’s mental health services. Journal of Consulting and Clinical Psychology, 79(4), 470–480.
Chorpita, B. F., & Daleiden, E. L. (2014). Structuring the collaboration of science and service in pursuit of a shared vision. Journal of Clinical Child and Adolescent Psychology, 43(2), 323–338.
Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science. doi:10.1186/1748-5908-4-50.
Edwards, A., & Lutzker, J. R. (2008). Iterations of the SafeCare model: An evidence-based child maltreatment prevention program. Behavior Modification, 32(5), 736–756.
Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core implementation components. Research on Social Work Practice, 19(5), 531–540.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network.
Garland, A. F., Hurlburt, M. S., Brookman-Frazee, L., Taylor, R. M., & Accurso, E. C. (2010). Methodological challenges of characterizing usual care psychotherapeutic practice. Administration and Policy in Mental Health and Mental Health Services Research, 37(3), 208–220. doi:10.1007/s10488-009-0237-8.
Huey, S. J., Jr., Henggeler, S. W., Brondino, M. J., & Pickrel, S. G. (2000). Mechanisms of change in multisystemic therapy: Reducing delinquent behavior through therapist adherence and improved family and peer functioning. Journal of Consulting and Clinical Psychology, 68(3), 451–467. doi:10.1037/0022-006X.68.3.451.
Kaye, S., & Osteen, P. J. (2011). Developing and validating measures for child welfare agencies to self-monitor fidelity to a child safety intervention. Children and Youth Services Review, 33(11), 2146–2151.
Kinney, J., Haapala, D., & Booth, C. (1991). Keeping families together: The Homebuilders model. New York: A. de Gruyter.
Leve, L. D., Fisher, P. A., & Chamberlain, P. (2009). Multidimensional treatment foster care as a preventive intervention to promote resiliency among youth in the child welfare system. Journal of Personality, 77(6), 1869–1902. doi:10.1111/j.1467-6494.2009.00603.x.
Marcynyszyn, L. A., Maher, E. J., & Corwin, T. W. (2011). Getting with the (evidence-based) program: An evaluation of the Incredible Years parenting training program in child welfare. Children and Youth Services Review, 33(5), 747–757.
Mazzucchelli, T. G., & Sanders, M. R. (2010). Facilitating practitioner flexibility within an empirically supported intervention: Lessons from a system of parenting support. Clinical Psychology: Science and Practice, 17(3), 238–252.
McHugh, K. R., & Barlow, D. H. (2010). The dissemination and implementation of evidence-based psychological treatments: A review of current efforts. The American Psychologist, 65(2), 73–84.
McLeod, B. D., Southam-Gerow, M. A., Tully, C. B., Rodríguez, A., & Smith, M. M. (2013). Making a case for treatment integrity as a psychosocial treatment quality indicator for youth mental health care. Clinical Psychology: Science and Practice, 20(1), 14–32. doi:10.1111/cpsp.12020.
Price, J. M., Chamberlain, P., Landsverk, J., Reid, J. B., Leve, L. D., & Laurent, H. (2008). Effects of a foster parent training intervention on placement changes of children in foster care. Child Maltreatment, 13(1), 64–75. doi:10.1177/1077559507310612.
Proctor, E. K., Landsverk, J., Aarons, G., Chambers, D., Glisson, C., & Mittman, B. (2009). Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research, 36(1), 24–34. doi:10.1007/s10488-008-0197-4.
Sanders, M. R., Markie-Dadds, C., & Turner, K. M. T. (2003). Theoretical, scientific and clinical foundations of the Triple P-Positive Parenting Program: A population approach to the promotion of parenting competence. Queensland: The Parenting and Family Support Centre.
Schoenwald, S. K. (2011). It’s a bird, it’s a plane, it’s… fidelity measurement in the real world. Clinical Psychology: Science and Practice, 18(2), 142–147. doi:10.1111/j.1468-2850.2011.01245.x.
Schoenwald, S. K., Garland, A. F., Southam-Gerow, M. A., Chorpita, B. F., & Chapman, J. E. (2011). Adherence measurement in treatments for disruptive behavior disorders: Pursuing clear vision through varied lenses. Clinical Psychology: Science and Practice, 18(4), 331–341. doi:10.1111/j.1468-2850.2011.01264.x.
Smith-Boydston, J., Holtzman, R., & Roberts, M. (2014). Transportability of multisystemic therapy to community settings: Can a program sustain outcomes without MST services oversight? Child and Youth Care Forum, 43(5), 593–605. doi:10.1007/s10566-014-9255-0.
Urquiza, A. J., & McNeil, C. B. (1996). Parent-child interaction therapy: An intensive dyadic intervention for physically abusive families. Child Maltreatment, 1(2), 134–144.
US Department of Health and Human Services, Administration for Children and Families, Administration on Children, Youth and Families, Children’s Bureau. (2013). Child Maltreatment 2012. Available from http://www.acf.hhs.gov/programs/cb/research-data-technology/statistics-research/child-maltreatment.
Waltz, J., Addis, M. E., Koerner, K., & Jacobson, N. S. (1993). Testing the integrity of a psychotherapy protocol: Assessment of adherence and competence. Journal of Consulting and Clinical Psychology, 61(4), 620–630. doi:10.1037/0022-006X.61.4.620.
Webster-Stratton, C. (2005). Treating conduct problems and strengthening social and emotional competence in young children: The Dina Dinosaur Treatment Program. In M. Epstein, K. Kutash, & A. J. Duchowski (Eds.), Outcome for children and youth with emotional and behavioral disorders and their families: Programs and evaluation best practices (2nd ed., pp. 597–623). Austin: Pro-Ed Inc.
Webster-Stratton, C. (2012). Achieving fidelity of IY program delivery content, dosage and clinical processes through certification/accreditation. Retrieved November 25, 2012 from http://www.incredibleyears.com/certification/certification-accreditation_FAQ.pdf.
Weisz, J. R., Jensen-Doss, A., & Hawley, K. M. (2006). Evidence-based youth psychotherapies versus usual clinical care: A meta-analysis of direct comparisons. The American Psychologist, 61(7), 671–689.

