Research in Developmental Disabilities 35 (2014) 3689–3697


Review article

Education programmes for young children with Autism Spectrum Disorder: An Evaluation Framework

Jennifer McMahon a,*, Veronica Cullinan b,1

a Centre for Social Issues Research, Department of Education & Professional Studies, University of Limerick, Ireland
b Department of Psychology, Faculty of Arts, Mary Immaculate College, University of Limerick, Ireland

Article history: Received 25 August 2014; Received in revised form 1 September 2014; Accepted 2 September 2014; Available online 10 October 2014

Keywords: Autism Spectrum Disorder; Comprehensive education programmes; Programme evaluation; Treatment integrity

Abstract

Autism researchers have identified a common set of practices that form the basis of quality programming in ASD, yet little is known regarding the implementation of these practices in community settings. The purpose of this paper was to outline an Evaluation Framework for use in evaluating ASD programmes of education that will provide valuable information as to the sensitivity of programmes to best practice, establish how programmes are operating, and determine the effect of programmes on students and their families. The move towards more rigorous evaluation will provide quality information as to the degree of adoption of research-led practices in community settings, which heretofore has been largely unavailable.

© 2014 Elsevier Ltd. All rights reserved.

Contents

1. Introduction
2. Programme evaluation
   2.1. Programme evaluation theory
   2.2. Guidelines to evaluating educational programmes
3. The Evaluation Framework
   3.1. The logic model
   3.2. Process evaluation
      3.2.1. Autism program quality indicators
   3.3. Outcome-impact evaluation
      3.3.1. Programme outcomes
      3.3.2. Programme impact
   3.4. Stakeholder evaluation
      3.4.1. Social validity
      3.4.2. Family variables
4. Summary
References

* Corresponding author. Tel.: +353 61 202663. E-mail address: [email protected] (J. McMahon).
1 Psychologist in Private Practice, Main St., Castlelyons, Fermoy, Co. Cork, Ireland.
http://dx.doi.org/10.1016/j.ridd.2014.09.004
0891-4222/© 2014 Elsevier Ltd. All rights reserved.



1. Introduction

The prevalence of Autism Spectrum Disorder (ASD) has increased at a startling rate over the last few decades (Blaxill, 2004; Matson & Kozlowski, 2011). Currently estimated at 1 in 68 (Centers for Disease Control and Prevention [CDC], 2014), ASD is a lifelong pervasive disorder affecting communication, adaptive and social skills, and often overall cognitive functioning (Matson & Wilkins, 2009). In addition, the condition is often comorbid with a range of motor problems, challenging behaviours and psychopathology (Matson, Hess, & Boisjoli, 2010; Matson & Rivet, 2008; Sipes, Matson, & Horovitz, 2011). Although the reason for the increase in prevalence rates is under debate (Blaxill, 2004), the reality is that the demand for specialised educational services has grown in tandem with this increase. This upward pressure on educational services is all the more significant given that intervention and management of ASD have become synonymous with education (McMahon, 2012).

Researchers have attempted to remediate the symptoms of ASD by developing comprehensive educational programmes that target multiple features associated with the developmental delay present in children with the diagnosis. In an examination of educational practices for children with ASD, the National Research Council (NRC) identified a range of comprehensive programmes of intervention, which they defined as sets of practices designed to achieve a broad learning or developmental impact on the core deficits of ASD (Hume et al., 2011; NRC, 2001). Practices range from techniques derived from applied behaviour analysis to parents acting as therapists, all in an effort to improve social, emotional, communicative and cognitive skills. Further to this, Odom, Boyd, Hall, and Hume (2010) identified 30 comprehensive programmes of educational intervention that have been developed and tested and that seek to provide quality programming to young children with ASD. Examples include the UCLA Young Autism Project (the Lovaas model) (Lovaas, 1987), Treatment and Education of Autistic and Communication Handicapped Children (TEACCH) (Mesibov, Shea, & Schopler, 2005), the LEAP model (Hoyson, Jamieson, & Strain, 1984) and the Denver model (Rogers et al., 2006), all of which have published reports of the design and nature of the programmes.

There continues to be wide-ranging discussion in the literature as to the outcome benefits for young children with ASD participating in these programmes, although programmes based on behavioural approaches have consistently demonstrated greater effectiveness (Eldevik et al., 2009, 2010; Makrygianni & Reed, 2010; Peters-Scheffer, Didden, Korzilius, & Sturmey, 2011; Virués-Ortega, 2010). Though no review has identified a single practice or programme as superior, it is accepted that several programmatic features have been demonstrated to be efficacious. These core elements are (a) individualised supports and services for students and families, (b) systematic instruction, (c) comprehensible/structured learning environments, (d) specialised curriculum content, (e) a functional approach to problem behaviour, and (f) family involvement (Iovannone, Dunlap, Huber, & Kincaid, 2003). Schools and governments bear an important responsibility to translate this evidence on ASD and education into practice.
This responsibility is situated in the context of the evidence-based practice movement, which has been heavily promoted in education over the last decade and is connected to calls for accountability in education and improved teaching practice amongst professionals (Slavin, 2002). However, researchers and providers are often frustrated at the gap between research and practice (Bondy & Brownell, 2004). Schools and school districts frequently seem unwilling or unable to utilise evidence-based practices to provide quality treatment. Undoubtedly, numerous barriers to adopting best practice exist, including limited opportunities for training, a lack of autism-specific support, large caseloads and a belief that practices developed in research settings are not appropriate to the children in community programmes (Stahmer, 2007). Yet bridging this gap is of paramount importance, as failure to implement practices as they were intended likely leads to poorer outcomes for those enrolled in programmes and may have unintended side effects. For example, in a review of efficacy research, Durlak and DuPre (2008) found that programmes with stronger adherence yielded mean effect sizes two to three times higher than programmes with poorer adherence. Hume et al. (2011) postulate that this direct relationship between degree of treatment integrity and treatment outcome applies equally to the ASD population and related intervention research.

One method of examining the disparity between community practice and efficacious practice is to evaluate service provision across a district or region against best practice guidelines. Frequently, however, such evaluations are undertaken by governmental bodies or associated organisations without sufficient reference to best practice in the area of ASD (Department of Education and Science, Ireland, 2006; Department of Education, UK, 2005). Lacking a coherent framework, such evaluations often fail to benchmark service provision against markers that have been demonstrated to be of importance in educating children with ASD, and fail to demonstrate improved student outcomes (Buckley, 2014). For example, an evaluation of service provision for children with ASD conducted by the Department of Education and Science in Ireland (DES, 2006) fails to address best practice indicators such as functional behavioural assessment and systematic instruction, and does not report on measures of student outcome/progress. Similar trends have been reported internationally (Buckley, 2014; Department of Education, UK, 2005). The implication is that community educational programmes continue to be developed and implemented without attention to whether they meet the specialised requirements of students with ASD or improve interim and long-term outcomes for individual students. In addition, evaluations are rarely grounded in a theory of evaluation and tend to be guided by the experience (or lack of experience) of the evaluators.

Given that evaluation of community educational provision for children with ASD can lead to better programme development and implementation, the logical next step is to develop a broad framework for evaluating programmes that will be useful to schools, service providers and funding organisations. To our knowledge, no such framework currently exists, and current evaluations fall short of providing the information required to make accurate judgements as to the effectiveness of community educational provision for young children with ASD.
The purpose of this paper is to outline a framework for the evaluation of programmes of education for young children with ASD that will yield valuable information for parents and relevant stakeholders as to the alignment of such programmes with best practice in the field and the effect on participating students and their families.


2. Programme evaluation

Programme evaluation is often considered a relatively new phenomenon, although it has been used as a mechanism for gaining knowledge for at least 150 years. Although initially it was situated within various disciplines drawn predominantly from the social sciences, in recent decades it has expanded rapidly as a field in itself (Williams & Morris, 2009). Programme evaluation is a complex process, and a variety of definitions have been put forward in an effort to capture its meaning. For example, Patton (1997) defines programme evaluation as 'the systematic collection and analysis of information about programme activities, characteristics and outcomes to make judgements about the programme, improve programme effectiveness and/or inform decisions about future programming'. Rossi, Lipsey, and Freeman (2004) describe it as 'the systematic application of social research procedures for assessing the conceptualisation, design, implementation and utility of programmes', whilst Posavac and Carey (2007) understand it 'as a process of gathering information that allows decisions to be made about the design and/or modification of a programme and an evaluation of its usefulness, value, implementation, quality and impact'. Whilst the definitions differ in nuance, they are consistent in the view that programme evaluation is a mechanism for increasing programme knowledge in the pursuit of improved programme development and implementation. A programme in this sense is an 'organised, planned and usually on-going effort designed to ameliorate a social problem or improve social conditions' (Rossi et al., 2004).

2.1. Programme evaluation theory

Theory has become central to contemporary evaluation. This development is linked to an increased emphasis on accountability and transparency within all types of social programmes and a greater demand for evidence-based programme development and implementation. The requirement and desire for accountability present a need for evaluation theory (Alkin & Christie, 2004), which is particularly important in programmes that are supported by government and have significance for vulnerable members of society. This development has been led by influential organisations such as the W.K. Kellogg Foundation (1998, 2000) and the United Way of America (1996), which have espoused theory-driven forms of evaluation for the community change initiatives they fund. Theory-driven approaches are also being more widely promoted by international organisations such as the United Nations Evaluation Group (UNEG) and the Independent Evaluation Group (IEG) of the World Bank to evaluate funded humanitarian efforts (Carvalho & White, 2004). Evaluation theory is not an essential component of conducting a programme evaluation; indeed, eminent evaluators such as Scriven (2004) have claimed that it is entirely possible to do good programme evaluation without engaging with evaluation theory or programme theory. Nevertheless, grounding an evaluation in theory has several benefits.
One of the key reasons for the growing interest in theory-driven evaluation over the past two decades was 'the usual inability of even the most sophisticated experimental evaluations to explain what factors were responsible for the program's success—or failure' (Weiss, 1997). As such, evaluation theory can be used as a guide to practice (Shadish, Cook, & Leviton, 1991), a means of illuminating the assumptions and mechanisms behind a programme, and a tool for identifying and controlling extraneous sources of variance (Williams & Morris, 2009). In an analysis of approximately 140 evaluations conducted in the field of early childhood programmes, Lee and Walsh (2004) identified the major evaluation approaches as (a) experimental, (b) monitoring-oriented, (c) naturalistic, (d) cost-benefit or cost-effectiveness analysis, (e) objectives-oriented, (f) expert-oriented and (g) participatory; other approaches included needs assessment, consumer-oriented and utilisation-oriented evaluation. They note that evaluation of these programmes was largely devoted to measuring outcomes, especially the cognitive outcomes of participating children. As a result, there was a distinct lack of programme understanding and process knowledge that would explain how and why children progressed within these programmes. They conclude that the underlying theory of evaluation should be outlined in future evaluations of early childhood programmes so that this 'black box' view of programmes is addressed.

2.2. Guidelines to evaluating educational programmes

The practice of monitoring and documenting the implementation of educational programmes is complex and multifaceted. Governmental organisations have led the way in developing standards and principles for use in evaluating educational programmes. The Centers for Disease Control and Prevention (CDC, 1999) have drawn up guidelines that serve as direction for effective evaluations. These guidelines (Fig. 1) comprise six steps in programme evaluation practice and denote the accepted standards for effective programme evaluation. They provide guidance on (i) engaging stakeholders, (ii) describing the programme, (iii) focusing the evaluation design, (iv) gathering credible evidence, (v) justifying conclusions and (vi) sharing lessons learned (see Appendix A). In addition to the CDC guidelines, the standards of the US Joint Committee on Standards for Educational Evaluation (JCSEE, 1994) should also be considered when developing an evaluation framework for ASD educational programmes (Fig. 1). The JCSEE standards span five domains: utility standards, feasibility standards, propriety standards, accuracy standards and evaluation accountability standards. Utility Standards are designed to increase the extent to which programme stakeholders find evaluation processes and products valuable in


Fig. 1. CDC and JCSEE guidelines to programme evaluation. This information is adapted from the Centers for Disease Control and Prevention web page. For more detail and information, refer to the website: www.cdc.gov/eval/index.htm.

meeting their needs. Feasibility Standards are intended to increase evaluation effectiveness and efficiency; Propriety Standards support what is proper, fair, legal and just in evaluations; whilst Accuracy Standards are intended to increase the dependability and truthfulness of evaluation representations, propositions and findings. Evaluation Accountability Standards encourage adequate documentation of evaluations and a meta-evaluative perspective focused on improvement and accountability for evaluation processes and products (Yarbrough, Shulha, Hopson, & Caruthers, 2011).

3. The Evaluation Framework

Taking account of the aforementioned guidelines and principles (Section 2.2), the Evaluation Framework consists of four evaluation aspects, which are portrayed in Fig. 2. The first aspect is to develop a logic model of the programme that clearly delineates the cause-and-effect relationships within the programme. The second aspect is to conduct a process evaluation that assesses programme adherence to quality indicators in the field. The third aspect examines the effect of the programme on students engaged in the programme. Finally, the fourth aspect assesses the programme from a variety of perspectives to verify the social significance and importance of the programme to key stakeholders.

Fig. 2. The Evaluation Framework for ASD educational programmes.


The Evaluation Framework rests on the premise that using this process yields in-depth information that can be utilised in developing and implementing high-quality programming for young children with ASD.

3.1. The logic model

The logic model of programme evaluation has also been described as a theory-of-change model and has increased in popularity over the last decade. The logic model approach is based on the premise that every programme begins with some notion of cause-effect expectations about the programme intervention (Rogers, 2000). According to this approach, developing programme theory can be a top-down process, where evaluation begins with a causal hypothesis, or bottom-up, where evaluation illuminates the link between cause and effect. Logic models explore relationships between features of a programme and the outcomes of the programme. In the context of ASD programme evaluation, this model can provide valuable information as to how a programme is operationalised as well as the social context in which the programme is embedded.

Logic models are typically represented as graphical diagrams that specify relationships amongst programmatic actions, outcomes and other factors. The elements used to build the logic model most often (but not always) include inputs, activities, outputs and outcomes, which together are intended to represent a programme impact theory. Inputs refer to the various resources available for implementation of the programme (e.g. financial, human, physical). Activities refer to the actions undertaken to bring about change for the individuals involved (e.g. therapies, interventions, strategies, staff training). Outputs are the immediate results of those actions (e.g. the number of students or people who received the service), and outcomes are the anticipated changes (direct and indirect) that occur as a result of the inputs, activities and outputs. Outcomes are expressed in several temporal modes to reflect initial outcomes (e.g. knowledge of greetings), intermediate outcomes (e.g. increased use of greetings with family members/friends) and long-term outcomes (e.g. improved social communication). Environmental contexts are also reflected in the model and account for extraneous pressures/influences on the programme. Models are usually expressed in a simple linear form, as illustrated in Table 1, but can be represented in a more contextualised, cyclical and comprehensive mode.

A key advantage of developing a logic model at the outset of an evaluation is that, if the programme is effective, the approach should identify which elements are necessary for widespread replication. Equally, if a programme is deemed ineffective or is not fully achieving its intended outcomes, a theory-driven evaluation should be able to discover the factors contributing to breakdowns within the programme (Coryn, Noakes, Westine, & Schroter, 2010).

3.2. Process evaluation

As noted in the introduction, there is an increasing awareness that the challenge in educating young children with autism is to close the gap between the quality of model programmes (as outlined in the literature) and the reality of most publicly funded early educational programmes (NRC, 2001). Increasingly, comprehensive evaluation studies require linking programme ingredients and outcomes and the explicit monitoring of key intervention components (Chen, 1990; Donaldson, 2007; Scheirer, 1987, in Zvoch, 2012).
This has led evaluators to identify essential programme components and develop measures that capture the extent to which providers deliver, and recipients receive and adhere to, a treatment protocol (Zvoch, 2012). In the field of autism education, several researchers have documented the critical aspects of early intervention for young children with autism (Iovannone et al., 2003; NRC, 2001), and in the last decade efforts have been made to develop these components into indicators for the review and improvement of community educational programmes. One common way of quantifying delivery is a simple procedural-fidelity (adherence) percentage, as sketched below.
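As an illustration, the following is a minimal sketch of such a procedural-fidelity calculation: the percentage of protocol steps delivered as intended during one observed session. The step names and session data are hypothetical and are not drawn from any published instrument.

```python
def adherence_percentage(steps_observed: dict[str, bool]) -> float:
    """Percentage of protocol steps delivered as intended in one observation."""
    if not steps_observed:
        raise ValueError("no steps recorded")
    return 100.0 * sum(steps_observed.values()) / len(steps_observed)


# Hypothetical fidelity checklist for one discrete-trial teaching session.
session = {
    "secured student attention before instruction": True,
    "delivered clear, concise instruction": True,
    "prompted within the planned delay": False,
    "delivered reinforcer immediately after correct response": True,
}
print(f"Adherence: {adherence_percentage(session):.0f}%")  # Adherence: 75%
```

In practice such percentages would be averaged over repeated observations and observers, but the aggregation logic remains this simple.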

Table 1
Example of a basic logic model for a comprehensive programme of education for young children with ASD.

Inputs: school provides teachers, classroom assistants and administration staff; programme provides access to specialist support; government funding; student characteristics.

Activities: evidence-based practices in teaching young children with autism (pivotal response treatment, Picture Exchange Communication System, modelling, precision teaching).

Outputs: number of students in programme; student attendance; parent training events; teacher training events.

Short-term outcomes: student improvement on specific goals and objectives outlined in their individual educational plans.

Longer-term results: student improvement in the broad cognitive, social-emotional and behavioural domains.

Environmental contexts: geographic location, such as rural versus urban; availability of access to early intervention teams; supportive resources, e.g. Saturday clubs and summer programming.
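Because the linear model in Table 1 has a fixed set of elements, it can also be captured in a simple machine-readable form for evaluation planning. The sketch below is one possible encoding of Table 1; the class and field names are our own illustrative choices, not part of the logic model literature.

```python
from dataclasses import dataclass


@dataclass
class LogicModel:
    """A basic linear logic model: inputs -> activities -> outputs -> outcomes."""
    inputs: list[str]
    activities: list[str]
    outputs: list[str]
    short_term_outcomes: list[str]
    longer_term_results: list[str]
    environmental_contexts: list[str]


# Encoding of the example logic model in Table 1.
table_1 = LogicModel(
    inputs=["teachers, classroom assistants and administration staff",
            "access to specialist support", "government funding",
            "student characteristics"],
    activities=["pivotal response treatment",
                "Picture Exchange Communication System",
                "modelling", "precision teaching"],
    outputs=["number of students in programme", "student attendance",
             "parent and teacher training events"],
    short_term_outcomes=["improvement on IEP goals and objectives"],
    longer_term_results=["improvement in cognitive, social-emotional "
                         "and behavioural domains"],
    environmental_contexts=["rural versus urban location",
                            "access to early intervention teams",
                            "supportive resources"],
)
print(table_1.activities)
```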


3.2.1. Autism program quality indicators

One such set of quality indicators has been developed by the University of the State of New York (NY State Department of Education, 2001) in conjunction with the New York Autism Network, in response to a request from the New York Department of Education. The Autism Program Quality Indicators (APQI) are a compilation of research-based components that have been linked to high quality and effective educational programmes for students with autism across the lifespan (see Table 2). The items on the APQI are derived from a variety of sources, including a review of the scientific literature and professional experience and input, and have been reviewed by experts in the field of autism. The APQI is a useful measure for approaching any evaluation of ASD educational programmes because it does not take one approach to teaching students with ASD as its basis; rather, it is based on the features of high quality programmes for students with an ASD.

The APQI are organised into 14 areas: seven categories relating to specific aspects of the educational process for students, and seven referring more broadly to programme characteristics and supports (see Table 2). Each broad indicator comprises a set of subindicators, which are used to determine the score for each area. A rating of between 0 and 3 is given to each element being assessed. Each area is individually scored, and the overall range of the measure is 0–240, with a higher score reflecting a stronger presence of indicators associated with best practice in the area of autism education (a sketch of this aggregation appears below). A triangulated approach to determining the presence of indicators in a programme is recommended by the authors, using methods such as observation, checklists, interview and analysis of supporting documentation.

Whilst originally intended as a self-evaluation instrument, the APQI provides a sound basis for independent evaluations, although evaluators will have to make some decisions about the usefulness of particular categories within specific evaluations of early childhood programmes. In addition, none of the indicators is weighted, which can be problematic where schools score poorly on a category deemed pivotal by evaluators; as such, interpretation of the findings needs to be conducted by evaluators experienced in ASD education. The reliability and validity of the instrument have also yet to be reported, an area that would benefit from further investigation. Finally, procedural fidelity does not officially form part of the APQI, but in assessing instructional methods, fidelity checks (either external or internal) should be considered by evaluators, particularly where manualised strategies are being implemented (e.g. PECS, discrete trial training, pivotal response training).
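As a concrete reading of that scoring scheme, the sketch below sums 0–3 subindicator ratings within each area and across areas. The area names and ratings are hypothetical examples, and the function is ours, not part of the APQI instrument.

```python
def score_apqi(ratings: dict[str, list[int]]) -> tuple[dict[str, int], int]:
    """Sum 0-3 subindicator ratings per APQI area, and overall (max 240)."""
    for area, values in ratings.items():
        if any(not 0 <= v <= 3 for v in values):
            raise ValueError(f"ratings in {area!r} must be between 0 and 3")
    area_scores = {area: sum(values) for area, values in ratings.items()}
    return area_scores, sum(area_scores.values())


# Hypothetical ratings for two of the fourteen areas.
observed = {
    "Curriculum": [3, 2, 2, 3],
    "Challenging Behaviour": [1, 0, 2],
}
by_area, total = score_apqi(observed)
print(by_area)  # {'Curriculum': 10, 'Challenging Behaviour': 3}
print(total)    # 13
```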
Table 2
Autism Program Quality Indicators (APQI, 2001).

Areas of assessment: individual characteristics
- Individual Evaluation: thorough diagnostic, developmental and educational assessments, using a comprehensive, multidisciplinary approach, are used to identify students' strengths and needs.
- Development of the Individual Educational Plan (IEP): assessment focuses on the incorporation of appropriate goals for children with an ASD based on individual needs and family concerns.
- Curriculum: assessment focuses on the implementation of the optimal curriculum for those with an ASD as well as adherence to the national curriculum.
- Instructional Activities: assessment focuses on the provision of developmentally and functionally appropriate activities, experiences and materials that engage students in meaningful learning.
- Instructional Methods: assessment focuses on the use of proven teaching methods in the instruction of those with an ASD and the ability to adapt these methods to the needs of the individual student.
- Instructional Environments: assessment focuses on the provision of the optimal environment for learning for the individual student.
- Review and Monitoring of Progress and Outcomes: assessment is based on evidence of a collaborative, ongoing and systematic process for assessing student progress.

Areas of assessment: program characteristics
- Family Involvement and Support: assessment is based on evidence of parental involvement in the IEP process and appropriate training and communication of the child's program to the parent.
- Inclusion: opportunities for interaction with nondisabled peers are incorporated into the program.
- Planning the Move from One Setting to the Next: assessment focuses on the provision of the necessary arrangements for effective inclusion in mainstream settings to take place.
- Challenging Behaviour: assessment looks for evidence that behaviour is addressed utilising functional behaviour assessments (FBAs) and that positive behaviour supports are put in place.
- Community Collaboration: the program links with community agencies to assist families in accessing supports and services needed by students with autism.
- Personnel: assessment is based on evidence that staff members are knowledgeable and skilled in relation to the education of children with an ASD.
- Program Evaluation: systematic examination of program implementation and impact is conducted, including the aggregation of individual student outcomes and consumer satisfaction.

3.3. Outcome-impact evaluation

Equally important to describing programme implementation is defining the outcomes of the programme and how those outcomes might be measured. The risk with comprehensive programmes of education is that desired outcomes may be so numerous that none of the potential outcomes is achieved at a detectable level. Nevertheless, evaluators should attempt to establish programme outcomes as well as programme impact. Programme outcomes relate to all the aspects the programme expects will change for the better in the lives of the participating students, whereas programme impact refers to a more rigorous assessment of measuring programme effects.


3.3.1. Programme outcomes

In relation to comprehensive programmes of education for young children with ASD, programme outcomes can largely be evaluated using information from the standardised measures employed to develop students' individual curricula. Although the use of measures currently varies widely from programme to programme, Gould, Dixon, Najdowski, Smith, and Tarbox (2011) have identified 27 common measures that are routinely used for a variety of purposes that underpin good programme design. These measures assess developmental/educational goals; social skills; motor function; speech and language/communication; daily living skills; play skills; academic achievement; and intelligence. The authors identify four key measures as most useful in developing a comprehensive student profile: the Verbal Behavior Milestones Assessment and Placement Program (VB-MAPP), the Brigance Diagnostic Inventory of Early Development-II (Brigance IED-II), the Vineland Adaptive Behavior Scales-Second Edition (VABS-II) and the Brigance Diagnostic Comprehensive Inventory of Basic Skills-Revised (CIBS-R). Programmes should be encouraged to include these, although Gould et al. (2011) highlight that no one measure comprehensively addresses the range of criteria necessary for developing best quality programming. In most cases, evaluating outcomes for children engaged in ASD programmes will involve developing child profiles based on the battery of assessments used by the programme, and as such the experience of the evaluators is critical. Additional measures should only be added where it is determined that the measures in use have created an unbalanced view of a student's development.

3.3.2. Programme impact

Assessing programme impact attempts to identify a causal relationship between the programme and improvements in student functioning. Causation can be determined by comparing participants against some benchmark that is representative of what they would be had they not participated. Gilliam and Leiter (2003) outline typical ways this can be achieved: (a) drawing comparisons with a group of children who were eligible to receive the intervention but were randomly selected not to receive it, (b) drawing comparisons with children who did not receive the intervention but are similar in important characteristics to those who did, or (c) drawing comparisons with the participants' own baseline performance on the desired outcomes. Whilst methods (a) and (b) offer the most robust assessment of programme impact, for most programmes method (c) is the most pragmatic option, given that random assignment and the denial of services to children are at odds with providing mandated educational services. Gould et al. (2011) note that good assessment tools in ASD programmes should be able to track children's skills over time and that students typically undergo in-depth testing at programme intake in order to determine their needs. As such, programmes should already have the means to assess programme impact; where they do not, determining impact may be more difficult. In this situation programmes should be encouraged to revisit programme goals and objectives and to ensure that student progress towards these is benchmarked and regularly assessed.
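Method (c) reduces, in its simplest form, to comparing each student's follow-up scores with that student's own intake baseline. The sketch below computes per-measure change scores under that assumption; the measure names and values are hypothetical, and a real evaluation would also need to account for measurement error and expected developmental growth.

```python
def change_scores(baseline: dict[str, float],
                  follow_up: dict[str, float]) -> dict[str, float]:
    """Per-measure change from a student's own intake baseline (method c)."""
    shared = baseline.keys() & follow_up.keys()
    if not shared:
        raise ValueError("no measures in common between the two assessments")
    return {m: follow_up[m] - baseline[m] for m in sorted(shared)}


# Hypothetical standard scores at intake and after one year in the programme.
intake = {"VABS-II composite": 62.0, "expressive communication": 55.0}
year_one = {"VABS-II composite": 70.0, "expressive communication": 64.0}
print(change_scores(intake, year_one))
# {'VABS-II composite': 8.0, 'expressive communication': 9.0}
```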
3.4. Stakeholder evaluation

It is important to recognise that the stakeholders of a programme can be valuable sources of information. Programme stakeholders are those who have a stake in an evaluation or its results, typically ranging from the beneficiaries, or recipients, of programme services to administrators and funding agencies (Brandon & Fukunaga, 2014). Given the value-laden nature of programme quality (Lee & Walsh, 2004), ascertaining the attitudes and opinions of stakeholders should be a primary concern of evaluators seeking to understand the intervention process and its effects on those involved in receiving and delivering the programme. Numerous elements may be of interest to evaluators (e.g. teacher efficacy, teacher beliefs), but establishing the social validity of the programme and ascertaining the effect of broader family variables have been identified as being of particular importance.

3.4.1. Social validity

To date, the experimental literature in the area of ASD education has focused on child variables and child outcomes. The process of social validation is an important aspect of validating effective educational or therapeutic outcomes, yet it has received minimal attention in the research literature (Callahan, Shukla-Mehta, Magee, & Wie, 2010; Humphrey & Parkinson, 2006). Social validation refers to the satisfaction of consumers with the goals, procedures and outcomes of a programme or intervention (Alberto & Troutman, 2008; Wolf, 1978). Whilst it arose largely from the literature on behavioural programmes, it is equally applicable to non-behaviourally oriented programmes, as widespread social validation can determine the extent to which any intervention or model is adopted or implemented within school programmes. Indeed, Callahan et al. (2010) found that the components of a programme identified as most socially valid by parents, special education teachers and administrators were those inherent to competing models/approaches. Equally, evaluators must be sensitive to the social validation of practices not considered best practice but that have gained validation in special education (Callahan, Henson, & Cowan, 2008). Callahan et al. (2008) provide a comprehensive list of items to be considered in social validation measures, and these can be combined with satisfaction surveys based on factors demonstrated to be important in the delivery of effective programming for young children with ASD, such as the Autism Specific Program Evaluation Survey (ASPES; McMahon, 2012), which is based on the APQI (NY State Department of Education, 2001).


3.4.2. Family variables

In addition to satisfaction with programme variables, an evaluation of ASD education programmes should collect information on family variables that might affect child progress. For example, it is acknowledged that factors such as family structure, socioeconomic status, parent education and occupation, formal and informal support, and additional stressors on the family can impact on programme delivery (Humphrey & Parkinson, 2006). For variables such as parental stress, which is significant amongst parents of children with ASD (Gallagher & Hannigan, 2014), standardised measures such as the Parental Stress Scale (Berry & Jones, 1995) or the Parenting Stress Index (Abidin, 1995) should be used.

4. Summary

The research informing the development and implementation of educational programmes for young children with autism is complex and multi-dimensional. Undoubtedly, translating such research into practice is extremely challenging for educators, and to date assessment of the quality of applied programmes remains lacking. As Simpson (2005) points out, there are basic elements of effective programming that should be incorporated into educational programmes whilst questions as to the optimal intervention for individual children are being debated. Given that the current focus in education is to develop and implement educational programmes derived from the research literature, it is imperative that researchers detail what happens inside ASD classrooms and begin the process of developing standardised protocols for evaluating existing educational programmes against best practice. This paper has described and discussed an Evaluation Framework for the evaluation of comprehensive programmes of education for young children with ASD. Widespread evaluation of existing programmes and dissemination of the findings will help to detail the current operations and processes of applied comprehensive educational programmes for children with ASD, information that is currently lacking. The central feature of the framework is that it allows for the collection of a range of evaluation information on various aspects of ASD programmes, which can drive greater adherence to best practice guidelines and bridge the evidence/practice divide in the education of young children with ASD.

Appendix A

See Table 3.

Table 3
CDC guidelines for effective evaluation of programmes.

(i) Engage stakeholders: stakeholders should be engaged where possible in order to understand their perspectives and to decrease the likelihood that evaluation findings might be ignored, criticised or resisted.
(ii) Describe the programme: program descriptions convey the mission and objectives of the program being evaluated. Descriptions should be sufficiently detailed to ensure understanding of program goals and strategies. The description enables comparisons with similar programs and facilitates attempts to connect program components to their effects. In describing a programme, the following aspects may need to be included: a needs assessment, expected effects, activities of the programme, resources available, stage of development, context and logic model.
(iii) Focus the evaluation design: the direction and process of the evaluation must be focused to assess the issues of greatest concern to stakeholders whilst using time and resources as efficiently as possible. Focusing the evaluation should include the following: the purpose of the evaluation, its users and uses, an outline of the questions being asked and the methods selected for use.
(iv) Gather credible evidence: persons involved in an evaluation should strive to collect information that will convey a well-rounded picture of the program and be seen as credible by the evaluation's primary users.
(v) Justify conclusions: evaluation conclusions are justified when they are linked to the evidence gathered and judged against agreed-upon values or standards set by the stakeholders.
(vi) Ensure use and share lessons learned: deliberate effort is needed to ensure that the evaluation processes and findings are used and disseminated appropriately.

References

Abidin, R. R. (1995). Parenting stress index (3rd ed.). Odessa, FL: Psychological Assessment Resources.
Alberto, P. A., & Troutman, A. C. (2008). Applied behavior analysis for teachers (8th ed.). Columbus, OH: Pearson/Merrill Prentice Hall.
Alkin, M., & Christie, C. A. (2004). An evaluation theory tree. In M. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 12–65). Thousand Oaks, CA: Sage.
Berry, J., & Jones, W. (1995). The Parental Stress Scale: Initial psychometric evidence. Journal of Social and Personal Relationships, 12(3), 463–472.
Blaxill, M. (2004). What's going on? The question of time trends in autism. Public Health Reports, 119, 536–551.
Bondy, E., & Brownell, M. T. (2004). Getting beyond the research to practice gap: Researching against the grain. Teacher Education and Special Education, 27, 47–56.
Brandon, P. K., & Fukunaga, L. L. (2014). The state of the empirical research literature on stakeholder involvement in program evaluation. American Journal of Evaluation, 35(1), 26–44.
Buckley, B. (2014). Review of report on support provided for autism diagnosis and services and the potential for further reforms resulting from the National Disability Insurance Scheme and the National Plan for School Improvement. Author.
Callahan, K., Henson, R. K., & Cowan, A. K. (2008). Social validation of evidence-based practices in autism by parents, teachers, and administrators. Journal of Autism and Developmental Disorders, 38, 678–692.
Callahan, K., Shukla-Mehta, S., Magee, S., & Wie, M. (2010). ABA versus TEACCH: The case for defining and validating comprehensive treatment models in autism. Journal of Autism and Developmental Disorders, 40, 74–88.
Carvalho, S., & White, H. (2004). Theory-based evaluation: The case of social funds. American Journal of Evaluation, 25, 141–160.
CDC (1999). Framework for program evaluation in public health. Morbidity and Mortality Weekly Report, 48(RR-11).
CDC (2014). Prevalence of Autism Spectrum Disorder among children aged 8 years – Autism and Developmental Disabilities Monitoring Network, 11 sites, United States. Morbidity and Mortality Weekly Report, 63, 1–21.
Coryn, C. L., Noakes, L. A., Westine, C. D., & Schroter, D. C. (2010). A systematic review of theory-driven evaluation practice from 1990 to 2009. American Journal of Evaluation, 32, 199–226.
Department of Education, UK (2005). Evaluating provision for autistic spectrum disorders. The Education and Training Inspectorate.
Department of Education and Science (2006). An evaluation of educational provision for children with autistic spectrum disorders. Dublin: Stationery Office.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350.
Eldevik, S., Hastings, R. P., Hughes, J. C., Jahr, E., Eikeseth, S., & Cross, S. (2009). Meta-analysis of early intensive behavioral intervention for children with autism. Journal of Clinical Child & Adolescent Psychology, 38, 439–450.
Eldevik, S., Hastings, R. P., Hughes, J. C., Jahr, E., Eikeseth, S., & Cross, S. (2010). Using participant data to extend the evidence base for intensive behavioral intervention for children with autism. American Journal on Intellectual and Developmental Disabilities, 115, 381–405.
Gallagher, S., & Hannigan, A. (2014). Depression and chronic illness in parents of children with and without developmental disabilities: The Growing Up in Ireland cohort study. Research in Developmental Disabilities, 35(2), 448–454.
Gilliam, W. S., & Leiter, V. (2003). Evaluating early childhood programs: Improving quality and informing policy. Zero to Three, (July), 6–13.
Gould, E., Dixon, D. R., Najdowski, A. C., Smith, M. N., & Tarbox, J. (2011). A review of assessments for determining the content of early intensive behavioural intervention programs for Autism Spectrum Disorders. Research in Autism Spectrum Disorders, 5, 990–1002.
Hoyson, M., Jamieson, B., & Strain, P. S. (1984). Individualized group instruction of normally developing and autistic-like children: The LEAP curriculum model. Journal of the Division for Early Childhood, 8, 157–172.
Hume, K., Boyd, B., McBee, M., Coman, D. C., Gutierrez, A., Shaw, E., et al. (2011). Assessing implementation of comprehensive treatment models for young children with ASD: Reliability and validity of two measures. Research in Autism Spectrum Disorders, 5(4), 1430–1440.
Humphrey, N., & Parkinson, G. (2006). Research on interventions for children and young people on the autistic spectrum: A critical perspective. Journal of Research in Special Educational Needs, 6(2), 76–86.
Iovannone, R., Dunlap, G., Huber, H., & Kincaid, D. (2003). Effective educational practices for students with Autism Spectrum Disorders. Focus on Autism and Other Developmental Disabilities, 18(3), 150–165.
Joint Committee on Standards for Educational Evaluation (1994). The program evaluation standards (2nd ed.). Thousand Oaks, CA: Sage.
Kellogg Foundation, W. K. (1998). W.K. Kellogg Foundation evaluation handbook. Battle Creek, MI: Author.
Kellogg Foundation, W. K. (2000). Logic model development guide. Battle Creek, MI: Author.
Lee, J. H., & Walsh, D. (2004). Quality in early childhood programs: Reflections from program evaluation practices. American Journal of Evaluation, 25, 351–373.
Lovaas, O. I. (1987). Behavioral treatment and normal educational and intellectual functioning in young autistic children. Journal of Consulting and Clinical Psychology, 55, 3–9.
Makrygianni, M., & Reed, P. (2010). A meta-analytic review of the effectiveness of behavioural early intervention programs for children with Autistic Spectrum Disorders. Research in Autism Spectrum Disorders, 4, 577–593.
Matson, J. L., Hess, J. A., & Boisjoli, J. A. (2010). Comorbid psychopathology in infants and toddlers with autism and pervasive developmental disorders-not otherwise specified (PDD-NOS). Research in Autism Spectrum Disorders, 4, 300–304.
Matson, J. L., & Kozlowski, A. M. (2011). The increasing prevalence of Autism Spectrum Disorders. Research in Autism Spectrum Disorders, 5, 418–425.
Matson, J. L., & Rivet, T. T. (2008). Characteristics of challenging behaviors in adults with autistic disorder, PDD-NOS, and intellectual disability. Journal of Intellectual and Developmental Disability, 33, 313–329.
Matson, J. L., & Wilkins, J. (2009). Psychometric testing methods for children's social skills. Research in Developmental Disabilities, 30, 249–274.
McMahon, J. (2012). Measuring up: Developing a protocol for the evaluation of educational programmes for young children with an Autism Spectrum Disorder (ASD) 0–8 years (Unpublished doctoral thesis). University of Limerick, Ireland.
Mesibov, G., Shea, V., & Schopler, E. (2005). The TEACCH approach to Autism Spectrum Disorders. New York: Plenum Press.
National Research Council (2001). Educating children with autism. Washington, DC: National Academy Press.
New York State Department of Education (2001). Autism program quality indicators. Albany, NY: Author.
Odom, S. L., Boyd, B. A., Hall, L. J., & Hume, K. (2010). Evaluation of comprehensive treatment models for individuals with Autism Spectrum Disorder. Journal of Autism and Developmental Disorders, 40, 425–436.
Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.
Peters-Scheffer, N., Didden, R., Korzilius, H., & Sturmey, P. (2011). A meta-analytic study on the effectiveness of comprehensive ABA-based early intervention programs for children with Autism Spectrum Disorders. Research in Autism Spectrum Disorders, 5, 60–69.
Posavac, E. J., & Carey, R. G. (2007). Program evaluation: Methods and case studies (7th ed.). New Jersey: Prentice Hall.
Rogers, P. J. (2000). Program theory evaluation: Not whether programs work but how they work. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (pp. 209–232). Boston, MA: Kluwer.
Rogers, S. J., Hayden, D., Hepburn, S., Charlifue-Smith, R., Hall, T., & Hayes, A. (2006). Teaching young nonverbal children with autism useful speech: A pilot study of the Denver model and PROMPT interventions. Journal of Autism and Developmental Disorders, 36, 1007–1024.
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.
Scriven, M. (2004). Practical program evaluation: A checklist approach. Claremont Graduate University Annual Professional Development Workshop Series.
Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Thousand Oaks, CA: Sage.
Simpson, R. L. (2005). Evidence-based practices and students with Autism Spectrum Disorders. Focus on Autism and Other Developmental Disabilities, 20(3), 140–149.
Sipes, M., Matson, J. L., & Horovitz, M. (2011). Autism Spectrum Disorders and motor skills: The effect on socialization as measured by the Baby and Infant Screen for Children with aUtIsm Traits (BISCUIT). Developmental Neurorehabilitation, 14(5), 290–296.
Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15–21.
Stahmer, A. C. (2007). The basic structure of community early intervention programs for children with autism: Provider descriptions. Journal of Autism and Developmental Disorders, 37, 1344–1354.
United Way of America (1996). Measuring program outcomes: A practical approach. Alexandria, VA: Author.
Virués-Ortega, J. (2010). Applied behavior analytic intervention for autism in early childhood: Meta-analysis, meta-regression and dose-response meta-analysis of multiple outcomes. Clinical Psychology Review, 30, 387–399.
Weiss, C. H. (1997). How can theory-based evaluations make greater headway? Evaluation Review, 21, 501–524.
Williams, A. P., & Morris, J. C. (2009). The development of theory-driven evaluation in the military: Theory on the front line. American Journal of Evaluation, 30, 62–79.
Wolf, M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203–214.
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.
Zvoch, K. (2012). Does fidelity of implementation matter? Using multilevel models to detect relationships between participant outcomes and the delivery and receipt of treatment. American Journal of Evaluation, 33, 547–565.
