Journal of Primary Prevention, 2(2), Winter, 1981

An Evaluation System for School-Community Prevention Programs

CHARLES A. MAHER

ABSTRACT: A management-oriented system for the evaluation of school-community prevention programs is described, and examples are provided of how the system has been applied to serve program management decisions with primary, secondary, and tertiary prevention programs. The approach, termed the "Program Analysis and Review System" (PARS), emphasizes a cooperative relationship between a program evaluator and a prevention program manager in order that informed judgments can be made about program development and improvement. PARS, which was developed by the author in response to a perceived need for management-oriented approaches to prevention program evaluation, has been field tested with school-community prevention programs in Bergenfield, New Jersey, and Somerville, New Jersey, and has been adapted for use in other communities. PARS consists of three interrelated steps: Program Specification, Program Documentation, and Program Outcome Determination.

Charles A. Maher is affiliated with the Department of School Psychology, Rutgers University. Reprint requests should be sent to the author, Graduate School of Applied and Professional Psychology, Rutgers University, P.O. Box 819, Piscataway, NJ 08854.

During the past decade, program evaluation in human service settings has expanded from a sole focus on policy research issues, such as summative evaluation of large-scale demonstration projects (e.g., Head Start), to include application of evaluation technology to serve program management decision-making at the local level (Demone, Schulberg, & Broskowski, 1978). Increasingly, program evaluation has been used by human service program managers to help in the assessment of client needs and strengths, the design of "evaluable" programs, and the determination of the degree to which program goals have been attained. As management-oriented approaches to program evaluation have developed, increased importance has been given by legislators and professionals to prevention and remediation of school-community problems such as drug and alcohol abuse, truancy, school failure, and vandalism (Bloom, 1977; Harper & Balch, 1975). Significant amounts
of federal and state funds have been allocated to public schools, community mental health centers, and juvenile justice systems to develop a range of primary, secondary, and tertiary prevention programs (U.S. Senate Committee on Human Resources, Note 1). Most recently, the enactment of Public Law 94-142 has mandated that all public school districts in the United States develop procedures and programs for prevention of school maladjustment (U.S. Department of Health, Education, & Welfare, 1976). Increasingly, prevention has been recognized as a desirable way to promote the well-being of children and youth. This recognition has resulted in managers of prevention programs needing evaluative information, derived from appropriate evaluation strategies and systems, to aid in the development of new prevention programs, modification of existing ones, and termination of unproductive ones (Berberian, Gross, Lovejoy, & Paperella, 1976). However, such management-oriented procedures have not been discussed in the literature (Kelly, Snowden, & Munoz, 1977). This paper reports on a management-oriented program evaluation system for school-community prevention programs. The approach, termed the "Program Analysis and Review System" (PARS), has been developed by the author, used with a range of school-community prevention programs in Bergenfield, New Jersey, and Somerville, New Jersey, and adapted for use in several other communities.

Background to the Evaluation System

PARS has four distinctive characteristics: (1) it places emphasis on a close working relationship between a program evaluator and prevention program manager, in order to obtain timely and technically adequate evaluation information that is useful for program decision-making purposes; (2) it requires that the type and purpose of the program be specified, thereby increasing the "evaluability" of the program; (3) it focuses on the operations (process) of an evaluable program, in order to detect discrepancies which might indicate a need for program modification; and (4) it utilizes multiple measures and perspectives for making determinations of program outcome. Thus, PARS allows a prevention program to be analyzed as to its design and purpose, and reviewed relative to its process and outcome, so that the program can be developed or improved. Within the context of PARS, prevention is seen as an active process of creating programmatic conditions that seek to promote the
psychological, social, and emotional well-being of a client group (e.g., primary grade pupils). Primary prevention programs are ones that seek to facilitate the well-being of a client group which has not exhibited maladaptive behavior. Secondary prevention programs seek to limit mild or moderate maladaptive behavior that already has been manifested by a client group. Tertiary prevention programs are those that seek to reduce maladaptive behavior of persons already experiencing severe difficulty. Table 1 provides brief descriptions of a sample of school-community primary, secondary, and tertiary prevention programs with which PARS has been used.

Table 1
A Sample of School-Community Prevention Programs Which Have Been Evaluated with PARS (a)

Primary Prevention Programs

A Primary Mental Health Program, which systematically identifies students in the elementary school grades who are perceived as candidates for developing social or emotional problems, and which utilizes community volunteers trained by the school psychologist to offer counseling to the identified children on a regular basis.

A Parent Education Project, designed to help parents in the community to improve their effectiveness and skills in parenting in order to resolve difficulties they might have with their children or adolescents. A special program has been designed for single parents.

Humanistic Drug Education, which uses police officers trained in humanistic approaches to drug education to conduct a series of exercises with students in all fifth grades in the school system, to develop an appreciation of the police officer's potential as an understanding resource person.

Secondary Prevention Programs

A Teacher Training Program, entitled "Early Intervention and Prevention of Academic and Social Problems in the Classroom," conducted by the school psychologist and provided to ten middle school faculty members, each of whom works with a student exhibiting a classroom adjustment problem.

A Senior Citizen Program, which trains and supervises older residents of the community to provide help to "high risk" children in the elementary schools.

A Group Counseling Program, designed for adolescents experiencing adjustment problems in high school.

Tertiary Prevention Programs

An Outward Bound Program, which takes disruptive adolescents and peer models into a wilderness experience of from one to seven days, and provides them with group challenges such as mountaineering, white-water canoeing, cave exploring, and group problem solving. The program concludes with a community service project.

An Outreach Program, which utilizes social workers and family aides to visit the homes of children experiencing community or school adjustment problems and to provide the family with information and guidance in the use of available community resources.

A Crisis Home Project, which serves runaway and "disenfranchised" adolescents and their families, providing them with crisis counseling and temporary foster-home placement, utilizing professional resources and host families within the community.

(a) Additional information about the design, process, and impact of these programs can be found in Kavanagh (1979) or Kavanagh and Maher (Note 2); more detailed program descriptions can be obtained by writing directly to the author.

PARS consists of three interrelated steps: Program Specification, Program Documentation, and Program Outcome Determination. Each step is discussed below with respect to primary, secondary, and tertiary prevention programs (for a more detailed discussion of each step, not possible here due to space limitations, see the PARS Procedure Manual, which is available on request from the author).

Program Specification Step

The purpose of this step is to specify an "evaluable" prevention program. The step is based upon an assumption that, before evaluation of program process or outcome can occur, certain programmatic conditions must exist: (1) the need for the prevention program must be established, (2) program goals and goal indicators must be identified, (3) program components must be elucidated, (4) assumptions which link programmatic activities to program goals must be explicated, and (5) a design for evaluation of program outcome must be set forth. The program evaluator and program manager work together, as well as in concert with program staff, to design an evaluable prevention program. This task, which takes place prior to the beginning of the school year, consists of clarifying seven programmatic elements which are considered the criteria for an evaluable program (see Table 2).

Table 2
Criteria for an Evaluable Prevention Program (a)

(1) Program client--a description of the particular client served by the program (e.g., high school students with truancy problems), including salient characteristics such as age, sex, socio-economic status, school grades, juvenile justice system contacts, etc.

(2) Client needs and strengths--a description of those areas of performance which are seen by school and community members, as well as by the clients themselves, as being deficits, potential deficits, and strengths. Prevention needs and strengths can be described along several modalities: behavioral (e.g., drug abuse, completion of homework); affective (e.g., temper tantrums, sense of humor); cognitive (e.g., irrational belief systems, competent problem-solving skills); and interpersonal (e.g., teacher compliance, helpfulness to others).

(3) Program goals--those statements of intent which are derived from the needs of the program client. For example, if it was found that clients had "high rates of noncompliance with teacher requests," a prevention goal might be "to prevent future noncompliance with teachers" or, stated alternatively, "to increase compliance."

(4) Goal indicators--the evaluative criteria that are used to judge the extent to which goals are attained. Goal indicators are observable and measurable, and are attached to program goals. For example, if the prevention goal was "to prevent noncompliance with teacher requests," a goal indicator might be "as determined by a frequency of fewer than 10 teacher discipline referrals per month."

(5) Program components--the resources which will be used as part of the program. These resources include human resources (e.g., professional and paraprofessional staff), financial resources (e.g., state and local funds), and activities (e.g., methods and materials).

(6) Validity assumptions--the reasons which link the programmatic activities to program goals and which explicate the "theory of action" of the program. For example, in a teacher training program where middle school teachers were trained in behavior management skills by means of in-service sessions, it was assumed that development of such skills would help prevent pupils from engaging in disruptive behavior.

(7) Evaluation design--the kind of evaluation strategy which will be used in determining (evaluating) program outcome during the Program Outcome Determination Step. This might be an experimental, quasi-experimental, or non-experimental design (Campbell & Stanley, 1966).

(a) Adapted from the PARS Procedure Manual.

These evaluability criteria enable the evaluator and manager to ensure not only that the needs of the present client group are identified but also that the strengths (assets) of that group are identified and taken into consideration in program development and implementation. Also, the step forces the evaluator and manager to determine whether particular non-client groups who might be at risk, but who were not initially identified for the program, also would benefit from the program. The evaluation strategy used to specify the seven elements of a prevention program is termed evaluability assessment (Wholey, 1977), and utilizes two methods: reviews of programmatic documents, such as written descriptions of activities and goals which might be available;
and interviews with program managers, executives, and staff in individual and group formats. To date, experience with PARS indicates that evaluability assessment is a complex activity and differs in certain respects for primary prevention programs as compared with secondary or tertiary ones. For example, with primary prevention programs, it has been found difficult to identify specific client group needs since, often, it is difficult to specify a particular client group, and identified clients may not see themselves as needy. For example, ninth grade pupils may not see themselves as in need of a drug abuse prevention education program. Also, primary prevention goals often are difficult to specify. Thus, the evaluator must spend a considerable amount of time with primary prevention program managers in the identification of client needs and strengths and the specification of program goals. With respect to secondary prevention programs, it often is difficult to obtain agreement on the characteristics and priority needs of the client group. Tertiary prevention programs tend to present the fewest problems in evaluability assessment. The outcome of the evaluability assessment is an "Evaluable Program Design," usually a written document, which is presented to the program manager and which serves as the basis for the development of a program evaluation contract between the evaluator and manager. The written Program Design, which details information about the seven programmatic elements, can be used by the evaluator to measure the extent to which the program is operating as intended, as part of the Program Documentation Step, and to determine the impact of the program on the client, as part of the Program Outcome Determination Step.
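
Although the Evaluable Program Design is a written document rather than a computer artifact, its seven-element structure can be expressed as a simple record. The sketch below, in Python, is purely illustrative: the field names and the completeness check are this writer's rendering of Table 2, not part of PARS itself.

```python
from dataclasses import dataclass, field, fields
from typing import Dict, List

@dataclass
class EvaluableProgramDesign:
    """One field per evaluability criterion of Table 2."""
    program_client: str = ""                                       # (1)
    needs_and_strengths: List[str] = field(default_factory=list)   # (2)
    program_goals: List[str] = field(default_factory=list)         # (3)
    goal_indicators: Dict[str, str] = field(default_factory=dict)  # (4) goal -> criterion
    program_components: List[str] = field(default_factory=list)    # (5) staff, funds, activities
    validity_assumptions: List[str] = field(default_factory=list)  # (6) "theory of action"
    evaluation_design: str = ""                                    # (7) e.g., "quasi-experimental"

    def unmet_criteria(self) -> List[str]:
        """Criteria still unspecified; a non-empty result means the
        program is not yet evaluable under the Table 2 test."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

design = EvaluableProgramDesign(
    program_client="high school students with truancy problems",
    program_goals=["to increase compliance with teacher requests"],
    goal_indicators={"to increase compliance with teacher requests":
                     "fewer than 10 teacher discipline referrals per month"},
)
print(design.unmet_criteria())
# ['needs_and_strengths', 'program_components', 'validity_assumptions',
#  'evaluation_design']
```

In this rendering, evaluability assessment reduces to asking whether every element has been specified; the document reviews and interviews described above are the means of filling in the missing elements.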

Program Documentation Step

The purpose of this step is to measure the extent to which the prevention program, as designed, has actually occurred. It is based upon an assumption that, once an evaluable prevention program has been designed and agreed upon by the evaluator and manager, it is necessary to document the extent to which that program has been implemented. This kind of information is seen as critical to prevention program management since, without knowledge of the degree of program implementation, it will be extremely difficult to relate any program outcomes to program processes (Rutman, 1977). During this step, the evaluator delineates and describes the nature, scope, and
frequency of the prevention program activities which have occurred. In addition, factors which appear to influence the operations of the program are identified, such as loss of motivation of program staff, attrition of participants from the program, and the introduction of school district policies (e.g., budget freezes) which might prohibit the program from continuing as planned. The evaluation strategy used during this step is commonly referred to as process evaluation. Two process evaluation questions serve to focus the evaluation: (1) To what extent has the prevention program described in the Evaluable Program Design been implemented? and (2) Have any negative side effects occurred as a result of program implementation? Two kinds of evaluation methods can be used to answer these questions: retrospective monitoring and naturalistic observation. Retrospective monitoring involves obtaining self-report information from prevention program managers and staff about the manner in which the program has been operationalized. Naturalistic observation involves the evaluator's direct observation of program activities. To date, experience with PARS indicates that some differences exist with respect to conducting process evaluations of primary prevention programs as compared to process evaluations of secondary and tertiary programs. For example, client groups in primary prevention programs are subject to less variability in the behaviors (e.g., level of school attendance) which disrupt full program implementation than are clients of secondary and tertiary prevention programs, who may exhibit greater variability (e.g., higher rates of absenteeism from school). However, in primary prevention programs, it is sometimes more difficult to specify the exact nature of the best program activities to implement, since program goals may be broad-based and client needs or potential needs may be difficult to define. The outcome of this step is a written report, submitted to the program manager, which provides detailed information about the degree to which the prevention program has been implemented, as well as the specific ways in which the program has deviated from the Program Design. This report can serve as a basis for modification of program operations.
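
The degree of implementation reported in this step lends itself to a simple tally of planned versus documented activities. The sketch below is illustrative only; the representation of program activities as session counts is an assumption of this example, not a PARS prescription.

```python
def implementation_summary(planned, documented):
    """Compare planned activities with those documented as delivered.

    planned, documented: dicts mapping activity name -> number of
    sessions (planned vs. recorded through retrospective monitoring
    or naturalistic observation). Returns the overall proportion
    implemented and any deviations from the Program Design.
    """
    delivered = sum(min(documented.get(a, 0), n) for a, n in planned.items())
    total = sum(planned.values())
    deviations = {a: documented.get(a, 0) - n
                  for a, n in planned.items()
                  if documented.get(a, 0) != n}
    return (delivered / total if total else 0.0), deviations

# Hypothetical teacher training program: ten in-service sessions planned.
planned = {"in-service session": 10, "classroom follow-up visit": 5}
documented = {"in-service session": 8, "classroom follow-up visit": 5}
rate, deviations = implementation_summary(planned, documented)
print(f"Degree of implementation: {rate:.0%}")  # Degree of implementation: 87%
print(deviations)                               # {'in-service session': -2}
```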

Program Outcome Determination Step

The purpose of this step is to obtain information about the degree to which the prevention program has been successful in attaining its goals and, when possible, to determine the cost-effectiveness of the
program relative to another program or a non-program condition. This step is based on an assumption that, once a prevention program has been documented as having been implemented, the evaluator and manager are better able to relate program outcomes to program processes (Riecken & Boruch, 1974). The confidence that the evaluator can place in any particular program outcome determination statement, using PARS, is seen as being related to several factors: (a) the evaluability of the prevention program, as clarified in the Program Specification Step; (b) the amount of information available on the nature of the implemented prevention program, as obtained in the Program Documentation Step; and (c) the number of threats to internal and external validity which can be ruled out, which is contingent upon the kind of evaluation design (e.g., experimental, quasi-experimental, case study) that was agreed upon during the Program Specification Step. The evaluation strategy used in this step is referred to as outcome evaluation. The specific kinds of outcome evaluation methods to be employed, however, depend upon the nature of the program and the type of evaluation design which can be used, given that nature. For example, in those situations where no other treatment or non-treatment groups are available, it has been possible to use social indicators and other kinds of unobtrusive measures as measures of the "events" which are to be prevented. However, it is important that these indicators be seen as valid in relation to the constructs which underlie prevention program goals. The amount of progress program clients make toward program goals can also be evaluated by a goal-based approach. In this method, the program evaluator uses the goals and goal indicators outlined in the Program Design. Then, in a systematic manner, the evaluator engages in several interrelated tasks: (a) collects baseline data on the goal indicators; (b) documents that the programmatic activities which are assumed to lead to goal attainment have occurred; (c) collects progress data on goal attainment on a periodic basis; and (d) makes evaluative judgments, at specific time intervals, about the degree of goal attainment. Another kind of evaluation which can be undertaken during this step is consumer satisfaction evaluation. This kind of evaluation enables an evaluator to obtain information on the perceptions of program participants about the program. To date, experience with PARS indicates that useful evaluation methods already exist for evaluating secondary and tertiary prevention program outcomes. For example, evaluation controls are
usually accomplished through time series and multiple baseline designs. For primary prevention programs, however, it is more difficult to specify valid goal indicators and to identify representative normative comparison groups, and it becomes important to use multiple goal indicators (which are useful for all three types of prevention programs). With regard to dissemination of program evaluation outcome information, primary prevention managers require the greatest amount of timely information since, for that type of program, it usually is more difficult to determine the degree of program outcome. The result of the Program Outcome Determination Step is a written report which is presented to the program manager and which provides information such as: (a) the degree to which the program has been successful in fulfilling its purposes; (b) the extent to which the program is "internally valid," that is, the confidence with which one can attribute program outcomes to program process; (c) the extent to which the program is "externally valid," that is, the extent to which an evaluator can generalize about the program to other clients and settings; and (d) specific recommendations, based upon the evaluative information, about how the program manager can improve upon the design of the program for the ensuing school year.
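
The goal-based approach can be made concrete with the goal indicator used as an example in Table 2, "fewer than 10 teacher discipline referrals per month." The function below is hypothetical, a sketch of the baseline-then-periodic-judgment cycle rather than a PARS specification.

```python
def goal_attainment(monthly_referrals, criterion=10):
    """Track a frequency-type goal indicator across the school year.

    monthly_referrals: ordered monthly counts; the first entry is
    treated as the baseline observation, the rest as progress data
    collected on a periodic basis.
    """
    baseline, *progress = monthly_referrals
    return {
        "baseline": baseline,
        "months_meeting_criterion": sum(n < criterion for n in progress),
        "months_observed": len(progress),
        "change_from_baseline": (progress[-1] - baseline) if progress else 0,
    }

# Baseline of 14 referrals, then five months of program operation.
print(goal_attainment([14, 12, 11, 9, 8, 7]))
# {'baseline': 14, 'months_meeting_criterion': 3,
#  'months_observed': 5, 'change_from_baseline': -7}
```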

The Evaluation System: General Observations

PARS has been field-tested with a range of school-community prevention programs, some of which are described in Table 1. From the field tests, several general observations can be made along three dimensions: (a) types of management concerns addressed by the System; (b) kinds of data obtained; and (c) social validity of the System, as judged by prevention program managers. Each dimension is discussed below.

Types of Management Concerns Addressed by the System

Many of the prevention programs that have been evaluated to date have been school-based programs that were funded almost entirely by local boards of education, that is, programs that were internally-funded. Usually, there were no formal demands for evaluation of these programs. For the most part, the managers of these programs were concerned about making program decisions such as: (a) how to develop new prevention programs in other schools; (b) how to assess the needs
of a school as a basis for further program development; and (c) how to detect problems with personnel and procedures in programs that were already implemented. Less frequently was a concern expressed about deciding whether a program should be terminated, or even modified, based on outcome data (e.g., degree of goal attainment). Initially, very few of the school-based prevention programs were in a format conducive to process evaluation and outcome evaluation. In almost all instances, to date, the internally-funded, school-based prevention programs had to be placed into an evaluable format (Table 2). In this regard, the evaluators spent a considerable amount of time in the Program Specification Step with these programs. In contrast to the school programs, the majority of community-based programs were funded by federal and state grants, that is, they were externally-funded programs. Thus, the managers of these programs, although expressing interest in program development and implementation, were concerned primarily with program outcome/impact information. The community-based managers requested that evaluation information be obtained for external accountability purposes. These requests included a need for information about: (a) the number of clients served by the program; (b) the degree to which individual program clients have progressed as a result of participation in the program; (c) the extent to which overall program goals have been attained; and (d) the cost of the program in relation to program outcome. As compared to the internally-funded school-based programs, most of the externally-funded programs were in an evaluable format. The evaluators reported this to be so because the project grant proposals usually included, to some extent, written program goals, evaluative criteria, descriptions of program components, and a plan for program evaluation. Thus, with externally-funded programs, the evaluators were able to spend more time in the Program Documentation and Program Outcome Determination Steps.
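
Item (d) in the list above, the cost of the program in relation to program outcome, can be illustrated with a simple computation. The figures and ratios below are hypothetical; PARS does not prescribe a particular cost-effectiveness index.

```python
def accountability_figures(total_cost, clients_served, goals_attained, goals_set):
    """Cost and outcome ratios of the kind requested by managers of
    externally-funded programs for external accountability reports."""
    return {
        "cost_per_client": total_cost / clients_served,
        "goal_attainment_rate": goals_attained / goals_set,
        "cost_per_goal_attained": total_cost / goals_attained,
    }

# A hypothetical externally-funded outreach program.
print(accountability_figures(total_cost=24000, clients_served=60,
                             goals_attained=45, goals_set=60))
# cost_per_client 400.0, goal_attainment_rate 0.75,
# cost_per_goal_attained ~533.33
```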

Kinds of Data Obtained by the System

The kinds of data obtained from the use of the System have been largely a function of the step of the System employed by the evaluator. For the Program Specification Step, the data obtained have been primarily descriptive and narrative in nature. This is not surprising, since the purpose of this step is to provide a written, evaluable program description. On several occasions, however, program
managers have requested that the program description be in an outline form, supplemented with visual aids such as graphs, figures, and flow charts, with more detailed narrative descriptions included as appendices. For the Program Documentation Step, which focuses primarily on process evaluation, the data have consisted of: (a) nominal data, such as data on the amount of services planned vs. services rendered; (b) ordinal data, such as the degree (amount) of services provided by various staff members; and (c) narrative descriptions, such as listings and notations of the positive and negative side effects as perceived by program staff and clients. For the Program Outcome Determination Step, which focuses on outcome evaluation, the data have consisted of descriptive statistical data of an ordinal and interval nature, such as the degree to which various program clients have attained individual goals. In addition, for some programs, goal attainment scaling (Kiresuk & Sherman, 1968) has been introduced as a way of providing goal attainment indices for individual program goals and overall program goals.
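
Goal attainment scaling summarizes a set of goal ratings, each on a -2 to +2 scale with 0 representing the expected level of outcome, as a single T-score. The formula below is the standard one from Kiresuk and Sherman (1968), with the conventionally assumed intercorrelation rho = 0.30 among goal scores; the code itself is an illustrative sketch, not part of PARS.

```python
from math import sqrt

def gas_t_score(scores, weights=None, rho=0.30):
    """Goal attainment scaling summary score (Kiresuk & Sherman, 1968):
    T = 50 + 10*sum(w*x) / sqrt((1 - rho)*sum(w**2) + rho*sum(w)**2)
    """
    weights = weights or [1.0] * len(scores)
    numerator = 10 * sum(w * x for w, x in zip(weights, scores))
    denominator = sqrt((1 - rho) * sum(w * w for w in weights)
                       + rho * sum(weights) ** 2)
    return 50 + numerator / denominator

# Three equally weighted program goals: one at the expected level of
# outcome, one somewhat better, one much better than expected.
print(round(gas_t_score([0, 1, 2]), 1))   # 63.7
```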

Social Validity of the System

Besides obtaining information on the experimental validity of an intervention (i.e., PARS) through field tests, it is important to assess the social validity of the intervention as perceived by persons who have not been part of it (Kazdin, 1977). In order to determine whether PARS would be viewed as desirable by school-community prevention program managers, a social validation study was conducted. Five school prevention program managers, from five different school districts that were not involved with PARS, were each shown one randomly selected Evaluable Program Design and one Non-Evaluable Program Design of a prevention program; the managers were not aware of the designation of either document. They were asked to review each document and to select the one which would be most useful to them in evaluation of the prevention program. Results indicated that all five managers rated the Evaluable Program Design as potentially more useful to them than the Non-Evaluable Program Design. In addition, the five superintendents of schools from the same five school districts were asked to review the PARS Procedure Manual and to indicate, in writing, their reactions to the System. All five superintendents indicated that PARS was a practical approach to evaluation and that they would be willing to have it implemented with their school district's prevention programs.


Summary and Conclusions

This paper has presented information about a management-oriented approach to program evaluation. During the two years that the Program Analysis and Review System (PARS) has been in operation, it has proven to be a practical, cost-effective approach to the evaluation of school-community prevention programs in several communities. This assertion is based upon field tests, as well as verbal and written feedback from prevention program managers and agency executives (e.g., school superintendents, board of education members, mental health administrators). In addition, the project has been funded for further development and dissemination purposes.

Reference Notes

1. United States Senate Committee on Human Resources. Hearings before the Subcommittee on Alcoholism and Drug Abuse of the Committee on Human Resources, United States Senate, Ninety-Fifth Congress, March 24 and 25, 1977. Washington, D.C.: U.S. Government Printing Office, 1977.

2. Kavanagh, T.E., & Maher, C.A. A systems approach to the development, implementation, and evaluation of school-community prevention programs in Bergenfield, New Jersey: Project year 1. A report submitted to the New Jersey State Department of Health, Division of Alcohol, Narcotics, and Drug Abuse, Trenton, New Jersey, February, 1979.

References

Berberian, R.M., Gross, C., Lovejoy, J., & Paperella, S. The effectiveness of drug education programs: A critical review. Health Education Monographs, 1976, 4, 377-398.

Bloom, B.L. Community mental health: A general introduction. Monterey, CA: Brooks/Cole, 1977.

Campbell, D.T., & Stanley, J.C. Experimental and quasi-experimental designs for research. Chicago: Rand McNally College Publishing Company, 1966.

Demone, H.W., Schulberg, H.C., & Broskowski, A. Evaluation in the context of developments in human services. In C.C. Attkisson, W.A. Hargreaves, M.J. Horowitz, & J.E. Sorensen (Eds.), Evaluation of human service programs. New York: Academic Press, 1978.

Harper, R., & Balch, P. Some economic arguments in favor of primary prevention. Professional Psychology, 1975, 6, 17-25.

Kavanagh, T.E. Whither prevention: One community's efforts. School Psychology in New Jersey, 1979, 20, 5-11.

Kazdin, A.E. Assessing the clinical or applied performance of behavior change through social validation. Behavior Modification, 1977, 1, 427-452.

Kelly, J.G., Snowden, L.T., & Munoz, R.F. Social and community psychology. Annual Review of Psychology, 1977, 28, 323-361.


Kiresuk, T.J., & Sherman, R.E. Goal attainment scaling: A general method for evaluating comprehensive community mental health programs. Community Mental Health Journal, 1968, 4, 443-453.

Riecken, H.W., & Boruch, R.F. Social experimentation: A method for planning and evaluating social intervention. New York: Academic Press, 1974.

Rutman, L. Planning an evaluation study. In L. Rutman (Ed.), Evaluation research methods: A basic guide. Beverly Hills, CA: Sage Publications, 1977.

U.S. Department of Health, Education, & Welfare. Education of handicapped children and incentive grants program. Federal Register, December 30, 1976, 56966-56998.

Wholey, J.S. Evaluability assessment. In L. Rutman (Ed.), Evaluation research methods: A basic guide. Beverly Hills, CA: Sage Publications, 1977.
