How Evaluation Findings Can Be Integrated into Program Decision Making

Judith Blanton* and Sam Alley

ABSTRACT: The paper describes difficulties in integrating the findings of research and evaluation studies back into the ongoing process of decision making. It suggests how to present information in a way that improves its intellectual understandability and decreases affective resistance to the acceptance of new information. The paper also describes common blocks met in attempts to implement recommendations and methods to overcome them.

An evaluation study involves formulating the question(s), acquiring data, interpreting them, and making recommendations for action. Data are basic to informing personnel of the results of their actions; interpretation gives meaning to data; recommendations for action are necessary if feedback is to influence change. The effectiveness of feedback is related to the ability of those affected to hear and understand the data. Decision makers then generate and implement the recommendations for change. This article discusses the obstacles to "hearing" information and to implementing recommendations, and describes methods found effective in overcoming those obstacles.

DEFINITION OF INTEGRATION OF FEEDBACK

Integration of feedback refers to the structural change in an organization as a result of the feedback inquiry into its behavior or direction.

OBSTACLES TO INTEGRATION OF FEEDBACK

Two obstacles interfere with the integration of feedback. The first involves blocks to "hearing," attending to, or understanding feedback. The second is a staff's unwillingness to implement recommendations for program change. These obstacles must be overcome if the recommendations of a program evaluation are to produce action.

"HEARING" OBSTACLES

Hearing obstacles may be classified as cognitive and affective. Cognitive difficulties refer to problems in making intellectual sense of feedback. Affective difficulties are emotional resistances toward the acceptance of new information.

*Ms. Blanton is affiliated with the Wright Institute, 2728 Durant Avenue, Berkeley, California 94704. Mr. Alley is affiliated with the Social Action Research Center, Berkeley, California.


Cognitive Difficulties

Cognitive difficulties are diminished by making the presentation of information appropriate and relevant.

Presentation of feedback. A person's ability to absorb feedback is determined, in part, by the style of the presentation. Findings are usually prepared by one individual or group for presentation to another. These findings should be presented in a way that addresses the specific needs and interests of the audience. If the presentation is esoteric or overly technical, the audience will lose interest and may not understand the report. There is evidence (Weiss, 1974) that staff members are more likely to learn from information disseminated through workshops or discussions than through written reports. Many people find written material more difficult to assimilate than information presented orally, particularly when the material is written in professional jargon familiar only to initiates. If the information is to be integrated, it should be presented in a manner that makes it easily understood. Visual displays and charts can usefully supplement the narrative.

Relevance. The information must be not only readable and understandable but also relevant to the audience. Different individuals within an organization are interested in different types of feedback. A policy maker will want feedback on whether to expand, contract, or change the program, whereas a program manager may want to know about methods, structure, technique, and staffing patterns. It is useful to present written feedback in a separate report to each group, rather than to rely on a single, unwieldy report that discusses all the findings. Short summary statements of data, selected to answer the particular questions relevant to specific groups, are more likely to be read and attended to.

Interpretation. Findings do not speak for themselves; generally there are several ways to interpret data. The data from a feedback system should be easily interpreted, and an ambiguous measurement provides a poor basis for interpretation. Suppose the occupancy rate in a housing project drops. How should this be regarded? Do the findings suggest the project is so successful that clients no longer need low-cost housing, or is it so unappealing that clients prefer to move? Unless other information is provided, we cannot be sure what this "indicator" indicates. The "meaning" of possible findings should be explored when the measuring tools are being developed. If findings are technical, the individuals responsible for the data should provide any necessary warnings about their interpretation. (Were there difficulties with the methodology, the sample, the analyses, the data collection?) Such explanations may help the staff develop and interpret alternative inferences from the data.

Affective Difficulties

Affective difficulties are best met by reducing the causes of emotional resistance and enhancing openness to new ideas. Techniques for doing this include making feedback credible, making information acceptable, avoiding polarization between evaluators and other staff, dealing with ideological resistance, pacing findings, and reducing disruption.

Making feedback credible. The believability of the study is always important, particularly if the feedback contradicts the intuitive ideas of the staff or community. Believability will be enhanced by maximizing the quality and precision of the methodology used to gather, process, and integrate information. Although any action research program will fall short of classical experimental design in its methodological precision, it is worthwhile to make the question asking, information gathering, and analysis as systematic and careful as possible. Believability is also affected by the credibility of those gathering or interpreting the information. Formerly, credibility tended to be assessed in terms of professional credentials, which were assumed to correlate highly with competence. Recently, credibility has also been linked to personal qualities such as ethnic background, sex, and age. It is a common joke that doctors are reluctant to take the word of anyone but another doctor. Chicanos, blacks, native Americans, and women have recently pointed out, with good reason, the distortions in certain data gathered and integrated by white, male researchers. If an organization has a subgroup that finds it difficult to accept information from outsiders, it is often helpful to set up a feedback system in which members of this group are directly involved in the collection and analysis of the information that will affect them.

Making information acceptable. An organization can be viewed as an organism: the input of information helps it to grow and develop. Information that is perceived as "negative" or "critical" can be used to make the organization more intelligent about its behavior, yet the organization may reject this information as unpalatable, as a child might reject nourishing vegetables. Parents develop numerous and subtle strategies to get their children to consume nutritious but unappealing food. Similarly, an organization can create a setting that increases the incorporation of critical information. Research has shown positive reinforcement to be more effective in changing behavior than negative feedback. Nevertheless, the forms of organizational feedback sessions often make evaluation synonymous with punishment. Individuals become rigid when they interpret feedback as critical. Rather than attacking or attempting to break through these defenses, it is a good strategy to create a context in which negative feedback can be more acceptable. One method is to talk first about what is going right. An atmosphere that says, "This is good; let's see how it can be done better" is more conducive to growth than one that moves people to defend themselves against what they perceive to be an attack. Rather than focusing on the judgmental quality of the information, evaluation should focus on the use of the information to improve the intelligence of the behavior.

Avoiding polarization between evaluators and other staff. Difficulty in hearing information can be increased by the attitude of those providing the feedback. Much of the resistance that program staff may have to traditional evaluation is based on previous experiences with outside evaluators. The feeling is often expressed that evaluators treated staff and clients as things rather than as people; that they were used rather than helped. Wide gaps in status and salary between evaluators and program staff have also contributed to ill will. Program staff complain that while they do the work, evaluators get the credit by publishing the results.

Studies have suggested that there are personality differences between evaluators and the staff providing direct services (Weiss, 1972). Other evidence (Rodman & Kolodny, 1965) suggests that differences in role definition are important. To be effective in their jobs, the service staff must be "committed," and the evaluation staff must be "skeptical." This skeptical attitude may lead the program staff to accuse evaluators of not appreciating program and treatment difficulties and of making "unreasonable" demands for the sake of research. If you are "committed" to a treatment program, the testing of that program may seem irrelevant. In one case the research staff demanded that the working staff produce numerous written reports on each client based on caseworker interviews. Besides cutting into treatment time, the questions were considered a threat to the therapeutic relationship because they probed into private areas of the clients' lives, such as their legal and welfare histories.

Evaluators complain that staff members, although committed, are unscientific in their outlook and that they sabotage opportunities to discover useful information for the sake of convenience and short-term benefits. For example, the staff in a pilot project believed in a new treatment method, but little empirical evidence existed about its effects. The evaluators wanted to gather data on the method's effectiveness. Staff initially agreed to cooperate but objected to evaluation procedures using program resources and staff or client time. They believed that the research effort would detract from their treatment of clients. The result: by the end of the pilot project, there was no empirical evidence that the treatment was effective. This lack of proof may keep the funding agent from expanding or continuing the project.

In an attempt to bridge these differences we have experimented with a model (Blanton, 1976; Alley, Blanton, Churgin, & Grant, 1974/1975; Blanton & Alley, 1976) that uses staff as the "researchers" in their own self-study. Specialists in the techniques of data collection, analysis, and interpretation are then used as collaborators in a project's own self-discovery rather than as critics of the program.

Dealing with ideological resistance. If the organization is committed to a specific ideology, information that conflicts with this ideology will be given scant credence. If the staff has a major investment in a particular type of therapy, and a study indicates that this therapy has less effect than no treatment at all, it is doubtful that the staff will quit their jobs or make a radical departure to a different mode of treatment. If feedback findings contradict the strong ideological biases of the staff, it is unlikely that these findings will be heard. The best that can be hoped for in this circumstance is to involve the people who most support the ideology in a careful and more sophisticated examination of the issue.

Pacing findings. Only a limited amount of information can be assimilated by an individual at any one time. Many staff members doing evaluations make the mistake of collecting too much information about too many areas, resulting in data that have no particular focus. It is advantageous to collect only a few pieces of information--those most immediately relevant to decision making--and to collect more only as the organization's capacity to gather and interpret data increases. Feedback systems should be designed to be continuous and to provide data in small, easily assimilated units. The advantage of giving many different items of information over the course of the program is twofold: it can provide information in many areas and prevent any one item from becoming the focus of all the attention and anxiety.

The presentation of controversial findings is a common problem for external evaluators. They often report their research without adequately preparing organization staff to hear the results. It is important for key individuals to get an indication of the evaluation's findings before they are made public. This allows these individuals to become accustomed to the new ideas, prepare for the necessary changes, and even assume credit for uncovering difficulties and instituting corrective measures. Staff members will not defend themselves against the findings if they, themselves, present the negative feedback and simultaneously initiate strategies for correction.

Reducing disruption. One source of resistance to feedback is the staff's feeling that data collection disrupts ongoing activities. This is a particular problem when the service staff feels isolated from the evaluation staff. In such a situation friction often occurs over the selection of subjects for the evaluation or study. The evaluators may want to set up a control group to test the impact of a specific service on clients; the service staff may not want to deny services to anyone who needs them. If the staff agrees that only some subjects will receive services, they may want to select the clients themselves, rather than have the choice made at random or by other criteria established by the evaluators. The exclusion of certain subjects from services may draw objections not only from staff but from the community or clients as well.

The demand that the evaluation process places on staff time is another objection to data gathering. Service staff members generally do not like to take themselves or their clients away from valuable service time to be interviewed, tested, or to fill out forms. Staff may be concerned that insensitive questioning can damage clients' relationships with the agency. This fear has a basis, since there have been more than a few instances in which clients have been treated as objects for "research" purposes. Those who have worked in academic settings, with docile sophomore students as subjects, are often surprised when they first confront a hostile group of subjects who consider the questions being asked irrelevant, insulting, or potentially damaging.

If obtaining feedback or research information entails changes or additions in record keeping, there is likely to be resistance, even if the staff is committed to finding answers to the questions. Unobtrusive Measures (Webb, Campbell, Schwartz, & Sechrest, 1966) contains valuable tools for individuals attempting to gather feedback information without making undue demands on those from whom information is needed. In dealing with the resistance of clinical staff to data collection, it is beneficial to use nonservice staff (receptionists, secretaries, data aides) to collect simple pieces of information for the feedback system (Beigel, 1974). If staff members are involved in obtaining answers to questions, however, they should not be overburdened by this work. To avoid or reduce resistance to record keeping, questions should be included that will benefit the people collecting the information, and the staff should be assured that these data will be fed back to them in a helpful manner. Disruptions are less irritating if staff understand that evaluative interventions can improve services for clients.

It is possible that the gathering of information, alone, can provoke change. A mental health project located within a larger health service organization was concerned about the small number of referrals received from the other medical units within the organization. The staff felt that many of the clients seeking other services also had mental health difficulties, but that these problems were overlooked by doctors who focused only on physical disorders. They decided to ask doctors in the other services for a weekly count of the number of clients referred to the mental health project. The staff discovered that as the doctors began to keep records, the number of referrals from within the organization increased. Asking doctors for the number of people they referred is, in itself, an intervention, since focusing on this activity tends to increase the doctors' attention to the area and thus increases the number of referrals. Such an "interfering" measure would not be good for a research study, but as a strategy for increasing internal organizational referrals, it is excellent.


DIFFICULTIES IN IMPLEMENTING RECOMMENDATIONS

Once information has been heard and understood, there is still the task of using this information to change behavior. Here, too, individual and organizational resistances arise. Methods of reducing resistance to behavior change include decreasing general resistance to change, increasing involvement in the findings, making implications clear, organizing information, timing feedback, saving time and money, and getting adequate support for the implementation of findings.

Decreasing General Resistance to Change

A Dylan song contains the line, "He who is not busy being born is busy dying." Yet change, even positive change, upsets an organization's equilibrium. If change requires additional work, or departure from a comfortable routine, there will be resistance. In attempting to overcome this inertia, it is important to demonstrate how the change will be beneficial. Resistance may come not only from within the organization, but also from related groups, such as the clients of the organization, the agencies that interface with it, and the funding agency. Even if a relationship with an external group is not completely satisfactory, there will be resistance to change once that relationship has become established. Internal change may lead to changes in relationships with outside groups. It is useful to clarify for those involved the effects of the change and to demonstrate, if possible, its benefits. Resistance is decreased when change is seen as reformation, not revolution--a return to the earlier values and principles of the organization rather than a radical change in direction.

Increasing Involvement in the Findings

There is overwhelming evidence from action research that individuals and groups change their behavior if they are involved in gathering or interpreting (or both) the new information, rather than being passive recipients. The participation of the groups who are to be affected is encouraged, not only to incorporate their ideas into the planning process, but also to obtain their investment in the program and in the feedback system. The program becomes their own. If the feedback system is split off and delegated to an outside group, staff members often see the evaluation as an intrusion of unempathetic, hostile outsiders rather than as a means of increasing their ability to make intelligent decisions.

Making Implications Clear

When presenting data, it is useful to spell out their implications. The process of generating inferences from the data can be educational for those participating. An example of this educational process might occur in the following way: feedback data lead to a recommendation that the staff begin to use a new method of treatment. The activities required to implement the recommendation are spelled out, and staff members begin to perceive that measures, such as an in-service program in the new technique, must be implemented.

Organizing Information

It is useful to arrange recommendations and their implications into categories related to possible implementation. If the findings have different implications for different groups--such as the clinical staff, the administrative staff, the board of the local program, the umbrella agency, or the federal funding source--then separate listings of recommendations might be made for each of these groups.

Timing Feedback

The timing of the feedback presentation is crucial to its integration. The gathering and analyzing of data must be paced so that people have access to the information before decisions must be made. Researchers should remain aware that a report must be completed before the end of a fiscal year if it is to be integrated into the current year's decision making, and it should be prepared in time to allow careful examination before critical decisions are made. If the final report is not available, an interim report might be substituted.

Saving Time and Money

Even if the individuals involved are committed to change, implementation may not take place if they see the change as costly in time, money, and effort. If the staff can be convinced that, in the long run, changes will indeed save time, money, and effort, implementation will be easier. It is worth taking time to determine how program modifications will improve the lot of those giving and receiving services. It is helpful to explain how changes will provide the staff with more skills, salary, prestige, or better working conditions, or how services to clients will cost less, be given more rapidly, or be more effective. If the changes are seen as merely troublesome, they are unlikely to be integrated.

CONCLUSIONS

Making sense of data turns them into information that can form the basis of meaningful and useful recommendations. Recommendations must be translated into behavioral or structural changes to influence program efficiency, effectiveness, and relevance. To make use of data, it is necessary that those involved in implementing a program hear and understand the information, that they be willing to accept its validity, and that they be able and willing to make behavioral changes on the basis of new information and recommendations. Resistances may arise to hearing and understanding new information, as well as to implementing behavioral changes. For the staff to appreciate and utilize the evaluation, it is helpful for them to have involvement and investment in the findings and open communication from the outset with those gathering, analyzing, and interpreting the information.

REFERENCES

Alley, S. R., Blanton, J., Churgin, S., & Grant, J. D. Strategies for change in community mental health: New careers (Vols. 2 and 3, Trends over projects). Social Action Research Center, September 1974. (JSAS Catalog of Selected Documents in Psychology, 1975, 5.)

Beigel, A. Evaluation on a shoestring: A suggested methodology for the evaluation of community mental health services without budgetary and staffing support. In W. A. Hargreaves, C. C. Attkisson, M. H. McIntyre, L. M. Siegel, & J. E. Sorensen (Eds.), Resource materials for community mental health program evaluation. San Francisco: National Institute of Mental Health, 1974.

Blanton, J. Self-study of family crisis intervention in a police unit. Professional Psychology, February 1976, 61-67.

Blanton, J., & Alley, S. Program development: A manual for organizational self-study. JSAS Catalog of Selected Documents in Psychology, 1976, 6, 26.

Rodman, H., & Kolodny, R. L. Organizational strains in the researcher-practitioner relationship. In A. Gouldner & S. M. Miller (Eds.), Applied sociology: Opportunities and problems. New York: Free Press, 1965, 93-113.

Webb, E. J., Campbell, D. T., Schwartz, R. D., & Sechrest, L. Unobtrusive measures: Nonreactive research in the social sciences. Chicago: Rand McNally, 1966.

Weiss, C. H. Evaluation research. Englewood Cliffs, N.J.: Prentice-Hall, 1972.

Weiss, C. H. Between the cup and the lip. In W. A. Hargreaves, C. C. Attkisson, M. H. McIntyre, L. M. Siegel, & J. E. Sorensen (Eds.), Resource materials for community mental health program evaluation. San Francisco: National Institute of Mental Health, 1974.
