

Program Planning and Evaluation

Toward Best Practice in Evaluation: A Study of Australian Health Promotion Agencies

Louise J. Francis, BHSci (Hons)1, and Ben J. Smith, PhD1

1. Monash University, Melbourne, Victoria, Australia

Health Promotion Practice, September 2015, Vol. 16, No. 5, 715-723. DOI: 10.1177/1524839915572574. © 2015 Society for Public Health Education.

Authors’ Note: Address correspondence to Ben J. Smith, PhD, School of Public Health and Preventive Medicine, Monash University, Level 3, 89 Commercial Road, Melbourne, Victoria 3004, Australia; e-mail: [email protected].

Evaluation makes a critical contribution to the evidence base for health promotion programs and policy. Because there has been limited research about the characteristics and determinants of evaluation practice in this field, this study audited evaluations completed by health promotion agencies in Victoria, Australia, and explored the factors that enabled or hindered evaluation performance. Twenty-four agencies participated. A systematic assessment of 29 recent evaluation reports was undertaken, and in-depth interviews were carried out with 18 experienced practitioners. There was wide variability in the scope of evaluations and the level of reporting undertaken. Formative evaluation was uncommon, but almost all reports included process evaluation, especially of strategy reach and delivery. Impact evaluation was attempted in the majority of cases, but the designs and measures used were often not specified. Practitioners strongly endorsed the importance of evaluation, but the reporting requirements and inconsistent administrative procedures of the funding body were cited as significant barriers. Budget constraints, employment of untrained coworkers, and lack of access to measurement tools were other major barriers to evaluation. Capacity building to strengthen evaluation needs to encompass system-, organizational-, and practitioner-level action. This includes strengthening funding and reporting arrangements, fostering partnerships, and tailoring workforce development opportunities for practitioners.

Keywords: evaluation; capacity building; health promotion; program management

Background

Evaluation has long been recognized as a fundamental element of health promotion program management (J. Green & Tones, 2010; L. Green & Kreuter, 2005) and a core competency for practitioners in this field (Allegrante et al., 2009). When undertaken effectively, evaluation allows program planners to gain insights regarding the overall reach of a program, determine how well interventions deliver on objectives, assess return on investment, identify gaps in existing programs, and obtain guidance for future programming (Bauman & Nutbeam, 2013; Kirsh, Krupa, Horgan, Kelly, & Carr, 2005; Pettman et al., 2012). Evaluation findings can offer valuable evidence to decision makers by demonstrating what is achievable in real-world contexts (Bowen & Zwi, 2005) and offering insights about “what works for whom in what circumstances” (Cherney & Head, 2010, p. 510). Furthermore, evaluation has been reported to improve staff ownership of and commitment to programs (Kirsh et al., 2005) and to increase the likelihood of sustainability (Shediac-Rizkallah & Bone, 1998).

A number of writers have discussed the challenges inherent in the evaluation of health promotion initiatives, including documenting the complex and context-dependent implementation of strategies, defining appropriate measures of impact, and establishing a causal relationship between strategies and their outcomes when control groups can rarely be assembled (Hartz, Goldberg, Figueiro, & Potvin, 2008; McQueen, 2001; Nebot, 2006; Nutbeam, 1998; O’Connor-Fleming, Parker, Higgins, & Gould, 2006).


However, there have been few formal investigations of evaluation practices in the field of health promotion. One review of articles published in health promotion journals over 10 years found that 5.1% of those published in 1996 reported evaluation findings, increasing to only 6.8% in 2007 (Potvin & McQueen, 2008). This provided a crude indicator of the extent of evaluation, given that many evaluation reports would reside in the unpublished “grey literature.” A study of the evaluation practices of community health services in South Australia found that one third of the evaluations reported used only one method (usually a participant follow-up survey), and few reported on response rates, representativeness, or data analysis methods (Jolley, Lawless, Baum, Hurley, & Fry, 2007).

Qualitative studies with health promotion practitioners have set out to shed further light on the factors that may affect the extent and quality of evaluation. Interviews with staff from six health promotion institutes in the Netherlands found that respondents were reluctant to allocate funds to evaluation because they did not see it as their core business (Brug, Tak, & Te Velde, 2011). Other constraints were limited time, the capacities of external partners involved in projects, and difficulties in identifying program goals that were measurable. The South Australian study referred to above included seven focus groups with community health practitioners and found that time and resource constraints, the absence of an evaluation culture, a shortage of skilled practitioners, and limited use of evaluation findings were major barriers (Jolley et al., 2007). In-depth interviews with representatives of 61 community-based organizations undertaking HIV prevention projects in the United States revealed that negative attitudes among staff, insufficient funds, difficulties in collecting data from community members, and lack of confidence about the validity of measures were major barriers to evaluation (Napp, Gibbs, Jolly, Westover, & Uhl, 2002). Another Australian study involving 40 staff and volunteers from youth mental health programs also found that inadequate funding and the challenges of undertaking data collection with vulnerable participants were impediments to practice, in addition to limited evaluation skills among staff (Lobo, McManus, Brown, Hildebrand, & Maycock, 2010).

While empirical research is limited, there is a well-developed body of theory concerning both evaluation capacity and knowledge utilization. The model for evaluation capacity building identifies organizational learning capacity as a key influence on evaluation practice (Preskill & Boyle, 2008). This comprises leadership for evaluation, a culture of inquiry, systems and structures to facilitate evaluation, and channels for communication of evaluation findings.

Communities of practice (Brehaut & Eva, 2012) is a theoretical perspective that highlights the role that social networks play in learning and knowledge utilization within organizations. These networks provide personal support, sharing of expertise, and opportunities for collaborative decision making about evidence gathering and use. The Framework for Evidence-Informed Policy and Practice (Bowen & Zwi, 2005) adds to other theories of knowledge utilization by highlighting the influence that external factors, especially political will, support from opinion leaders, and community pressures, have on efforts by organizations to use evidence in decision making. These perspectives provide a broader understanding of the individual, organizational, and contextual factors that may have a bearing on evaluation in health promotion.

Given the importance of evaluation to program management in this field, there is a clear need for further research to assess the strengths and gaps in practice and to identify the major influences on evaluation performance. The study reported here was undertaken to audit evaluation practices in a large metropolitan region in Victoria, Australia, which has well-established health promotion agencies with a substantial history of programs addressing a range of public health priorities. It investigated how evaluation is undertaken, the factors that influence the extent and quality of evaluation activity, and the actions believed necessary to build evaluation capacity.

Method

Study Design

A sequential mixed-methods study design was used, with the qualitative data collection phase having dominant status over the quantitative phase (Leech & Onwuegbuzie, 2009). This involved a systematic audit of project evaluation reports, followed by semistructured qualitative interviews with health promotion practitioners. Ethics approval for the study was granted by the Monash University Human Research Ethics Committee (No. 201200394).

Sampling and Recruitment

Health promotion services funded by the Victorian Department of Health and based in the Eastern, North West, and Southern Health Department Regions in metropolitan Melbourne were invited to take part in the study. Together these regions cover a population of 4.1 million people (74% of Victoria). In the first instance, the Health Promotion Advisors responsible for each region provided details of all agencies implementing health promotion programs within their region.


Agencies were then approached to participate in the study using the contact details provided by the Regional Advisors. Participating organizations were asked to provide all evaluation reports for projects completed between 2008 and 2011. Projects of any size and duration on which health promotion evaluation could be carried out were eligible for inclusion. With advice from the Regional Health Promotion Advisors, practitioners from these agencies with experience in planning, managing, and evaluating projects over the past 3 years were identified. These practitioners received an e-mail invitation and a follow-up telephone call to recruit them to the study. Written consent was required prior to study participation.

Data Collection

A widely used health promotion evaluation framework (Bauman & Nutbeam, 2013; Prevention and Population Health Branch, 2010), which aligns the strategies, objectives, and goals specified in project plans with levels of evaluation (formative, process, impact, and outcome evaluation), was used to develop a checklist for auditing the content of the evaluation reports. The audit also documented the locality of program implementation and the health issue addressed. Prior to completion of the audit (by LF), both authors reviewed a subset of the reports, compared their findings to identify discrepancies, and agreed on the definitions to apply in the audit process.

Semistructured telephone interviews were carried out (by LF) with the health promotion practitioners. An interview guide was developed with the majority of questions written in an open-ended format, to encourage more open disclosure of information and allow interviewees greater latitude in how they answered (Richards & Morse, 2007; Serry & Liamputtong, 2010). The guide was independently reviewed by a Regional Health Promotion Advisor to determine whether it addressed issues relevant to health promotion and evaluation practice within government-funded agencies and whether it was written clearly. The interviews focused on evaluation planning, designs used, perceived benefits, reporting processes, and the resources and support used. Interviewees were invited to discuss the barriers to evaluation and the steps needed to build capacity, and the interviews explored the role of the individual, organizational, and systemic factors suggested by previous research and evaluation capacity theories as influences on practice. Interviews were audiotaped, took place between June and October 2012, and ranged from 30 to 70 minutes in duration. Participants reviewed and approved the use of their interview transcripts.

Data Analysis

Quantitative data from the audit of evaluation reports were summarized descriptively. The thematic coding of the interview transcripts was undertaken by one of the authors (LF) using QSR International NVivo 9 software. Three types of coding were undertaken: descriptive, topical, and analytical (Richards, 2009). Descriptive coding captured general information about the key informants, such as organization name and qualifications. Topical coding involved sorting and coding data into topics aligned to the interview themes. The final, analytical coding focused on explanations of evaluation practices and the rationale for recommended capacity-building steps. Quotes were selected to provide examples that best reflected the dominant themes that emerged from the interview data (see Table 2).
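
As an illustration only (this is not part of the study's methods or tooling), a descriptive summary of the audit checklist could be produced with a short script like the one below; the data structure, field names, and values are hypothetical.

# Minimal sketch: tallying how many audited reports documented each level of
# evaluation (formative, process, impact, outcome). Hypothetical records only;
# the actual audit coded 29 reports against a published framework.
from collections import Counter

audit_records = [
    {"report_id": 1, "levels": {"formative", "process", "impact"}},
    {"report_id": 2, "levels": {"process", "impact", "outcome"}},
    {"report_id": 3, "levels": {"process"}},
]

def summarize(records):
    """Print a simple count of reports documenting each evaluation level."""
    counts = Counter(level for record in records for level in record["levels"])
    total = len(records)
    for level in ("formative", "process", "impact", "outcome"):
        print(f"{level:>9}: {counts.get(level, 0)}/{total} reports")

summarize(audit_records)

Run on the hypothetical records, the sketch prints counts in the same "n/N" form used in Table 1 (e.g., "process: 3/3 reports").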

Results

Twenty-four of the 32 organizations invited to take part in the study agreed to participate. Twenty-nine evaluation reports were received (one to two per organization) from 23 of the recruited organizations. Practitioners from 18 organizations participated in semistructured interviews.

Characteristics of Evaluations

The audit found that formative evaluation was not widely reported, with 7 of 29 reports documenting this (Table 1). All of the reports presented some form of process evaluation. Evaluation of strategy delivery (18/29) and reach (27/29) was common; however, strategy exposure (8/29) and program context (12/29) were evaluated less frequently. The methods used to collect process evaluation data included audits against program action plans, documentary reviews of meeting minutes, program attendance records, feedback surveys, and qualitative feedback gathered through interviews and focus groups.

A high proportion of reports (26/29) showed completion of some form of impact evaluation, which in most cases was the evaluation of a single program objective, in line with the minimal reporting requirements of the Department of Health. While methods were not clearly stated in a number of instances, those reported most often were pre- and postsurveys, participant interviews, or focus groups. Respondent numbers were generally small, and the nature and quality of the data collection instruments used were unclear. Outcome evaluation was documented in few instances (3/29).


Table 1. Findings From Audit of Evaluation Reports (N = 29)

Type of Evaluation | Definition (a) | No. Reported | Methods Reported
Formative | Assesses suitability of resources and program elements | 7 | Pretest surveys, focus groups
Process: Delivery | Determines whether strategies were delivered as planned, using methods and materials as designed | 18 | Audit against action plan, document review, project journals
Process: Reach | Rates of participant recruitment and participation | 27 | Attendance records, participant surveys
Process: Exposure | Determines if participants were aware of the health issue being addressed and received program components | 8 | Participant surveys, in-depth interviews, focus groups, case studies
Process: Context | Describes the factors that influenced the quality of program implementation | 12 | In-depth interviews, focus groups, case studies
Impact | Assesses short- and medium-term impacts of strategies | 26 | Pre-post surveys, post-only surveys, interviews, focus groups, audits, partnership review, case studies
Outcome | Determines if program has successfully achieved goals | 3 | Interviews

a. Adapted from Bauman and Nutbeam (2013).

Influences on Evaluation Practice

Respondents strongly endorsed the vital role that evaluation plays in program management, yet a number of factors were discussed as influences on the extent and quality of the evaluations undertaken. The dominant themes concerned program funding, workforce and staff capacity issues, access to tools and resources, and the funding body’s administrative and reporting procedures. Table 2 provides examples of participant commentary illustrating these issues.

Funding Constraints

The majority of interviewees identified access to adequate levels of funding as an impediment to evaluation practice. The focus on this issue was heightened by the fact that the study was carried out during a period when significant budget cuts were being proposed. The dilemma described by many respondents was that, to reduce the possibility of funding reductions, they needed to complete better evaluations to demonstrate the value of programs; however, the limited availability of funds was impeding evaluation training for staff and reducing the resources available to develop indicators and measurement tools.

Participants also highlighted that funding constraints hindered the potential to measure the longer term impacts of programs.

Workforce Skill Levels

The widely varying levels of evaluation knowledge and skill within health promotion teams were frequently reported. A challenge discussed by several respondents was that it was quite common for staff employed as health promotion practitioners to have limited prior training in this disciplinary area and to be drawn from other professional groups, such as dietetics or occupational therapy. The need to support less knowledgeable and skilled team members placed an extra burden on practitioners with health promotion qualifications and evaluation experience.

Access to Tools and Resources

A number of interviewees reported difficulty in accessing academic journals, which were felt to be helpful in the selection and development of evaluation measures. A further issue raised was that, even when journals were available, there was a paucity of published program evaluations that could offer guidance.


Table 2. Comments From Health Promotion Practitioners About Significant Influences on Evaluation Practice

Theme: Funding constraints

Funding levels impede ability to show the value of work undertaken:
“I think that we could probably avoid the funding cuts that we suffer as well as a result of if we were evaluating better to actually show the outcomes for the money that actually goes in. So I think that’s sort of a double-edged sword with that one as well, to be able to go, OK, well we do fantastic things in health promotion so don’t cut our money but we actually can’t show you what we do. So I think that that’s a huge reason for good evaluation as well, is to continue funding to health promotion.”
“I mean the main thing is funding, as much as we would love to get external evaluations conducted we just don’t have the money to be able to do that. So ideally, that’s where we’d love to be, is to be able to externally evaluate our programs instead of doing it internally and having that potential bias.”

Insufficient resources to show longer term impacts:
“It’s about outcome evaluation, like process and impact we can do and we’ve demonstrated that for a number of years, but what they’re saying that they want is that outcome evaluation, they want to know how those chronic diseases—if the problem’s been reduced and we can’t physically measure that with, you know, we have $500,000 and it’s gone down to like—it’s going to be down by 28% over 2 years.”

Theme: Workforce skill levels

Widely varying evaluation skill levels in health promotion teams:
“So there are a couple of people in the team that are quite highly skilled and experienced in that area and very capable and then there are a few people within the team that certainly aren’t at all and really need further skill development in that area. Then there’s also the people outside of our team with Health Promotion hours and that’s a bigger issue because most of those people are clinical service delivery staff that have no specialist health promotion training or qualifications and that’s, you know, a much, much bigger issue because as you can imagine it’s, you know, a completely—often completely new to them and they’re very much starting from scratch.”
“A lot of it is me, I do a lot of internal capacity building within the organization, that we do have that pool of staff who aren’t specifically health promotion qualified but have evaluation skills but they need, you know, support in thinking about, you know, ‘how do I measure how satisfied people were?’ So I have to provide a lot of advice internally.”
“The other big hurdle that I find is the skill sets of our health promotion people. It’s probably not up to doing the rigorous evaluation that is actually required for some of the DH requirements. You need someone who is an expert in evaluation or who is a researcher sometimes to actually be undertaking the level that’s required for the DH, and a person coming out with a three year degree doesn’t have that skill set.”

Theme: Access to tools and resources

Limited access to appropriate tools and resources, e.g., academic journals:
“Because I lecture and tutor . . . I can access those journals and if you’re not linked into a university, and we have tried and had some conversations with a university about using their library as a resource, as a community based organization, and it didn’t seem possible. It makes it very difficult. . . . I’ve been encouraged by my CEO to continue my studies, my links simply because I can actually access journals.”

Challenges in identifying appropriate evaluation indicators and measures:
“So prevention of violence against women is a good one. Like it’s, you know, the goal is nonviolent, nondiscriminatory agenda, equitable communities. So such long-term goals—so then we identify kind of a range of strategies to get us there. But I think evaluating that’s really, really hard. Like how do you evaluate that in 3 years? So I think unless you’ve got a really good set of proxy measures, that it’s a very—yeah, really hard. And sometimes we just, I don’t know, we don’t always have the data sets that would, or we’re not too sure how to access those data sets.”

Theme: Administration and reporting requirements

Confusion caused by changes and ambiguity in reporting requirements:
“They provide a framework but I think the—my problem with them is that they basically change every year. In the four years that I’ve been in this role I think there have been, every document has been under review, I think it will be—this year will be the first time that I’ve done a report that is based on basically the same framework and evaluation framework.”
“The document is quite lengthy and very much open to interpretation, so a conversation I recently had with another health promotion coordinator we actually interpreted the guidelines very differently and consequently they’re doing a lot more work in terms of their evaluation report. Like anything over 100 pages, compared to I will probably be submitting something like 30, 40 pages.”

Doubt concerning use of completed evaluation reports:
“I mean in some ways it is good that they ask you to reflect and they give you a template which is really quite in-depth, but I guess the thing that is that when you’re doing the evaluation you know that you are just jumping through hoops for really—that nothing will happen with your evaluation in a way. Look if you’re lucky you might get feedback but the evaluations don’t tend to go anywhere, they don’t get shared, they don’t—you’re not given a snapshot of what happening around the rest of the state. So it’s almost like—you sort of feel like you’re writing a report to sit on somebody’s shelf and that’s probably one of the most negative aspects of doing these reports.”

NOTE: DH = Department of Health.

These factors added to the challenges entailed in identifying indicators and measures, especially for assessing the contribution of strategies to higher level policy change and longer term health outcomes. Some respondents questioned whether it was appropriate for individual agencies to be working independently on the development of evaluation indicators and measures. It was suggested that agencies known to be delivering programs on similar priority issues could work together and examine common proxy measures of longer term impacts and outcomes.

Administration and Reporting Requirements

The minimal reporting requirements put in place by the Department of Health evoked divergent opinions from the interviewees. Some found that the evaluation-reporting template contributed to an elevated “level of rigor.” Others stated that the periodic changes to the template created confusion and wasted time. Most informants commented on the need for further refinements, suggesting that the templates were basic monitoring and reporting tools but fell short of being adequate for comprehensive evaluation. While the minimal requirement was to report on the evaluation of one program objective, some practitioners felt that completing the templates reduced the time available for program management and more comprehensive evaluation.

Respondents highlighted recurring issues regarding the ambiguous nature of certain reporting guidelines, with comments made about inconsistent guidance being given to practitioners across the regions. These inconsistencies were particularly detrimental for agencies whose programs and accountability crossed regional boundaries. Some observed that the level of feedback received did not reflect the effort required to complete the reporting templates, and the lack of dissemination of evaluation reports was acknowledged as something that could be improved.

Strategies for Building Evaluation Capacity

A recommended action to give practitioners easier access to evaluation tools and resources was an online portal for storing these materials. As well as limiting the degree to which practitioners had to “reinvent the wheel,” such a central repository could provide a vehicle for disseminating practitioners’ evaluations. To increase access to a wider variety of evaluation skills and resources, several respondents suggested developing collaborations between practitioners within and across regions.


Forming links with universities was proposed as beneficial for gaining access to academic journals and improving practitioner skill in using rigorous evaluation methods. This could involve one-on-one coaching or mentoring of practitioners by academics. Interviewees called for tailored evaluation training for practitioners with different skill levels, rather than a “one-size-fits-all” approach. Additional suggestions included ensuring that training presented practical methods suitable to the contexts and settings in which practitioners were working. Provision of training on evaluation indicators and measures, especially those relevant to longer term program impacts, was recommended as beneficial for achieving a more consistent approach to measuring impacts across regions.

Discussion

This study found wide variability and clear gaps in the evaluation practices of the participating health promotion agencies, alongside universal endorsement of the importance of evaluation among the practitioners interviewed. Evaluation was recognized as integral to learning, greater effectiveness, and securing funding body support for programs. There was, however, a sense of frustration among the interviewees that a number of factors—largely systemic and organizational—hindered the extent to which they were able to complete the levels of evaluation that would generate these benefits.

The very infrequent reporting of formative evaluation was a concern given the obvious benefits that this can bring to program design and planning. Other researchers have suggested that the tendency for formative evaluation to be neglected may be the result of the limited research demonstrating the value that it can add to program implementation and impact (Brown & Kiernan, 2001). A lack of awareness of the benefits of formative evaluation may have been a contributing factor in the present study, but the interviews with practitioners indicated that budget constraints and a focus on meeting the reporting requirements of the funding body are likely to have been the dominant influences.

Process evaluation, largely at the level of delivery and reach, and impact evaluation were most widely reported because the Department of Health required that evaluation reporting be undertaken for at least one project strategy and one objective. However, the limited scale and quality of impact evaluation, shown by small respondent numbers and unspecified data collection methods, suggested that agencies were making pragmatic decisions to fulfil reporting requirements without the personnel time and funds to adopt rigorous methods.

Outcome evaluation was rare, which is likely because it was not prescribed by the funding agency.

Several other studies have identified budget constraints as a major barrier to the extent of evaluation undertaken by health promotion agencies (Brug et al., 2011; Jolley et al., 2007; Lobo et al., 2010). This study revealed that budgetary issues also impaired the quality of evaluation, by reducing the time available to develop indicators and measures, and weakened organizational capacity for evaluation, by limiting the amount of evaluation training that could be provided to team members. The barrier created by a lack of evaluation skills among team members in some agencies was becoming entrenched by the lack of organizational resources to address it. Budgetary uncertainty was a further factor discussed extensively by respondents. Apart from adding to the pressure to focus on the minimal reporting requirements, this reduced morale, created uncertainty about the longevity of projects, and is likely to have detracted from the culture necessary to support evaluation within organizations (Torres, 2001).

A significant systemic influence on evaluation performance identified in this study was the administrative and reporting procedures with which the health promotion practitioners were required to comply. Given the time demands entailed in program management, it is reasonable for practitioners to expect easily understood, relevant, and appropriately communicated reporting templates from funding bodies. Similar to the findings of Lobo et al. (2010), participants in the present study commented that the evaluation reporting requirements did not adequately take into account the complexity of programs, nor did they reflect the time and resources required to manage these. Evaluation guidelines that offer workable processes relevant to the context in which program delivery is taking place are valued by health practitioners (Tavares et al., 2007). In addition, in this study, the time invested in reporting did not bring a commensurate return to practitioners in terms of learning, through feedback or access to the evaluation reports of other agencies. Others have reported that a lack of clear benefits to practitioners can act as a disincentive to evaluation practice (Stubbs & Achat, 2011).

A number of informants cited difficulties in measuring the impact of their work on priority health issues, especially those that entailed long-term change. Evaluating the longer term outcomes of health promotion programs has been identified as a major challenge for this field (Nutbeam, 1998; Yeatman & Nove, 2002).


An examination of evaluations of complex interventions published from 2002 to 2011, of which 47 were designated as health promotion, found that the measurement of impacts and the capture of long-term outcomes were major difficulties identified by evaluators, together with establishing a causal relationship between interventions and outcomes (Datta & Petticrew, 2013). It has also been recognized that important constructs in health promotion, such as participation and empowerment, present conceptual and measurement challenges and require an understanding of how both qualitative and quantitative methods can be used to obtain valid data (Brandstetter, McCool, Wise, & Loss, 2014).

Practitioners in this study stated that fostering collaborations, with practitioners from other agencies and with academics, would broaden the expertise and resources they could draw on to develop measurement solutions for the projects they were undertaking. Brug et al. (2011), in their study of evaluation by health promotion agencies in the Netherlands, also recommended the formation of collaborations involving academic partners to provide expert input into evaluations. This endorses the instrumental role that communities of practice play in knowledge gathering and utilization by organizations (Brehaut & Eva, 2012). The practitioners in the current study who were collaborating with universities identified a number of benefits of this, including access to evaluation expertise and academic journals and more efficient use of their time and resources.

Training and development opportunities can be enablers of improved evaluation capacity (Carman & Fredericks, 2010). Practitioners in this study considered that establishing partnerships with universities would provide access to one-to-one coaching and mentoring. They also recommended evaluation training programs tailored to the skill levels of practitioners. The view that much could be gained from access to the learning arising from the implementation of health promotion projects in diverse settings and contexts in Victoria is consistent with the findings of an earlier review of the Department of Health Integrated Health Promotion program in Victoria. This review concluded that the level of experience gained in cross-sectoral, partnership-based practice had “created a virtual knowledge bank in relation to the health of local community and health promotion strategies” (Saxon, 2008, p. v). Active dissemination of this knowledge, through sharing of evaluation reports and conducting forums for practitioners across agencies and regions, will assist in evaluation capacity building.

A limitation of this study was that the sample was restricted to Department of Health funded agencies within metropolitan Melbourne.

The findings, therefore, may not reflect the practices and experiences of rural and nongovernment health promotion agencies. In the audit of evaluation reports, it is possible that agencies did not provide all reports completed in the designated time period. A number of the reports lacked methodological detail, and it was not possible to verify the extent to which this reflected the reporting style of agencies or gaps in the evaluation methods used in projects.

The state of Victoria, Australia, has a strong history of leadership in health promotion and well-developed infrastructure for program delivery. This added to the breadth and depth of experience that this study has been able to draw on in examining important determinants of evaluation practice. The mixed-methods approach has revealed how evaluation is being conducted and reported on, and has offered insights into the barriers being faced and the actions that will help improve capacity. Important steps to strengthen evaluation practice, in Victoria and other jurisdictions, include developing evaluation guidelines that allow collection of evidence relevant to the needs of funded agencies and that are communicated and applied in a consistent way; setting reporting periods that give sufficient time for collection of data at a scale and depth that will yield valuable insights; aligning staffing levels and project funding with the requirements of rigorous evaluation; facilitating collaboration between practitioners and those with skills in measurement and evaluation, including university-based specialists; and providing ongoing evaluation training and mentoring opportunities that match the skill levels of different practitioners. These actions will improve the quality of evaluation and foster a culture of learning and evidence-based practice in health promotion agencies.

References

Allegrante, J. P., Barry, M. M., Airhihenbuwa, C. O., Auld, M. E., Collins, J. L., Lamarre, M.-C., . . . Mittelmark, M. B. (2009). Domains of core competency, standards, and quality assurance for building global capacity in health promotion: The Galway Consensus Conference Statement. Health Education & Behavior, 36, 476-482.

Bauman, A., & Nutbeam, D. (2013). Evaluation in a nutshell: A practical guide to the evaluation of health promotion programs. Sydney, New South Wales, Australia: McGraw-Hill.

Bowen, S., & Zwi, A. B. (2005). Pathways to “evidence-informed” policy and practice: A framework for action. PLoS Medicine, 2, e166.

Brandstetter, S., McCool, M., Wise, M., & Loss, J. (2014). Australian health promotion practitioners’ perceptions on evaluation of empowerment and participation. Health Promotion International, 29, 70-80.


Brehaut, J. C., & Eva, K. W. (2012). Building theories of knowledge translation interventions: Use the entire menu of constructs. Implementation Science, 7, 114.

Brown, J. L., & Kiernan, N. E. (2001). Assessing the subsequent effect of a formative evaluation on a program. Evaluation and Program Planning, 24, 129-143.

Brug, J., Tak, N. I., & Te Velde, S. J. (2011). Evaluation of nationwide health promotion campaigns in The Netherlands: An exploration of practices, wishes and opportunities. Health Promotion International, 26, 244-254.

Carman, J. G., & Fredericks, K. A. (2010). Evaluation capacity and nonprofit organizations: Is the glass half-empty or half-full? American Journal of Evaluation, 31, 84-104.

Cherney, A., & Head, B. (2010). Evidence-based policy and practice: Key challenges for improvement. Australian Journal of Social Issues, 45, 509-526.

Datta, J., & Petticrew, M. (2013). Challenges to evaluating complex interventions: A content analysis of published papers. BMC Public Health, 13, 568.

Green, J., & Tones, K. (2010). Health promotion planning and strategies (2nd ed.). London, England: Sage.

Green, L., & Kreuter, M. (2005). Health program planning: An educational and ecological approach. New York, NY: McGraw-Hill.

Hartz, Z., Goldberg, C., Figueiro, A., & Potvin, L. (2008). Multistrategy in evaluation of health promoting community interventions: An indicator of quality. In L. Potvin & D. McQueen (Eds.), Health promotion evaluation practices in the Americas (pp. 253-267). New York, NY: Springer.

Jolley, G. M., Lawless, A. P., Baum, F. E., Hurley, C. J., & Fry, D. (2007). Building an evidence base for community health: A review of the quality of program evaluations. Australian Health Review, 31, 603-610.

Kirsh, B., Krupa, T., Horgan, S., Kelly, D., & Carr, S. (2005). Making it better: Building evaluation capacity in community mental health. Psychiatric Rehabilitation Journal, 28, 234-241.

Leech, N. L., & Onwuegbuzie, A. J. (2009). A typology of mixed methods research designs. Quality & Quantity, 43, 265-275.

Lobo, R., McManus, A., Brown, G., Hildebrand, J., & Maycock, B. (2010). Evaluating peer-based youth programs: Barriers and enablers. Evaluation Journal of Australasia, 10, 36-43.

McQueen, D. V. (2001). Strengthening the evidence base for health promotion. Health Promotion International, 16, 261-268.

Napp, D., Gibbs, D., Jolly, D., Westover, B., & Uhl, G. (2002). Evaluation barriers and facilitators among community-based HIV prevention programs. AIDS Education and Prevention, 14(Suppl. A), 38-48.

Nebot, M. (2006). Health promotion evaluation and the principle of prevention. Journal of Epidemiology & Community Health, 60, 5-6.

Nutbeam, D. (1998). Evaluating health promotion—progress, problems and solutions. Health Promotion International, 13, 27-44.

O’Connor-Fleming, M. L., Parker, E., Higgins, H., & Gould, T. (2006). A framework for evaluating health promotion programs. Health Promotion Journal of Australia, 17, 61-66.

Pettman, T. L., Armstrong, R., Doyle, J., Burford, B., Anderson, L. M., Hillgrove, T., . . . Waters, E. (2012). Strengthening evaluation to capture the breadth of public health practice: Ideal vs. real. Journal of Public Health, 34, 151-155.

Potvin, L., & McQueen, D. V. (2008). Practical dilemmas for health promotion evaluation. In L. Potvin & D. McQueen (Eds.), Health promotion evaluation practices in the Americas (pp. 253-267). New York, NY: Springer.

Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29, 25-45.

Prevention and Population Health Branch. (2010). Evaluation framework for disease prevention and health promotion programs. Melbourne, Victoria, Australia: Victorian Government Department of Health.

Richards, L. (2009). Handling qualitative data: A practical guide. Thousand Oaks, CA: Sage.

Richards, L., & Morse, J. M. (2007). Readme first for a user’s guide to qualitative methods (2nd ed.). Thousand Oaks, CA: Sage.

Saxon, R. (2008). Partnerships for effective integrated health promotion: An analysis of impact on agencies of the Primary Care Partnership Integrated Health Promotion strategy. Retrieved from http://www.health.vic.gov.au/pcps/downloads/analysis_strategy.pdf

Serry, T., & Liamputtong, P. (2010). The in-depth interviewing method in health. In P. Liamputtong (Ed.), Research methods in health: Foundations for evidence-based practice (pp. 45-60). Melbourne, Victoria, Australia: Oxford University Press.

Shediac-Rizkallah, M. C., & Bone, L. R. (1998). Planning for the sustainability of community-based health programs: Conceptual frameworks and future directions for research, practice and policy. Health Education Research, 13, 87-108.

Stubbs, J. M., & Achat, H. M. (2011). Monitoring and evaluation of a large-scale community-based program: Recommendations for overcoming barriers to structured implementation. Contemporary Nurse, 37, 188-196.

Tavares, M. d. F. L., Barros, C. M. S., Marcondes, W. B., Bodstein, R., Cohen, S. C., Kligerman, D. C., . . . Mendes, R. (2007). Theory and practice in the context of health promotion program evaluation. Promotion & Education, 14(Suppl. 1), 27-30.

Torres, R. T. (2001). Evaluation and organizational learning: Past, present, and future. American Journal of Evaluation, 22, 387-395.

Yeatman, H., & Nove, T. (2002). Reorienting health services with capacity building: A case study of the Core Skills in Health Promotion Project. Health Promotion International, 17, 341-350.
