HEALTH EDUCATION RESEARCH

Vol. 30, No. 2, 2015, Pages 193–205. Advance Access published 10 December 2014.

Evidence valued and used by health promotion practitioners

V. Li1*, S. M. Carter1 and L. Rychetnik2

1Centre for Values, Ethics and the Law in Medicine, Sydney School of Public Health, The University of Sydney, NSW 2006, Australia and 2School of Medicine, University of Notre Dame, NSW 2010, Australia

*Correspondence to: V. Li. E-mail: [email protected]

Abstract

The use of evidence has become a foundational part of health promotion practice. Although there is a general consensus that adopting an evidence-based approach is necessary for practice, disagreement remains about what types of evidence practitioners should use to guide their work. An empirical understanding of how practitioners conceptualize and use evidence has been lacking in the literature. In this article, we explore (i) practitioners’ purposes for using evidence, (ii) the types of evidence they valued, and (iii) the qualities that made evidence useful for practice. Fifty-eight semi-structured interviews and 250 h of participant and non-participant observation were conducted with 54 health promotion practitioners working across New South Wales, Australia. Interviews were recorded and transcribed, and field notes were written during the observations; these were analysed using grounded theory methods. Practitioners used evidence for practical and strategic purposes, and valued four different types of evidence according to their relevance and usefulness for these purposes. Practitioners’ ideal evidence was generated within their practice settings and met both substantive and procedural evaluation criteria. We argue that, because of the complex nature of their work, practitioners rely on a diverse range of evidence and require organizational structures that will support them in doing so.

Introduction

Health promotion is commonly defined as ‘the process of enabling individuals and communities to increase control over the determinants of health and thereby improve their health’ [1]. At the 51st World Health Assembly, all member states were urged to ‘adopt an evidence-based approach to health promotion policy and practice’ [2]. Since then, as in other areas of public health, health promotion practitioners and policy makers have been increasingly required to use research to examine the benefits and harms of potential strategies, evaluate the effectiveness of their actions, provide justification for decisions made, and demonstrate to governments, funding bodies and community organizations that health promotion is a worthwhile investment [3–6]. Despite this, disagreement remains about what types of evidence practitioners should use to guide their decisions and actions [4, 7–10]. Concerns have included the need to reflect intervention complexity [11–13], employ a range of evaluation methods [14–18], consider social and political contexts [19–22], incorporate lay knowledge [10, 23, 24], and evaluate implementation processes as well as outcomes [25, 26]. We present the findings of an empirical study that investigated health promotion practitioners’ use of evidence in their day-to-day practice. We will argue that practitioners value four different types of evidence according to their usefulness for different purposes, and are committed to using a range of evidence to ensure best practice.

© The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please email: [email protected]

doi:10.1093/her/cyu071


Received on June 24, 2014; accepted on November 12, 2014



In recent years, there has been increasing attention to the translation of research evidence into practice and the generation of practice-based evidence [6, 27–29]. Much work has also been done on the perception and use of evidence by policy makers [30, 31], but only a small number of empirical studies have investigated health promotion practitioners’ understanding and use of evidence. They include an international comparative case study [32], an in-depth interview study with health promotion workers from the English National Health Service [33], and two surveys of Australian health promotion practitioners [13, 34]. All of these studies focused on practitioners’ use of formal research evidence about the nature of health problems and the effectiveness of interventions. These studies explicitly excluded other kinds of information, for example, national policies, guidelines for best practice, colleagues’ opinions, theories, and general knowledge obtained from community members, distinguishing them from research ‘evidence’. In two of the studies [13, 33], practitioners emphasized the need for evidence to be useful for guiding their day-to-day practice, and valued this quality over the scientific rigour of the evidence. ‘Usefulness’ meant relevant to local community contexts and oriented to the practical aspects of strategy implementation. These authors argued for existing models of evidence-based practice to be tailored to health promotion, including expanding the sources of evidence that are valued, increasing collaboration between practitioners and researchers, and taking practitioners’ needs into account. An empirical understanding of how practitioners understand and use evidence will assist with developing and implementing an expanded model of evidence-based practice. We need a better understanding of the qualities that make evidence useful for health promotion practitioners, and of the processes involved in using evidence to guide their practice. In this article, we present the results of an empirical study of health promotion practitioners’ use of evidence within their everyday practice contexts. We describe (i) the purposes for which the practitioners used evidence, (ii) the types of evidence they valued and how these were categorized, and (iii) the qualities that made evidence useful for guiding practitioners’ decisions and actions in different practice contexts.

Methods

This study was part of a larger NHMRC-funded qualitative research project (632679) that examined the nature and role of values, ethics and evidence in health promotion interventions in New South Wales (NSW), Australia. The study methodology combined grounded theory [35] and ethnography [36] to investigate practitioners’ use of evidence and to situate our analysis in the contexts in which they worked. At the time of the study, NSW was divided into eight Area Health Services for management purposes; these have since been restructured into 16 Local Health Districts. We selected three of the (then) Area Health Services as research sites. These varied in geographical area, level of urbanization, and socioeconomic status. All had several health promotion teams spread across various locations. We focused on practitioners’ use of evidence to guide interventions in overweight and obesity prevention, as this was the dominant area of work for health promotion in NSW at the time of our data collection (May 2010 to June 2011); overweight and obesity remain major priorities in health promotion policy and practice. All practitioners who worked on overweight- and obesity-related health promotion activities, and those whose roles crossed all program areas (for example, service managers, communication officers, research and evaluation officers), were invited to participate in the study; of the 57 invited, only 3 declined. We conducted 58 semi-structured interviews with 54 health promotion practitioners; 4 practitioners were interviewed a second time to clarify or extend insights from their initial interviews. The participants’ experience in health promotion ranged from 6 months to over 30 years, and they worked in various roles, including program implementation, research and evaluation, media and communications, management, and advocacy. Most participants had worked on a diverse range of health promotion strategies during their careers, not only overweight and obesity strategies.

The interview schedule was developed by the research team and piloted with two practitioners. After a few introductory questions about the participant’s current and previous role(s) in health promotion, interviews focused on their core values, understandings of health promotion and obesity prevention, strategies adopted, conceptualization and use of evidence, and ethical considerations. The questions from the interview schedule relevant to the conceptualization and use of evidence are provided in Appendix 1. We have provided all of the questions used in the study but report only some of the findings in this article. All interviews were conducted in the health promotion units during work hours and lasted 30–150 min. They were tape-recorded and later transcribed verbatim by a professional transcriber.

We also conducted 250 h of participant and non-participant observation over 15 months with 38 practitioners from 2 Area Health Services. These observations were treated as an opportunity to contextualize the issues discussed in the interviews. We observed health promotion practitioners going about their daily work, particularly when this involved implementing programs in the field. During and immediately after each observation session we wrote unstructured field notes and examined documents provided by practitioners; both were later incorporated into memos. The interviews and participant observations were conducted concurrently, which allowed us to work iteratively between them and seek further clarification when discrepancies arose. Researchers learned during observations how programs were operating on the ground, which better prepared them to discuss the programs in the interviews.

Data collection commenced in May 2010 and was completed in June 2011. All interviews and observations were conducted by V.L. and another member of the research team, both of whom have training and experience in public health and health promotion. Analysis of the interview transcripts and field notes commenced soon after the interviewing and participant observations began. Early inductive data analysis, led by V.L., involved detailed coding, extensive memo writing and team discussion; later analysis became more focused, combining codes, exploring relationships and developing analytic categories [35]. The central categories presented in our analysis were developed from the data rather than being imposed from other sources. We were not seeking to replicate any existing models of evidence-based practice, but we were sensitized to the relevant literature. Following the preliminary data analysis, we held six feedback sessions between July and December 2012 with the current Local Health Districts that had participated in the data collection phase of our study. During these feedback sessions, our preliminary empirical findings were presented to the practitioners. Focused discussions were conducted after the presentations, in which practitioners were asked to reflect on the findings presented, provide feedback, and discuss the implications and relevance of the findings for their practice. The discussions were tape-recorded, transcribed and incorporated into our final analysis of the data.

Ethics approvals were obtained from the Sydney South West Area Health Service Human Research Ethics Committee under the Model for Single Ethical and Scientific Review of Multi-Centre Research; doctoral student involvement (V.L.) was ratified by the Human Research Ethics Committee of the University of Sydney. All participants gave individual consent to be interviewed or observed; unit managers consented to the release of any documents used. All participants were free to withdraw from the study, or parts of it, at any time. All data used have been de-identified by replacing names with alphanumeric codes.
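Quotations reported in the Results below are tagged with participant codes such as [C6:153–165], an alphanumeric label followed by transcript line numbers. As a minimal illustrative sketch of this kind of de-identification, assuming a simple site-letter-plus-counter scheme (the names, scheme and helper functions below are hypothetical, not the study’s actual procedure), a Python fragment might look like this:

# Minimal de-identification sketch: replace participant names with
# alphanumeric codes (site letter + counter). All names and the coding
# scheme are invented for illustration; they are not study data.
import re

def build_code_map(participants):
    """Map each (name, site) pair to a code such as 'A1' or 'C6'."""
    counters = {}
    code_map = {}
    for name, site in participants:
        counters[site] = counters.get(site, 0) + 1
        code_map[name] = f"{site}{counters[site]}"
    return code_map

def deidentify(text, code_map):
    """Replace every occurrence of a participant's name with their code."""
    for name, code in code_map.items():
        text = re.sub(re.escape(name), code, text)
    return text

codes = build_code_map([("Alice Example", "A"), ("Carol Example", "C")])
print(deidentify("Alice Example described the program.", codes))
# -> "A1 described the program."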


Results

Participants in this study believed that an increased focus on evidence-based practice had been necessary and beneficial for health promotion. They agreed with the general principle that it was important to use good evidence in their practice. However, what they considered to be good evidence, their purposes for using evidence, and the types of evidence they used in practice varied widely. We found that (i) practitioners used evidence for practical and strategic purposes, (ii) they valued different types of evidence for their different qualities, and (iii) their ideal evidence for guiding practice was often lacking in the existing evidence base. We will explain each of these findings in turn.

Practitioners used evidence for practical and strategic purposes

Practitioners said their main aim was to improve the health and wellbeing of communities. To achieve this, they used evidence to understand health issues, identify risk factors, develop and implement effective and resource-efficient health-promoting strategies, and evaluate their actions. We will refer to these as the practical purposes for using evidence. Practitioners reported that explicit use of evidence for these practical purposes had become central to health promotion in recent years, as they were increasingly required to demonstrate that their strategies were evidence-based:

In the 70s and 80s, we were all about the art of health promotion, sort of without a scientific base, and then the pendulum swung the other way. And everything had to be evidence-based and researched... The focus shifted to evidence-based projects which all had to have controls, pre-post design, and written up and published. That was very much the paradigm when I started in the mid-90s, and that’s where we are. [C6:153–165]

While managers relied on evidence to identify resource-efficient strategies, practitioners generally focused on the effectiveness of strategies for improving health outcomes, rather than on issues of opportunity cost. Beyond this, there was little variation between individual practitioners in their evidence use, suggesting that this group was relatively homogeneous in its approaches to evidence. Another practical purpose for using evidence was to minimize harm done to communities. Practitioners acknowledged that their work could have unintended harmful consequences and described the importance of using evidence to minimize the risk of harm: ‘I do firmly believe that we need some evidence before we launch into things. I think the prospect of doing harm is too great to not have some inkling of where it is going to go’. [C1:1145–1146] In addition to using evidence for practical purposes, practitioners also used evidence to enable their broader program of work by justifying investment in health promotion and increasing their professional autonomy. We will refer to these as the strategic purposes for using evidence. Practitioners often used evidence to obtain support for proposed strategies from management, partners and funding bodies, because decision makers valued evidence: ‘When you’re looking at funding, it’s based on outcomes and outcomes is really what they’re looking at. Outcomes, outcomes, outcomes; all the time, and value for the dollar’. [A8:1009-1011] Presenting evidence that demonstrated effectiveness to management was also key to increasing the credibility of health promotion and thereby creating space for innovation: ‘We deliver on our business plan and we showcase best practice through the research that we do. For example, we did a prevention program and we showed that there was a 20% reduction in hospital admissions... So [then] when you try to do something new, they give you the freedom’. [C6:478-496] Practitioners also used evidence strategically to build trust in communities, which helped to increase community members’ willingness to adopt and participate in their policies and programs. This was particularly important when practitioners were new to a community and did not have existing relationships with its members.

Practitioners’ purposes for using evidence depended on the work they were required to do. We also note that practitioners frequently moved between different roles within health promotion, and their purposes for using evidence changed when different activities were required of them. This was the only systematic pattern of variation in the data regarding their use of evidence.

Practitioners valued four different categories of evidence

Practitioners described different types of evidence that were relevant to and useful for their practice. These types of evidence varied along two interacting dimensions. The first dimension concerned the qualities of the evidence; we have labelled this the humanistic–scientistic dimension (Fig. 1). Scientistic evidence was reductionist: it described complex interventions (for example, a community program for preventing heart disease) by breaking them down into fundamental or simple components that could be standardized and measured (for example, blood pressure). The quality of scientistic evidence was evaluated using procedural criteria, that is, by considering whether the evidence had been created in accordance with methodological rules. Scientistic evidence was mostly composed of numerical datasets; it sought to understand causal relationships between pre-determined variables. It was generated using research methods and measurement tools that offered investigators the greatest control over the communities and allowed them to test strategies under controlled conditions. The study investigators were often external to the communities, observing and working ‘on’ them, and measured only the variables targeted for change. Scientistic evidence was, therefore, perceived as least prone to bias in its methods, and most oriented to generalizability. Humanistic evidence, on the other hand, was complex and highly contextualized in practice. The quality of humanistic evidence was evaluated using substantive criteria, that is, criteria that emphasized what the evidence meant and what it was about, as opposed to criteria that emphasized methodological rules. The methods used for generating this evidence gave study investigators the least control over the communities and their settings, and allowed them to explore factors other than the intended outcomes, such as potential harms of the strategies used. This evidence was perceived as most prone to potential bias in its methods, and least potentially generalizable to other communities.

Fig. 1. Practitioners described evidence varying along a humanistic–scientistic dimension.

The second dimension along which different types of evidence varied was the degree to which practitioners were involved in generating the evidence. This dimension ranged from low to high practitioner involvement. Based on the interaction of these two dimensions, we developed a four-quadrant model of evidence in health promotion practice (Fig. 2). The horizontal axis represents the humanistic–scientistic dimension, and the vertical axis represents the low to high practitioner involvement dimension. This model allows us to organize the evidence that practitioners valued into four categories, which we will discuss in turn.

Fig. 2. A model of different categories of evidence that practitioners valued in their practice.
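To make the structure of this model concrete, the following minimal Python sketch maps an item of evidence to one of the four categories in Fig. 2. Treating each dimension as a binary attribute is our simplifying assumption for illustration, not an instrument used in the study:

# Illustrative sketch of the four-quadrant model in Fig. 2. The category
# labels follow the text; the binary encoding of each dimension is our
# simplifying assumption.
from dataclasses import dataclass

@dataclass
class Evidence:
    humanistic: bool              # True = humanistic, False = scientistic
    practitioner_generated: bool  # True = high practitioner involvement

def category(e):
    """Return the quadrant label for an item of evidence."""
    quality = "humanistic" if e.humanistic else "scientistic"
    source = ("practitioner-generated" if e.practitioner_generated
              else "non-practitioner-generated")
    return f"{quality} {source} evidence"

# Example: a practitioner-led evaluation grounded in local context.
print(category(Evidence(humanistic=True, practitioner_generated=True)))
# -> "humanistic practitioner-generated evidence"

On this reading, the evaluation criteria described above align with the horizontal axis: procedural criteria apply to the scientistic half, and substantive criteria to the humanistic half.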

Humanistic practitioner-generated evidence

Humanistic practitioner-generated evidence was generated by practitioners to inform future practice by increasing their understanding of issues, strategies, practice settings and their role in communities. It was sometimes generated in collaboration with external researchers. This evidence was made up of needs assessments, action research studies and program evaluations conducted by practitioners, as well as their reflections on the process of strategy implementation and other knowledge they obtained first-hand from communities. Practitioners used both qualitative and quantitative methodologies to collect and analyse the data, and presented the evidence in ways that were most readily accessible to other practitioners. These included publications in journals that were aimed at, and provided free of charge to, practitioners, such as the NSW Public Health Bulletin; presentations at professional network meetings and health promotion conferences; and other forms of communication between colleagues.

Practitioners valued this category of evidence the most, as they found it to be the most useful evidence for practical purposes. Because this evidence was contextualized in practice and generated by practitioners, it was highly relevant to their day-to-day work and contained ‘professional wisdom’ that could only be gained through hands-on experience. It also incorporated first-hand information obtained from communities, with close attention paid to the local contexts in which they lived. As such, it provided practitioners with important knowledge for developing effective strategies and an emic understanding of communities, which was useful for translating research-based guidelines into appropriate practical recommendations and for tailoring strategies to the needs, culture and values of communities. Practitioners’ reflections on their previous implementation efforts were especially helpful for those who were less experienced in doing health promotion, giving them valuable insights into the processes involved, how strategies may be affected by changing circumstances in the practice settings, and potential barriers to achieving the intended outcomes.

Practitioners found this category of evidence to be less useful for the strategic purposes of enabling their work by increasing their professional autonomy and obtaining the necessary resources. It was, however, useful for building trust in communities, because the methods used to generate it required practitioners to engage with community members and implement strategies as they collected data, which some practitioners believed was ethically important: ‘I think we always need to make sure we’re offering decent interventions or long term approaches to working and improving communities before we have the right to go in and ask questions about their individual behaviours or perceptions’. [A1:1148-1151] When community members saw practitioners conduct program evaluations alongside strategy implementation, they felt the evaluations were done for their benefit, rather than being a separate research activity driven by researchers’ objectives. This helped to increase community members’ trust in practitioners and willingness to adopt and participate in their policies and programs.

Humanistic non-practitioner-generated evidence

As with the previous category, humanistic non-practitioner-generated evidence was highly contextual and largely grounded in the perspectives and experiences of community members. It was mostly generated by communities and partnering organizations, with little practitioner or researcher involvement, using methods that focused on obtaining information about communities and trialling strategies in real settings rather than under controlled conditions. This category also included qualitative studies of local communities conducted by researchers, but practitioners rarely found or used these in their practice. Evidence was presented using a wide variety of media, such as DVDs, community newspapers and paintings, and often included personal stories from community members. Practitioners valued these stories as they allowed them to see, hear and experience for themselves the effects of their actions, which added an important personal dimension to their work:

I don’t have to see that 200 people said it made a difference. If somebody stops me and says, ‘I’m now doing exercise which I had never done it before so thank you’, that is the biggest accolade for me at a personal level because I know that that has changed something in somebody’s life. The fact that 200 people might say the same in a questionnaire is really nice, but it’s the personal dimension that has kept me there. [B2:1452–1457]

This evidence was also widely accessible and appealed to human emotion, which made it useful for drawing attention to health issues, promoting effective strategies to larger populations, and persuading decision makers and partners to support health promotion work. It was used for strategic purposes, but only in the specific local contexts from which it was obtained. Practitioners valued this category of evidence mainly for its usefulness in ensuring that their strategies were appropriate for the communities, thereby maximizing their effectiveness and reducing the risk of doing harm. This type of evidence was often presented by community members to practitioners during periods of ‘professional loitering’, a term practitioners used to describe time spent building relationships with communities by simply being present and partaking in their activities. It provided valuable insight into particular communities and informed how practitioners should act when implementing strategies in those settings. However, practitioners often felt discouraged from using this evidence by the system in which they worked, particularly when expert researchers had no input into generating it or it failed to meet procedural criteria for evaluating evidence.

Scientistic practitioner-generated evidence

Scientistic practitioner-generated evidence was mostly generated by practitioners working in specialist research and evaluation teams within health promotion units, often in partnership with researchers from academic institutions and made possible by external research grants. It was academic in nature and prioritized meeting procedural criteria for evaluating evidence over substantive criteria. Practitioners and researchers preferred to use quantitative methodologies and validated measurement tools to generate this evidence, to allow observation and testing to occur under more controlled conditions. This evidence was made up of literature reviews, research studies, formal evaluations of previously implemented strategies, and research-based guides to best practice written for practitioners and published in academic journals or reports.

Practitioners valued this category of evidence mainly because it was useful for strategic purposes. They presented it to management and external funding bodies to demonstrate the effectiveness of their work, thereby increasing their professional credibility and obtaining support for proposed strategies. For instance, one practitioner remarked: ‘So we have evidence from the research. Sometimes this is when evidence based comes really handy, when you go ask for money. You have to show them, this is the findings, and we need to work to address these issues.’ [A18:989-992] When possible, practitioners conducted research studies and made their findings available by publishing journal articles, presenting at professional conferences, and showcasing their work in the media. The usefulness of this evidence for practical purposes was, however, often limited by a lack of detailed contextual information, particularly when the research had been conducted in other settings. Sometimes practitioners had to contact a study’s authors for more information about the practice context before using it, to avoid misinterpreting the evidence and applying it inappropriately to their own practice.

Scientistic non-practitioner-generated evidence

Similar to the category above, scientistic non-practitioner-generated evidence was primarily reductionist and scientific in nature. It was mostly generated by researchers with little or no practitioner involvement. This evidence included literature reviews, research studies and evidence summaries published in the academic literature. Researchers used methods and measurement tools that gave them the greatest possible control over communities and their environments in order to minimize bias in their results. As such, the evidence was perceived by practitioners to lack context, relevance and applicability to their day-to-day practice. Decision makers, who particularly valued procedural criteria for evaluating evidence, strongly encouraged practitioners to use this category of evidence to guide their practice. However, practitioners valued this evidence the least: they found it least useful for practical purposes, and they often lacked the skills required to interpret and apply evidence from other settings, or the resources required to generate their own. Practitioners also felt that the authors of this evidence conceptualized communities merely as populations from which study subjects could be recruited for intervention testing, which they did not believe was good health promotion practice: ‘. . . you can’t do a randomised control trial with a lot of the things that we’re doing. We are actually working in living, breathing communities; they’re not in silos. They exist and they interact with the outside world and everything that goes on with it’. [A10:794-797] Practitioners used this evidence for strategic purposes more frequently than for practical purposes. In doing so, they relied on the authority and credibility of the researchers, academic institutions and research centres that generated it.

Practitioners’ ideal evidence was both humanistic and scientistic

Even though practitioners recognized the importance of procedural criteria for evaluating evidence, they often valued humanistic evidence over scientistic evidence, believing it to be more useful for guiding their practice. However, practitioners worked in a system that valued scientistic evidence highly, which was why they often used it for strategic purposes. According to the practitioners, the ideal evidence for health promotion practice met both substantive and procedural criteria for evaluating evidence and was mostly generated by practitioners. This evidence was often lacking in the current evidence base, for several reasons. Firstly, it was perceived to be difficult for evidence to be both humanistic and scientistic, because the methods used to generate humanistic evidence were not usually helpful for satisfying procedural criteria, and vice versa. Secondly, practitioners said they were not always able to investigate issues adequately and measure the outcomes of their work, owing to the nature of health promotion interventions and the changing circumstances of the environments in which they are implemented: ‘We usually look at the process and impact stuff but the outcome stuff we usually can’t do because the outcome stuff is either many years away, like people’s health outcomes, or it would require huge amounts of money to be able to find it out’. [C9:390-397] Thirdly, practitioners felt that while it was important to be evidence-based, their primary role was to do hands-on work in communities rather than to conduct research: ‘I think we need to have a credible workforce and we need to be basing our interventions and our approaches on well thought out studies and information, but we don’t want to go too far and just become researchers’. [A11:1035-1037] Lastly, practitioners often lacked the training and resources required to generate this ideal evidence in their day-to-day practice, especially if they did not work in specialist research and evaluation teams.

Discussion

We have shown that health promotion practitioners used evidence for practical and strategic purposes, and valued four different types of evidence according to their usefulness for these purposes and their relevance to practice. Practitioners’ ideal evidence met both substantive and procedural criteria for evaluating evidence and was primarily generated by practitioners. This evidence was, however, often lacking in the current evidence base.

Previous empirical studies [13, 33] found that while practitioners were committed to evidence-based practice, they questioned the place of evidence in health promotion, and the way it has been defined, because of the many challenges they faced when using evidence to guide their work. Such challenges arose from fundamental differences between the complex nature of health promotion practice and the types of evidence valued by decision makers. We have added to this finding by developing a typology of the evidence that practitioners do value, contrasting evidence produced with practitioners against evidence produced by others, and scientistic evidence, evaluated according to procedural criteria regarding study design, against humanistic evidence, evaluated according to substantive criteria regarding content and meaning. Practitioners reported that both scientistic and humanistic evidence were important, but that the two often pulled against one another. This suggests that methods which bring procedural and substantive concerns together, for example, mixed methods studies, systematic qualitative research, community-based participatory research, or trials combining measures that have a high level of content validity with rigorous process evaluation, may be the most useful for health promotion, and that more effort should be placed on generating such evidence.

Our findings also showed that practitioners’ use of evidence was largely determined by their purpose for using evidence. Although evidence-based practice required health promotion practitioners to use evidence for practical purposes, several authors have previously argued that practitioners’ use of evidence is often affected by politics [13, 22, 37, 38]. We found that practitioners’ use of evidence was not only influenced by political realities; they also used evidence for strategic purposes. Practitioners often worked in contexts that did not value health promotion and had to use evidence strategically to enable their practice. This finding highlights the importance of identifying the purpose for evidence use first and foremost, and of allowing that purpose to determine the types of evidence to be obtained. It also provides an explanation for why practitioners might choose to use types of evidence that are not generally considered to be good evidence by the wider research community. This is not unique to health promotion; practitioners working in other health professions (for example, dentistry [39]) have previously been shown to value evidence differently to researchers.

There were several limitations to this study. While we recognized that health promotion encompasses many different roles and areas of work, we invited only practitioners who were either working on obesity prevention or had roles that spanned all the focus areas. This offered the benefit of cohesion across the participants. It is possible that practitioners working in other areas, such as tobacco control and injury prevention, may use evidence differently. However, we think this is unlikely, as many of the participants in this study had worked, or were working, across multiple areas, and participants often discussed health promotion broadly rather than limiting themselves to a discussion of obesity interventions. This study was conducted in only three of the eight (then) Area Health Services across NSW. Although it is possible that including other areas might have influenced our findings, the study sample was diverse on criteria including geography, socioeconomic status and cultural diversity of the areas, as well as organizational structure and operation. Also, we conducted the interviews and participant observations during a period of major organizational change for health services in NSW. Although this change may conceivably have affected participants’ expressed views, the subsequent feedback sessions and workshops were held after the restructure was complete, and the findings were endorsed by participants in all of those sessions. Lastly, this analysis is based on data obtained during the researchers’ long immersion in the health promotion units. Researchers became well acquainted with the practitioners; they interviewed them in depth, observed and participated in their everyday work, and examined documents that formed part of their practice. We visited all the health promotion units again when the analysis was completed, presented findings, sought feedback from practitioners, and modified our findings based on this feedback. These strategies supported the veracity of our findings and allowed us to be confident in our conclusions.

We aimed to make a useful, empirically based contribution to the ongoing debates about what evidence should be used in health promotion, and how, in order to improve the use of evidence and address the challenges faced by practitioners. The involvement of practitioners in evidence-based practice has increased in recent years, and considerable effort has been placed on training practitioners to appraise, translate and use evidence obtained from peer-reviewed journals [40–42]. This study suggests, however, that health promotion practitioners need to use a range of evidence for different purposes due to the complex nature of their work. The ideal evidence for practice met both substantive and procedural criteria for evaluating evidence and was mostly generated by practitioners. Because this evidence is currently lacking in the evidence base, practitioners will benefit from using a combination of different types of evidence to inform their practice and from ensuring that the evidence they generate is both humanistic and scientistic. They need to be equipped with the necessary skills to appraise, use and generate all the different types of evidence; working closely with the specialist research and evaluation teams within their health promotion units and/or researchers from academic institutions may be helpful. This also requires organizational structures that will support practitioners in doing so. One way such structures could be supportive is by formally recognizing these four kinds of evidence and their different political and strategic uses. Such formal, practice-informed recognition may help resolve the existing conflicts and tensions between different understandings of, and perspectives on, evidence. Another may be to redefine the criteria for evaluating evidence so that they include both procedural and substantive elements. There is increasing attention to the translation of evidence and the importance of context in interpreting evidence [6, 43] (for example, the Community Preventive Services Task Force in the United States has recently started to include small amounts of information about practice settings in its evidence summaries [44]). In our study, we found that the incorporation of context was just one of the qualities that made evidence useful for practice; the other substantive criteria for evaluating evidence, that is, criteria that emphasized what the evidence meant and what it was about, were also important. We argue that evidence should be evaluated according to both procedural and substantive criteria to ensure that it is both trustworthy and useful for practice. In addition, continued engagement with practitioners in the project of developing evidence-based health promotion will help ensure the best use of evidence to guide practice.

Acknowledgements

The authors thank the health promotion practitioners who participated in this project and the many colleagues whose support and advice have assisted us greatly, particularly our Associate Investigators Ian Kerridge, Louise Baur, Adrian Bauman, Avigdor Zask, Beverley Lloyd and Michelle Daley. Thanks also to the anonymous reviewers for their engagement and feedback.

Funding

Australian National Health and Medical Research Council (NHMRC 632679). Australian National Health and Medical Research Council Postgraduate Public Health Research Scholarship (NHMRC 1017669 to V.L.). Australian National Health and Medical Research Council Career Development Fellowship (NHMRC 1032963 to S.M.C.).

Conflict of interest statement

None declared.

References

1. World Health Organisation. Ottawa Charter for Health Promotion. Geneva: WHO, 1986.
2. World Health Organisation. Resolution of the Executive Board of the WHO on Health Promotion. Health Promot Int 1998; 13: 266.

3. Raphael D. The question of evidence in health promotion. Health Promot Int 2000; 15: 355–67.
4. Rychetnik L, Wise M. Advocating evidence-based health promotion: reflections and a way forward. Health Promot Int 2004; 19: 247–57.
5. NSW Department of Health. Promoting the Generation and Effective Use of Population Health Research in NSW: A Strategy for NSW Health 2011–2015. Sydney: New South Wales Department of Health, 2010.
6. Rychetnik L, Bauman A, Laws R et al. Translating research for evidence-based public health: key concepts and future directions. J Epidemiol Commun Health 2012; 66: 1187–92.
7. Rychetnik L, Frommer M, Hawe P et al. Criteria for evaluating evidence on public health interventions. J Epidemiol Commun Health 2002; 56: 119–27.
8. Speller V, Wimbush E, Morgan A. Evidence-based health promotion practice: how to make it work. IUHPE Promot Educ 2005; 12: 15–20.
9. McQueen DV, Jones C. Global Perspectives on Health Promotion Effectiveness. New York, NY: Springer Science and Business Media, 2007.
10. Springett J, Owens C, Callaghan J. The challenge of combining ‘lay’ knowledge with ‘evidence-based’ practice in health promotion: Fag Ends Smoking Cessation Service. Crit Public Health 2007; 17: 243–56.
11. Nutbeam D. Evaluating health promotion - progress, problems and solutions. Health Promot Int 1998; 13: 27–43.
12. McQueen DV. Strengthening the evidence base for health promotion. Health Promot Int 2001; 16: 261–8.
13. James EL, Fraser C, Anderson K et al. Use of research by the Australian health promotion workforce. Health Educ Res 2007; 22: 576–87.
14. Green J, Tones K. Towards a secure evidence base for health promotion. J Public Health Med 1999; 21: 133–9.
15. Nutbeam D. The challenge to provide ‘evidence’ in health promotion. Health Promot Int 1999; 14: 99–101.
16. Green LW. From research to ‘best practices’ in other settings and populations. Am J Health Behav 2001; 25: 165–78.
17. Jack SM. Utility of qualitative research findings in evidence-based public health practice. Public Health Nurs 2006; 23: 277–83.
18. Glasgow RE, Emmons KM. How can we increase translation of research into practice? Types of evidence needed. Annu Rev Public Health 2007; 28: 413–33.
19. Frommer M, Rychetnik L. From evidence-based medicine to evidence-based public health. In: Lin V, Gibson B, Daly J (eds). Evidence-Based Health Policy: Problems and Possibilities (Ch 5). Melbourne: Oxford University Press, 2003.
20. Rychetnik L, Hawe P, Waters E et al. A glossary for evidence based public health. J Epidemiol Commun Health 2004; 58: 538–45.
21. Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health: an evolving concept. Am J Prev Med 2004; 27: 417–21.
22. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health 2009; 30: 175–201.
23. Popay J, Williams G. Public health research and lay knowledge. Soc Sci Med 1996; 42: 759–68.
24. Higgins JW, Strange K, Scarr J et al. ‘It’s a feel. That’s what a lot of our evidence would consist of’: public health practitioners’ perspectives on evidence. Eval Health Prof 2011; 34: 278–96.
25. Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? BMJ 2004; 328: 1561–63.
26. Nutbeam D, Bauman A. Evaluating Health Promotion in a Nutshell. Sydney: McGraw-Hill Australia, 2006.
27. Ammerman A, Smith TW, Calancie L. Practice-based evidence in public health: improving reach, relevance, and results. Annu Rev Public Health 2014; 35: 47–63.
28. Lobb R, Colditz GA. Implementation science and its application to population health. Annu Rev Public Health 2013; 34: 235–51.
29. Milat AJ, King L, Newson R et al. Increasing the scale and adoption of population health interventions: experiences and perspectives of policy makers, practitioners, and researchers. Health Res Policy Syst 2014; 12: 18.
30. Innvaer S, Vist G, Trommald M et al. Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Pol 2002; 7: 239–44.
31. Amara N, Ouimet M, Landry R. New evidence on instrumental, conceptual, and symbolic utilization of university research in government agencies. Sci Commun 2004; 26: 75–106.
32. Juneau CE, Jones CM, McQueen DV et al. Evidence-based health promotion: an emerging field. Global Health Prom 2011; 18: 79–89.
33. South J, Tilford S. Perceptions of research and evaluation in health promotion practice and influences on activity. Health Educ Res 2000; 15: 729–41.
34. Oldenburg B, O’Connor M, French M et al. The Dissemination Effort in Australia: Strengthening the Links Between Health Promotion Research and Practice. Canberra: Commonwealth of Australia, 1997.
35. Charmaz K. Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis. London: SAGE Publications, 2006.
36. Hammersley M, Atkinson P. Ethnography: Principles in Practice, 3rd edn. London: Taylor & Francis, 2007.
37. Dobrow MJ, Goel V, Upshur REG. Evidence-based health policy: context and utilisation. Soc Sci Med 2004; 58: 207–17.
38. Bowen S, Zwi AB. Pathways to ‘evidence-informed’ policy and practice: a framework for action. PLoS Med 2005; 2: 600–5.
39. Sbaraini A, Carter SM, Evans RW. How do dentists understand evidence and adopt it in practice? Health Educ J 2011; 71: 195–204.
40. Lloyd B, Rychetnik L, Maxwell M et al. Building capacity for evidence-based practice in the health promotion workforce: evaluation of a train-the-trainer initiative in NSW. Health Promot J Aust 2009; 20: 151–4.
41. Baker EA, Brownson RC, Dreisinger M et al. Examining the role of training in evidence-based public health: a qualitative study. Health Promot Pract 2009; 10: 342–8.
42. Jansen MWJ, Hoeijmakers M. A Masterclass to teach public health professionals to conduct practice-based research to promote evidence-based practice: a case study from the Netherlands. J Public Health Man 2013; 19: 83–92.
43. Milat AJ, King L, Bauman A et al. The concept of scalability: increasing the scale and potential adoption of health promotion interventions into policy and practice. Health Promot Int 2013; 28: 285–98.
44. Community Preventive Services Task Force. Obesity prevention and control: behavioral interventions that aim to reduce recreational sedentary screen time among children. The Guide to Community Preventive Services, 2014. Available at: www.thecommunityguide.org/obesity/behavorial.html. Accessed: 8 September 2014.

Appendix 1: Interview Schedule

Introductory questions

Can you please tell me the story of your career so far? What have been the best things in your career in health promotion? What have been the worst things in your career in health promotion? What has been the most rewarding project? What has made it rewarding? What has been the most frustrating project? What has made it frustrating? What values drive you personally in health promotion?

Values in overweight and obesity prevention

If you were the one making the decisions, what kind of programs would you develop and fund for overweight and obesity prevention? (Probe: Tell me about the programs. Why would you want to develop and fund those programs in particular?) Are there any programs that you would not fund? (Probe: Why not those programs?) What effect has the increased focus on overweight and obesity had on your work? (Probe: What effect has it had on health promotion resources?)

Evidence

When you develop or implement a program, what information do you need? Thinking about the times when you tried to obtain support for a program, what kind of information did you need to get this support? (Probe: What was needed to gain support or resources from management/external funding bodies/partnering organizations?) What does evidence mean to you as a health promotion practitioner? What do you think about evidence-based practice in health promotion? (Probe: How does it affect your day-to-day work?) What have been some of the challenges of evidence-based practice? (Probe: Tell me about some of the challenges you have faced. How do you deal with situations where you think an idea or program is good, but there is not much formal evaluation or published evidence?) How are programs evaluated in your health promotion service? What do you think about the evaluation methods used? (Probe: How appropriate are the methods used? Do you think evaluation can be done better? If so, how?)

Ethics

Do you think there are right and wrong ways to do health promotion? (Probe: How did you come to have that point of view?) Have there been times when your colleagues have disagreed about what was the right or wrong thing to do? (Probes: Can you tell me more about that issue? Why did people feel so strongly about it?)

Finishing questions

If you had the freedom to determine the role of a health promotion service, what would it be? (Probe: What would that role involve? Why those particular ideas/directions?) [For experienced practitioners] You have been in health promotion for x years. What has kept you there? [For less experienced practitioners] Do you think you will stay in health promotion? In an ideal world, what would the next 10 years of your career in health promotion look like? What would you like to achieve?
