Evaluation and Program Planning 44 (2014) 68–74


Approaches to evaluation in Australian child and family welfare organizations

Alicia McCoy *, David Rose, Marie Connolly

Social Work, Melbourne School of Health Sciences, The University of Melbourne, Alan Gilbert Building, 161 Barry Street, Carlton, VIC 3053, Australia

Article history: Received 25 July 2013; Received in revised form 3 February 2014; Accepted 5 February 2014; Available online 18 February 2014

Abstract

Child and family welfare organizations around the world aspire to achieve missions that will improve outcomes for vulnerable children and families and ultimately reduce the prevalence and impact of child maltreatment. In Australia, this work is currently being influenced by an increasingly turbulent political and economic climate, one that requires organizations to engage with evaluation in new and advanced ways so that they are not left behind in the increasingly complex and competitive environment in which they now operate. Despite apparent awareness and understanding of the essential place of evaluation in quality, effective service delivery, evaluating the human services work that child and family welfare organizations undertake is extremely challenging due to its intricate, ever-changing and often innovative nature. Embedding evaluation within such organizations therefore requires a tailored and planned decision-making and implementation process. This paper briefly describes the recent socio-political history and environment in which Australian child and family welfare organizations operate, and how this has affected engagement with evaluation. With consideration to this context, it describes the evaluation approaches available to organizations and the factors that may influence selection of a specific approach. It then explores the benefits and challenges of these evaluation approaches, and considers the implications for child and family welfare agencies more broadly.
© 2014 Elsevier Ltd. All rights reserved.

Keywords: Evaluation; Child and family welfare; Nonprofit

1. Introduction

Evaluation has diffused differently around the world, at various times and in distinct ways, generally influenced by each nation's historical and social contexts. Over the past 20 years in particular, however, a seemingly worldwide surge in demand for not-for-profits to engage in evaluation has occurred, coupled with a growing interest from organizations in what evaluation might offer their services and beneficiaries. This has been explained by the "appeal of the universal criteria of neutrality and objectivity, as the field has become increasingly rationalized, bureaucratized, and made subject to market forces" (Barman, 2007, p. 103). It has also been suggested that a series of social changes have created a juncture where a major cultural shift towards outcomes and impact is imminent. This has included growth in

* Corresponding author at: 7/47 Patterson Street, Bonbeach, Victoria 3196, Australia. Tel.: +61 418 137 788. E-mail address: [email protected] (A. McCoy).
http://dx.doi.org/10.1016/j.evalprogplan.2014.02.004
0149-7189/© 2014 Elsevier Ltd. All rights reserved.

the areas of social investment and social entrepreneurship, the rise of technology that is making data more accessible, and the economic downturn and resulting cuts in government spending, which are creating a central role for evaluation in future decision-making (Lumley, Rickey, & Pike, 2011). Current discourse around the world suggests that the "social sector seems to have woken up to the promise of data", with not-for-profits now primed to value and utilize evaluation more than ever (Lumley, 2013). This includes evaluation use within organizations, but also a move towards shared or collective impact across the child and family welfare sector. A growing understanding of what data and evaluation can offer organizations is generating excitement, and while this is known to be complex and challenging work, it is considered worthwhile both for improving outcomes for beneficiaries and for tackling the wider social problems that the sector aspires to address. Shared concerns and challenges around the world have seen evaluation in the child and family welfare sector, and indeed the not-for-profit sector more broadly, become an international issue. Discussion and reflection about how best to evaluate the complex


services that such organizations deliver are increasingly traversing national boundaries. Whilst each country grapples with its own national contexts and influences, common issues are creating an environment primed for international learning and action. Considering this resonance across international circumstances and jurisdictions, attention to the Australian experience, as outlined in this paper, provides valuable insight into key issues and a contribution to the international knowledge-base that is being used to better understand and implement evaluation in child and family welfare organizations.

2. Child and family welfare organizations: the Australian context

As part of the Australian not-for-profit sector, child and family welfare organizations have a long and rich history of helping the most vulnerable and disadvantaged people and communities around Australia. Over the past 200 years, the Australian not-for-profit sector has grown in size and diversity, now contributing approximately $43 billion to Gross Domestic Product; employing approximately 890,000 people; and receiving $5.1 billion in donations and $25.5 billion in direct government funding (Australian Charities and Not for Profit Commission, 2012). Of the 600,000 not-for-profits in Australia, around 56,000 are considered charities, with 43% having a social and community welfare purpose (ibid). An unknown proportion of these organizations work with children and families as their core business. As in the United States, the operating environment for Australian child and family welfare organizations has changed significantly over the last half century, and this evolution has had a substantial impact on how such organizations value, use and promote evaluation, both internally and externally. Up until the early 1970s, Australia was one of the lowest-spending welfare nations in the world.
It was at this time that the Labor government's social welfare reforms created extensive changes to the welfare system, including the introduction of a national health service, an increase in social benefits, and the revitalization of community services (McMahon, Thomson, & Williams, 2000). In subsequent years, governments took on a greater role in funding social services, and the new public management of the 1980s and 1990s saw an increasing utilization of not-for-profit organizations to deliver welfare services previously provided by government agencies (Productivity Commission, 2010). This marketization of the welfare state introduced competitive tendering processes through public procurement models, contributing to growing uncertainty surrounding the financial sustainability of some child and family welfare organizations. Whilst many of the surviving organizations became heavily reliant on government funding for their existence, some went on to explore new opportunities to remain operational and grow their independence. This included mergers and acquisitions, but also innovative ventures such as social enterprises that would generate discretionary income streams to provide some shield from financial and political turbulence. As the not-for-profit sector explored new innovations and opportunities, federal and state governments began investigating the activities of the sector and its development as a critical part of Australian society and the social economy. This included an Industry Commission report in 1995, and an Australian Bureau of Statistics report within the national accounting framework in 2002 (Productivity Commission, 2010). More recently, in 2009, the Productivity Commission, the Australian Government's independent research and advisory body for social, economic and environmental issues, undertook a review of the contribution of the not-for-profit sector. The aims of this research focused on improving the measurement of the not-for-profit sector's


contribution to society, and how obstacles to this contribution might be minimized (Productivity Commission, 2010). From this review process, the Australian government implemented a not-for-profit reform agenda with the expressed purpose of strengthening the not-for-profit sector in Australia. This has included the introduction of an independent national regulatory body, the Australian Charities and Not-for-Profit Commission, which is similar to equivalent bodies in other countries such as the Charity Commission in the United Kingdom. Service sector reforms have also been occurring at a state level, including specific inquiries into the functioning and effectiveness of child and family welfare systems, such as the Special Commission of Inquiry into Child Protection Services in NSW (often referred to as the Wood Inquiry) in 2008 and the Protecting Victoria's Vulnerable Children Inquiry in 2012 (often referred to as the Cummins Inquiry). These recent events were watershed moments for the not-for-profit and child and family welfare sectors in Australia, and today, under the increasing scrutiny of government and the public, the operating environment of these organizations continues to change. There has been an emerging interest in how the not-for-profit sector may better engage with capital investment, with a view to expanding the traditional form of funding organizations to deliver on social outcomes. The New South Wales state government has commenced a social benefit bond trial, following the social impact bond trial in the United Kingdom in 2010 and similar to the 'Pay for Success Bonds' in the United States (NSW Government, 2012). The social benefit bond allows investors to fund the delivery of services and receive a return on investment when agreed social outcomes are achieved, an arrangement that maintains and builds on a reliance on evaluation (The Centre for Social Impact, 2011).
This restructure of the relationship between government, not-for-profits and social investors may alter the way child and family welfare organizations function, as they are increasingly required to robustly demonstrate the achievement of measurable outcomes in order to attract and maintain the interest and confidence of investing parties and government departments. As government explores this and similar opportunities to work with the business and private sector, there has also been growing interest in, and use of, business-type models initiated by Australian not-for-profit organizations, many of which rely on evaluation of outcomes. New methodologies such as Social Return on Investment and Social Accounting are being utilized as organizations look to articulate their point of difference in an increasingly competitive operating environment. These projects, easily understood by business because they 'speak its language', will go a long way toward addressing the rising interest in strategic philanthropy in Australia, where foundations and trusts are demanding more from recipients' reporting so they can better assess the impact of their grants in addressing costly social problems (Patrizi & Thompson, 2011). The business sector is also part of the growing push for transparency of the activities of not-for-profit organizations, including through the use of incentives. For example, the corporate responsibility arm of accounting firm PwC Australia (formerly PricewaterhouseCoopers) has initiated a transparency award, encouraging not-for-profit organizations to be transparent not only about their governance, finances and investments, business strategies and stakeholder engagement, but also about their activity and performance, such as organizational outcomes (PwC Australia, 2013).
It is in this latter environment of scrutiny, transparency and accountability for Australian child and family welfare organizations, through the forging of new relationships between the first, second and third sectors, and with the increasing focus on evidence-informed practice in the human services, that an interesting role for evaluation has emerged.



3. A brief history of Australian child and family welfare organizations' engagement with evaluation

Prior to the early 1970s, there was little to no evaluation occurring in the Australian health and welfare system (Commonwealth of Australia, 1979). Rather, it was as child and family services began to be outsourced to community organizations over subsequent years that interest in and awareness of evaluation grew throughout the sector (Straton, 1982). At this time, evaluation was seen to be a "grass roots activity. . . largely the responsibility of professional programme staff rather than that of expert evaluation consultants", or even internal experts (Straton, 1982, p. 6). Evaluation was also seen as a form of programme monitoring, and as a decision-making tool for policy and programme development (ibid). These early days of evaluation in Australia were not without challenges, and child and family welfare organizations, along with other not-for-profit entities, had to navigate complex new processes and relationships. From historical papers and government submissions, it appears that early evaluation was 'of' the sector, rather than 'by' the sector, causing discomfort and anxiety, especially around the possibility of cuts to funding. The Australian Council of Social Service Inc. (ACOSS) stated at the time: "Fear, we have found, is very much at the basis of some agencies' resistance [to evaluation]. . . In the case of welfare agencies responding to the possibility of programme evaluation, fear is the response that is manifest when evaluation is seen to be linked to the cutting of programme funds" (Commonwealth of Australia, 1979, p. 106). ACOSS then went on to say: "It must be recognized that fear is not necessarily, or always, a rational response.
Programme evaluation need not lead to fund cuts and welfare organizations, far from losing from the experience, may in fact stand to gain: for example, a better, more sound welfare programme; a smoother, more efficient management system; a clearer, more appropriate set of goals" (Commonwealth of Australia, 1979, p. 106). While some organizational anxiety about evaluation by third parties continues, especially where cuts to funding may be an outcome, the view of evaluation has shifted towards an appreciation of ACOSS's second point. Many child and family welfare organizations in Australia today are developing a growing awareness of, understanding of, and commitment to evaluation, with a likely acceleration due to an encouraging, if not demanding, external environment.

4. Approaches to evaluation in child and family welfare organizations

While there are anecdotal suggestions of a rise in internal evaluation being undertaken in a broad range of organizations at the expense of external consultancy, there appear to be no significant studies in Australia or around the world that provide robust evidence for this claim (Mathison, 2011). An informal study in 1997 reported that approximately 75% of evaluations in Canada and France, and 50% of evaluations in the United Kingdom and United States, were completed internally. The same study estimated that the proportion of internal evaluation in Australia was approximately 80%. A study of non-government organizations in Sydney, NSW, over a decade later found high interaction with and interest in internal research, with four of the eight organizations involved already employing research staff, and the remaining four intending to do so in the near future (Keen, 2009). More recently, child and family welfare sector peak bodies such as Family & Relationship Services Australia (FRSA) and the Centre for Excellence in Child and Family Welfare in Victoria have examined research and evaluation and its impact on their member organizations. A broad understanding of the state of evaluation in child and family welfare organizations in Australia, however, remains lacking. While the figures in the above studies suggest that utilization of internal evaluation in not-for-profit organizations may be high, it is important to note that some studies have found confusion among organizations about what actually constitutes evaluation, or at the least, varied opinion about what evaluation should involve. A 2007 US study by Carman found that while community-based organizations were engaging in activities to meet accountability requirements and management needs, there was in fact little evaluation occurring to assist with service delivery improvements. The practice of Australian child and family welfare organizations is likely, at least in part, to be consistent with this finding: "As a result, what we are seeing is that community-based organizations are engaging in all kinds of strategies in an effort to try to show that they are doing good work. . . at the expense of the one strategy that would actually help organizations to know if they are doing good work – evaluation" (Carman, 2007, p. 72). When child and family welfare organizations do decide to invest in clear, quality evaluation, various options are available to them, falling into three broad categories: organizations can engage the services of external evaluation consultants and academics; they can focus on building internal capabilities by investing in an organizational evaluation function; or they can utilize both of these approaches in a form of hybrid model.
Adapted from Patton's work (2008), Table 1 describes the various levels of evaluation commonly used by organizations within these key approaches. These levels range from little to no evaluation being conducted internally; through a number of staff members being given a remit for evaluation work; to a level of organizational growth and change where evaluation becomes fully integrated and valued across all levels of the organization.

4.1. External evaluation approaches

Owen (2006) refers to two types of external evaluation approaches: outside evaluation for outsiders, and outside evaluation for insiders. Outsider-for-outsider evaluation includes activities such as external audits and large-scale government-commissioned evaluations. While human service organizations are often required to provide data and interact with evaluators in such an approach, the organization is unlikely to benefit directly from evaluation findings. Outsider-for-insider evaluation occurs when organizations commission an evaluation by an external party, often an evaluation consultancy firm or university. The evaluation is generally attached to a specific programme, most likely following the organization receiving a new funding grant (Mathison, 1994). An evaluator is engaged for a specified period of time to complete an agreed-upon evaluation and produce an evaluation report. The evaluator then disengages from the organization, although they may be re-commissioned by the same organization at a later date.

4.2. Internal evaluation approaches

Internal evaluation can be defined as that which is "done by project staff, even if they are special evaluation staff – that is, even if they are external to the production/writing/teaching/service part of the project" (Scriven, 1991, p. 197). While an external evaluator is generally hired for a specific assignment, internal evaluation is



Table 1. Evaluation use and organizational approaches to evaluation for child and family welfare organizations. The most likely organizational approach to evaluation ranges from entirely external, through hybrid (partial) arrangements, to fully internal. Adapted from Patton (2008).

Entirely external (most likely approach: external)
- No evaluation of programmes or related activities, such as developing programme logic models or creating evaluation plans, is undertaken by internal staff members
- If evaluation is required to meet accountability requirements, it is conducted by external evaluation consultants

Minimal ad hoc internal evaluation
- Internal staff members conduct evaluation on a small minority of programmes, but on an ad hoc basis; no systematic approach to evaluation exists
- The focus and methodology for any evaluation is usually set by external stakeholders, e.g. providing required performance and/or accreditation data to funders

Occasional internal evaluation
- Occasional evaluation (focusing on outputs and processes only) is carried out by internal staff members who have been temporarily given an evaluation remit
- Other evaluation activities, such as developing programme logic models and creating evaluation plans, rarely occur

Part-time internal evaluator
- One staff member is assigned to perform evaluation on a part-time basis, as directed by the CEO or senior management; evaluation often focuses on whether or not the programme is achieving the goals that the organization and funders set out, e.g. a programme reached the target population
- Some external evaluation or support may be provided

Full-time internal evaluator
- At least one staff member is assigned to evaluation on an ongoing full-time basis
- This staff member is seen as the 'go to' person to meet any evaluation 'needs'; there may be some input from programme managers in identifying evaluation needs
- Evaluation often includes a focus on outcomes as well as processes
- Some external evaluation or support may be provided

Routine internal evaluation
- Regular evaluation occurs throughout the organization and results are reported to staff in a meaningful way
- Several internal staff members are skilled in evaluation and conduct projects on a regular basis; the organization has an evaluation coordinator or manager
- Organizational policies on evaluation exist, e.g. all programmes must have a programme logic model, all programmes must gather client feedback data
- Evaluations are used for informed decision-making about programme development and costs

Fully integrated and highly valued internal evaluation (most likely approach: internal)
- The organization has an overarching evaluation framework in place, requiring all programmes to be evaluated
- An evaluation manager leads an internal evaluation team; the evaluation manager is a valued and active member of the organization's senior management team, contributing to and influencing high-level organizational decision-making
- The evaluation team provides training and coaching to internal staff on evaluation and how to use findings to develop and improve practice
- Evaluation findings are used to improve programmes, and organizational structure and processes, in an ongoing manner
- Evaluation findings are shared with the Board, organizational partners and key stakeholders through reports, newsletters, websites, annual reports and social media
- An overarching organizational culture exists where evaluation is seen as critical to organizational effectiveness and quality, both in direct service delivery and throughout other organizational functions
- Evaluation is used to assess and avoid mission drift

Adapted from Patton (2008).

generally a long-term organizational investment (Love, 1991). Owen (2006) describes two forms of internal evaluation: insider-for-outsider evaluation, where evaluations are conducted in-house for the purpose of meeting funding requirements and accountabilities for external audiences; and insider-for-insider evaluation, where the purpose of the evaluation is elevated from being driven by external demands to being motivated by internal benefits, such as programme improvement and organizational learning (ibid). An internal evaluation approach may take various structures within an organization. The evaluation team or assigned individual may provide evaluation services to the entire organization in a centralized structure of leadership, practice and education. Alternatively, a decentralized structure sees the evaluation function situated within a specific programme or programmes within the organization. In this instance, evaluation has the sole focus of evaluating that specific programme, and usually any direct support and education remains within the programme area. Finally, staff members involved in general service delivery may have a remit to conduct evaluation to meet organizational needs in an embedded structure. Whilst the first two options usually involve staff specifically trained in evaluation, the third option may rely on staff with little or no formal training in evaluation (Bourgeois, Hart, Townsend, & Gagne, 2011). This option may be associated with approaches such as practitioner-based research and action research, where practitioners fulfil felt or expressed needs and motivations by conducting evaluation as part of their practice.

4.3. Hybrid evaluation approaches

A hybrid evaluation combines the two approaches by supplementing internal evaluation efforts with external consultancy, on either a continuous or a sporadic basis. This approach may build on internal evaluation functions and capabilities through models such as: internal evaluators collecting the data and external evaluators determining the methodologies and carrying out the data analysis; projects coordinated internally with external monitoring; or both internal and external independent and concurrent evaluations with integrated findings (Christie, Ross, & Klein, 2004; Mathison, 2011). These options can involve external consultants on a continuum from limited involvement, such as mentoring, advice and data analysis; to a collaborative effort where both parties are equal contributors to the project; to external evaluators fully directing the input and tasks of internal evaluators by having overall responsibility for the coordination of the project (Bourgeois et al., 2011).

5. Considerations for organizations when choosing an evaluation approach

There are several factors that may influence a child and family welfare organization's choice to form an internal evaluation function, to engage an external evaluation consultant, or to use both in a hybrid model. Factors such as cost, organizational size and organizational capacity may all affect how evaluation is used in service delivery and management practices. Consideration of these factors is critical, as what may be right for one organization may not be for another. It is important for child and family welfare organizations to consider factors such as their specific organizational needs, the importance of organizational understanding, the need for objectivity and credibility, and relationship building and management in their decision about how they wish to invest in evaluation.

5.1. Meeting organizational needs

The underpinning of an organization's decision about whether to engage in internal, external or hybrid evaluation is which approach will best meet the organization's needs, and an obvious reality for first consideration is cost. For some child and family welfare organizations with low revenue or discretionary income, unless funded through a specific programme contract, evaluation through the use of internal evaluation personnel or external consultancy may not be a viable option.
Instead, these organizations may rely on practitioners, team leaders and programme managers to conduct a basic level of evaluation to fulfil accountability requirements to funders and any organizational needs for quality improvement. For organizations that are in a position to invest more broadly in evaluation, internal, external or hybrid approaches are possible, depending on an assessment of organizational need. Internal evaluation is often thought to be more cost-efficient than engaging an external consultant; however, strategic utilization of external evaluation can be a worthwhile option for organizations interested in a discrete approach to evaluation that meets specific, time-limited needs (Christie et al., 2004; Conley-Tyler, 2005). It is therefore organizations that wish to use evaluation as part of wider organizational plans that may see a benefit in an internal evaluation function. "Generally it requires an agency to adopt a commitment to the accumulation of knowledge about itself – that is, it adopts a continuous learning focus" (Owen, 2006, p. 142). When used in this way, internal evaluation can contribute to organizational learning and decision-making, maximizing organizational investment for long-term gain (Volkov, 2011). Internal evaluation staff can act as an 'evaluation touchstone', a reminder and promoter of an organization's commitment to evaluative thinking and practice. This in turn can contribute to a positive, reflective learning culture, something that may be out of reach, or at least more difficult to achieve, when using external evaluators on a sporadic basis. As part of any decision-making around evaluation approaches, organizational leaders need to consider the various purposes of evaluation, including external accountability requirements, internal programme development and quality improvement, and strategic promotion of service impact. These purposes are highly

influential in the decision about which approach will best suit a particular organization. From an internal programme development and quality improvement perspective, forming an internal evaluation function can provide some assurance that evaluation personnel will remain at the conclusion of the evaluation to assist with implementing the recommended changes and to provide ongoing support. This bridging of the research-to-practice gap is of significant benefit and opportunity for organizations, and one that might otherwise be challenging to accomplish (Christie et al., 2004). However, a hybrid approach to evaluation, or even an external approach where an ongoing relationship is formed between an organization and evaluator, may also offer this benefit despite the absence of an internal function. Both internal and external evaluators can also respond flexibly to methodological issues throughout the evaluation, although an internal evaluation approach has been associated with heightened situational responsiveness (Volkov & Baron, 2011). A final consideration is that certain evaluators may be better suited to particular evaluation projects than others. External evaluators are more likely to be trained in evaluation than internal evaluators, especially in complex methodologies and data analysis, which can be beneficial when a project requires an intricate or comprehensive approach (Love, 1991; Scriven, 1991). Therefore, when an organization requires an evaluator with a specialist skill-set and an internal evaluator does not have such skills, recruiting an external evaluator, either independently or in a hybrid approach, can be beneficial.

5.2. Organizational understanding

One of the key differences between internal and external evaluation approaches is the level of organizational knowledge and understanding that an individual or team can bring to the evaluation.
As employees of an organization, internal evaluators have first-hand knowledge of the organization's history, structure, programmes and culture, and can use this information when designing evaluation plans, engaging staff, conducting data collection, and analysing results. This deeper understanding allows tailored evaluation methodologies to be chosen and supports a more comprehensive and informed interpretation of findings (Bourgeois et al., 2011; Love, 1991). External evaluators, given time and resources in project timelines, or through an ongoing relationship with an organization, can partly bridge this gap, but are unlikely to understand organizational nuances that could be of benefit in the evaluation process and in the ultimate utilization of the evaluation by the organization (Conley-Tyler, 2005). The understanding developed by an internal evaluator as an employee of an organization does not come without its challenges, however, with the role being vulnerable to other constraints that can impact on evaluation processes. Kennedy (1983, p. 520, in Lyon, 1989, p. 241) argues that organizational dynamics mean that evaluators "must find ways either to adapt their role to the needs of the organization or to manipulate the organization so that it will accept the role they feel is appropriate". A hybrid evaluation model may provide some benefit here by allowing for a balance between an evaluator with solid organizational understanding and an external evaluator who is in an independent position and less likely to be exposed to organizational politics. As described, child and family welfare organizations are constantly adapting and reforming as a result of a changing external environment. Evaluation, when used more broadly than in service delivery alone, can provide a way for organizations to understand the change they are experiencing and better plan for the future.
"An organization's ability to meet the demands of these different kinds and levels of changes rests, in part, on how successful it is at making knowledge building through evaluation a core competence. From this perspective, evaluation is not an end in itself; rather it is an instrumental and strategic step in the organization's implementation of and adaptation to change" (Moxley & Manela, 2000, p. 317).

This can be a meaningful role for evaluation, and an internal evaluator is well placed to make this distinct contribution because of their strong knowledge and understanding of the organization.

5.3. Relationship building and management

Relationship building is a critical attribute of any quality and effective evaluation process; however, it can be a challenging task. Love (1991) describes a history of difficult relationships between managers and internal evaluators, including tension, misalignment of values, and different views about the organization. Programme managers have been described as viewing evaluators as impractical and out of touch with frontline demands and capabilities. Evaluators, in turn, have been said to view managers as acting without sufficient thought and planning, as making decisions with unreliable information, and as marketing programmes without correct and transparent use of outcomes or a commitment to the long-term view of organizational goals (Love, 1991). Some of these challenges may be mitigated when the evaluator, managers and frontline practitioners are homophilous, perhaps by sharing similar education and training backgrounds, and therefore speaking the same language and working 'on the same page' (Rogers, 2003). Of course, external evaluators are likely to face similar challenges when working with internal programme staff. Indeed, external evaluators can be perceived as a threat, both in their role as an evaluator and as an 'outsider' to the organization (Owen, 2006).
For external evaluators, investing in relationships with key stakeholders within an organization not only builds trust between the evaluator and organizational staff, but can also foster a positive attitude towards current and future evaluations. It may also increase the likelihood that an evaluation will be used to improve service delivery (Owen, 2006). Clearly distinguishing the role of the internal or external evaluator from that of programme staff creates a culture of understanding in which differences in "backgrounds, roles, goals, values and frames of reference" are recognized, the perspectives of both evaluators and managers are appreciated, and "mutual respect and a solid partnership reflecting complementary roles" can be built (Love, 1991, p. 9). Patton (2008) describes unsuccessful roles for evaluators, including spy, fear-inspiring dragon, number cruncher and organizational conscience. Evaluators are said to be more effective when taking on roles such as management consultant, decision support, information resource, expert trouble-shooter and systemic planner. If an evaluator can embody these roles and, with support and understanding, demystify evaluation for staff at all levels of the organization, barriers between service delivery and evaluation can be overcome.

5.4. Objectivity and credibility

Objectivity and credibility are often described as critical attributes of any evaluation. It has been suggested that external evaluation is more objective and credible than internal evaluation, which can be influenced by organizational perspectives and values. However, this myth has been widely dispelled in research and is no longer supported by many evaluators (Conley-Tyler, 2005). Scriven (in Mathison, 1994) states that preferences and values are not necessarily equivalent to biases, and Sonnichsen (2000) argues that neither internal nor external evaluators can ever be truly objective. Patton (2008) expands on this, describing evaluation as inherently political because of six key factors:

- It involves people and their values, perceptions and politics.
- It requires making classifications and categories to filter data, and this can be controversial.
- It uses data that require interpretation. "Interpretation is only partly logical and deductive; it's also value laden and perspective dependent" (p. 531).
- Actions and decisions that may affect resources stem from evaluation.
- It involves programmes and organizations where decisions about power, status and resources are made; evaluation can affect this decision-making.
- It involves information. "Information leads to knowledge; knowledge reduces uncertainty; reduction of uncertainty facilitates action; and action is necessary to the accumulation of power" (p. 531).

While evaluation is a political task, and one that by its very nature evades objectivity, impartiality is achievable for both internal and external evaluators with self-awareness and careful monitoring. However, while an internal evaluator may demonstrate impartiality through methodological frameworks, data analysis and transparency in reporting findings, only an expert eye may recognize such measures. What perhaps becomes more critical to an organization, therefore, is the idea of perceived objectivity. Ensuring that evaluations are seen to be objective by powerful stakeholders can be the reality for many child and family welfare organizations, especially considering the highly emotive and complex political context of the work being undertaken. This perceived objectivity can lend itself to a credibility that internal evaluation may not be able to achieve among certain readers and decision-makers.
This can be especially relevant when organizations have the responsibility to commission an evaluation of a government-funded programme, where findings will be made public and may be considered in policy-making (Conley-Tyler, 2005). In such instances, an organization may choose to engage an external evaluator, either independently or as part of a hybrid approach, to provide oversight of the evaluation project and to author the report that will be disseminated to the outside audience. Even so, before making a decision based on perceived credibility or objectivity, it remains critical that an organization first consider the purpose of the evaluation and which approach will best meet its needs.

6. Conclusion

Evaluation of the type of work being undertaken by child and family welfare organizations in Australia, the United States, and around the world is inherently complex and challenging. However, as the external environment around such organizations shifts and creates new demands for quality and effective service delivery, transparency, and accountability, many organizations have little choice but to create and strengthen evaluation functions or risk being left behind.

For child and family welfare organizations in Australia and the United States, especially those with low revenue or discretionary income, choosing which evaluation approach to invest in can be a challenging undertaking. There is no right or wrong approach; instead, organizations must consider the purpose of particular evaluations, understand their broader needs and aspirations for evaluation, and appreciate the benefits and challenges they may face in implementing an approach, before coming to a decision about what is right for their organization, both at that particular time and for the future.

While some of the conclusions drawn in this paper could apply to all human service organizations, child and family welfare organizations face particular complexities when implementing evaluation because of the nature of the work they undertake. Child and family welfare practice takes place in a highly political environment and under a great deal of scrutiny from the public and from government. This is understandable when one considers the magnitude of providing services to vulnerable families and children at risk of harm or neglect. Offering the most effective services possible, capable of altering the trajectories of such families, is essential, and the role of evaluation in revealing what is and is not effective is critical.

It is organizations that aspire to broader organizational change, in order to achieve a culture of learning, evaluative inquiry, and evidence-informed processes and practices, that may invest in and embrace an internal evaluation function where this is possible. In either instance, it is the viewing of evaluation as part of a broader purpose that will allow child and family welfare organizations in both Australia and the United States to realize their organizational missions and contribute to the sector's mutual aim of safe, healthy and happy children and families.

The 'wicked social problems' of our time are complex and entrenched. The best chance our sectors have of achieving progress for the benefit of children and families relies on quality and effective services, implemented well to achieve positive outcomes and sustainable impact. This in turn relies on motivated organizations, and the leadership of the professionals within them, viewing strong evaluation practices and cultures as an ethical and collective responsibility to meet the best interests of children, families and the society in which we live.
References

Australian Charities and Not-for-Profit Commission. (2012). Australian Charities and Not-for-Profit Commission. http://www.acnc.gov.au/ACNC/About_ACNC/NFP_reforms/Background_NFP/ACNC/Edu/NFP_background.aspx

Barman, E. (2007). What is the bottom line for nonprofit organizations? A history of measurement in the British Voluntary Sector. Voluntas, 15, 101–115.

Bourgeois, I., Hart, R., Townsend, S., & Gagne, M. (2011). Using hybrid models to support the development of organizational evaluation capacity: A case narrative. Evaluation and Program Planning, 34, 228–235.

Carman, J. (2007). Evaluation practice among community-based organizations: Research into the reality. American Journal of Evaluation, 28, 60–75.

Christie, C., Ross, R., & Klein, B. (2004). Moving toward collaboration by creating a participatory internal–external evaluation team: A case study. Studies in Educational Evaluation, 30, 125–134.

Commonwealth of Australia. (1979). Through a glass, darkly: Evaluation in Australian health and welfare services. Canberra: Australian Government Publishing Service.

Conley-Tyler, M. (2005). A fundamental choice: Internal or external evaluation. Evaluation Journal of Australasia, 4, 3–11.

Keen, S. (2009). Research evaluation and innovation: A study of Sydney-based community organizations. CSI Issues Paper No. 8. Sydney: Centre for Social Impact.

Love, A. (1991). Internal evaluation: Building organizations from within. Newbury Park, CA: Sage Publications.

Lumley, T. (2013). It's good that charities are interested in data, but why only now? The Guardian. http://www.guardian.co.uk/voluntary-sector-network/2013/may/17/charities-data-why-now

Lumley, T., Rickey, B., & Pike, M. (2011). Inspiring impact: Working together for a bigger impact in the UK social sector. http://evpa.eu.com/wp-content/uploads/2012/01/NPC_Inspiring-Impact_1211.pdf

Lyon, E. (1989). In-house research: A consideration of roles and advantages. Evaluation and Program Planning, 12, 241–248.

McMahon, A., Thomson, J., & Williams, C. (2000). Understanding the Australian Welfare State: Key documents and themes. Croydon, Victoria: Tertiary Press.

Mathison, S. (1994). Rethinking the evaluator role: Partnership between organizations and evaluators. Evaluation and Program Planning, 17, 299–304.

Mathison, S. (2011). Internal evaluation, historically speaking. In B. Volkov & M. Baron (Eds.), New directions for evaluation: Vol. 132. Internal evaluation in the 21st century (pp. 13–23).

Moxley, D., & Manela, R. (2000). Agency-based evaluation and organizational change in the human services. Families in Society, 81, 316–327.

NSW Government. (2012). NSW Government. http://www.treasury.nsw.gov.au/site_plan/social_benefit_bonds_trial_in_nsw_FAQs

Owen, J. (2006). Program evaluation: Forms and approaches (3rd ed.). Crows Nest, Australia: Allen & Unwin.

Patton, M. (2008). Utilization-focused evaluation (4th ed.). Los Angeles: Sage.

Patrizi, P., & Thompson, E. (2011). Beyond the veneer of strategic philanthropy. Foundation Review, 2, 52–60.

Productivity Commission. (2010). Contribution of the not-for-profit sector. Canberra: Productivity Commission.

PwC Australia. (2013). PwC Australia. http://www.pwc.com.au/about-us/corporate-responsibility/transparency-awards/index.htm

Rogers, E. (2003). Diffusion of innovations (5th ed.). New York: Free Press.

Scriven, M. (1991). Evaluation thesaurus. Thousand Oaks: Sage.

Sonnichsen, R. (2000). High impact internal evaluation. Thousand Oaks: Sage.

Straton, R. (1982). Program evaluation in Australia. Evaluation Research Society Newsletter, 6, 6–8.

The Centre for Social Impact. (2011). Report on the NSW Government social impact bond pilot. http://www.csi.edu.au/assets/assetdoc/0b6ef737d2bd75b9/Report_on_the_NSW_Social_Impact_Bond_Pilot.pdf

Volkov, B. (2011). Beyond being an evaluator: The multiplicity of roles of the internal evaluator. In B. Volkov & M. Baron (Eds.), New directions for evaluation: Vol. 132. Internal evaluation in the 21st century (pp. 25–42).

Volkov, B., & Baron, M. (2011). Issues in internal evaluation: Implications for practice, training, and research. In B. Volkov & M. Baron (Eds.), New directions for evaluation: Vol. 132. Internal evaluation in the 21st century (pp. 101–111).

Alicia McCoy is a PhD candidate at the University of Melbourne. She is a social worker with a background in women's and children's health. She is currently Research and Evaluation Manager at Family Life, a community organization providing child and family welfare services in Melbourne, Australia.

Dr David Rose is a Lecturer in the Department of Social Work, University of Melbourne. He teaches programme planning and evaluation in the postgraduate social work course and has an ongoing research interest in programme evaluation in the human services. He has substantial prior experience in programme design and evaluation within not-for-profit human service organizations in Australia.

Professor Marie Connolly is Chair and Head of Social Work at the University of Melbourne. Her research interests include engagement strategies in child and family welfare, and she has a background in statutory child protection as a practitioner and manager.
