Environmental Management DOI 10.1007/s00267-014-0435-3

Accounting for Results: How Conservation Organizations Report Performance Information

Adena R. Rissman • Robert Smail

Received: 26 August 2014 / Accepted: 16 December 2014
© Springer Science+Business Media New York 2014

Abstract Environmental program performance information is in high demand, but little research examines why conservation organizations differ in reporting performance information. We compared performance measurement and reporting by four private-land conservation organizations: Partners for Fish and Wildlife in the US Fish and Wildlife Service (national government), Forest Stewardship Council—US (national nonprofit organization), Land and Water Conservation Departments (local government), and land trusts (local nonprofit organizations). We asked: (1) How did the pattern of performance reporting relationships vary across organizations? (2) Was political conflict among organizations' principals associated with greater performance information? and (3) Did performance information provide evidence of program effectiveness? Based on our typology of performance information, we found that most organizations reported output measures such as land area or number of contracts, some reported outcome indicators such as adherence to performance standards, but few modeled or measured environmental effects. Local government Land and Water Conservation Departments reported the most types of performance information, while local land trusts reported the fewest. The case studies suggest that governance networks influence the pattern and type of performance reporting, that goal conflict among principals is associated with greater performance information, and that performance information provides unreliable causal evidence of program effectiveness. Challenging simple prescriptions to generate more data as evidence, this analysis suggests (1) complex institutional and political contexts for environmental program performance and (2) the need to supplement performance measures with in-depth evaluations that can provide causal inferences about program effectiveness.

Keywords Evidence-based conservation · Environmental governance · Monitoring and evaluation · Land conservation · Performance measurement · Policy outcomes

A. R. Rissman (corresponding author)
Department of Forest and Wildlife Ecology, University of Wisconsin-Madison, 1630 Linden Drive, Madison, WI 53706, USA
e-mail: [email protected]

R. Smail
Nelson Institute for Environmental Studies, University of Wisconsin-Madison, 550 North Park Street, Madison, WI 53706, USA

Introduction

Environmental policy and management are increasingly influenced by the "governance by performance management" movement (Moynihan 2008). Leaders, scientists, and foundations promote performance measurement to enhance environmental outcomes and improve legitimacy in a time of shrinking budgets (Christensen 2003; Ferraro and Pattanayak 2006; Pullin and Knight 2001). Governments and nongovernmental organizations (NGOs) are under growing pressure to measure and evaluate environmental programs (Sutherland et al. 2004). Yet differences in how organizations measure and report performance information have not been well examined. Proponents suggest that better monitoring and evaluation will enhance effectiveness and cost-efficiency, demonstrate results internally and to the public, improve learning and targeting of investments, and eliminate counterproductive management (Keene and Pullin 2011; Srebotnjak 2007; Stem et al. 2005). Critics suggest increased performance measurement does not improve program implementation (Radin 2006) and is challenged by complex and iterative dynamics in practice (Heinrich and Marschke 2010).

Few studies have empirically examined how and why conservation organizations report performance information. We compared performance reporting by four different conservation organizations (governmental and nongovernmental, local and national) to examine the flow of performance information reporting, the association of political conflict with performance information, and the contribution of performance information to assessing program effectiveness.

Reliance on performance measurement is a "fundamental reform in public administration" (Frederickson and Frederickson 2006). The Government Performance and Results Act (GPRA) of 1993, updated in 2010, requires federal agencies to document their results, intensifying pressure for results-based accountability (Frederickson and Frederickson 2006). In the nonprofit sector, managing for results is advanced by foundations, government funders, board members, and consumers (Carman 2009). For example, the Conservation Measures Partnership (CMP) developed the global Open Standards for the Practice of Conservation to help conservation NGOs demonstrate effectiveness and craft performance measures, with support from the US Agency for International Development (USAID) and major foundations (Conservation Measures Partnership 2007). Certification and ecolabeling rely heavily on performance measures to assure consumers about adherence to environmental and social standards (Cashore 2002).

Performance management and adaptive management both derive from systems theory in assuming that feedback loops with iterative information will improve decision-making. Logic models of program performance distinguish inputs, outputs, and outcomes (McLaughlin and Jordan 1999).
Program inputs (dollars) and outputs (acres conserved, agreements signed, miles of buffer installed) are common "bucks and acres" measures of conservation performance (King and Fairfax 2004). In contrast, outcomes are mission-achieving results like changes in land cover, biological diversity, water quality, or resource production. Performance standards are indicators of outcomes that represent political and administrative agreements on what practices will be deemed sufficient to achieve goals. Global outcome conditions, such as regional water quality measures, are important for examining overall trends but do not evaluate program effectiveness (Nichols and Williams 2006). Models and measures of a program's contribution, compared to the counterfactual without the program, provide the best indicator of program effects but are expensive and often uncertain.

We compared how four organizations monitored and reported performance information on private-land conservation programs. We focused on private-land conservation because private lands are critical for wildlife habitat, water quality, scenic open space, and sustainable resource production (Knight 1999); private-land conservation programs have diverse social and institutional contexts (Kamal et al. 2014; Raymond and Brown 2011); and they have particular challenges related to monitoring and demonstrating outcomes (Rissman 2011).

First, we asked how the pattern and type of performance information reporting vary. We expected to find that inter-organizational relationships influence the pattern and type of performance reporting, since the structure of relationships among managers, funders, authorities, and stakeholders is important for performance measurement (Frederickson and Frederickson 2006; Imperial 2005). Program implementation devolved or transferred to local government and nonprofit organizations is expected to have greater complexity and more obstacles to performance accountability, which may increase or decrease reporting (Provan and Milward 2001).

Second, we asked whether high conflict in an organization's political environment is associated with greater reporting and use of performance information. Although performance information is sometimes viewed as simply factual, broader political contexts affect how measures of environmental performance are developed and interpreted, since society is pluralistic with competing interest groups. Some evidence suggests that contentious political arenas increase performance information, since managers rely more heavily on data to justify decisions to diverse stakeholders (Kroll and Moynihan, in press). Managers may also report performance information in order to be responsive to stakeholders who derive meaning from the information. Managers have been found to be more likely to use performance information in future decisions when they perceive high conflict among stakeholders (Moynihan and Hawes 2012).
However, political conflict has also been found to decrease the reporting and use of performance information (Jennings and Hall 2011). Multiple principals (those on whose behalf program managers act) with divergent goals can create challenges for program implementation, and managers may respond by providing greater performance information.

Finally, we asked whether performance information contains the type of evidence useful in measuring program effectiveness. We expected to find significant perceived gaps between performance information and evidence of program effects (Hatry 2006). The rhetoric of performance measurement suggests that performance information will make conservation more effective by revealing whether actions are working (Salafsky et al. 2002). Others emphasize the difficulty of producing and using this information about programs' environmental outcomes (Harris and Heathwaite 2012). Challenges in demonstrating causality include lack of experimental design, high multicollinearity, spatial misfit between the scale of program effects and measured environmental conditions, temporal lags, and private or difficult-to-observe information.

Methods

Given the exploratory state of the research on how organizations conduct performance measurement, we selected a broad set of case study organizations. We chose four private-land conservation organizations that differed in their extent (local and national) and their organization type (governmental and nongovernmental): the national government United States Fish and Wildlife Service—Partners for Fish and Wildlife (USFWS-PFW), the national nonprofit market-based certification program Forest Stewardship Council (FSC), county government Land and Water Conservation Departments (LWCDs), and local nonprofit land trusts. All shared a mission of enhancing conservation on private lands but varied in their organizational type, spatial extent, specific resource goals, and goal conflict among principals (Table 1). Goal conflict among an organization's principals was defined as substantial differences in missions (e.g., economic production, environmental protection, and social equity). We considered selecting for a shared conservation goal (e.g., wildlife, water quality, forest sustainability, or open space) but determined that selecting a resource sector would narrow the range of performance information and accountability pressures, since sectors like water quality have a national regulatory framework while open space and forest sustainability do not. This selection of cases provides information on how and why programs report performance information, but does not reveal whether programs with certain characteristics will necessarily develop those types of information (Yin 2009).

We took an actor-based approach to understanding performance information flows based on empirical data from documents and interviews. We diagrammed each program's formal monitoring and reporting network from document analysis and interviews. Starting with a Wisconsin, USA field office in each organization, we asked managers whom they monitor and to whom they report performance information. To diagram the relationship between private lands and the field office, we relied on written reports and interviews with field staff to determine how they monitored private landowners and contractors. We also asked field staff and managers how they aggregated, and to whom they reported, field-level information. We conducted extensive internet searches for reports and asked interviewees to identify reporting documents including annual reports, budget reports, grant reports, and monitoring reports. We defined monitor as "observe or check the progress or quality of something over a period of time" and report as "give an account of something that one has observed, heard, done, or investigated" (Oxford English Dictionary 2014).

We conducted 31 interviews, divided among the networks of the county LWCDs (12 interviews), USFWS-PFW (5), FSC (6), and local land trusts (8). We conducted 1-hour, semi-structured interviews, recorded in staff offices or by phone, between 2011 and 2013. We interviewed one or two people from each organization with the greatest responsibility for monitoring and reporting. For the LWCD network, we conducted interviews with a representative from two county LWCDs (medium and high capacity), the Wisconsin Department of Agriculture, Trade, and Consumer Protection (DATCP), the Wisconsin Department of Natural Resources (DNR), the Land and Water Conservation (LWC) Board, the Wisconsin Association of Land Conservation Employees (WALCE), and the Wisconsin Land and Water Conservation Association (WLWCA), which

Table 1 Characteristics of four case study organizations and the spatial extent of their programs in the United States (US) and Wisconsin (WI)

Land and Water Conservation Departments (LWCD): government; local organizational extent; policy tools: restoration, incentives, technical support, regulation; goal: land, water, and soil resources; goal conflict among principals (excluding landowners): HIGH; spatial extent: 16,964,000 ha (WI), n/a (US).

USFWS-Partners for Fish and Wildlife (USFWS-PFW): government; national organizational extent; policy tools: restoration, incentives; goal: migratory birds, threatened and endangered species; goal conflict among principals (excluding landowners): LOW; spatial extent: 10,000 ha (WI), 1,724,400 ha (US).

Forest Stewardship Council (FSC): nonprofit organization; national organizational extent; policy tool: market-based certification; goal: forest sustainability; goal conflict among principals (excluding landowners): HIGH; spatial extent: 2,259,900 ha (WI), 13,781,900 ha (US).

Local land trusts: nonprofit organizations; local organizational extent; policy tools: acquisition, incentives; goal: natural habitats, communities, open space, working lands; goal conflict among principals (excluding landowners): LOW; spatial extent: 48,500 ha (WI), 19,029,000 ha (US).


represents Wisconsin Land Conservation Committees (LCCs) and Departments (LWCDs). For USFWS-PFW, we interviewed staff from local, regional, and headquarters USFWS offices and a partner organization (a Joint Venture). For FSC, we interviewed staff from FSC-US, a certification body, and the DNR. For the land trust network, we conducted interviews with three land trusts (low and medium capacity), Gathering Waters Conservancy (the Wisconsin land trust service center), the Land Trust Accreditation Commission (commissioner, staff members, and one declined interviewee), and the Internal Revenue Service. Repeated efforts to reach someone at the White House Office of Management and Budget (OMB) for the LWCD, USFWS-PFW, and land trust cases were unsuccessful.

The resulting monitoring and reporting networks track the unidirectional flow of performance information across formal reporting relationships. We designated information flowing into the field office as primarily monitoring, and information reported by the field office as primarily reporting, although we recognize that both types of functions could potentially be involved in many relationships. The networks were defined by ascertaining the flow of performance information from private land to field offices to state and national agencies within the executive and legislative branches, excluding the judicial branch and international reporting. A single role was assigned to "landowners," "private donors or funders," and USFWS "partners," given our focus on formal, role-based relationships. Network sizes reflect the minimum number of formal organizations or roles.

Table 2 describes a typology of performance information that we developed based on prior studies (e.g., Henri and Journeault 2008; Koontz and Thomas 2012). We coded types of performance measures from written reports, or from staff interview transcripts when reports could not be found. Network diagrams were shown to at least two employees from each case for validation.
We summed the size of the network (number of organizations or suborganizations) and the percent of connections in which each type of performance measure was monitored or reported (100 % would mean all eight types of performance measures were shared across all connections). Performance monitoring and reporting were categorized as high (67–100 %), medium (34–66 %), low (1–33 %), or none (0 %).

To understand how and why programs reported or collected performance information, we asked interviewees what pressures for accountability they experienced and how they used performance information in program implementation and budgeting (questionnaire in Appendix 1). Qualitative analysis of interview and document information examined monitoring goals, methods for monitoring, ease of measuring program effects, rationale for monitoring and reporting, goal conflict among organizations, and the perceived usability of monitoring information in decision-making. Interviewees had an average of 13 years in their current position (range of 2–32 years) and 25 years in resource management (range of 8–35 years). Acronyms are in Appendix 2.
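The network-coverage calculation described above can be sketched in a few lines of code. This is a minimal illustration, not the study's actual dataset: the connection data below are hypothetical, while the eight information types and the high/medium/low/none cutoffs follow the Methods text.

```python
# Sketch of the coverage metric: percent of (connection x information type)
# cells in which a performance measure is monitored or reported.
# The network data below are hypothetical examples.

INFO_TYPES = frozenset([
    "dollars_staff", "program_activities", "land_use_land_cover",
    "narrative_descriptions", "performance_standards",
    "global_outcome_conditions", "modeled_outcomes", "measured_outcomes",
])

def coverage(connections):
    """Percent of connection-by-type cells in which information is shared."""
    cells = len(connections) * len(INFO_TYPES)
    shared = sum(len(types & INFO_TYPES) for _, _, types in connections)
    return 100 * shared / cells

def category(pct):
    """Categories from the Methods: high 67-100 %, medium 34-66 %, low 1-33 %, none 0 %."""
    if pct == 0:
        return "none"
    if pct <= 33:
        return "low"
    if pct <= 66:
        return "medium"
    return "high"

# Hypothetical monitoring/reporting connections: (agent, principal, types shared)
network = [
    ("field office", "state agency",
     {"dollars_staff", "program_activities", "narrative_descriptions"}),
    ("field office", "funder",
     {"dollars_staff", "program_activities"}),
]

pct = coverage(network)  # 5 shared cells out of 2 connections x 8 types = 31.25 %
print(round(pct, 2), category(pct))  # prints: 31.25 low
```

With this toy network, 5 of 16 possible cells carry information, which falls in the "low" band; the study's reported figures (e.g., 69 % for LWCDs) are the same ratio computed over each organization's full network.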

Case Studies

County Land and Water Conservation Departments (LWCDs)

Soil and water conservation has relied on decentralized local government implementation since the 1930s. County LWCDs promote land and water conservation practices to achieve greater environmental land stewardship. Numerous agencies have partial oversight over LWCD activities. In Wisconsin, county LWCDs are guided by citizen-led LCCs, part of County Boards. Wisconsin LCCs are among 3,000 Soil and Water Conservation Districts performing this work nationwide (Wisconsin Land and Water Conservation Association 2012).

LWCDs submit reports to funders, including the state agencies DNR and DATCP and the federal Natural Resources Conservation Service (NRCS), to inform them about "the quality and quantity of work being produced by the Department" (Dane County Land and Water Resources Department 2008). The DNR reports to the Environmental Protection Agency (EPA) under Clean Water Act requirements. LWCDs submit 10-year plans for approval to the statewide advisory LWC Board, comprised of representatives from local, state, and federal agencies and agricultural and environmental NGOs. LWCDs are also responsible for implementing state and federal performance standards. NRCS technical standards such as 590 are made mandatory for LWCD projects by Wisconsin state regulations Natural Resources 151 and Agriculture, Trade, and Consumer Protection 50. These standards include specific requirements on livestock management, manure storage systems, and streambank management.

United States Fish and Wildlife Service—Partners for Fish and Wildlife Program (USFWS-PFW)

The federal USFWS-PFW directly implements land conservation to "provide for the restoration, enhancement, and management of fish and wildlife habitats on private land" with a particular focus on Federal Trust Species such as migratory birds and threatened or endangered species (PFW Act; 16 USC 3771 2(a)(8)b).
USFWS reported working with over 44,000 private landowners from 1987 to 2010, restoring and enhancing 1,026,000 acres of wetlands; 3,235,000 acres of uplands; and 9,200 miles of stream habitat (Partners for Fish and Wildlife Program 2010).

Table 2 Typology of performance information (type; input, output, or outcome; description; example)

Measured outcomes (outcome): Quantitative measured outcomes resulting from the program, compared to the counterfactual without the program. Example: not found in formal performance reporting among our case studies.

Modeled outcomes (outcome): Quantitative modeled outcomes resulting from the program, compared to the counterfactual without the program. Example: "The project evaluation strategy will be based on comparing pre- and post-project changes in modeled pollutant loading to water resources…Sheet, Rill, and Wind Erosion [to measure] Acres meeting T [calculated through] RUSLE-2 or wind erosion model" (Wisconsin Department of Natural Resources 2013).

Global outcome conditions (outcome): Environmental conditions examined independently from program effects. Example: the USFWS habitat conservation long-term outcome measure is the "percentage of migratory bird species that are at healthy and sustainable levels: 61.8 %" (Office of Management and Budget 2006).

Performance standards for land management (outcome): Adherence to a priori standards related to land management. Example: "The 2010 audit…included a review of the primary ecological requirements of the Standard covered by FSC Principle 6. The auditor found conformance with these requirements, with the exception of CARs [Corrective Action Requests] issued" (Rainforest Alliance SmartWood Program 2011). FSC standards have specific land management terms, for example: "Indicator 5.6.b. Average annual harvest levels, over rolling periods of no more than 10 years, do not exceed the calculated sustained yield harvest level." (Forest Stewardship Council—United States 2010).

Narrative descriptions (outcome): Written descriptions of program outcomes. Example: the new acquisition "will serve as a model grassland restoration project in the upper portion of the watershed and will help protect water quality in the … River." (Land trust annual report 2010).

Land use, land cover (outcome): Changes in land use, including landowner behavior, and in land cover. Example: "From 1987 to 2010, the Program has successfully restored and enhanced 1,026,000 acres of wetlands, 3,235,000 acres of uplands, and 9,200 miles of stream habitat." (Partners for Fish and Wildlife Program 2010).

Program activities (output): Program activities such as number of contracts signed or acres acquired. Example: "Total number of conservation easements: 31. Total acreage restricted by conservation easements: 1833.41" (Land trust tax form 990, Schedule D).

Dollars, staff (input): Program inputs or resources, such as dollars, staff, or volunteer time spent. Example: "Federal grant program expenditures: $16,290,822 dollars under EQIP [Environmental Quality Improvement Program] for best management practices" (Wisconsin Department of Agriculture Trade and Consumer Protection and Wisconsin Department of Natural Resources 2009).

The Program has an online, georeferenced accomplishment reporting system, the Habitat Information Tracking System (HabITS), that fosters consistent local reporting to the national office. USFWS-PFW was established in 1987 and is one of the few federal private-land conservation programs within a wildlife-missioned agency. It relies on partnerships with numerous state, federal, and nonprofit organizations, tribes, and private landowners and offers financial and technical assistance to these partners. As a federal agency, the USFWS reports on accomplishments to the White House OMB, to comply with GPRA requirements, and to Congress.

Forest Stewardship Council (FSC)

FSC is a nonprofit organization established in 1993 to improve sustainable forestry through a market-based ecolabel verified through certification of forest practices and product chain of custody. FSC's principles and criteria are linked to indicators through the Forest Management Standard, which specifies the performance standards that certificate-holders must meet. Indicators include requirements for land management, such as harvest rates that can be permanently sustained, adherence to best management practices for water quality, maintenance of reserve sites that represent diverse forest types and age classes, and a precautionary approach to managing high conservation value forests (Forest Stewardship Council—United States 2010). Third-party certification bodies (e.g., SCS Global Services) conduct audits of certificate-holders (landowners or group managers of landowner groups). An accreditation body (Accreditation Services International) audits the auditors periodically. FSC reports to partners and the International Organization for Standardization (ISO) and is a member of the ISEAL Alliance, which defines credible standards systems. Corrective Action Requests are issued to landowner or group manager certificate-holders for nonconformance with criteria and indicators. One interviewee stated, "the whole idea of forest certification is to provide assurance through monitoring that claims forest management organizations make about their responsible practices really are true." FSC is one of the most well-recognized certification and ecolabeling programs.

Local Land Trusts

Local land trusts are independent nonprofit organizations that "conserve land by undertaking or assisting in land or conservation easement acquisitions, or by stewardship of such land or easements" (Land Trust Alliance 2011). The Land Trust Alliance (LTA) is the national convener and representative of 1,700 local and state land trusts in the United States, including 58 in Wisconsin. It asserts that "by effectively saving land, land trusts enhance the economic, environmental and social values of their communities. They provide clean water, fresh air, safe food, places for recreation and a connection to the land that sustains us all" (Land Trust Alliance 2011).

In response to concerns about accountability from funders, Congress, and the Internal Revenue Service (IRS), LTA created the Land Trust Accreditation Commission in 2006 to conduct voluntary land trust accreditations based on ethical performance standards, guided by the Land Trust Standards and Practices and the Accreditation Requirements Manual. Land trusts report to the Tax Exempt & Government Entities Division of the IRS about conservation properties through Form 990. Landowners file a return to the IRS about charitable deductions of land and conservation easements through Form 8283. Land trusts report to public and private funders for purchased properties and to regulatory authorities for mitigation properties acquired as permit conditions. The land trust movement has developed standards and practices for organizational and financial management but, in contrast with the other case study programs, has not developed land management or other performance standards. Some reporting information is common among land trusts due to centralized requirements from the IRS and the Accreditation Commission.

Results

Reporting, Monitoring, and Using Performance Information

Across four diverse conservation organizations, performance measurement reporting followed a network rather than a linear pattern (Fig. 1a–d). The county LWCDs reported the most types of performance information (69 % of connections monitoring/reporting information types), followed by the federal USFWS-PFW (54 %), the national nonprofit FSC (43 %), and local land trusts (36 %) (Table 3). Performance measures were primarily outputs, land use/land cover indicators, and narrative descriptions; sometimes performance standards; rarely global conditions or modeled outcomes; and never measured program outcomes.

Governance networks appeared to influence the flow of performance information. The county LWCD network involved the most actors, reflecting the decentralized nature of county government implementation of soil and water conservation with local, state, and federal funds and mandates. The USFWS Partners for Fish and Wildlife had the most integrated reporting system, called HabITS, which allowed managers to describe social and ecological change in quantitative and narrative form both vertically within the agency's hierarchy and to diverse partners such as regional

[Fig. 1 appears here: monitoring and reporting network diagrams for (a) county Land and Water Conservation Departments, (b) US Fish and Wildlife Service—Partners for Fish and Wildlife, (c) Forest Stewardship Council, and (d) local land trusts.]

Fig. 1 Organizations monitored and reported performance information on program inputs (IN), outputs (OP), and outcomes (OC). Bold text indicates direct sources of performance reporting requirements; bold outline encircling the organization indicates the manager responsible for monitoring in the field. Arrow length and width do not encode meaning in this figure

Table 3 Number of connections and types of input (IN), output (OP), and outcome (OC) performance measures for each conservation organization's formal network for monitoring and reporting program performance (columns: Land and Water Conservation Departments; USFWS-Partners for Fish and Wildlife; Forest Stewardship Council; local land trusts)

Number of organizations/roles: 19; 11; 10; 8
Number of monitoring or reporting connections: 27; 10; 15; 11
Types of performance information measured: 8 of 9; 6 of 9; 6 of 9; 5 of 9

Percent of connections monitoring or reporting each type of information:
Measured outcomes (OC): NONE (0 %); NONE (0 %); NONE (0 %); NONE (0 %)
Modeled outcomes (OC): LOW (22 %); NONE (0 %); NONE (0 %); NONE (0 %)
Global outcome conditions (OC): MED (56 %); LOW (30 %); NONE (0 %); NONE (0 %)
Performance standards for land management (OC): HIGH (85 %); NONE (0 %); HIGH (80 %); NONE (0 %)
Narrative descriptions (OC): HIGH (100 %); HIGH (100 %); HIGH (87 %); MED (64 %)
Land use, land cover (OC): HIGH (100 %); HIGH (100 %); HIGH (67 %); MED (55 %)
AVERAGE outcome monitoring and reporting: MED (61 %); MED (38 %); MED (39 %); LOW (20 %)
Program activities (OP): HIGH (100 %); HIGH (100 %); HIGH (80 %); HIGH (91 %)
Dollars, staff (IN): HIGH (89 %); HIGH (100 %); LOW (33 %); HIGH (82 %)
AVERAGE all information monitoring and reporting: HIGH (69 %); MED (54 %); MED (43 %); MED (36 %)

Rationales for measuring performance:
Improve program effectiveness: Yes; Yes; Yes; Yes
Statutory requirement: Yes; Yes; No; Yes
Reduce program cost: Yes; Yes; No; Yes
Account to funders: Yes; Yes; Yes; Yes
Account to members: No; No; Yes; Yes
Account to consumers about products: No; No; Yes; No
Joint Ventures. FSC reporting was designed around specific performance standards that provided transparency and accountability for the ecolabel. Our measure of the average percent of connections sharing information across the network was reduced for FSC by the lack of input reporting on dollars and staff and the role of specialized organizations in the network like ISO. Local land trusts reported the fewest types of information and had the lowest overall levels of performance information, which were focused on outputs and narrative descriptions. Key roles were held by the Land Trust Accreditation Commission and IRS. Land Trust Standards and Practices, which underlie land trust accreditation, do not include land management performance standards (Land Trust Alliance 2004). The IRS provides a second direct connection to landowners besides that running through the conservation program since it can audit landowners for questionable conservation donations. The IRS has an important but constrained role in ensuring conservation outcomes because conservation purposes are defined vaguely in the Internal Revenue Code and there is only a three-year statute of limitations on charitable deductions.

Organizations monitored the outcomes or behavior of key actors actually conducting land management and shaping environmental outcomes on the ground. These included third-party restoration contractors (USFWS-PFW), private consulting foresters and loggers (FSC), and farmers and crop consultants (LWCD). Interviewees reported that connections to land managers have become weaker due to time constraints and contracting out to third parties. One field-level conservation employee stated: "30 years ago we had people who knew how to do conservation planning on the land—that's all they did. We've entered an era where people don't necessarily spend a lot of time in the field and work directly with the land user." This lengthened the principal-agent chain and may limit the experiential knowledge that could provide nuance and context for performance measures. Landowners were framed both as agents in need of accountability and as powerful principals able to attract incentives and avoid regulation. Conservation organizations were concerned that landowners would not voluntarily enroll in programs if they perceived monitoring, enforcement, and public accountability measures as unfavorable.

Interviewees suggested three primary uses of performance information: improving performance, enforcing management standards, and legitimating or advocating for programs. Interviewees in all four cases indicated an interest in using performance information to improve program performance, but were mixed on whether existing monitoring and reporting systems actually helped them learn to improve their programs. All four cases involved some tension or power struggle between principals (elected officials, funders, consumers, members), conservation program staff, and landowners or land managers around measuring, reporting, and using performance information. Tensions arose around the main rationales for measuring performance, listed in Table 3. County LWCD interviewees reported that performance measures were rarely used in political and bureaucratic decisions regarding funding and program implementation. Some staff viewed reporting as "bean-counting" to serve many bosses, with only partial and opaque links to improving program implementation. The state agricultural agency DATCP is a major funder of county LWCD budgets, but DATCP did not use performance information in budgeting decisions since budgets were designed to provide equity among counties. In contrast, FSC provides a clear example of using performance information to enforce management standards in an effort to improve environmental performance. In the FSC system, third-party certification bodies determine compliance with performance standards and issue Corrective Action Requests to landowners or group manager certificate holders found to be out of compliance. For example, a certification body associated with FSC monitored Wisconsin forests and found some soil disturbance from logging activity. It asked the state government, which is the certificate holder for state and county public lands and a group of private landowners, to quantify excessive soil rutting so that compliance with the performance standard, intended to prevent excessive soil erosion, could be clearly determined. The FSC system features firewalls between the FSC standard-setting organization, the certification bodies that conduct field audits and certify landowners and group certificate holders, and the accreditors who monitor and accredit the certification body.
This separation is designed to prevent compliance and enforcement processes from influencing standard-setting. Enhancing program legitimacy and justifying expenditures to public and private funders were also major drivers of reporting systems. All four organizations used narrative descriptions of best case examples to highlight the social and ecological benefits of conservation programs, often with photographs and landowner quotes. For example, the HabITS database is used in national reporting for "GPRA, budget development and justification, strategic planning, regional allocations, outreach…We MUST account for what we do" (Partners for Fish and Wildlife and Coastal Program 2004). A USFWS staff person said, "The bottom line is to get our numbers and accomplishments up to Washington, so they can tell our story at the national level…The more in-depth, quantifiable story we can tell, the more we can justify the Partners for Fish and Wildlife program."

Political Conflict and Performance Information

Higher performance reporting appears to be associated with multiple and diverse principals with greater goal conflicts. Both the county LWCD and FSC networks included formal roles for competing industry and environmental stakeholders demanding diverse performance information. County LWCDs report to diverse principals including EPA, DNR, DATCP, agricultural industry, and environmental organization representatives with substantial goal conflicts. County LWCD reporting is also intensified by quantitative model-based implementation of the U.S. Federal Water Pollution Control Act (the Clean Water Act). FSC's internal governance structure is organized for power sharing among economic, environmental, and social membership chambers. USFWS had a moderate level of performance reporting without high political conflict, so political conflict is not the only factor driving performance reporting. Performance standards were key features of the FSC and LWCD networks, suggesting that the use of performance standards was not related to government versus nongovernmental status, or to local versus national organization, but instead reflected principals' demand for performance accountability. Use of performance standards may also reflect the emphasis on technical management standards in the forestry and soil and water quality sectors but not in the land trust community.

Causality

All programs faced challenges in forging the causal links between program activities and environmental outcomes. One water quality manager in the LWCD network said: "I've been actually quite surprised that we don't have a good link, even now that we've been doing this for 30-plus years, between what we're paying for and what we're getting…" Many interviewees focused on inadequate monitoring capacity as a barrier to measuring program effectiveness for water quality, forest sustainability, wildlife populations, or other goals. None of the three local land trusts we interviewed had the capacity to monitor environmental outcomes. A USFWS manager said: "You go out to a corn field that was corn last year, and it's a drained wetland. You disable a tile or plug a ditch, and there's three feet of water, and you see four broods of ducks…we know we did something good. Would it be great to quantify our results more? Yes. Would it be great to have a monitoring biologist on staff? You bet. That's just the reality we're living in: that's not going to happen with a declining annual appropriation."



USFWS-PFW reports habitat type information for restored wetlands and uplands, which is an indicator for its ultimate outcome, enhanced wildlife populations. It relies on partnerships with universities or other researchers for rigorous assessments of completed projects. Some interviewees recognized that even the best monitoring would not quantify the effect of the program, since evaluation requires a comparative design that considers the counterfactual scenario in which a program did not exist. Documenting environmental change compared to the likely counterfactual is simpler for restorations (conducted by the USFWS-PFW) than for preventing future development (land trusts) or moderating the intensity of agriculture or forestry (LWCD and FSC). Models are commonly used to estimate program effects. For instance, county LWCDs used quantitative models to estimate their actions' water quality outcomes. Models included SWAT (Soil and Water Assessment Tool) and BARNY (Wisconsin Barnyard Runoff Model), based on the Revised Universal Soil Loss Equation (RUSLE 2). However, the accuracy, scale, and epistemology of these models may be contested, particularly by farmers. One interviewee said: "you either believe or you don't believe in the universal soil loss equation, in RUSLE 2." The USFWS uses wildlife population models to estimate the contribution of habitat to wildlife populations, but this is separate from PFW reporting and does not measure the relative contribution of specific programs. Landowner privacy concerns were mentioned as another barrier to evaluation. For instance, to become certified through FSC, landowners must describe results of timber growth and yield models, but this information is carefully guarded and not readily available for effectiveness assessment by FSC. Overall, the cases revealed important disconnects between creating, reporting, and using performance information and evaluating program efficacy.
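The model-based logic behind these tools can be illustrated with the (R)USLE product form, which estimates average annual soil loss as the product of empirical factors. The sketch below is illustrative only, not the agencies' actual SWAT or BARNY code, and every factor value in it is hypothetical:

```python
# Illustrative sketch of the USLE/RUSLE product form, A = R * K * LS * C * P.
# Not an implementation of SWAT or BARNY; all factor values are hypothetical.

def soil_loss(R, K, LS, C, P):
    """Average annual soil loss (tons/acre/year) under the USLE product form.

    R  - rainfall-runoff erosivity factor
    K  - soil erodibility factor
    LS - slope length and steepness factor
    C  - cover-management factor
    P  - support practice factor
    """
    return R * K * LS * C * P

# A modeled "program effect" compares scenarios: the conservation practice
# changes the cover-management (C) and support-practice (P) factors, while
# the site factors (R, K, LS) stay fixed.
baseline = soil_loss(R=120, K=0.30, LS=1.2, C=0.40, P=1.0)       # conventional tillage
with_practice = soil_loss(R=120, K=0.30, LS=1.2, C=0.10, P=0.5)  # cover crop + contouring

print(f"baseline:          {baseline:.2f} tons/acre/yr")   # 17.28
print(f"with practice:     {with_practice:.2f}")           # 2.16
print(f"modeled reduction: {baseline - with_practice:.2f}")
```

This makes concrete why such estimates can be contested: the reported "outcome" is a deterministic function of factor tables, not a field measurement.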

Discussion

Performance information can play a critical role in accountability and transparency (Lockwood 2010), but reporting varies considerably among conservation organizations. In our comparison of four diverse organizations, county LWCDs had the highest overall level of monitoring and reporting, followed by the USFWS-PFW, FSC, and finally local land trusts. Performance measures were primarily outputs, land use/land cover indicators, and narrative descriptions; rarely global conditions or modeled outcomes; and never measured program outcomes. Consistent with critiques of conservation performance measures (Clark and Kozar 2011; Cook et al. 2009; Keene and Pullin 2011), specific program intervention outcomes were not quantitatively measured with respect to water quality, forest management, or wildlife habitat. Among these four cases, extent of organization (national or local) and type of organization (government or nonprofit organization) did not have a consistent relationship with the level and type of performance reporting. Instead, the purpose and structure of governance networks appeared to influence monitoring and reporting requirements. Surprisingly, given constraints on local government capacity, the county LWCD case had the highest diversity of performance information including modeled water quality outcomes. However, structural and cultural constraints often limit the use of performance information in water quality programs (Genskow and Wood 2011). FSC may provide the best example of accountability-oriented auditing and monitoring, with separate organizations that create performance standards, certify landowners, and monitor the monitors to prevent their capture by landowners. FSC differs from the other three conservation policies in signaling sustainable forest management to consumers (Overdevest and Rickenbach 2006) and is one of many sustainable supply chain measures (O'Rourke 2014). The USFWS-PFW featured a vertically integrated reporting system for budget justifications and reporting to partners. In contrast, land trust reporting networks involved the fewest types of performance information. Land trusts have guarded their autonomy (Fairfax et al. 2005) in favor of local control and voluntary accreditation processes without land use or other outcome-based performance measures. Although we did not systematically measure information use, we heard that performance information is not often used, which is a common finding (Sanger 2013). We found some evidence that performance information was used in diverse ways including enforcing compliance with landowners and land users, justifying and advocating for programs, and learning to improve program effectiveness.
Performance information is inherently ambiguous and is constructed and used by actors with specific roles to advance their interests (Moynihan 2008). Our interview results are consistent with other findings that performance information reported among organizations is likely to serve advocacy purposes, while information utilized within organizations is more likely to support learning and problem-solving (Moynihan 2008). Overcoming the disconnect between oversight and learning represents an important opportunity for improving the impact of accountability systems on conservation outcomes (Ebrahim 2003). Strong demands among competing principals, such as in the LWCD and FSC cases, may lead to a greater amount and diversity of performance information. Pluralism in performance measures can be beneficial for ecosystem management because multiple, conflicting benefits lead to different performance evaluation metrics (Caro et al. 2009), which reduces the noise associated with a single measure (Feltham and Xie 1994) for diverse principals with contested and changing ideas of what outcomes are most important. While information alone does not solve environmental conflicts (Sarewitz 2004), change behavior, or build adaptive institutions (Muñoz-Erickson et al. 2010), learning forums that connect conservation beneficiaries with conservation staff and land users (contractors, loggers, crop consultants) could enhance learning and potentially the use of performance information in decisions. Performance measures are increasingly important in environmental policy and management (Keene and Pullin 2011). A proliferation of efforts to quantify ecosystem services has been driven in part by the goal of developing "an architecture for environmental performance measurement" (Boyd and Banzhaf 2007) and to establish markets for trading or damage mitigation programs (Robertson et al. 2014). Our results suggest that the real contribution of individual programs toward these goals will be difficult to establish. Indeterminate causality is a major barrier to translating performance data into program evaluation (Frederickson and Frederickson 2006; Pullin et al. 2009). The counterfactual situation in which the program is not implemented cannot be observed and must be revealed through comparative approaches such as paired studies or models (Ferraro and Pattanayak 2006; Rissman and Carpenter 2015). Evidence-based evaluation is unrealistic for typical program implementation because of barriers to measuring program outcomes (Ferraro and Pattanayak 2006). Furthermore, the spatial and temporal scales of implementation and compliance monitoring are often much smaller than the scale of system response (Hildén 2009; Wardropper et al., in press). It is time for a closer look at performance measurement.
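The comparative logic behind paired-study evaluation can be sketched as a difference-in-differences calculation. All numbers below are synthetic, purely to show why before-after reporting overstates a program's effect relative to a counterfactual comparison:

```python
# Synthetic illustration of counterfactual evaluation logic: compare the
# change on enrolled (treated) parcels with the change on comparable
# unenrolled (control) parcels. All data are made up for demonstration.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical water-quality indicator (e.g., modeled sediment load),
# measured before and after the program period.
treated_before = [10.0, 12.0, 11.0]
treated_after  = [ 7.0,  8.0,  7.5]
control_before = [10.5, 11.5, 11.0]
control_after  = [10.0, 11.0, 10.5]

# A naive before-after measure attributes ALL observed change to the program.
naive = mean(treated_before) - mean(treated_after)

# Difference-in-differences subtracts the trend on control parcels,
# approximating the unobservable no-program counterfactual.
did = naive - (mean(control_before) - mean(control_after))

print(f"naive before-after effect: {naive:.2f}")   # 3.50
print(f"difference-in-differences: {did:.2f}")     # 3.00
```

The gap between the two estimates is exactly the background trend that typical performance reporting, which lacks control observations, cannot separate from the program effect.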
The cases suggest tempering expectations about how performance measures can improve performance, both because of the tension between external accountability and internal learning and because of indeterminate causality. Scholars and practitioners would benefit from a greater understanding of the institutional and political contexts for performance measurement, evaluation, and evidence-based conservation (Brechin et al. 2010). Monitoring and reporting performance information are important for enforcing rules about land user behavior, justifying and legitimating programs, and learning to improve program outcomes. Monitoring and enforcing rules about land user behavior may be particularly important for improving conservation outcomes (Gibson et al. 2005). The cases show that performance standards and other outcome indicators can be developed even for complex land management arenas, which raises questions about why some programs have fewer outcome-based performance measures. Performance measures are constructed by actors in dynamic political contexts and are not substitutes for in-depth evaluations that address causality.

Conclusions

Evidence-based conservation echoes the 'governance by performance management' movement, with ambitious aims to reform conservation practice by measuring what works. Our analysis suggests that performance reporting is shaped by policy networks and appears to be enhanced by political goal conflict. Performance information cannot be expected to provide strong causal evidence of program effects, so performance measures should be supplemented with specific, in-depth evaluations. We suggest rethinking performance information for diverse uses and caution against fully quantitative systems that only count what can be easily measured.

Acknowledgments We thank all the conservation staff who contributed and S. Gillon, D. Moynihan, M. Rickenbach, and our research group for helpful feedback. Funding was provided by National Science Foundation Water Sustainability and Climate DEB-1038759, McIntire-Stennis Act 0229961, and the University of Wisconsin-Madison.

Appendix 1 Interview questionnaire

1. Introductory questions
   a. What organization do you work for?
   b. What are your job title and responsibilities?
   c. How long have you worked in this agency?
   d. How long have you been in the resource management field?
   e. What is the primary resource conservation goal in your work?
   f. How many projects do you monitor?
   g. How many projects do you initiate?
   h. What policy instruments do you use (legal authorities, funding sources, incentives, acquisitions)?

2. Monitoring
   a. Do you monitor participants or personnel in your programs/easements?
      i. If yes, who do you monitor?
      ii. How frequently do you monitor?
      iii. How active or intensive is the monitoring?
      iv. Does your monitoring involve a visit to the land in question?
      v. Does your monitoring involve modeling or measuring environmental variables?
   b. Is monitoring required? If so, by whom?
   c. Is formal monitoring done?
   d. What format is used for recording monitoring results?
      1. Do you have a monitoring template?
      2. Could we get a copy of a recently completed monitoring report or blank form?
   e. Are there any informal ways you monitor your programs?
   f. What capacities or authorities do you have to change, alter or terminate a program, funding source or staff position if the program intentions are not being followed?
   g. Do you believe that your monitoring efforts are capable of capturing the specifics of the locale and the resources you manage?

3. Reporting
   a. Do you provide formal reports to others to account for your activities or expenditures?
      i. If yes, to whom?
      ii. What do you report?
      iii. What is the format of the reporting?
      iv. How often are these reports required?
   b. Do the entities to which you report have the capacity to change, alter or terminate a program, funding source or staff position if the program intentions are not being satisfactorily met?
   c. Do you feel your reporting requirements are well tailored to the specifics of your locale and the resources you manage?
   d. Are there others that you feel you need to report to or are accountable to even though it may not be formally required?

4. Environmental measurement
   a. Do you engage in any on-the-ground monitoring or measurement of environmental variables?
      i. If so, what types or methods?
      ii. If not, does anyone else?
   b. Do you manage or execute any programs that have a goal with results which are difficult to quantify or demonstrate?
      i. What are they?
      ii. What makes them difficult to quantify?
      iii. What steps do you take to quantify results in these cases?
   c. Do you feel like these programs are at a disadvantage when compared to other programs with more easily quantified results?

5. Workload
   a. How much time do you spend on monitoring?
      i. Is this sufficient? Excessive?
   b. How much time do you spend reporting?
      i. Is this sufficient? Excessive?

6. Change in accountability pressures over time
   a. Since you began working in resource management, have you experienced increased pressure for monitoring and reporting?
   b. Has pressure to formalize monitoring and reporting increased?
   c. If yes to either, where has pressure for this increase come from?
   d. Do you think there is more pressure for monitoring and reporting in private-land conservation programs, versus public lands?

7. Rationale for monitoring and effects of monitoring
   a. Does monitoring improve or protect resources?
      i. If yes, how?
   b. Does it document changes in the resource?
   c. Do you perceive your monitoring and reporting efforts as more procedural or substantive in nature?
   d. To what extent are they done to improve performance?
   e. To what extent do you see your overall accountability demands as designed to make better programs?
   f. To what extent are they designed to punish/reward performance?
   g. Do you see the demand for accountability as coming mostly from inside your organization/project or from those outside of it?
   h. Is there high conflict among those you report to?
   i. Are there ways in which you think accounting for results could be changed to improve results or efficiency?

8. Collaboration
   a. Do you collaborate with other agencies or NGOs?
   b. Do demands for accountability facilitate or hinder your ability to collaborate with others?
   c. Do you report or monitor the efforts of those collaborators with whom you share equal standing?
   d. Do you feel like collaborative efforts tend to under- or over-account for their accomplishments?

9. Closing questions
   a. Are we measuring what matters in private-land conservation?
   b. If not, what can we do better?

Appendix 2 Acronyms

ASI        Accreditation Services International
BARNY      Wisconsin Barnyard Runoff Model
CMP        Conservation Measures Partnership
DATCP      Wisconsin Department of Agriculture, Trade, and Consumer Protection
DNR        Wisconsin Department of Natural Resources
DOA        Wisconsin Department of Administration
EPA        Environmental Protection Agency
FSA        Farm Service Agency
FSC        Forest Stewardship Council
GPRA       Government Performance and Results Act
HabITS     Habitat Information Tracking System
IRS        Internal Revenue Service
ISO        International Organization for Standardization
LCC        Land Conservation Committee
LTA        Land Trust Alliance
LWC Board  Land and Water Conservation Board
LWCD       Land and Water Conservation Department
NGO        Nongovernmental Organization
NRCS       Natural Resources Conservation Service
OMB        Office of Management and Budget
PFW        Partners for Fish and Wildlife
SWAT       Soil and Water Assessment Tool
WALCE      Wisconsin Association of Land Conservation Employees
WLWCA      Wisconsin Land and Water Conservation Association
USAID      U.S. Agency for International Development
USFWS      U.S. Fish and Wildlife Service

References

Boyd J, Banzhaf S (2007) What are ecosystem services? The need for standardized environmental accounting units. Ecol Econ 63:616–626
Brechin SR, Murray G, Mogelgaard K (2010) Conceptual and practical issues in defining protected area success: the political, social, and ecological in an organized world. J Sustain For 29:362–389
Carman JG (2009) Nonprofits, funders, and evaluation: accountability in action. Am Rev Public Adm 39:374–390
Caro T, Gardner TA, Stoner C, Fitzherbert E, Davenport TRB (2009) Assessing the effectiveness of protected areas: paradoxes call for pluralism in evaluating conservation performance. Divers Distrib 15:178–182
Cashore B (2002) Legitimacy and the privatization of environmental governance: how non-state market-driven (NSMD) governance systems gain rule-making authority. Governance 15:503–529
Christensen J (2003) Auditing conservation in an age of accountability. Conserv Pract 4:12–18
Clark MR, Kozar JS (2011) Comparing sustainable forest management certifications standards: a meta-analysis. Ecol Soc 16:3
Conservation Measures Partnership (2007) Open standards for the practice of conservation, version 2.0
Cook CN, Hockings M, Carter R (2009) Conservation in the dark? The information used to support management decisions. Front Ecol Environ 8:181–186
Dane County Land and Water Resources Department (2008) Dane County Land and Water Resource Management Plan. Dane County Land and Water Resources Department, Madison
Ebrahim A (2003) Accountability in practice: mechanisms for NGOs. World Dev 31:813–829
Fairfax SK, Gwin L, King MA, Raymond L, Watt LA (2005) Buying nature: the limits of land acquisition as a conservation strategy, 1780–2004. The MIT Press, Cambridge
Feltham GA, Xie J (1994) Performance measure congruity and diversity in multi-task principal/agent relations. Acc Rev 69:429–453
Ferraro PJ, Pattanayak SK (2006) Money for nothing? A call for empirical evaluation of biodiversity conservation investments. PLoS Biol 4:482–488
Forest Stewardship Council—United States (2010) FSC-US forest management standard v1.0. Forest Stewardship Council, Minneapolis
Frederickson DG, Frederickson HG (2006) Measuring the performance of the hollow state. Georgetown University Press, Washington, DC
Genskow KD, Wood DM (2011) Improving voluntary environmental management programs: facilitating learning and adaptation. Environ Manag 47:907–916
Gibson CC, Williams JT, Ostrom E (2005) Local enforcement and better forests. World Dev 33:273–284
Harris GP, Heathwaite AL (2012) Why is achieving good ecological outcomes in rivers so difficult? Freshwat Biol 57:91–107
Hatry HP (2006) Performance measurement: getting results, 2nd edn. The Urban Institute, Washington, DC
Heinrich CJ, Marschke G (2010) Incentives and their dynamics in public sector performance management systems. J Policy Anal Manag 29:183–208
Henri JF, Journeault M (2008) Environmental performance indicators: an empirical study of Canadian manufacturing firms. J Environ Manag 87:165–176
Hildén M (2009) Time horizons in evaluating environmental policies. New Dir Eval 2009:9–18
Imperial MT (2005) Collaboration and performance management in network settings: lessons from three watershed governance efforts. In: Kamensky JM, Morales A (eds) Managing for results. Rowman & Littlefield Publishers Inc, Oxford, pp 379–424
Jennings ET Jr, Hall JL (2011) Evidence-based practice and the use of information in state agency decision-making. J Publ Adm Res Theory 22:245–266
Kamal S, Grodzińska-Jurczak M, Brown G (2014) Conservation on private land: a review of global strategies with a proposed classification system. J Environ Plan Man. doi:10.1080/09640568.2013.875463
Keene M, Pullin AS (2011) Realizing an effectiveness revolution in environmental management. J Environ Manag 92:2130–2135
King MA, Fairfax SK (2004) Beyond bucks and acres: land acquisition and water. Tex Law Rev 83:1941
Knight RL (1999) Private lands: the neglected geography. Conserv Biol 13:223–224
Koontz TM, Thomas CW (2012) Measuring the performance of public-private partnerships. Public Perform Manag Rev 35:769–786
Kroll A, Moynihan DP (in press) Creating public value using performance information. In: Bryson J, Crosby B, Bloomberg L (eds) Valuing public value: approaches to discerning, measuring, and assessing the public sphere, public values, and the creation of public value. Georgetown University Press, Washington, DC
Land Trust Alliance (2004) Land trust standards and practices. Land Trust Alliance, Washington, DC
Land Trust Alliance (2011) 2010 national land trust census report. Land Trust Alliance, Washington, DC
Lockwood M (2010) Good governance for terrestrial protected areas: a framework, principles and performance outcomes. J Environ Manag 91:754–766
McLaughlin JA, Jordan GB (1999) Logic models: a tool for telling your program's performance story. Eval Program Plann 22:65–72
Moynihan DP (2008) The dynamics of performance management: constructing information and reform. Georgetown University Press, Washington, DC
Moynihan DP, Hawes DP (2012) Responsiveness to reform values: the influence of the environment on performance information use. Public Admin Rev 72:95–105
Muñoz-Erickson TA, Cutts BB, Larson EK, Darby KJ, Neff M, Wutich A, Bolin B (2010) Spanning boundaries in an Arizona watershed partnership: information networks as tools for entrenchment or ties for collaboration. Ecol Soc 15:22
Nichols JD, Williams BK (2006) Monitoring for conservation. Trends Ecol Evol 21:668–673
O'Rourke D (2014) The science of sustainable supply chains. Science 344:1124–1127. doi:10.1126/science.1248526
Overdevest C, Rickenbach MG (2006) Forest certification and institutional governance: an empirical study of forest stewardship council certificate holders in the United States. Forest Policy Econ 9:93–102
Partners for Fish and Wildlife and Coastal Program (2004) Habitat Information Tracking System. United States Fish and Wildlife Service. http://www.era.noaa.gov/pdfs/habits.pdf. Accessed Nov 2012
Partners for Fish and Wildlife Program (2010) Regional showcase accomplishments: Fiscal Year 2010. United States Fish and Wildlife Service. http://www.fws.gov/partners/docs/PFW_Accomplishments_2010.pdf. Accessed Nov 2012
Provan KG, Milward HB (2001) Do networks really work? A framework for evaluating public-sector organizational networks. Public Admin Rev 61:414–423
Pullin AS, Knight TM (2001) Effectiveness in conservation practice: pointers from medicine and public health. Conserv Biol 15:50–54
Pullin AS, Knight TM, Watkinson AR (2009) Linking reductionist science and holistic policy using systematic reviews: unpacking environmental policy questions to construct an evidence-based framework. J Appl Ecol 46:970–975
Radin BA (2006) Challenging the performance movement: accountability, complexity, and democratic values. Georgetown University Press, Washington, DC
Raymond CM, Brown G (2011) Assessing conservation opportunity on private land: socio-economic, behavioral, and spatial dimensions. J Environ Manag 92:2513–2523
Rissman AR (2011) Evaluating conservation effectiveness and adaptation in dynamic landscapes. Law Contemp Probl 74:145–173
Rissman AR, Carpenter SR (2015) Progress on nonpoint pollution: barriers and opportunities. Daedalus (in press)
Robertson M, BenDor TK, Lave R, Riggsbee A, Ruhl J, Doyle M (2014) Stacking ecosystem services. Front Ecol Environ 12:186–193
Salafsky N, Margoluis R, Redford KH, Robinson JG (2002) Improving the practice of conservation: a conceptual framework and research agenda for conservation science. Conserv Biol 16:1469–1479
Sanger MB (2013) Does measuring performance lead to better performance? J Policy Anal Manag 32:185–203
Sarewitz D (2004) How science makes environmental controversies worse. Environ Sci Policy 7:385–403
Srebotnjak T (2007) The role of environmental statisticians in environmental policy: the case of performance measurement. Environ Sci Policy 10:405–418. doi:10.1016/j.envsci.2007.02.002
Stem C, Margoluis R, Salafsky N, Brown M (2005) Monitoring and evaluation in conservation: a review of trends and approaches. Conserv Biol 19:295–309
Sutherland WJ, Pullin AS, Dolman PM, Knight TM (2004) The need for evidence-based conservation. Trends Ecol Evol 19:305–308
Wardropper CB, Chang C, Rissman AR (in press) Fragmented water quality governance: constraints to spatial targeting for nutrient reduction in a Midwestern USA watershed. Landsc Urban Plan
Wisconsin Land and Water Conservation Association (2012) What is the WLWCA? Wisconsin Land and Water Conservation Association. http://www.wlwca.org/whatiswlwca.html. Accessed 10 Nov 2012
Yin RK (2009) Case study research: design and methods, 4th edn. Sage Publications Inc, Thousand Oaks
