Health Economics, Policy and Law (2015), 10, 367–371 © Cambridge University Press 2015 doi:10.1017/S1744133114000516 First published online 10 April 2015

Debate

Commentary to Adam Oliver's 'Incentivising improvements in health care delivery'

KARSTEN VRANGBAEK*
Professor, Department of Public Health and Department of Political Science, University of Copenhagen, Copenhagen, Denmark

Abstract: This commentary discusses key issues in the assessment of performance management within health care. It supports Adam Oliver's ambition to develop a more realistic understanding of performance management based on insights from behavioral economics. However, it also points to several pitfalls and potential risks to consider in doing so. The commentary concludes that this is a promising field, but that further research is needed to support the development of policy instruments.

*Correspondence to: Karsten Vrangbaek, Professor and Head, Department of Public Health and Department of Political Science, Center for Health Economics and Policy, University of Copenhagen, CSS, Øster Farimagsgade 5, 1353 København K, Copenhagen, Denmark. Email: [email protected]

The piece by Adam Oliver is well written and interesting. It seems reasonable to start developing performance management for health care with a more systematic use of insights from behavioral economics and contemporary psychological theories. Yet, it also inspires new questions and concerns about the conditions under which such ‘laboratory results’ may be taken into the messy practical life of health care governance. In the following, I address a number of general issues concerning the conditions under which performance management regimes might (or might not) work.

Taking inspiration from the accountability literature, we should remind ourselves that performance management may serve several different purposes. Three main objectives are important: as a tool to ensure the most efficient use of public (taxpayer) money (the efficiency function); as a tool to ensure that health care organizations and professionals adhere to rules and standards (the control function); and as a tool to facilitate organizational learning (the learning function) (see Bovens et al., 2008 for a similar claim regarding accountability). This typology reminds us that there are several good reasons for applying performance management, but it should also make it clear that performance management must be carefully designed for the purpose one wants to achieve, and that specific schemes are unlikely to be able to serve all purposes.

Indeed, one of the dilemmas of performance management is the apparent potential for conflicts, e.g. between control and learning perspectives, or between control perspectives (which tend to be rather formalistic and rigid) and efficiency perspectives (which require a certain degree of flexibility in choosing the approach that generates the most efficiency in a given context).

Oliver’s article argues for the potential benefits of combining league table competition with a quite modest use of (positive) financial incentives for good relative performance. At first glance, this might correspond well to the efficiency function described above. Yet, as most of the measures presented in the text remain process related, the scheme ends up being more closely related to a control function. The success of such a scheme will depend heavily on choosing the right processes to monitor. This is not a trivial point, as we often lack solid evidence about the exact causal linkages between given processes and positive outcomes. Furthermore, emphasizing control and process dimensions may not be the best way to stimulate learning and experimentation in health care organizations – something that is highly important in a rapidly developing environment with global diffusion of new technologies (Tuohy, 2003).

Most health care systems have introduced several concurrent performance management systems. The sum of the performance management schemes that a given organization or health care professional is subjected to can be labeled the ‘performance management regime’ (Bovens et al., 2008; Besley et al., 2009). The notion of a performance management regime gives rise to several important discussions about the reactions of organizations and health care professionals facing multiple performance management demands. How do actors prioritize their efforts? How do they organize the work with the different demands? Can they process the demands in parallel, or are we likely to see serial processing where attention shifts from one demand to the other?

Two types of potential pathologies are important regarding performance management regimes: overload and deficit. Performance management overload is the situation where health care organizations or professionals face a performance management regime that (1) imposes extraordinarily high demands on their limited time and energy; (2) contains a comparatively high number of mutually contradictory evaluation criteria; (3) contains performance standards that extend well beyond both their own and comparable organizations’ good practices; and (4) contains performance standards that seem particularly conducive to goal displacement or subversive behavior (Bovens et al., 2008: 209). The potential negative consequences of performance management overload include the risk of gaming, tunnel vision, goal displacement, ritualization, mutual stereotyping, defensive routines and hostile behavior (Power, 1997; Pollitt, 2003; Bevan and Hood, 2006). On top of this, we can add the points made by Oliver about the risk of eroding people’s identification with the organization in which they work, and the effect that this may have on their performance (see also Akerlof and Kranton, 2010). Identity disutility is a potentially negative effect of performance management overload, particularly if the performance management practices are perceived to be based on unfair or pointless measurements. This could occur if they measure the ‘wrong’ things, or if the organizations are unable to use the indicators or affect the factors shaping the results, e.g. owing to their dependency upon high-quality referrals, or simply owing to differences in the profiles and resources of the patients coming into different hospital organizations.
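The league table mechanism discussed above can be made concrete with a small illustration. The following is a minimal sketch in Python, assuming a handful of hypothetical providers, a single composite process score per provider, and a fixed modest bonus pool shared equally by the best relative performers; none of these names, numbers or design choices come from Oliver’s article, and a real scheme would require validated indicators and risk adjustment.

```python
# Illustrative sketch only: league table competition with a modest, positive
# financial incentive for good relative performance. Provider names, scores,
# the bonus pool and the 'top quarter' rule are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class Provider:
    name: str
    process_score: float  # share of patients receiving the monitored process


def league_table(providers):
    """Rank providers from best to worst on the monitored process measure."""
    return sorted(providers, key=lambda p: p.process_score, reverse=True)


def modest_bonuses(ranked, bonus_pool=50_000.0, top_share=0.25):
    """Split a small fixed pool equally among the best relative performers."""
    n_rewarded = max(1, int(len(ranked) * top_share))
    per_provider = bonus_pool / n_rewarded
    return {p.name: (per_provider if i < n_rewarded else 0.0)
            for i, p in enumerate(ranked)}


if __name__ == "__main__":
    providers = [
        Provider("Hospital A", 0.91),
        Provider("Hospital B", 0.84),
        Provider("Hospital C", 0.88),
        Provider("Hospital D", 0.79),
    ]
    ranked = league_table(providers)
    bonuses = modest_bonuses(ranked)
    for rank, p in enumerate(ranked, start=1):
        print(rank, p.name, f"score={p.process_score:.2f}", f"bonus={bonuses[p.name]:.0f}")
```

Even in this toy form, the sketch makes the commentary’s concern visible: the ranking, and hence the reward, is only as meaningful as the link between the monitored process score and the outcomes that actually matter.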


Paradoxically, there is a risk of performance management deficit even if the regime contains several concurrent schemes. This is the situation where the performance management schemes fail to address core issues for ensuring efficiency, control or learning. Integration of care across organizational units is a good example, which is also discussed in Oliver’s article. There are several important conceptual and methodological issues related to measuring efficient integration of care. Even more issues arise when league tables and economic incentives are suggested. How should one draw the boundaries of the organizations, and how can the financial incentives be determined, when several organizations and professionals (e.g. hospital departments, general practitioners, municipal health care, practicing specialists, physiotherapists, etc.) are involved? Such issues of accountability in more or less ‘loosely coupled networks’ were also discussed by Carolyn Tuohy (2003). She points out that health care is often delivered in settings where boundaries between public and private are blurred and where horizontal relationships are more important than the strict hierarchical relations presupposed in the implicit principal-agent thinking of much accountability and performance management.

Attempts to develop indicators for collaborative practices are currently taking place in Denmark in relation to the mandatory ‘health agreements’ between the regional and municipal authorities in charge of different parts of health care delivery. The indicators mostly relate to processes of discharge from and admission to hospitals, communication about rehabilitation and follow-up, waiting times between organizations, etc. The indicators are published on a national website. They are meant as instruments to support dialogue about the health agreements, and there are no explicit sanctions or rewards attached to them. The system is quite new, so there have not yet been systematic assessments of the reactions within health care organizations, regions and municipalities. Interview data collected in 2014 indicate limited attention from regional and municipal managers, who question the validity of the data and the lag time from collecting to publishing it. Once again, this illustrates that not only the choice of indicators but also the processes of publishing and using performance results matter.

Other important issues to develop further with respect to using behavioral economics insights for designing performance management include the question of learning effects over time. Effects are often shown in stand-alone experiments, but we have too little practical knowledge about developments over time. What happens if organizations experience that their position in league tables fluctuates significantly over time (as has been seen in Dutch performance rankings; Quartz et al., 2013)? How do organizations adjust their expectations and behavior as they gain experience with the performance management schemes? It is extremely important to design follow-up studies to investigate such issues when introducing new performance management schemes.
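To illustrate the kind of cross-organizational process indicator described above, such as the waiting time between hospital discharge and the start of municipal follow-up, the following is a minimal sketch with entirely hypothetical patient records, field names and dates; it does not reproduce the Danish reporting system, and the unmatched record simply hints at the data completeness and publication lag problems raised by the interviewed managers.

```python
# Illustrative sketch only: a 'waiting time between organizations' indicator
# computed from paired hospital discharge and municipal follow-up records.
# The record layout, field names and dates are hypothetical assumptions.

from datetime import date
from statistics import median

# Hypothetical registry extracts: patient id -> event date.
hospital_discharges = {
    "p1": date(2014, 3, 1),
    "p2": date(2014, 3, 4),
    "p3": date(2014, 3, 10),
}
municipal_follow_up = {
    "p1": date(2014, 3, 5),
    "p2": date(2014, 3, 6),
    # "p3" has no follow-up record yet, illustrating the completeness
    # and timeliness problems that undermine trust in the indicator.
}


def waiting_days(discharges, follow_ups):
    """Days from hospital discharge to municipal follow-up for matched patients."""
    return [(follow_ups[pid] - discharged).days
            for pid, discharged in discharges.items() if pid in follow_ups]


waits = waiting_days(hospital_discharges, municipal_follow_up)
print("patients matched:", len(waits), "of", len(hospital_discharges))
print("median waiting time (days):", median(waits))
```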


A final and important issue regarding the overall subject of this commentary is how performance management schemes should be assessed. On which criteria should performance management be evaluated? The concepts of overload and deficit can of course be helpful, but more fundamentally we should develop tools to assess whether the performance management schemes imposed live up to the three core objectives of improving efficiency, ensuring due process and rule following, and supporting learning. This is not a trivial issue. There are significant conceptual and methodological problems in developing a good ‘program theory’ and documenting the results. Efficiency gains may be driven by factors other than performance management schemes, and we are rarely able to hold all else equal while assessing the impact. The work by Besley et al. (2009) comparing different regimes in England and Wales is interesting in this regard, yet it deals with only one relatively simple performance indicator (waiting times), which may be affected by many factors other than expenditure levels. Assessment against the ‘control’ criterion is problematic in the sense that we can never know how many unwanted incidents are avoided by applying a given performance control scheme. Indications can be found in ‘before and after’ designs, but establishing the exact link between the control design and the results remains problematic. Assessment with respect to the ‘learning’ criterion is also very difficult. Evaluation of the relative progress trajectories of individual organizations may be a useful starting point, but again it appears quite complex to ascribe causality with any degree of certainty.

Finally, the cost of performance management schemes should be taken into consideration. Again, there are a number of methodological problems in doing so. It may be relatively straightforward to map out the costs of establishing IT solutions, publishing reports, etc. However, the bulk of the costs arises from the time and resources devoted to these systems within the health care organizations being monitored.

In conclusion, there are many good reasons for developing and applying performance management schemes in health care. However, the enthusiasm should be tempered with more sophisticated considerations about how to match design and purpose, given the multiple objectives of health care systems and the need to involve health professionals in development and learning in a rapidly evolving environment. The introduction of perspectives from behavioral economics can be instrumental in thinking about such issues in a more realistic way than previously, and may thus be an important part of developing the next generation of performance management tools.

References

Akerlof, G. A. and R. E. Kranton (2010), Identity Economics: How Our Identities Shape Our Work, Wages, and Well-Being, Princeton, NJ: Princeton University Press.

Besley, T., G. Bevan and K. Burchardi (2009), ‘Naming and shaming: the impacts of different regimes on hospital waiting times in England and Wales’, LSE Health and Social Care Discussion Paper, London School of Economics and Political Science, London.

Bevan, G. and C. Hood (2006), ‘What’s measured is what matters: targets and gaming in the English public health care system’, Public Administration, 84: 517–538.

Bovens, M., T. Schillemans and P. ’t Hart (2008), ‘Does public accountability work? An assessment tool’, Public Administration, 86(1): 225–242.

Pollitt, C. (2003), The Essential Public Manager, London: Open University Press/McGraw Hill.

Power, M. (1997), The Audit Society: Rituals of Verification, Oxford: Oxford University Press.

Quartz, J., J. Wallenburg and R. Bal (2013), ‘The Performativity of Rankings: On the Organizational Effects of Hospital League Tables’, iBMG Working Paper No. W2013.02, http://www.bmg.eur.nl/fileadmin/ASSETS/bmg/Onderzoek/Onderzoeksrapporten___Working_Papers/2013/IBMG_Working_Paper_2013.02_roland_bal.pdf [accessed July 2014].

Tuohy, C. (2003), ‘Agency, contract and governance: shifting shapes of accountability in the health care arena’, Journal of Health Politics, Policy and Law, 28(2–3): 195–215.
