http://www.jhltonline.org

EDITORIAL COMMENTARIES

Economic evaluation in health care: A modern day quagmire

Roger W. Evans, PhD,a and Francis D. Pagani, MD, PhDb

From the aUnited Network for the Recruitment of Transplantation Professionals, Rochester, Minnesota; and bCenter for Circulatory Support and the Cardiovascular Center, University of Michigan, Ann Arbor, Michigan.

The report by Pulikottil-Jacob et al1 provides us with an opportunity to reflect on various issues associated with the conceptualization and the conduct of economic evaluations in health care. In our opinion, too frequently such analyses, while important in concept, lack optimal execution. Cost-effectiveness analysis, as a means to make informed decisions concerning resource allocation, has a long and distinguished history.2–4 These decisions have often been associated with government programs, including attempts to critically appraise medical technology. In this regard, from 1972 to 1995, the now defunct Office of Technology Assessment, formerly an arm of the U.S. Congress, routinely performed cost-effectiveness analyses in relation to technological innovations in health care.5

Despite its perceived utility, cost-effectiveness analysis has often been criticized as a means to ration health care services, a taboo topic in the United States. Thus, some federal agencies, including the Centers for Medicare and Medicaid Services (CMS), have excluded cost-effectiveness information from consideration when making coverage (what to pay for) and reimbursement (how much to pay) decisions. Meanwhile, in the United Kingdom, the National Institute for Health and Clinical Excellence (NICE) has routinely included the results of cost-effectiveness analyses in its decisions on how medical technology will be deployed within the National Health Service.6 Many other countries have done likewise, with little concern about the rationing of health care resources. In effect, the results of cost-effectiveness analyses are viewed as a means to better manage national single-payer health care systems.

As we see it, there are 5 issues worthy of consideration when conducting an economic evaluation to establish relative cost-effectiveness. Pulikottil-Jacob et al1 have addressed most of them, although some better than others.
First, ideally, cost-effectiveness analyses are conducted alongside prospective, randomized clinical trials. Admittedly, this gold standard is rarely achieved, and we compromise accordingly, as have Pulikottil-Jacob et al.1

Their cost-effectiveness analysis is based on non-randomized, observational data derived from an administrative database. In our opinion, such data enhance the probability of bias in their study for the following reasons:

1. The database excludes variables needed to risk stratify the patient groups subjected to analysis.
2. Many of the HeartMate II (Thoratec, Pleasanton, CA) patients received the device in an earlier period when less was known about patient selection and post-implant management, giving rise to an unacknowledged "period effect."
3. The frequency of missing data is greater for the HeartMate II group than for the HeartWare (HeartWare International, Inc., Framingham, MA) group.

Taken together, these sources of bias diminish the significance of their conclusions.

Second, there are several uses of the term "cost-effective," and in this regard, researchers are not always explicit about their intent when conceptualizing their analyses. As pointed out by Doubilet et al,7 the criterion for claiming cost-effectiveness should be clearly identified. There are four possibilities:

1. Cost-effective = cost saving
2. Cost-effective = effective
3. Cost-effective = cost saving, with an equal (or better) health outcome
4. Cost-effective = having an additional benefit worth the additional cost

The fourth criterion is preferred but infrequently applied. Meanwhile, the use of any other criterion should be stated and qualified accordingly. In this regard, the definition of cost-effectiveness put forth by Pulikottil-Jacob et al1 is consistent with criterion 3: the treatment costs associated with the HeartWare device were lower and the outcomes better than those associated with the HeartMate II. As indicated, this is not the preferred criterion for cost-effectiveness, but one that is commonly used.
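The logic of these four criteria can be made concrete in code. The sketch below is purely illustrative: the costs, outcomes, and willingness-to-pay threshold are invented for the example and are not drawn from Pulikottil-Jacob et al1 or any actual device data.

```python
# Illustrative sketch of the four Doubilet criteria for "cost-effective."
# All numbers below are hypothetical.

def doubilet_criteria(cost_new, cost_old, effect_new, effect_old, wtp):
    """Return which of the four criteria a new treatment satisfies
    relative to a comparator.

    wtp: willingness to pay per unit of additional effect (criterion 4).
    """
    met = []
    if cost_new < cost_old:
        met.append(1)  # criterion 1: cost saving
    if effect_new > effect_old:
        met.append(2)  # criterion 2: effective
    if cost_new < cost_old and effect_new >= effect_old:
        met.append(3)  # criterion 3: cost saving, equal or better outcome
    if effect_new > effect_old:
        # Incremental cost-effectiveness ratio (ICER)
        icer = (cost_new - cost_old) / (effect_new - effect_old)
        if icer <= wtp:
            met.append(4)  # criterion 4: extra benefit worth the extra cost
    return met

# A treatment that costs less AND works better satisfies all four
# criteria (its ICER is negative; it "dominates" the comparator):
print(doubilet_criteria(cost_new=80_000, cost_old=90_000,
                        effect_new=5.2, effect_old=5.0, wtp=30_000))
```

Under criterion 3, as applied by Pulikottil-Jacob et al,1 the dominant case above is all that is claimed; criterion 4 additionally asks whether a costlier but better treatment is worth its incremental cost, which is why it requires an explicit willingness-to-pay threshold.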

1053-2498/$ - see front matter r 2014 International Society for Heart and Lung Transplantation. All rights reserved. http://dx.doi.org/10.1016/j.healun.2014.02.003


Third, in general, there are 4 methods or approaches to economic evaluation. As noted by Robinson,8 these approaches, which relate costs to outcomes, are as follows:

1. Cost-minimization analysis, wherein outcomes are the same between options or treatments; no outcome measurement is necessary
2. Cost-effectiveness analysis, in which outcomes are measured in natural units; for example, life-years gained
3. Cost-utility analysis, in which utility measures are used to assess outcomes; for example, quality-adjusted life-years (QALYs)
4. Cost-benefit analysis, in which outcomes are valued in monetary terms

Increasingly, cost-utility analysis is becoming the preferred method for economic evaluation. This is the method chosen by Pulikottil-Jacob et al,1 and is the one typically used by NICE, although the use of QALYs, as well as the New York Heart Association (NYHA) classification as a surrogate for health utility, remains highly controversial.9–14

Unfortunately, and further complicating matters, health-related quality of life and, hence, QALYs were indirectly assessed by Pulikottil-Jacob et al1 using the NYHA classification, and health utility was assessed at 1 month after left ventricular assist device (LVAD) implant. In addition, health utility was assumed to remain constant during the period of LVAD support. This is a serious limitation, which the authors only superficially acknowledge. In the future, this issue must be addressed in greater detail. Had QALYs been directly measured, the results of the study could have been dramatically different.

Fourth, economic evaluation has been greatly simplified given the availability of reasonably sophisticated decision support software packages such as TreeAge Pro (TreeAge Software, Williamstown, MA) and others, many of which are based on Excel (Microsoft Corp, Redmond, WA).15,16 Pulikottil-Jacob et al1 used Excel to model their data.
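To see why the constant-utility assumption matters, consider a minimal sketch of the QALY and ICER arithmetic. Every utility value, survival duration, and cost below is hypothetical; none is taken from the study under discussion.

```python
# Hypothetical QALY/ICER arithmetic. All utilities, durations, and
# costs are invented for illustration.

def qalys_constant(utility, years):
    """QALYs assuming health utility stays constant over the whole
    period (no discounting), as in the assumption criticized above."""
    return utility * years

def qalys_varying(utilities_by_year):
    """QALYs when utility is allowed to change year by year
    (each entry covers one year of survival)."""
    return sum(utilities_by_year)

# Utility measured once at 1 month post-implant (say 0.70) and carried
# forward unchanged for 3 years of support:
q_const = qalys_constant(0.70, 3.0)          # about 2.1 QALYs

# If utility in fact drifts over time (say 0.55, then 0.70, then 0.80),
# the piecewise total differs:
q_vary = qalys_varying([0.55, 0.70, 0.80])   # about 2.05 QALYs

# Against a hypothetical comparator producing 1.6 QALYs at 20,000 less
# cost, the two utility assumptions yield different cost-per-QALY figures:
delta_cost = 20_000
print(delta_cost / (q_const - 1.6))  # ICER under constant utility
print(delta_cost / (q_vary - 1.6))   # ICER under time-varying utility
```

Even in this toy example, a single early measurement carried forward shifts the ICER by roughly 10%; with directly measured, time-varying utilities, the study's conclusions could move further still.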
Unfortunately, when using the available packages, the analyses are based on a model and assumptions that can be in error, leading to questionable results. By simply changing the model, or modifying the assumptions, a very different result can be achieved. Unlike Pulikottil-Jacob et al,1 many researchers fail to clearly describe their model and state their assumptions. As a result, it is difficult or impossible to assess the integrity of the findings. At the same time, however, even if the model is specified and the assumptions are stated, people may take issue. By definition, models imperfectly reflect reality. Researchers using the same data and a different model may come to totally different conclusions.

Fifth, there is the issue of individualization. Cost-effectiveness analyses are typically conducted at what we might call the "population level." However, in the era of personalized, individualized, and precision medicine, the results of such analyses are being interpreted at the level of the individual patient.17,18 This is leading to the conclusion that treatments and technologies can be cost-effective at an individual level, even if the population-level results are unconvincing. In other words, for a given population, a treatment or technology may be deemed cost-ineffective, yet at the individual level the same treatment or technology can be cost-effective. This leads us to the conclusion that everything is ultimately cost-effective for someone, implying that cost-effectiveness analysis as a population-level decision-making tool may eventually become irrelevant. In other words, within a decade, studies such as that conducted by Pulikottil-Jacob et al1 may be considered an artifact of a former era.

What do we conclude based on our review? First, most economic evaluations in health care have significant limitations that call into question the significance of their results. Second, reasonable guidelines for the publication of the results of economic evaluations have been proposed by well-credentialed economists with the intent of enhancing the quality of proposed studies and published results.19–22 Studies not using rigorous methodology can be a liability. Often they are a source of confusion, and in many cases the results are not readily generalizable (i.e., they lack external validity). This is why the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) guidelines are important.22 Their goal is to improve the quality and generalizability of economic evaluations. This is also where "quality of the evidence" grading becomes an issue in the development of consensus statements, guidelines, and the conduct of meta-analyses. In the end, our goal is to minimize methodologic shortcomings, recognizing we will never eliminate them entirely.

Disclosure statement

F.D.P. participates in HeartWare contract research administered by the University of Michigan. R.W.E. does not have a financial relationship with a commercial entity that has an interest in the subject of the presented manuscript or other conflicts of interest to disclose.

References

1. Pulikottil-Jacob R, Suri G, Connock M, et al. Comparative cost-effectiveness of the HeartWare versus HeartMate II left ventricular assist devices used in the United Kingdom National Health Service bridge-to-transplant program for patients with heart failure. J Heart Lung Transplant 2014;33:350-8.
2. Gold MR, Siegel JE, Russell LB, Weinstein MC, eds. Cost-effectiveness in health and medicine. New York: Oxford University Press; 1996.
3. Muenning P. Designing and conducting cost-effectiveness analyses in medicine and health care. San Francisco: Jossey-Bass; 2002.
4. Drummond MF, O'Brien B, Stoddart GL, Torrance GW. Methods for the economic evaluation of health care programmes. 2nd edition. New York: Oxford University Press; 1997.
5. Institute of Medicine. Assessing medical technologies. Washington, DC: National Academy Press; 1985.
6. Rawlins MD. NICE: moving onward. N Engl J Med 2013;369:3-5.
7. Doubilet P, Weinstein MC, McNeil BJ. Use and misuse of the term "cost-effective" in medicine. N Engl J Med 1986;314:253-6.
8. Robinson R. What does it mean? BMJ 1993;307:670-3.
9. Torrance GW, Feeny D. Utilities and quality-adjusted life years. Int J Technol Assess Health Care 1989;5:559-75.


The Journal of Heart and Lung Transplantation, Vol 33, No 4, April 2014

10. Richardson G, Manca A. Calculation of quality adjusted life years in the published literature: a review of methodology and transparency. Health Econ 2004;13:1203-10.
11. Rasanen P, Roine E, Sintonen H, et al. Use of quality-adjusted life years for the estimation of effectiveness of health care: a systematic review. Int J Technol Assess Health Care 2006;22:235-41.
12. Sassi F. Calculating QALYs, comparing QALY and DALY calculations. Health Policy Plan 2006;21:402-8.
13. Holmes D. Report triggers quibbles over QALYs, a staple of health metrics. Nat Med 2013;19:248.
14. European Consortium in Healthcare Outcomes and Cost-Benefit Research. http://www.echoutcome.eu/index.php/en/home.html.
15. TreeAge Software, Inc. https://www.treeage.com/.
16. Mind Decider. Decision making software. http://www.minddecider.com/Articles.Decision-making_software_review.htm.
17. Garber AM, Tunis SR. Does comparative-effectiveness research threaten personalized medicine? N Engl J Med 2009;360:1925-7.

18. Conway PH, Clancy C. Comparative-effectiveness research—implications of the Federal Coordinating Council's report. N Engl J Med 2009;361:328-30.
19. Drummond MF, Jefferson TO. Guidelines for authors and peer reviewers of economic submissions to the BMJ. BMJ 1996;313:275-83.
20. Siegel JE, Weinstein MC, Russell LB, Gold MR. Recommendations for reporting cost-effectiveness analyses: Panel on Cost-Effectiveness in Health and Medicine. JAMA 1996;276:1339-41.
21. Drummond M, Manca A, Sculpher M. Increasing the generalizability of economic evaluations: recommendations for the design, analysis, and reporting of studies. Int J Technol Assess Health Care 2005;21:165-71.
22. Husereau D, Drummond M, Petrou S, et al. Consolidated health economic evaluation reporting standards (CHEERS) statement. BMJ 2013;346:f1049.
