COMMENTARY

How Automation Can Help Alleviate the Budget Crunch in Public Health Research

In an era of severe funding constraints for public health research, more efficient means of conducting research will be needed if scientific progress is to continue. At present, major funders, such as the National Institutes of Health, do not provide specific instructions to grant authors or to reviewers regarding the cost efficiency of the research that they conduct. Doing so could potentially allow more research to be funded within current budgetary constraints and reduce waste. I describe how a blinded randomized trial was conducted for $275 000 by completely automating the consent and data collection processes. The study used the participants' own computer equipment, relied on big data for outcomes, and outsourced some costly tasks, potentially saving $1 million in research costs. (Am J Public Health. 2015;105:e19–e22. doi:10.2105/AJPH.2015.302782)

Peter A. Muennig, MD, MPH

Public health research is facing an unprecedented shortage of funds.1,2 Although the percentage of applications that are successfully funded by the National Institutes of Health (NIH) has fallen by about half since 1999, the average cost of a research grant has remained roughly constant (Figure 1).3,4 The persistence of high grant costs in the face of severe shortages of funds is remarkable because virtually every other industry has realized significant savings by automating processes normally done by humans. Moreover, many techniques for automating public health research already exist—they simply are not used.5 Cost efficiencies in research design could result in more research being conducted under current research budget constraints.

The increasing number of unfunded grant applications is harming public health research in a number of ways. Most obviously, as a result of the shortfall in funds, research laboratories are closing, and important public health research projects are being shelved.6 However, the shortfall of funding is also creating other, less obvious problems. Foremost, the opportunity cost of conducting public health research is increasing.7 Scientists must spend more time writing unsuccessful grant applications, and the time spent writing grants almost certainly crowds out time they could be devoting to research.7,8 This wasted time may lead to even fewer public health research projects—most of which presumably save lives—than one would predict given current funding cuts.7

One solution to these problems is to improve the cost-efficiency of research itself. There are various ways of potentially reducing research costs, including using existing data that can be linked to participants, automating the enrollment and data collection processes, outsourcing tasks to less expensive (and more competitive) organizations, and reusing expensive equipment.8 Examples include:

• Personnel are needed to recruit participants, and consent for participation is nearly always accomplished with pen and paper rather than online.5
• Although computer-assisted personal interviewing (CAPI) has greatly increased the efficiency of data collection and storage, this method too often still requires interviewers, supervisors, and information technology personnel.5,9 These data can sometimes instead be collected from linked billing databases, from medical records, or by asking participants to enter data themselves.
• Outsourcing research to private third-party research groups can sometimes reduce costs because (1) third parties must compete for clients on both quality and cost, and (2) the overhead for third parties is often lower than for major research institutions.
• Resources that are purchased during the course of an experiment (such as equipment) too often go unused rather than being made available to other research teams.
• Participants often possess many of the resources needed for an experiment—computers, smart phones, and Internet service. These can be used to collect survey data (as well as other forms of data, such as GPS or accelerometric data) and can often produce data that are sufficient for many research projects.

AN EXAMPLE INTERVENTION

These problems persist perhaps because funders provide few incentives for researchers to automate their research and because there are relatively few examples showing researchers the way forward. I address this latter problem by using a real-world randomized controlled trial to show how research can be made more cost-efficient.

The US Preventive Services Task Force seeks to provide recommendations for preventive screening practices in clinical settings.10 Many believe these guidelines provide the best available "standard" against which clinical recommendations are made.10 These recommendations vary by the risk profile of the patient, including his or her age. In the clinical setting, prostate and breast cancer screening are often performed without due consideration of the US Preventive Services Task Force guidelines.11 This excess screening not only leads to increased health system costs but also results in considerable anxiety from false positive test results and morbidity resulting from complications of unnecessary surgeries.

FIGURE 1—Average grant size and success rate of grant applications: National Institutes of Health, United States, 1999–2014. (Axes: average grant size in constant 2014 US$; success rate, %; year, 1999–2013.) Note. The figure includes all grant mechanisms except those associated with stimulus funding. Data adjusted to constant 2014 US dollars by the author using the consumer price index. Source. National Institutes of Health.
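
The figure note indicates that grant costs were converted to constant 2014 US dollars using the consumer price index. A minimal sketch of that kind of inflation adjustment is shown below; the CPI values, the function name, and the example amount are placeholders for illustration, not the series or amounts behind the figure.

```python
# Illustrative only: convert nominal grant costs to constant 2014 US dollars.
# The CPI values below are placeholders; an actual adjustment would use the
# official CPI-U annual averages for each year.

CPI = {1999: 166.6, 2007: 207.3, 2014: 236.7}  # hypothetical index values

def to_2014_dollars(amount, year, cpi=CPI, base_year=2014):
    """Rescale a nominal dollar amount to the base year's price level."""
    return amount * cpi[base_year] / cpi[year]

if __name__ == "__main__":
    # e.g., a $400,000 grant awarded in 2007 expressed in 2014 dollars
    print(round(to_2014_dollars(400_000, 2007)))
```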

One approach to improving clinician practice is to educate patients about what might be the best approach given their individual characteristics.12 This process, often labeled "patient activation," can be as simple as having the patient watch educational videos, fill out questionnaires, and complete a validated instrument to ensure that the goal of "activation" was accomplished.12,13

The randomized controlled trial was awarded as an NIH R21 grant to "activate" patients to gently confront their providers when their providers ordered mammography or prostate cancer screening. The objective of this patient activation intervention was to steer the provider toward age- and risk-appropriate tests. The intervention included patient activation videos that were produced by a private production company using professional actors. Automated data collection was conducted using a series of surveys to assess patient activation. Electronic medical record data on each participant were then monitored over a period of 18 months.

I use this randomized controlled trial, which cost just $275 000 in direct expenses, to highlight the potential and pitfalls of cost-saving research approaches. A separate budget was created without these cost-saving measures; such a traditional approach would have cost an estimated $1.2 million or more.

Personnel

Our research team consisted of a principal investigator, a project manager, and a research assistant. All tasks required little percent effort in the grant budget. In addition, Nuna Health (a third-party insurance data aggregator; San Francisco, CA) and a videographer were paid a total of approximately $60 000 for their services. Nuna Health also provided the Web-based data collection services. The study was set up this way because Nuna would be working with sensitive back-end medical billing data, which were then paired with the online form data collected directly from participants; both sources used identifiers. Identifiers were selected to ensure a perfect match (e.g., name, plan number, and e-mail within the universe of city employees). Nuna employees had the technical expertise to collect such data securely and to perform all required statistical analyses without deidentifying the data and sending them to our team.
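
As a rough illustration of the deterministic matching described above (pairing online form data with billing data on exact identifiers), the sketch below joins two toy tables on a composite key. The field names, the sample rows, and the pandas-based approach are assumptions for illustration; the study's actual linkage pipeline is not described at this level of detail.

```python
# Hypothetical sketch: deterministic linkage of online survey responses to
# billing records on exact identifiers (name, plan number, e-mail).
# Field names and rows are illustrative, not the study's actual schema.
import pandas as pd

surveys = pd.DataFrame([
    {"name": "A. Smith", "plan_number": "1001", "email": "a.smith@example.org", "activation_score": 62},
])
claims = pd.DataFrame([
    {"name": "A. Smith", "plan_number": "1001", "email": "a.smith@example.org", "cpt_code": "77067"},
])

# Normalize the identifiers before matching so trivial formatting differences
# (case, stray whitespace) do not break an otherwise exact match.
key = ["name", "plan_number", "email"]
for df in (surveys, claims):
    for col in key:
        df[col] = df[col].str.strip().str.lower()

# An inner join keeps only participants found in both sources ("perfect match");
# one survey record may link to many claims for the same person.
linked = surveys.merge(claims, on=key, how="inner", validate="one_to_many")
print(linked)
```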

The Setting and Participants

We approached the City & County of San Francisco Health Service System in the hope of enrolling city employees in our experiment. The city maintains contact information and insurance data on its employees. We then recruited Nuna Health to monitor the insurance claims data. To simplify data collection efforts, we included only employees covered by Blue Shield of California. Nuna Health provides services to employers by acting as a "firewall" between the insurer (which has extensive medical information on its insured population) and the employer (which is not allowed to view data on individual employees under federal law). Before recruiting employee participants, legal agreements were signed between our research team, participating insurers, Nuna Health, and the City & County of San Francisco Health Service System in accordance with federal health information privacy regulations. Upon consent, participants were taken to a secure Web page that immediately linked them to their medical data using identifiers.

The Approach

We recruited city employees via e-mails that included both the logo of the City & County of San Francisco and that of the National Institutes of Health. This measure was taken to improve participants' confidence that the e-mails were coming from a reliable source; logo use was approved by all entities. Employees who were interested in participating were directed to an online consent form. (As defined by written agreement, our research team protected the privacy of this information; the City & County of San Francisco did not know whether an employee consented to participate.) Participants could choose to visit their provider over the study period if they wanted, and providers were not informed whether their patients (our participants) had seen the video. This process was approved by the Columbia University Medical Center institutional review board.

After agreeing to consent electronically, women were randomized to receive either a patient activation video on breast cancer screening (https://vimeo.com/53082462) or a control video on walking. Men were randomized to receive either a patient activation video on prostate cancer screening (https://vimeo.com/53218629) or the control video on walking. Each patient activation video prompted viewers not only to engage their provider regarding their breast or prostate screening tests but also, more broadly, to discuss the risks and benefits of all the laboratory procedures that had been ordered to screen for disease. We did this to increase the likelihood that changes in billing values would be adequately powered, even if the effect size for our primary outcome was small.

After participants filled out basic health survey questions online, the video was presented, and participants' knowledge and understanding were assessed with an online form. This form was designed to prompt participants much as a standard in-person interviewer might prompt them using a CAPI device. Employees who participated were paid a nominal fee of $20 upon completion of the study, which was sent electronically using an online check-writing platform. Every 6 months, participants were asked about their most recent medical visit and assessed for activation using a formal instrument.13 The billing data indicated which tests had been ordered in each group, allowing the costs of medical care and the health outcomes of participants to be monitored. For instance, changes in codes used to represent adverse medical outcomes (such as impotence from a prostate biopsy or infection from a breast biopsy) were also monitored.
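
A minimal sketch of the sex-stratified random assignment described above appears below. The activation video URLs are taken from the text, but the control video URL, the function, and the use of Python's random module are illustrative assumptions rather than the study's actual implementation.

```python
# Hypothetical sketch: sex-stratified 1:1 assignment to an activation video or
# the control (walking) video at the moment of electronic consent. The control
# video URL is a placeholder; the study's actual assignment code is not shown here.
import random

ACTIVATION_VIDEO = {
    "female": "https://vimeo.com/53082462",  # breast cancer screening activation
    "male": "https://vimeo.com/53218629",    # prostate cancer screening activation
}
CONTROL_VIDEO = "https://example.org/walking-video"  # placeholder URL

def assign_video(sex: str, rng: random.Random) -> tuple[str, str]:
    """Return the assigned arm and the video URL to show the participant."""
    arm = rng.choice(["activation", "control"])
    url = ACTIVATION_VIDEO[sex] if arm == "activation" else CONTROL_VIDEO
    return arm, url

rng = random.Random(2015)  # fixed seed so the assignment sequence is reproducible
print(assign_video("female", rng))
```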

Outcomes

We piloted the process by collecting the survey, intervention, and postintervention data from volunteers at Nuna Health. This allowed us to uncover a few system glitches (e.g., incompatible browsers) that had to be repaired. These setbacks were minor. There was no evidence of randomization failure, and all participants were successfully and prospectively linked to the medical data from which the outcomes were measured.

Lessons Learned

Our targeted enrollment was estimated to be approximately 1100 based on a power analysis. We originally believed that we had access to more than 10 000 e-mail addresses. However, the number of valid e-mail addresses in the San Francisco database was smaller than we had projected when first testing the database. Of the 1001 participants who were contacted and indicated that they had read the e-mail, 62% clicked through to enroll in the study. Considerably fewer completed all phases of the study. The project manager was available to answer questions by e-mail but received just three inquiries.

Another significant issue arose when the Columbia University Medical Center institutional review board halted the study after initially approving it and after data collection had begun. Concerns were raised about electronically signed consent forms and about collecting sensitive health information through a third-party data aggregator. These concerns were eventually allayed, but they delayed the project for nearly a year.
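
For readers unfamiliar with how an enrollment target such as the roughly 1100 participants mentioned under Lessons Learned might be derived, the sketch below shows a generic two-proportion power calculation using statsmodels. Every parameter value is a placeholder chosen only to illustrate the mechanics; the study's actual design assumptions are not reported here.

```python
# Hypothetical sketch of a two-group sample-size calculation. The assumed
# proportions, alpha, and power below are placeholders, not the study's
# actual design parameters.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control, p_treatment = 0.30, 0.38   # assumed rates of guideline-concordant screening
effect_size = proportion_effectsize(p_treatment, p_control)  # Cohen's h

n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, ratio=1.0, alternative="two-sided"
)
print(f"Required sample size: {n_per_arm:.0f} per arm, {2 * n_per_arm:.0f} total")
```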

Successes

Our team successfully contacted participants, enrolled them, exposed them to an intervention, and collected prospective data with very few personnel hours involved in any of these processes. Although we fell short of our target enrollment, other aspects of the research project seemed to go well, and we believe that we collected all of the relevant medical data we were required to collect. The study outcomes are not presented in this commentary because it focuses on the process rather than on the study itself.

Ours was not the first study to use many of the approaches presented here, but it serves as a "proof of concept" that an experiment can be run at a greatly reduced cost. Because the literature on research efficiency and management is sparse, we hope that this example will (1) serve as a call for researchers to think more about ways to reduce their budget footprint with the NIH, (2) facilitate reform of grant-making processes, (3) pave the way for a rethinking of institutional review board protocols so that automated systems can be more readily accommodated, and (4) spur the development of a research project management literature that emphasizes new ways of maintaining quality while cutting costs.

INCORPORATING COST EFFICIENCY INTO GRANT REVIEWS

Such approaches can greatly reduce costs, but careful thought must be given to the trade-offs between data quality and project cost when alternative approaches are used. Quality-versus-cost trade-offs must be made explicit by researchers in their grant applications and in the published research that results from them. How these trade-offs are discussed will depend on the nature of the research itself. For example, cell phone GPS and accelerometric data will be inadequate for a project that requires an estimate of caloric expenditure as a primary outcome measure. However, they will likely be adequate for ascertaining whether participants are exercising more, particularly if participants are prompted to indicate whether they are exercising.

At present, grant reviewers within the NIH and other funding institutions focus mostly on significance, innovation, the analytical approach, and team or institutional quality. In a new era of cost-conscious research in the United States, funding institutions may wish to ask reviewers to explicitly consider cost efficiency as a separate area of grant evaluation. There may also be trade-offs with other areas of grant evaluation. For instance, relying on users with smart devices raises thorny questions about the generalizability of study findings by social class (and therefore, in many regions, by race and ethnicity as well). Currently, cell phone ownership is similar (and nearly universal) across groups defined by race and ethnicity, but smart phone ownership still differs somewhat.14

As a final consideration, the NIH requires data collected by its grantees to be made publicly available. It could do the same for the CAPI devices, laboratory kits, and other equipment that it funds. In the Oregon Health Insurance Experiment, laboratory costs were reduced by creating portable laboratories that fit into a suitcase (thereby obviating the need for a van or clinic).15 Unfortunately, despite their potential, these portable kits were never reused and are currently sitting in a warehouse.

IMPLICATIONS FOR FUTURE RESEARCH

The decline in research funding in the United States requires innovative solutions if scientific advances are to continue. Fortunately, rapidly industrializing nations, such as China, are quickly approaching the United States in both research expenditures and the number of scientific publications.16 The global impact of these funding shortages will therefore be somewhat mitigated. In addition, the private sector is innovating and taking bigger risks in public health research. One example is Apple's open-source ResearchKit (Apple Inc., Cupertino, CA), which allows researchers to more easily conduct public health research using tools that participants already possess.17 Interdisciplinary collaborations, such as those between public health researchers and urban planners, are also expanding research resources for public health researchers. Finally, big data can be used to conduct experiments at little cost. GPS-enabled smart phones, electronic medical record data, billing data, and online search data are resources that already exist and are underutilized.

There are many ways to free up federal dollars for research. For example, the many redundant national health surveys that are routinely conducted in the United States could be consolidated, and participants could be linked across many different data sources (e.g., hospital billing, birth, death, education, and crime data sets) within a single survey. The many thousands of surveys collected every year could also include consent for participants to voluntarily release their data with identifiers. However, the largest gains might come from simple changes to the ways that grants are reviewed by large agencies. Such modifications would ideally be accompanied by changes to data privacy laws that protect participants but still give researchers access to their data when they consent. Although only one country—Estonia—has done so, it is conceivable that everyone in the United States could possess a biometrically validated digital identity card. Such a card could be used to link an individual to all of his or her data (including travel data by GPS, shopping behavior, medical data, survey data, and so forth), with the option to manage what is shared and what is not.

One way of seeing the research implications of using automated systems is through the lens of cost-effectiveness analysis. Cost-effectiveness analysis is a tool for comparing medical treatments that allows policymakers to select treatments in ways that maximize the number of lives (or quality-adjusted life years) saved within a fixed budget.18 It is perhaps time to apply a similar standard to prioritizing public health research projects. Whether this means weighing the impact of a grant against its cost, assessing the efficiency of the research approach, or simply mandating automated data collection systems, the time has certainly come to at least start a debate on how agencies might deal with the budgeting realities of 21st-century research using 21st-century technologies.
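
To make the cost-effectiveness analogy concrete, the sketch below computes an incremental cost-effectiveness ratio (ICER), the additional cost per quality-adjusted life year (QALY) gained, for two hypothetical options. All figures are invented for illustration and are not drawn from the study or from reference 18.

```python
# Hypothetical sketch of the cost-effectiveness calculation referenced above:
# ICER = (C1 - C0) / (E1 - E0), i.e., cost per QALY gained. All numbers are
# invented for illustration.

def icer(cost_new: float, qalys_new: float, cost_old: float, qalys_old: float) -> float:
    """Incremental cost per QALY gained by the new option over the old one."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Two hypothetical screening strategies for one cohort (placeholder values).
usual_care = {"cost": 1_000.0, "qalys": 10.00}
guideline_concordant = {"cost": 1_450.0, "qalys": 10.03}

ratio = icer(guideline_concordant["cost"], guideline_concordant["qalys"],
             usual_care["cost"], usual_care["qalys"])
print(f"ICER: ${ratio:,.0f} per QALY gained")  # here: $15,000 per QALY
```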

About the Author

Peter A. Muennig is with the Mailman School of Public Health, Columbia University, New York, NY.
Correspondence should be sent to Peter A. Muennig, MD, MPH, Associate Professor, Mailman School of Public Health, Columbia University, ARB 4th Floor, New York, NY 10032 (e-mail: [email protected]). Reprints can be ordered at http://www.ajph.org by clicking the "Reprints" link.
This article was accepted May 16, 2015.

Acknowledgments

This project was funded by National Institutes of Health grant R21HD07156101.
I thank Zohn Rosen, PhD, and Rosemary Passantino for their help in developing this manuscript and for their leadership on the project, as well as Jini Kim and the rest of the team at Nuna Health for making this research possible. These members include Jeff Quinn, Shana Sweeny, and David Chen.

References
1. Pflumm M. NIH funding rates drop to record lows. Nat Med. 2011;17(6):637.
2. Emanuel EJ, Schnipper LE, Kamin DY, Levinson J, Lichter AS. The costs of conducting clinical research. J Clin Oncol. 2003;21(22):4145–4150.
3. National Institutes of Health. Research portfolio online reporting tools: average grant cost. Available at: http://report.nih.gov/nihdatabook/charts/Default.aspx?chartId=155&catId=2. Accessed May 3, 2015.
4. National Institutes of Health. Research portfolio online reporting tools: success rates and funding rates. Available at: http://report.nih.gov/NIHDatabook/Charts/Default.aspx?showm=Y&chartId=124&catId=13. Accessed May 3, 2015.
5. Shapiro JS, Bessette MJ, Baumlin KM, Ragin DF, Richardson LD. Automating research data collection. Acad Emerg Med. 2004;11(11):1223–1228.
6. Macilwain C. Biology boom goes bust. Cell. 2013;154(1):16–19.
7. von Hippel T, von Hippel C. To apply or not to apply: a survey analysis of grant writing costs and benefits. PLoS ONE. 2015;10(3):e0118494.
8. Basken P. Cuts may force long-awaited efficiencies at NIH. The Chronicle of Higher Education. March 25, 2013:25–28.
9. Sainsbury R, Ditch J, Hutton S. Computer assisted personal interviewing. Soc Res Update. 1993;3:1–12.
10. Harris RP, Helfand M, Woolf SH, et al. Current methods of the US Preventive Services Task Force: a review of the process. Am J Prev Med. 2001;20(3 suppl):21–35.
11. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282(15):1458–1465.
12. Mosen DM, Schmittdiel J, Hibbard J, Sobel D, Remmers C, Bellows J. Is patient activation associated with outcomes of care for adults with chronic conditions? J Ambul Care Manage. 2007;30(1):21–29.
13. Hibbard JH, Stockard J, Mahoney ER, Tusler M. Development of the Patient Activation Measure (PAM): conceptualizing and measuring activation in patients and consumers. Health Serv Res. 2004;39(4 pt 1):1005–1026.
14. Pew Research Center. Cell phone and smart phone ownership demographics. Available at: http://www.pewinternet.org/data-trend/mobile/cell-phone-and-smartphone-ownership-demographics. Accessed May 3, 2015.
15. Finkelstein A, Taubman S, Wright B, et al. The Oregon Health Insurance Experiment: evidence from the first year. National Bureau of Economic Research Working Paper no. 17190. Available at: http://www.nber.org/papers/w17190.pdf. Accessed February 1, 2015.
16. World Bank. Research and development expenditure. Available at: http://wdi.worldbank.org/table/5.13. Accessed October 18, 2014.
17. Apple. ResearchKit. Available at: https://www.apple.com/researchkit. Accessed May 2, 2015.
18. Muennig P. Cost-Effectiveness Analysis in Health: A Practical Approach. San Francisco, CA: Jossey-Bass; 2007.
