International Journal of Medical Informatics 84 (2015) 469–476

journal homepage: www.ijmijournal.com

Cognitive workload changes for nurses transitioning from a legacy system with paper documentation to a commercial electronic health record

Lacey Colligan a,∗, Henry W.W. Potts b, Chelsea T. Finn c, Robert A. Sinkin d

a Dartmouth Hitchcock Medical Center, Lebanon, NH, United States
b Centre for Health Informatics & Multiprofessional Education (CHIME), University College London, London, UK
c Emergency Department, Stony Brook University Hospital, Stony Brook, NY, United States
d Division of Neonatology, University of Virginia Health System, Charlottesville, VA, United States

Article info

Article history:
Received in revised form 8 February 2015
Accepted 15 March 2015

Keywords:
Nursing
Cognitive workload
Patient safety
Electronic health record
Electronic medical record
Human factors
NASA-TLX

Abstract

Objective: Healthcare institutions worldwide are moving to electronic health records (EHRs). These transitions are particularly numerous in the US, where healthcare systems are purchasing and implementing commercial EHRs to fulfill federal requirements. Despite the central role of EHRs in workflow, the cognitive impact of these transitions on the workforce has not been widely studied. This study assesses the changes in cognitive workload among pediatric nurses during data entry and retrieval tasks during the transition from a hybrid electronic and paper information system to a commercial EHR.

Materials and methods: Baseline demographics and computer attitude and skills scores were obtained from 74 pediatric nurses in two wards. They also completed an established and validated instrument, the NASA-TLX, designed to measure cognitive workload; this instrument was used to evaluate the cognitive workload of data entry and retrieval. The NASA-TLX was administered at baseline (pre-implementation), at 1, 5 and 10 shifts, and 4 months post-implementation of the new EHR.

Results: Most nurse participants experienced significant increases in cognitive workload at 1 and 5 shifts after “go-live”. These increases abated at differing rates predicted by participants’ computer attitudes scores (p = 0.01).

Conclusions: There is substantially increased cognitive workload for nurses during the early phases (1–5 shifts) of EHR transitions. Health systems should anticipate variability across workers adapting to “meaningful use” EHRs. “One-size-fits-all” training strategies may not be suitable, and longer periods of technical support may be necessary for some workers.

© 2015 Elsevier Ireland Ltd. All rights reserved.



Corresponding author. Tel.: +01 603 443 3646. E-mail addresses: [email protected], [email protected] (L. Colligan).

http://dx.doi.org/10.1016/j.ijmedinf.2015.03.003 1386-5056/© 2015 Elsevier Ireland Ltd. All rights reserved.


1. Introduction

Hospitals around the world have been and are transitioning to electronic health records (EHRs). In the US, the federal government set criteria for implementation and use of EHRs (“meaningful use”) and reduced reimbursements to healthcare institutions that failed to meet sequential meaningful use deadlines. In 2012, approximately 40% of US hospitals met stage 1 “meaningful use” (data capture criteria) and only 5% met stage 2 “meaningful use” (advanced clinical processes) [1]. In order to avoid reduced reimbursements through failure to meet meaningful use stages by certain deadlines, the remaining US hospitals are likely to purchase and implement commercial EHR products certified to meet “meaningful use” objectives [2].

1.1. Commercial EHRs and “top-down” implementations

Commercial EHRs are built by vendors that may have originally designed their system for billing purposes. In the US, these vendors have added functionalities to their established platform in order to meet specific meaningful use criteria. When these EHRs are implemented, administrations disable the legacy system and transition all workers to use of the commercial product (“top-down” implementation). A commercial EHR differs from a “home-grown” EHR that is introduced to a clinical workforce gradually (whose rate of adoption provides feedback to the developers), and iteratively improved as workflows and technology mutually adapt [2].

In mandated implementations, worker adoption of new technology is moot [3]; workers cannot work if they do not use the new technology. In this setting, worker adaptation to new technology, adjusting their workflow to the new tool, may better describe the process. During a period of adaptation, workers must gain new technology skills as well as adapt established workflows to accommodate the mandated EHR.

EHR implementations may present challenges to patient safety and healthcare workflow. The effects of such implementations on patient outcomes and safety are relatively unknown [4–6], and while EHRs are often touted as bringing safety benefits, they can bring unintended negative consequences [7–9]. A systematic review of decision support systems suggests that maximum benefit of such support is achieved when a system is developed locally and gains gradual acceptance by staff [7]. Adaptation of technology to staff behavior through iterative design promotes successful adoption of health IT [10,11], whereas mandated implementations can be hindered by poor staff acceptance [12–16]. Rapid implementation thus raises safety concerns [17,18].

1.2. Impact of large health information technology transitions

Large health information technology implementations bring a massive and rapid change to practice and can lead to physical, mental, and emotional exhaustion of a workforce [3,4]. The workforce that actually interacts with the patient at the point of care is considered to practice at the “sharp end” of the health system [19] and includes nurses, who are physically and temporally close to the patient and the care. Changing work demands at the “sharp end” due to EHRs have been noted to increase work stress amongst nurses [20,21]. Nursing work is cognitively demanding, requiring effective prioritization of tasks with little margin for error [22]. Additionally, nurses describe losing track of their patients as they concentrate on learning a new technology system [12,13,23]. Consequently, altering the cognitive load of routine nursing tasks may affect care and patient safety.

A report from the AMIA 2009 Health Policy Meeting focusing on unintended and unanticipated consequences of health IT and policy called for research on human factors and cognition [24]. The science of human factors instructs that “computing technology and artifacts are integral parts of [the] cognitive process and should be designed to correspond to human characteristics of reasoning, memory, attention, and constraints (human-centered design)” [25]. Only a few studies have looked at the impact of EHRs at the “sharp end” from a cognitive view [4,26–28].

1.3. Goal of study

We studied the immediate impact on pediatric hospital nurses as they transitioned from a legacy system with paper documentation to a commercial EHR that was implemented in a top-down manner. Seeking a quantitative and reliable measure, we chose an instrument developed and validated by NASA, the Task Load Index (NASA-TLX) [29], to measure workload. We studied serial cognitive workloads for data retrieval and entry (documentation) tasks experienced by pediatric nurses in two ward settings before and at multiple intervals after the workplace introduction of the new system (“go-live”). We hypothesized that NASA-TLX scores would increase immediately after the introduction of the new system and then decrease. We also wished to learn how high scores would rise, for how long, and whether they would eventually return to baseline or settle at a different level. We also measured additional variables to see if we could predict this pattern for different individuals.

2. Materials and methods

2.1. Study setting

The institution studied is a 131-bed Children’s Hospital within an academic tertiary care healthcare system. Nurse participants worked on either an inpatient ward (“Ward”) or a neonatal intensive care unit (“NICU”). For three decades, the institution relied on a hybrid electronic and paper information system. A computerized practitioner order entry (CPOE) system had been in use since the 1970s (Medical Information System, or MIS), and independent laboratory and radiology (PACS) electronic systems had been added later. Clinical documentation was paper-based, and each patient had a binder with their documentation hole-punched and inserted. In March 2011, the legacy computer systems and paper documentation were discontinued and all in-patient units went “live” over one night with a comprehensive commercial EHR.


Before implementation, all pediatric nurses received 16 h of classroom training; the curriculum was purchased as a package from the vendor. Access to the live EHR was not permitted until these required classes were completed. During the first two weeks after “go-live”, the nurse-patient ratio was doubled [that is, the number of patients per nurse was halved]. Multiple modalities for technology support were available 24/7 during the first month after “go-live”. A subset of nurses received additional training and were designated “super-users”. During the implementation period, super-users did not care for patients but were assigned to provide one-to-one assistance to nurses with questions or problems with the new system. Super-users were often unit leaders and were encouraged to serve as “champions” of the new EHR. All users and super-users had access to an EHR telephone hotline for additional support in the first month after “go-live”.

2.2. Measurement instruments

The NASA-TLX is a mental workload assessment tool designed to capture the subjective experience of workers engaged in complex human-machine socio-technical systems. The NASA-TLX is probably the most widely used mental workload scale because of its multi-dimensional structure and the ease of its administration [30,31]. It has been subjected to a variety of independent evaluations in which its reliability, sensitivity and utility have been substantiated [29,31]. The NASA-TLX uses a visual analogue scale for each of six independent sub-scales that together measure cognitive workload: Mental, Physical and Temporal Demands, Own Performance, Effort and Frustration (see Fig. 1). The premise is that these dimensions represent the “workload” experienced by workers; the dimensions reflect theories equating workload with the magnitude of the demands relative to the resources of the worker. NASA-TLX scores have been shown to correlate with error rates in other complex socio-technical domains [32,33] as well as in healthcare [34]. In aviation and aeronautics, NASA-TLX increases over 15% are considered significant by engineers when testing new technologies and prompt re-evaluation and/or redesign [35,36]. The NASA-TLX has been used previously to study task difficulty in healthcare [34,37–39].

In this study, the NASA-TLX was administered for two specified tasks: data entry and data retrieval. These tasks were selected after pilot interviews with nurses because they could be clearly defined and recognized and were consistent with the training language provided by the commercial vendor.

Demographic information (see Table 1) and two previously validated computer-related instruments were administered at baseline. The computer attitude score [40] is a 20-item questionnaire that has been shown to be positively correlated with computer performance [41] and is used to study computer anxiety [42]. The Computer Understanding and Experience Scale is a validated 12-item assessment of “know-how” related to computers; it was designed to capture skills rather than computer attitudes or training and can distinguish between expert and general computer users [43]. Both use 5-point Likert response scales.
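To make the NASA-TLX scoring concrete, here is a minimal sketch in Python of the unweighted (“raw TLX”) composite that the simplified analysis described later (Section 2.5) relies on; the ratings and field names are hypothetical, not study data.

```python
# Illustrative sketch (not the authors' code): an unweighted ("raw TLX")
# composite of the six NASA-TLX sub-scale ratings for a single task.
from statistics import mean

# Hypothetical visual analogue ratings for one nurse on the data entry task.
ratings = {
    "mental_demand": 7.5,
    "physical_demand": 2.0,
    "temporal_demand": 6.0,
    "own_performance": 4.5,
    "effort": 7.0,
    "frustration": 5.5,
}

raw_tlx = mean(ratings.values())  # simple mean, no task-specific sub-weightings
print(f"Raw TLX for this task: {raw_tlx:.2f}")
```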

Fig. 1 – NASA-TLX Scale.

Table 1 – Demographics of participants.

Variable | Results
Age | 20–30 years: 25 (34%); 31–40 years: 17 (23%); 41–50 years: 16 (22%); 51–60 years: 12 (16%); 61+ years: 4 (5%)
Years in profession | 15 years: 28 (38%)
Years at this hospital | 15 years: 15 (20%)
Hours worked per week | 35 h: 51 (69%)
Previously used another EHR | Yes: 22 (30%); No: 52 (70%)
Computer attitudes score | Mean = 6.9; standard deviation = 1.7
Computer skills score | Mean = 4.6; standard deviation = 1.9

2.3. Sampling

Participants were recruited if they worked on a nursing shift in the Ward or the NICU during a specific two week period, one month before “go-live”. All available nurses working were invited to participate [census sample], except for super-users who were excluded. “Travellers”, nurses who came from other institutions and were hired on a temporary basis to support clinical workload during the training and implementation periods, were included. Travellers used the previous hybrid electronic and paper system and received the same EHR training as permanent nursing staff. Travellers may have used other commercial EHRs in their other institutions; previous experience was a demographic detail captured at baseline.

2.4. Data collection

Two trained and scripted researchers administered the demographic surveys along with the first NASA-TLX before implementation of the EHR [t0]; this established a baseline of cognitive workload for nurses using the familiar hybrid electronic and paper information system. After the “go-live” date, we sought to administer NASA-TLXs at the end of the first [t1], fifth [t2], and tenth [t3] shifts after the EHR went “live”, and then a further 4 months after introduction [t4]. Shifts were twelve hours long, and nurses worked on a full-time or part-time basis. Thus, the time between individual nurses’ shifts was variable depending on the specifics of their schedule, vacation, staffing census, etc. The NASA-TLXs for data entry and retrieval were administered at the end of each participant’s 12-hour shift, and the nurses were asked to reflect on their experience using the EHR specifically in that shift. Each time, nurses were asked to score their workload on a separate NASA-TLX for the activities of both data entry and retrieval. The researchers tracked each participant’s schedule individually in order to collect data at the appropriate time and administer the NASA-TLX at the end of that shift.
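For concreteness, the resulting measurements can be pictured as a long-format table with one row per participant, time point, task and sub-scale rating; the sketch below (Python/pandas, hypothetical column names and values) illustrates the layout assumed by the analysis sketches that follow.

```python
# Illustrative sketch (not the study's actual data files): a long-format layout
# with one row per nurse, time point, task and NASA-TLX sub-scale rating.
import pandas as pd

records = [
    {"nurse_id": 1, "time": "t0", "task": "data_entry",     "subscale": "mental_demand", "rating": 4.0},
    {"nurse_id": 1, "time": "t0", "task": "data_retrieval", "subscale": "mental_demand", "rating": 3.5},
    {"nurse_id": 1, "time": "t1", "task": "data_entry",     "subscale": "frustration",   "rating": 8.0},
    # ... in the full design: 6 sub-scales x 2 tasks x 5 time points per participant
]
df = pd.DataFrame(records)
print(df.head())
```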

2.5. Data analysis

The research design yielded 6 NASA-TLX sub-scale scores for each of 2 tasks, thus 12 sub-scores in total, at each of 5 time points for each participant. Given the large number of NASA-TLX measurements, we planned to average over these 12 sub-scores in a manner consistent with any relationships seen between sub-scores, so as to improve reliability and decrease the possibility of type I error as appropriate. The NASA-TLX instrument as developed twenty years ago included sub-weightings within the six sub-scales based on the task. However, the instrument is now typically used without sub-weightings of the sub-scales because a “simplified” [non-sub-weighted] approach has been demonstrated to remain sensitive [29,31]; consequently, we used the simplified analysis.

2.6. Ethics

Appropriate ethics approval was obtained from the Institutional Review Board. Participant identity was protected and participants gave written informed consent. All participants were given the option of withdrawing from the study at any time.

3. Results

3.1. Descriptive results

Data were collected on 74 participants (50 in the NICU and 24 on the Ward) at baseline (t0) and t1. No participants asked to withdraw from the study, although complete data (no shift missed) were only obtained for 63 (82%) participants (see Fig. 2). Given the higher rate of missing data at t4, analyses were re-run excluding t4, i.e. on the 71 participants with complete data through to t3: this made no difference to the conclusions. While we sought to collect data after a specific number of shifts for t1, t2 and t3, this was not always possible. We accepted data after an approximately similar number of shifts; the specific variation in the number of shifts after “go-live” is detailed in Fig. 2.

Fig. 2 – Timeline of NASA-TLX data collection by subject number (n).

Demographic variables (n = 74) are described in Table 1. The computer attitudes score and computer skills score are calculated by averaging over the relevant items, producing two roughly Normally distributed variables. (The computer skills measure had one clear outlier, lying 7 standard deviations beyond the next highest value. This was changed to be two standard deviations beyond the next highest value.) These two measures are highly correlated (r = 0.69, p < 0.001).

3.2. NASA-TLX data reduction

As noted previously, the NASA-TLX has six sub-scales: Mental Demands, Physical Demands, Temporal Demands, Own Performance, Effort, and Frustration. Results of these sub-scale scores were broadly Normally distributed. The Physical Demands sub-scale scores tended to be lower than the other five scores and showed a poorer correlation with the other sub-scales. Feedback from participants suggested there were different interpretations of the sub-scale question: the physical demand of typing and searching with a mouse versus the physical demand of pushing mobile computer stations. Given this and its reduced relevance to the task, we excluded the Physical Demands sub-scale score completely from the analysis. This approach of dropping less relevant sub-scales has been used previously with the NASA-TLX [29]. To ensure that omission of the Physical Demands sub-scale did not alter our results, we performed a sensitivity analysis in which we repeated the analyses presented below including the Physical Demands scores: this had minimal effect on the results.
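A minimal sketch of this reduction, in Python/pandas with the same hypothetical column names (not the authors' code), is shown below: it drops the Physical Demands sub-scale and averages the remaining 10 sub-scores per participant and time point into a single TLX average.

```python
# Illustrative sketch (not the authors' code) of the reduction described above:
# drop the Physical Demands sub-scale, then average the remaining 5 sub-scales
# across both tasks (10 sub-scores) into one "TLX average" per nurse and time.
import pandas as pd


def tlx_average(long_df: pd.DataFrame) -> pd.DataFrame:
    """long_df columns (hypothetical): nurse_id, time, task, subscale, rating."""
    kept = long_df[long_df["subscale"] != "physical_demand"]
    return (
        kept.groupby(["nurse_id", "time"], as_index=False)["rating"]
        .mean()
        .rename(columns={"rating": "tlx_average"})
    )

# Example usage (wide table, one row per nurse, one column per time point):
# tlx = tlx_average(df)
# wide = tlx.pivot(index="nurse_id", columns="time", values="tlx_average")
```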

Table 2 – Repeated measures ANOVA for each independent variable.

Variable | Interaction with time | Main effect (shown if no significant interaction)
Computer attitude score | Wilks’ lambda = 0.63, F4,56 = 8.4, p < 0.001 | –
Computer skills score | Wilks’ lambda = 0.81, F4,56 = 3.3, p = 0.017 | –
Prior use of EHR | Wilks’ lambda = 0.91, F4,56 = 1.5, p = 0.22 | F1,59 = 0.1, p = 0.76
Age | Wilks’ lambda = 0.91, F4,56 = 1.3, p = 0.27 | F1,59 = 5.4, p = 0.024
Years in profession | Wilks’ lambda = 0.90, F4,56 = 1.6, p = 0.19 | F1,59 = 2.6, p = 0.11
Full/part time (part-time being defined as ≤35 h) | Wilks’ lambda = 0.86, F4,56 = 2.2, p = 0.08 | F1,59 = 2.3, p = 0.13
Place of work (NICU v. Ward) | Wilks’ lambda = 0.97, F4,56 = 0.4, p = 0.83 | F1,59 < 0.1, p = 0.9

At each of the given time points, the scores on the remaining five sub-scales across the two tasks of data entry and data retrieval were highly correlated (rs ≥ 0.80, ps < 0.001). This justified using an average across all 10 sub-scores (TLX average) for each time point (t0–t4).
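A sketch of the kind of correlation check that supports collapsing to a single average (again hypothetical column names, not the authors' code):

```python
# Illustrative sketch (not the authors' code): inspect the pairwise correlations
# among the 10 retained sub-scores before collapsing them into a single average.
import pandas as pd


def subscore_correlations(long_df: pd.DataFrame) -> pd.DataFrame:
    """long_df columns (hypothetical): nurse_id, time, task, subscale, rating."""
    kept = long_df[long_df["subscale"] != "physical_demand"].copy()
    kept["task_subscale"] = kept["task"] + ":" + kept["subscale"]
    wide = kept.pivot_table(index=["nurse_id", "time"],
                            columns="task_subscale", values="rating")
    return wide.corr()  # Pearson correlations among the 10 sub-scores

# High inter-correlations (here, rs >= 0.80) support using a single TLX average.
```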

3.3. NASA-TLX average scores over time

Using the TLX average gives a within-subjects design where data are available on the same individuals across 5 time points. We thus carried out a repeated measures ANOVA. The TLX average scores were 3.7 (95% confidence interval: 3.2–4.2) at t0; 5.9 (95% CI: 5.3–6.4) at t1; 4.4 (95% CI: 3.9–4.9) at t2; 3.8 (95% CI: 3.2–4.3) at t3 and 3.4 (95% CI: 2.9–3.8) at t4. The difference over time is highly significant: Wilks’ lambda = 0.40, F4,57 = 22.0, p < 0.001. The TLX average score rose significantly from t0 to t1, but then fell back from t1 to t4, with t3 being about the same as t0, and t4 slightly lower. We used least significant difference post hoc tests to do pair-wise comparisons between each time point: the TLX average at t1 is significantly higher than at all the other times, and that at t2 is significantly higher than at t3 and t4, but no other pair-wise differences are significant.

We repeated the analysis with each of the various independent variables separately (see Table 2). For each of these, we looked for a main effect of the new variable, i.e. does it predict the average TLX score overall, and for an interaction between time and the new variable, i.e. does it affect how average TLX scores change over time. The computer attitude and computer skills scores show significant interactions with time. To illustrate these, we show means and confidence intervals of the TLX average score when the sample is split by the median computer attitude score (Fig. 3); a median split of computer skills results in a similar graph. Thus, we see the same pattern of TLX average scores rising from t0 to t1, but then declining from t1 to t4. However, those with low computer attitude scores show a larger rise to t1 and the t4 score remains above the t0 score. On the other hand, those with high computer attitude scores show a smaller increase from t0 to t1, with t2–t4 falling below t0.

Fig. 3 – TLX average scores over time by computer attitude score.

There is a significant main effect of age, with older participants showing higher TLX scores consistently at all times. For example, the mean t4 score for those in the 20–30 years age bracket was 2.8 compared to 4.4 in the 41–50 years bracket. We then conducted a repeated measures ANOVA including all three independent variables identified as significant predictors above: computer attitudes, computer skills and age. This demonstrates that only the computer attitude score independently predicts outcome (see Table 3).
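For illustration only, the sketch below shows this style of analysis in Python with statsmodels, using hypothetical column names; note that AnovaRM reports the univariate repeated measures F test, whereas the Wilks’ lambda figures above come from the multivariate approach the authors report.

```python
# Illustrative sketch (not the authors' code): a univariate repeated measures
# ANOVA of the TLX average over the five time points, plus group means after a
# median split on the computer attitude score (cf. Fig. 3). Column names are
# hypothetical; AnovaRM needs complete (balanced) data per subject.
import pandas as pd
from statsmodels.stats.anova import AnovaRM


def time_effect(tlx: pd.DataFrame) -> None:
    """tlx columns (hypothetical): nurse_id, time (t0-t4), tlx_average."""
    result = AnovaRM(data=tlx, depvar="tlx_average",
                     subject="nurse_id", within=["time"]).fit()
    print(result)  # univariate F test for the within-subject factor "time"


def attitude_split_means(tlx: pd.DataFrame, baseline: pd.DataFrame) -> pd.DataFrame:
    """baseline columns (hypothetical): nurse_id, computer_attitude_score."""
    merged = tlx.merge(baseline, on="nurse_id")
    median = merged["computer_attitude_score"].median()
    merged["attitude_group"] = (merged["computer_attitude_score"] >= median).map(
        {True: "high", False: "low"}
    )
    # Mean TLX average per attitude group at each time point (t0-t4).
    return merged.groupby(["attitude_group", "time"])["tlx_average"].mean().unstack()
```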

Table 3 – Single repeated measures ANOVA with technology attitudes score, technology skills score and age included as co-variates.

Variable | Interaction with time | Main effect (shown if no significant interaction)
Computer attitude score | Wilks’ lambda = 0.77, F4,54 = 4.1, p = 0.006 | –
Computer skills score | Wilks’ lambda = 0.97, F4,54 = 0.5, p = 0.77 | F1,57 < 0.1, p = 0.90
Age | Wilks’ lambda = 0.98, F4,54 = 0.3, p = 0.86 | F1,57 = 1.1, p = 0.31

3.4. Secondary analyses

We performed two additional sets of analyses to evaluate changes in cognitive workload during the initial period of adjustment to a new EHR (as the peak period of risk), and in the long term after nurses were familiar with the new EHR (see Table 4). To assess initial disruption with “go-live” and the first days of new EHR use, we calculated a simple score of an individual’s initial adaptation as the difference between t1 and t0. Initial adaptation is significantly correlated with most of our independent variables. We put these into a multiple regression (see Table 4). This shows a significant overall model: F7,66 = 4.9, p < 0.001, adjusted R² = 27%.

Second, we calculated a measure of long-term adaptation as the difference between t4 and t0. Initial and long-term adaptations are highly correlated: r = 0.72, p < 0.001. Long-term adaptation is significantly correlated with most of our independent variables. We again put these into a multiple regression (see Table 4). This shows a significant overall model: F7,55 = 2.8, p = 0.016, adjusted R² = 17%.

The computer attitude score is the main, independent predictor for both initial and long-term adaptation. Prior EHR use is significant in moderating the initial impact, but is not significant in the long term (90% of those reporting prior EHR use were Travellers).

Table 4 – Multivariate regressions predicting initial and long-term adaptation (significant results are in bold).

Variable | Initial adaptation (= t1 − t0): B / β / p | Long-term adaptation (= t4 − t0): B / β / p
Computer attitude score | −0.9 / −0.5 / 0.002 | −0.6 / −0.4 / 0.014
Computer skills score | −0.1 / −0.1 / 0.57 | −0.1 / −0.1 / 0.57
Prior use of EHR | 2.1 / 0.3 / 0.006 | 0.1 / 0.0 / 0.88
Age | −0.3 / −0.1 / 0.49 | 0.0 / 0.0 / 0.99
Years in profession | 0.2 / 0.1 / 0.68 | 0.0 / 0.0 / 0.96
Full-time | −0.3 / −0.1 / 0.66 | −0.3 / −0.1 / 0.63
Place of work (Ward) | −0.8 / −0.1 / 0.28 | −0.9 / −0.2 / 0.18
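A minimal sketch of these secondary analyses (Python/statsmodels OLS, hypothetical column and variable names; not the authors' code or software):

```python
# Illustrative sketch (not the authors' code): compute initial and long-term
# adaptation as difference scores, then regress each on the baseline predictors.
import pandas as pd
import statsmodels.formula.api as smf


def adaptation_models(wide: pd.DataFrame, baseline: pd.DataFrame):
    """wide: TLX averages per nurse with columns t0..t4 (nurse_id as index).
    baseline: per-nurse predictors; all column names here are hypothetical."""
    df = wide.join(baseline.set_index("nurse_id"))
    df["initial_adaptation"] = df["t1"] - df["t0"]    # disruption just after go-live
    df["long_term_adaptation"] = df["t4"] - df["t0"]  # residual change at ~4 months

    formula = ("{dv} ~ computer_attitude_score + computer_skills_score"
               " + prior_ehr_use + age + years_in_profession + full_time + ward")
    initial = smf.ols(formula.format(dv="initial_adaptation"), data=df).fit()
    long_term = smf.ols(formula.format(dv="long_term_adaptation"), data=df).fit()
    return initial, long_term

# initial, long_term = adaptation_models(wide, baseline)
# print(initial.summary())  # unstandardised coefficients and p-values, cf. Table 4
```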

4. Discussion

4.1. Summary

This study explores changes in cognitive workload for pediatric nurses for the tasks of data input and retrieval during a “top-down” implementation of a commercial EHR. Cognitive workload for these tasks increased in the first shifts of use after “go-live” and then returned towards baseline; these changes were significant even in the setting of halved patient care responsibilities, prior training and substantial round-the-clock face-to-face and telephone user support. Both t1 and t2 scores were more than 15% above the baseline workload; human factors evaluation standards for safety-critical industries would raise concern about increased workload at this level [35,36]. The increased cognitive workload dissipated after approximately 10 shifts, but there was considerable interpersonal variation during the transition to the new EHR. The key predictor of fast adaptation was a positive computer attitudes score. We also found an effect of computer skills and age, but these effects appear to be mediated by the computer attitudes score.

4.2. Limitations

This study is limited by its focus on pediatric nurses in one academic medical center using one commercial EHR product. We could not control for individual time differences between shifts due to the complexity of the staffing schedules. Other EHR products are likely to have different usability characteristics that may or may not affect cognitive workload similarly. Additionally, other EHR vendor training programs may be more or less adequate than the training our participants received. We were unable to study a control group of nurses because the mandated implementation affected everyone in the entire health system.

4.3. This study in relation to previous work

Increased workload has previously been identified as a barrier to EHR implementation [44]. Our work is the first to measure cognitive workload in situ, and our findings confirm perceptions that learning a new EHR is indeed hard work. We also show that, even in the long term, replacing a hybrid, fragmented information system with an integrated EHR does not necessarily mean that the cognitive work of data-related tasks will be decreased. Our results are consistent with previous work on the usability and cognitive workload of EHR use in simulation [45]. Nurses’ attitudes towards computers and electronic charting pre- and post-implementation have been studied previously [46–49]. One study indicates that even though task time did not change, nurses’ attitudes towards computer use became significantly more negative after the implementation process [46]. Our finding of increased cognitive workload during the early days of EHR use may identify one of the factors leading to negative reactions towards these systems.

4.4. Variation across workers during EHR implementation

Identification of cross-participant variability in computer attitudes, skills and adaptation raises questions regarding institutional change management strategies. A widespread campaign to promote the benefits of the EHR may have affected different workers differently depending on their computer attitudes (for example, promoting sophisticated EHR functionalities may actually increase some workers’ anxiety pre-implementation). Moreover, our results suggest that a “one-size-fits-all” strategy may not adequately support staff during EHR implementation. EHR training based on the type of worker (nurse, physician, etc.) rather than the worker’s aptitude and individual needs may underserve some users. Sudden discontinuation of support services (hot-line discontinued and super-users returned to full-time patient care) may disadvantage workers who adapt slowly, or whose shift schedules prevented them from learning while initial support was available.


The computer attitude score was easy to administer and may be useful for identifying staff members who will be slower adapters, while the NASA-TLX is also easy to use and can identify those who are struggling with an EHR after initial support has been discontinued. Although the determinants of adaptation speed are probably multi-factorial, quick identification of potential fast and slow adapters can allow training and ongoing support to be tailored to users. The workers facing the greatest transition and adaptation challenge are at greatest risk of degraded performance and burnout. Enhanced training and support during implementation could be targeted appropriately to those predicted to experience greater increases in cognitive workload. Ongoing support could be delivered to those experiencing greater increases in cognitive workload after the transition period.

There is a familiar stereotype that younger individuals can pick up new technologies faster, as seen in debates around ‘digital natives’ in the education of healthcare professionals [50]. We found that older staff members recorded higher NASA-TLX scores, but any effect of age appears to be mediated through the computer attitudes score. Years in profession and full- or part-time status were not predictive of adaptation. Prior EHR use had a short-term effect on adaptation. As healthcare workers gain more exposure to EHRs, they may adapt more quickly to other commercial EHRs or to changes to the product they are familiar with. Workers who move between institutions may transition between commercial EHR products more quickly than others with less exposure.

5. Conclusion

Although a recent study reports predominantly positive results from health IT, the authors note that negative outcome studies are associated with negative provider satisfaction; they conclude that “the human element” is critical to the success of health IT implementation [51]. The National Academy of Sciences warns that IT applications often “provide little support for the cognitive tasks of clinicians or the workflow of the people who must actually use the system ... designs can increase the chance of error, add to rather than reduce work, and compound the frustrations of executing required tasks” [52]. Our work demonstrates the substantial increase in workload for relatively straightforward, discrete tasks among users adapting to a new IT system. Given the number of healthcare workers experiencing EHR implementations, further research should be aimed at mitigating increased cognitive workloads and identifying staff who may need extra support through the adaptation phase. Even with extensive measures to ameliorate the strain, staff and managers should be mindful of the potential for degraded staff performance and potential patient safety risks. Nor can long-term benefits be presumed.

Author contributions Colligan conceived the study, obtained IRB approval and supervised participant recruitment. Colligan and Finn administered baseline instruments and serial NASA-TLX measures. Potts performed statistical analyses. All authors contributed significant intellectual content and drafting revisions.


Summary points

• Study of cognitive workload during transition from paper to electronic health record.
• Cognitive workload of data entry and retrieval measured with the validated NASA-TLX (Task Load Index) instrument.
• Differences in cognitive workload adaptation suggest that nurses adapt at varying rates.
• “One-size-fits-all” training for commercial EHR implementation may not be optimal.

Acknowledgements

L. Cao helped with data collection. This work was partially funded by a Children’s Hospital Grant at the University of Virginia.

Conflicts of interest The authors declare no conflicts of interest.

References

[1] C. DesRoches, D. Charles, M. Furukawa, et al., Adoption of electronic health records grows rapidly, but fewer than half of US hospitals had at least a basic system in 2012, Health Aff. 32 (2013) 1478–1485 [published online first: 9 July 2013].
[2] D.C. Classen, D.W. Bates, Finding the meaning in meaningful use, NEJM 365 (2011) 855–858.
[3] J. Jasperson, P. Carter, R. Zmud, A comprehensive conceptualization of post-adoptive behaviors associated with information technology enabled work systems, MIS Q. 29 (2005) 525–557.
[4] E. Coiera, J. Aarts, C. Kulikowski, The dangerous decade, JAMIA 19 (2012) 2–5.
[5] J. Ancker, L. Kern, Abramson, et al., The Triangle Model for evaluating the effect of health information technology on healthcare quality and safety, JAMIA 19 (2012) 61–65, http://dx.doi.org/10.1136/amiajnl-2011-000385.
[6] B. Chaudhry, J. Wang, M. Maglione, et al., Systematic review: impact of health information technology on quality, efficiency and costs of medical care, Ann. Intern. Med. 144 (2006) 742–752.
[7] A. Garg, N. Adhikari, H. McDonald, et al., Effects of computerized clinical decision support systems on practitioner performance and patient outcomes, JAMA 293 (2005) 1223–1238.
[8] Y. Han, J. Carcillo, S. Venkataraman, et al., Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system, Pediatrics 116 (2005) 1506–1512.
[9] R. Koppel, J. Metlay, A. Cohen, et al., Role of computerized physician order entry systems in facilitating medication errors, JAMA 293 (2005) 1197–1203.
[10] S. Timmons, Nurses resisting information technology, Nurs. Inq. 10 (2003) 295–310.
[11] A. Haselkorn, A. Rosenstein, A. Rao, et al., New technology planning and approval: critical factors for success, Am. J. Med. Qual. 22 (2007) 164–169.
[12] J.S. Ash, M. Berg, E. Coiera, Some unintended consequences of information technology in health care: the nature of patient care information system-related errors, JAMIA 11 (2004) 104–112.
[13] T. Greenhalgh, H. Potts, G. Wong, et al., Tensions and paradoxes in electronic patient record research: a systematic literature review using the meta-narrative method, Milbank Q. 87 (2009) 729–788.
[14] N. Lorenzi, Beyond the gadgets: non-technological barriers to information systems need to be overcome too, BMJ 328 (2004) 11–46.
[15] J. Handy, I. Hunter, R. Whiddett, User acceptance of inter-organizational electronic medical records, Health Inform. J. 7 (2001) 103–107.
[16] G. Gillespie, IT: often a tough sell to nursing staffs, Health Data Manage. 10 (2002) 56–59.
[17] R. Koppel, A. Localo, A. Cohen, et al., Neither panacea nor black box: responding to three Journal of Biomedical Informatics papers on computerized physician order entry systems, J. Biomed. Inform. 38 (2005) 267–269.
[18] J. Aarts, The future of electronic prescribing, Stud. Health Technol. Inform. 166 (2011) 13–17.
[19] R. Cook, D. Woods, Operating at the sharp end: the complexity of human error, in: M. Bogner (Ed.), Human Error in Medicine, Lawrence Erlbaum Associates, Hillsdale, NJ, 1994, pp. 255–310.
[20] S. Kossman, S. Scheidenhelm, Nurses’ perceptions of the impact of electronic health records on work and patient outcomes, Comput. Nurs. 26 (2008) 69–77.
[21] E. Sassen, Love, hate, or indifference: how nurses really feel about the electronic health record system, Comput. Nurs. 27 (2009) 281–287.
[22] M. Sitterding, M. Broome, L. Everett, et al., Understanding situation awareness in nursing work: a hybrid concept analysis, Adv. Nurs. Sci. 35 (2013) 77–92.
[23] D. Kirkley, M. Stein, Nurses and clinical technology: sources of resistance and strategies for acceptance, Nurs. Econ. 22 (2004) 216–222.
[24] M. Bloomrosen, J. Starren, N. Lorenzi, J.S. Ash, et al., Anticipating and addressing the unintended consequences of health IT and policy: a report from the AMIA 2009 Health Policy Meeting, JAMIA 18 (2011) 82–90, http://dx.doi.org/10.1136/jamia.2010.007567.
[25] J. Horsky, J. Zhang, V. Patel, To err is not entirely human: complex technology and user cognition, J. Biomed. Inform. 38 (2005) 264–266.
[26] R. Holden, Cognitive performance-altering effects of electronic medical records: an application of the human factors paradigm for patient safety, Cogn. Technol. Work 13 (2011) 11–29.
[27] H. Saitwal, X. Feng, M. Walji, et al., Assessing performance of an electronic health record (EHR) using cognitive task analysis, Int. J. Med. Inform. 79 (2010) 501–506.
[28] V. Patel, J. Arocha, D. Kaufman, A primer on aspects of cognition for medical informatics, JAMIA 8 (2001) 324–343.
[29] S. Hart, NASA-Task Load Index (NASA-TLX); 20 years later, Proc. HFES Annu. Meet. 50 (2006) 904–908.
[30] T. Megaw, The definition and measurement of mental workload, in: J. Wilson, N. Corlett (Eds.), Evaluation of Human Work, third ed., Taylor and Francis, Boca Raton, FL, 2005, pp. 541–543.
[31] S. Rubio, E. Diaz, J. Martin, et al., Evaluation of subjective mental workload: a comparison of SWAT, NASA-TLX and workload profile methods, Appl. Psychol. 53 (2004) 61–86.
[32] J.S. Warm, G. Matthews, V.S. Finomore Jr., Vigilance, workload, and stress, in: P.A. Hancock, J.L. Szalma (Eds.), Performance Under Stress, Ashgate Publishing Company, Burlington, VT, 2008, pp. 115–141.
[33] S. Grigg, S. Garrett, Using the NASA-TLX to assess first year engineering problem difficulty, in: Proc 2012 IEEE Res Conf, 2012, p. 992.
[34] M. Weinger, A. Vredenburgh, C. Schumann, et al., Quantitative description of the workload associated with airway management procedures, J. Clin. Anesth. 12 (2000) 273–282.
[35] M. Shamo, R. Dror, A multi-dimensional evaluation methodology for new cockpit systems, in: Proceed 10th Intl Aviation Psych Symposium, 1999, pp. 1–9.
[36] Personal communication, M. Metzger, Y. So from NASA, now Human Systems Information Analysis Centre (formerly CSERIAC), at the 56th Annual Meeting of the Human Factors and Ergonomics Society, 24 Oct 2012.
[37] K. Lopez, G. Gerling, M. Cary, et al., Cognitive work analysis to evaluate the problem of patient falls in an inpatient setting, JAMIA 17 (2010) 313–321.
[38] S. Anders, A.R. Miller, M.B. Weinger, et al., Evaluation of an integrated graphical display to promote acute change detection in ICU patients, Int. J. Med. Inform. 81 (2012) 842–851.
[39] Y. Yurko, M. Scerbo, A. Prabhu, et al., Higher mental workload is associated with poorer laparoscopic performance as measured by the NASA-TLX tool, Sim. Healthc. 5 (2010) 267–271.
[40] G. Nickell, I. Pinto, The computer attitude scale, Comput. Hum. Behav. 2 (1986) 301–306.
[41] I.M. Jawahar, B. Elango, The effect of attitudes, goal-setting and self-efficacy, in: C.R. Snodgrass (Ed.), Human Factors in Information Systems, IRM Press, Hershey, PA, 2012.
[42] M.M. Shah, R. Hassan, R. Embi, Experiencing computer anxiety, in: Second International Conference on Business and Economic Research Proceedings, 2011, pp. 1631–1645.
[43] D. Potosky, P. Bobko, The computer understanding and experience scale: a self-report measure of computer experience, Comput. Hum. Behav. 14 (1998) 337–348.
[44] C. McGinn, S. Grenier, J. Duplantie, et al., Comparison of user groups’ perspectives of barriers and facilitators to implementing electronic health records: a systematic review, BMC Med. 9 (2011) 46.
[45] J. Kjeldskov, M. Skov, J. Stage, A longitudinal study of usability in health care: does time heal? Int. J. Med. Inform. 79 (2010) e135–e143.
[46] C. Murphy, M. Maynard, G. Morgan, Pretest and post-test attitudes of nursing personnel toward a patient care information system, Comput. Nurs. 12 (1994) 239–244.
[47] P. Nykanen, J. Kaipio, A. Kuusisto, Evaluation of the national nursing model and four nursing documentation systems in Finland – lessons learned and directions for the future, Int. J. Med. Inform. 81 (2012) 507–520.
[48] K. Smith, V. Smith, M. Krugman, et al., Evaluating the impact of computerized clinical documentation, Comput. Inform. Nurs. 23 (2005) 132–138.
[49] D. Smith, A. Morris, J. Janke, Nursing satisfaction and attitudes with computerized software implementation: a quasi-experimental study, Comput. Inform. Nurs. 29 (2011) 245–250.
[50] H. Potts, Student experiences of creating and sharing material in online learning, Med. Teach. 33 (2011) e607–e614.
[51] M. Buntin, M. Burke, M. Hoaglin, et al., The benefits of health information technology: a review of the recent literature shows predominantly positive results, Health Aff. 30 (2011) 464–471.
[52] W. Stead, H. Lin, Computational Technology for Effective Health Care: Immediate Steps and Strategic Directions, National Academies Press, Washington, DC, 2009, pp. 5–6.
