Biomed. Eng.-Biomed. Tech. 2016; 61(2): 147–164

Kathrin Lange*, Miriam Nowak and Wolfgang Lauer

A human factors perspective on medical device alarms: problems with operating alarming devices and responding to device alarms

Abstract: Medical devices emit alarms when a problem with the device or with the patient needs to be addressed by healthcare personnel. At present, problems with device alarms are frequently discussed in the literature, the main message being that patient safety is compromised because device alarms are not as effective and safe as they should – and could – be. There is a general consensus that alarm-related hazards result, to a considerable degree, from the interactions of human users with the device. The present paper addresses key aspects of human perception and cognition that may relate to both operating alarming devices and responding to device alarms. Recent publications have suggested solutions to alarm-related hazards associated with usage errors, based on assumptions about the causal relations between, for example, alarm management and human perception, cognition, and responding. However, although there is face validity in many of these assumptions, future research should provide objective empirical evidence in order to deepen our understanding of the actual causal relationships, and hence improve and expand the possibilities for taking appropriate action.

Keywords: alarm; human factors; human-machine interaction; medical device.

DOI 10.1515/bmt-2014-0068
Received July 15, 2014; accepted October 24, 2014; online first November 26, 2014

*Corresponding author: Kathrin Lange, Federal Institute for Drugs and Medical Devices, Research Division, Kurt-Georg-Kiesinger Allee 3, 53175 Bonn, Germany, Phone: +49 (0)228 993 074 008, E-mail: [email protected]
Miriam Nowak: Federal Institute for Drugs and Medical Devices, Research Division, Kurt-Georg-Kiesinger Allee 3, 53175 Bonn, Germany
Wolfgang Lauer: Federal Institute for Drugs and Medical Devices, Research Division, Kurt-Georg-Kiesinger Allee 3, 53175 Bonn, Germany; and Federal Institute for Drugs and Medical Devices, Medical Devices Division, Kurt-Georg-Kiesinger Allee 3, 53175 Bonn, Germany

Introduction

Taken literally, alarms (the word being derived from Italian all'arme, i.e., "to the weapons") are stimuli intended to make the user aware of the necessity for action. Ideally, the alarm also provides information about its source and/or about which specific action is required. For instance, a fire alarm draws the users' attention to the necessity to leave the building – and should therefore trigger this response. According to the collateral standard IEC 60601-1-8 [55], an alarm signal is a "signal generated by an alarm system to indicate the presence or occurrence of an alarm condition". An alarm condition is the "state of the alarm system, when it has determined a potential or actual hazard". Depending on its priority level, the alarm signals the necessity of operator awareness (low priority alarm) or, additionally, of a prompt or immediate operator response (medium or high priority alarm). Medical devices emit alarms when a problem with the device or with the patient needs to be addressed by healthcare personnel. Problems with device alarms are currently discussed in the professional literature (e.g., [12, 23, 30, 31, 43, 128, 136]). The main message of these publications is that device alarms are not as effective and safe as they should be. Reasons for this include that the presence of an alarm does not validly indicate the presence of a deterioration of the patient's status – and that the absence of an alarm does not validly indicate the absence of such a deterioration. Additional problems that are addressed are the high absolute numbers of alarms and their suboptimal psycho-acoustical properties (e.g., [12, 32, 33]). Importantly, all of these problems involve (or may involve) a contribution of the human user in one way or the other. When using medical devices equipped with alarms, there are two types of (alarm-related) interactions between the user and the device: (1) The user has to configure the device in order to ensure output of the intended alarm (user → device → alarm). (2) The (device) alarm has to make the user aware of the necessity to act (device → alarm → user).


The latter involves both perceptual and cognitive processing on the side of the user, for instance detection, interpretation, and evaluation of the alarm. Moreover, by drawing the user's attention to the problem causing the alarm, auditory alarms may also (negatively) affect performance in other tasks. In the following, we will provide an overview of the different components of human users' interactions with devices and alarms. We will discuss existing evidence for postulated causal relationships between device/alarm features on the one hand and human perception and action on the other. We start with a short overview of two major groups of devices that use alarms (life-supporting devices and monitoring devices) and discuss the different functions of alarms. We then consider the different roles of users operating alarming devices. These include the configuration of the device (including the alarm function) and the perception of and response to the alarm.
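As a schematic aside, the IEC 60601-1-8 priority scheme described above maps each priority level to a required operator response. The following Python fragment is our own illustrative rendering of that mapping, not part of the standard; the names are hypothetical:

```python
from enum import Enum

class AlarmPriority(Enum):
    """Schematic rendering of the IEC 60601-1-8 priority levels
    as paraphrased above (illustrative labels, not normative)."""
    LOW = "operator awareness required"
    MEDIUM = "prompt operator response required"
    HIGH = "immediate operator response required"

for priority in AlarmPriority:
    print(f"{priority.name}: {priority.value}")
```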

Different groups of alarming devices and different functions of alarms

Critically ill or injured patients are treated primarily in the operating room (OR), the emergency department (ED), and the intensive care unit (ICU). In these contexts, there is always the risk that vital functions (e.g., cardiovascular or respiratory functions) deviate from an acceptable range. Depending on the medical requirements, medical devices are used for two complementary purposes in continuously maintaining vital functions: support and monitoring. If a vital system (e.g., the respiratory system) does not fulfil its function sufficiently, it may be supported by a specific medical device (e.g., a ventilator). If the vital functions of the patient are working in principle, but are unstable and not within normal limits, it is essential to instantly become aware of any (further) deterioration or even loss of functioning in order to be able to start the appropriate intervention. To this end, vital functions are closely monitored. Support and monitoring of vital functions are typically performed automatically by specific medical devices. Automation is intended to reduce the task-related demands on the medical and nursing staff and, as a consequence, to allow them to perform other tasks in parallel. However, automation does not simply reduce the workload of the automated task, but rather changes its demands (see also [106, 116]). Because there is always the risk that a device fails, the user now has to closely monitor the proper functioning of the device. To facilitate the latter, devices are designed to actively communicate actual or imminent device failure.

The primary means for a device to communicate urgent messages is to issue a warning or alarm. The meaning of an alarm depends on the type of device. For life-supporting devices, alarms are emitted when the device does not succeed in restoring the vital function in question. Conditions that may trigger these alarms include the device not being fully operational (e.g., when the ventilator tube is not connected to the patient) or the intended effect on vital parameters not being achieved (numerous life-supporting devices also monitor vital parameters related to the supported functions). For monitoring devices, two different classes of events may trigger an alarm: the deterioration of the patient's status (e.g., low blood pressure, asystole) and the inability of the device to successfully perform its intended function (e.g., detached ECG lead, low battery status). Obviously, alarms signaling device malfunctions have different implications depending on whether they originate from life-supporting or monitoring devices: for life-supporting – but not monitoring – devices, these alarms indicate an actual threat to the patient's health or even life. Because of this, alarms related to monitor malfunctioning (technical alarms) are sometimes perceived as "false" and may eventually be ignored. Because device alarms are a crucial means to ensure safe and efficient operation of the human-machine system, problems such as these need to be solved. With respect to alarming devices, users have two roles (see Figure 1). Firstly, they have to configure the device in a way that ensures audibility of all relevant alarms at the right location. This may include routing the alarm to a central station and/or to other monitors in a network. Secondly, once the alarm sounds, users have to respond to it adequately. This requires perceptual (detection, discrimination, and localization), cognitive (identification, evaluation, and response selection), and motivational factors, as well as the actual (motoric) execution of the response.

[Figure 1 Overview of the two classes of interaction between the user and the device: the user configures the device, which presents the alarm (impact of the user on the device); the alarm, in turn, alerts – and may disrupt – the user (impact of the device on the user).]



Importantly, clinicians' perceptual and cognitive responses to – especially auditory – alarms may also affect their performance in other tasks.

Impact of the user on the device: configuring alarm settings

Both the US Joint Commission and the German Federal Institute for Drugs and Medical Devices [Bundesinstitut für Arzneimittel und Medizinprodukte (BfArM)] recently published analyses of cases associated with the "failure of a device to alarm" [70, 136]. In the case of the Joint Commission, events (called "sentinel events") are voluntarily reported by accredited organizations or are reported via a complaint process. By contrast, the cases included in the vigilance database of the BfArM are reported by manufacturers or professional users according to German federal law, if a failure of a medical device has contributed, or is suspected to have contributed, to the death or to an actual or potential serious deterioration of the health of a patient [Ordinance on Medical Devices Vigilance (Medizinprodukte-Sicherheitsplanverordnung)]. The Joint Commission performed root-cause analyses on their cases to identify a broad range of factors contributing to the events. Some of the major contributing factors for "alarm-related hazards" were directly or indirectly related to configuration problems: "improper alarm settings" (contributing in 21% of the cases), "alarm signals not audible in all areas" (26% of the cases), and "alarm signals inadequately turned off" (37% of the cases; cases could be associated with more than one contributing factor). The BfArM database comprises reports of analyses performed by the manufacturers in accordance with the Ordinance on Medical Devices Vigilance. These analyses aim at identifying systematic deficiencies of the device that may pose a risk to patients, users, or others. Being focused on the identification of clearly device-related issues, the data collected by the manufacturers' analyses do not necessarily allow for deeper analysis of the contributing factors when the event turns out to be more strongly related to use error rather than to a technical deficit. Hence, the categories that can be derived from the available information represent failures that could be identified by analyzing the device, for instance "alarming not active" (configuration) or "no speakers provided" (implementation/maintenance). The analysis of Lange et al. [70] comprises 233 cases received by the BfArM between January 2009 and June 2012, where manufacturers or professional users reported that an alarm of a patient monitor or of a pulse oximeter was missing (for details, see [70]).

A technical failure was identified as the cause of the absent alarm in the majority of these cases (143 of the 233 reports). However, in almost one-quarter of the reports, use error or organizational failures significantly contributed. More specifically, the missing alarm was caused by a configuration error in 14% of the reports and by an implementation/maintenance error in 3% of the cases. These results suggest that misconfigured devices contributed to a significant number of alarm-related events reported under the suspicion of a technical deficit (see also Figure 2).

Cognitive factors potentially involved in configuring a device

We will structure the discussion of cognitive factors potentially contributing to a faulty device configuration along the influential taxonomy of "unsafe acts" by James Reason [115]. According to Reason, unsafe acts include both errors and violations. Unsafe acts are called violations when the individual intentionally disregards existing (safety) regulations in order to pursue a different goal (e.g., saving time). Unsafe acts are called errors when an action is chosen that is not associated with the intended goal or when an action associated with the intended goal is executed faultily. Errors are further sub-divided depending on the stage of goal-directed behavior from which they originate: errors associated with the planning stage of behavior (i.e., identifying the situation, deciding on goals, and planning actions) are called mistakes, errors in maintaining the current goals and action plans (in working memory) are referred to as lapses, and errors in the execution of actions (more precisely: in the initiation of action execution) are termed slips (see Table 1). The processes involved in the planning stage are supposed to differ between familiar and unfamiliar situations; hence Reason [115], following Rasmussen and colleagues [113], further differentiates rule-based and knowledge-based errors. In a familiar situation, we typically retrieve the goals and action plans from memory as a whole. The related memory structures are referred to as action schemas [97]. Hence, mistakes at the rule-based level involve the failure to recognize the defining features of the situation (because this precludes activation of the correct rule), which may lead to an erroneous decision on which goal(s) to pursue. Moreover, these errors can consist of the failure to recall the correct actions or action sequences to be taken. In an unfamiliar situation, by contrast, goals and associated actions need to be derived "bottom-up".

[Figure 2 Overview (not exhaustive) of individual aspects of device settings relevant to emitting an alarm (e.g., sensor placing and connection, volume settings, alarm routing, activation and suspension of alarms, activation and patient-specific configuration of parameter analyses) and of potentially involved cognitive processes of the user (e.g., working memory, attention, knowledge/semantic memory, motor skills, reasoning/problem solving).]

Table 1 Overview of unsafe acts according to Reason (1990) and examples relating to operating alarm-related functions of monitoring equipment.

– Slips: error in triggering the execution of the correct actions. Example: inadvertently pressing the "silence" button instead of the "acknowledge" button to end an existing alarm (this will suppress other alarms from sounding for a given period in the future).
– Lapses: error in maintaining goals/action plans in working memory. Example: forgetting one's intention to activate the respiratory alarms on a monitor, e.g., when interrupted by another patient's call.
– Mistakes: error in identifying the situation, in deciding on goals, and in planning associated actions. Example: choosing the wrong monitor configuration for a specific patient, e.g., when disregarding specific use restrictions.
– Violations: intentionally disregarding existing rules/policies. Example: intentionally muting alarms, e.g., to reduce noise levels; this would be one of the behavioral manifestations of "alarm fatigue".

In this case, an analytical assessment of the characteristics of the situation is required, together with decisions on the to-be-attained goals and the to-be-employed actions or action sequences. In short, unfamiliar situations require processes typically associated with (complex) problem solving (see also [52]). Note that the terms "rule-based" and "knowledge-based" may be somewhat misleading. Behavioral control at the rule-based level is also based on "knowledge" (in the sense of "information stored in memory"). At the "knowledge-based" level, however, the available knowledge needs to be used in a more flexible way, requiring insight – e.g., into the composition of the rules. A "faulty device configuration" may be the observable outcome of a variety of unsafe acts at different levels of action control (Table 1).

In the following, we will discuss "configuration errors" caused by mistakes at the rule-based and (to some extent) the knowledge-based level. We will not deal with attention and working-memory issues here (these factors will be briefly discussed in the context of the detection of alarms).

What do users need to know? Reducing the risk of rule-based and knowledge-based mistakes

Successfully performing a goal-directed action requires, among other things, that the user has knowledge – for example, of which goals are correct, which actions have to be taken, and how to perform these actions.


Regarding the knowledge necessary to operate a device, two basic questions need to be discussed. (1) Which knowledge is necessary to perform a given task? (2) Is there an ideal way to provide the related information (or do different types of information require different formats)? Knowledge is needed for two different situations: for standard use in familiar situations and for dealing with unforeseeable problems, i.e., when part of the situation becomes unfamiliar. This corresponds to the distinction addressed above between mistakes occurring while action is under rule-based versus knowledge-based control, respectively. The necessary information may be inherent to the (ergonomic) design, or may be provided by labels, manuals, or specific trainings. Importantly, the availability of information per se is not sufficient: mistakes may also result when the required information is not used (e.g., [46], see also [52]). How much operators need to know is a subject of discussion. It is believed that a technical system is largely controllable, in its intended main functions during standard situations, if the operator knows a certain set of rules [61]. As learnability is one of the components of usability, this set of rules should be easy to learn. For the operation of technical systems, Kluwe [61] distinguishes two different types of knowledge: "system knowledge" and "control knowledge". Both include elements of knowledge about the interface (human-device interaction) as well as knowledge about the device. System knowledge of the interface means that the user understands the interface, e.g., the menu structure, or knows what displayed values mean. System knowledge of the device means that the user understands the scientific principles on which the device is based and knows all its functions. Including the patient, the system of a medical device is more complex. Therefore, system knowledge is not limited to technical device knowledge, but also includes knowledge of patient factors, e.g., about the interaction between patient and device. Control knowledge of the interface is a set of rules that links a goal with an action (which button to press to reach goal A). Control knowledge of the device includes which parameters need to be set to achieve a goal (which value to set to reach goal B). Patient factors are also part of control knowledge (e.g., medical indication or contraindication for use). Control knowledge covers the rule-based level of human-device interaction. Rules for foreseeable worst-case scenarios are also part of control knowledge. For unforeseeable worst-case scenarios, however, no rules are provided, i.e., the operator has to rely on system knowledge to handle the device in the best possible manner. This happens at the knowledge-based level [61, 113]. For medical device application, this means that for the indicated use of the device in a standard situation, rules of operation should be sufficient for safe usage of the device.

When the situation differs from the norm, a more specific rule or profound system knowledge (including device/device and patient/device interdependencies) is needed to determine the safest way to proceed. Examples include situations in which another medical device interacts with the currently applied one, environmental factors interact with the medical device, the device has to be customized, or a special version is needed because of a rare patient condition. The conditions under which the execution of an action is rule-based or knowledge-based cannot be defined in general: because of differences in the experience, education, and capacity of operators, there is an overlapping area where both mechanisms are possible.

Lack of device knowledge that can lead to configuration problems

To be able to use a device in a safe manner, the user needs a correct mental model of its purpose and functions, but also of its limitations. Device-related knowledge of users has been evaluated for only a few medical device groups. To the best of our knowledge, studies investigating users' knowledge of alarming devices have mainly focused on pulse oximetry. Elliot and colleagues conclude in their review of 14 studies from the years 1994 to 2006 that the understanding of pulse oximetry is poor [35]. In most of the analyzed studies, a high proportion of the device users questioned (up to 93% of a sample) were not able to say which parameter pulse oximetry measures. Knowledge of influencing factors such as nail polish, arrhythmias, or stray light was also low. This lack of knowledge is attributed to a lack of training [35]. If users are not aware of how external parameters and patient parameters affect a measurement, they are not able to ensure good measurement quality. If users do not know that, for example, certain colors of nail polish can cause falsely low values, and therefore do not remove the polish, there might be a higher rate of alarms that are not clinically relevant. In addition, if users rely on these false values for therapeutic decisions, they might start unnecessary interventions (e.g., increasing the inspired oxygen fraction), which could also harm the patient. Another factor is that if users are not proficient in the basic principles of the device, they lack the rules for standard device use. In other words, they lack control knowledge, for example to set the right configuration. In the BfArM data, several examples can be found where a lack of control knowledge led to an erroneous configuration of the alarm routing behavior of the device.

Patient monitors are often part of a network with a central remote control unit. Several reports deal with users who were not able to configure the routing of alarms from one device to another in the network or to the central station. The reported incident is that the alarm does not sound at the location expected by the user. The incident reported by the user may differ from the incident that actually happened: if users were aware of their lack of control knowledge, configuration problems were reported; if they were not aware, the reported incident was unexpected device behavior – mostly a missing alarm. The manufacturer's investigation then identified the erroneous configuration as the cause of the incident. Another set of incidents reported to the BfArM that shows a lack of device understanding are reports of supposedly missing alarms that were caused by the mode in which a previous alarm was turned off or muted. Most monitors have two options to turn an alarm off: (1) the option to turn off the currently sounding alarm completely (e.g., when fixing the problem), while other alarms will still sound when a relevant situation occurs; (2) the option to turn off all alarms for a defined time (e.g., to prevent alarms while relocating the electrodes), during which alarms are paused and no alarm sounds at all (the distinction is illustrated in the sketch after the list below). Several cases of missing alarms were reported because a situation that normally triggers an alarm occurred while all alarms were paused. Some users declared that they were aware of this condition but expected that high priority alarms would still sound if relevant situations occurred. The discrimination between these very similar functions seems very complex. However, according to recent surveys on alarm problems conducted by the Healthcare Technology Foundation [49], see also [39, 64], clinicians do not consider the proper setting of alarms and training on alarm systems to be particularly important issues, and only about one-fifth of the respondents thought that properly setting alarm parameters was overly complex in existing devices. This contrasts with the results of recent analyses of adverse event data, which identified improper device settings as one of the major problems, e.g., [70, 136]. However, the survey data and the case analysis data were not obtained in the same study, and so any comparison needs to be drawn with caution. Several reasons may explain the potential discrepancy between operator perception of device complexity and reported use errors:
– The operator does not have proper knowledge of the device (i.e., knowledge is wrong or incomplete) but is not aware of this.

– The operator has the knowledge to configure the device but commits a slip or a lapse when doing so.
– The operator has the knowledge to configure the device but misconfigures it because his or her goals differ from what is achievable with the correct configuration (violation, e.g., setting improper alarm limits to reduce the number of alarms). However, this should not result in incident reports, as the user is fully aware of the "wrong configuration".
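To make the distinction between the two mute options concrete, the following Python sketch models them as a minimal state machine. It is purely illustrative and rests on our own assumptions: the class, the method names, and the fixed pause duration are hypothetical, not taken from any real monitor. It does, however, show why the two functions are easy to confuse: they are invoked in similar situations yet differ in scope and in how they treat new high-priority conditions.

```python
from time import monotonic

class MonitorAlarms:
    """Illustrative model of two easily confused mute functions
    found on many patient monitors (all names are hypothetical)."""

    def __init__(self, pause_duration_s: float = 120.0):
        self.pause_duration_s = pause_duration_s
        self.pause_until = 0.0   # all alarms suppressed until this time
        self.silenced = set()    # individual alarm conditions turned off

    def silence_current(self, condition: str) -> None:
        # Option 1: turn off the currently sounding alarm only;
        # any *other* alarm condition will still sound.
        self.silenced.add(condition)

    def pause_all(self) -> None:
        # Option 2: suppress *all* alarms for a defined time, e.g.,
        # while relocating electrodes. In this model, even a new
        # high-priority condition stays silent until the pause expires
        # -- the behavior some users reportedly did not expect.
        self.pause_until = monotonic() + self.pause_duration_s

    def should_sound(self, condition: str) -> bool:
        if monotonic() < self.pause_until:
            return False         # the alarm pause suppresses everything
        return condition not in self.silenced

alarms = MonitorAlarms()
alarms.silence_current("spo2_low")           # only this condition is muted
assert alarms.should_sound("asystole")       # other conditions still sound
alarms.pause_all()
assert not alarms.should_sound("asystole")   # nothing sounds while paused
```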

What do users actually learn about medical devices?

To our knowledge, what operators of medical devices learn about the devices they use, and how they learn it, has not been systematically investigated in Germany. A few studies do, however, exist for Australia [82–84], the USA [84], and the UK [29, 38]. As the education system in Germany differs from those in Australia, the USA, and the UK, the results of these studies are not necessarily transferable. A German framework for instruction is given by law, i.e., by the "Operator Ordinance for medical devices" [90]. According to §2 of the Operator Ordinance, the following apply:
– Medical products may only be used, mounted, operated, and maintained in accordance with their intended use, the rules of the Operator Ordinance, the generally accepted rules of technology, and occupational safety regulations.
– Medical products may only be used, mounted, operated, and maintained by persons having the required education or knowledge and experience [90].

§5 clause 2 directs that medical products further defined in Annex 1 may only be used by a person who has received device instruction by an authorized person (authorized by the operating company or person) or by the manufacturer [90]. Annex 1 includes several classes of active devices, for instance patient monitors, infusion pumps, and respirators, but not pulse oximeters [26, 90]. To the best of our knowledge, it has not been investigated how these requirements are implemented in the everyday work of clinical personnel. In the 1990s, McConnell [82–84] investigated what nurses learn about the medical devices they use. Of interest concerning alarm-related devices is a survey about what nurses in Australia learn about complex and simple devices [83]. The study gives an insight into what nurses learned about infusion pumps in general. Most nurses learned "How to operate the device" (94.9%), "Its purpose" (87.3%), and "Its functions" (84.8%).


Users also learned how to troubleshoot minor problems (77.1%) and how to know whether the device is working (79.3%). About half of the users learned "Its limitations" (49.8%) and "Patient factors that indicate use" (52.1%), and only a third were instructed about "Patient factors that contraindicate use" (33.6%). Finally, only 12.7% learned about the scientific principles on which the devices are based. Similar data were obtained for American nurses [84]. Summarizing the findings of these studies, most operators learned the basic rules for operating the device. Only a minority of users learned the rules for actions in (foreseeable) exceptional circumstances, which are part of the operational knowledge. While having learned a rule does not guarantee that users can apply it, it is certain that users cannot apply rules they never learned. Users who did not learn these rules would have to fall back on knowledge-based actions and system knowledge when such a situation occurs. However, this is only possible if the user has an understanding of the relevant scientific (physical and physiological) principles and of the device's functionality. The studies by McConnell et al. [83, 84] show that most users are not provided with this knowledge when learning about the device. As not everything that is learned is also adequately represented in the user's mind, it may be expected that an even smaller pool of rules is actually used. This is consistent with the limited knowledge about pulse oximetry mentioned above [35]. Extant data suggest that a number of users of medical devices do not have the required rule set/operational knowledge for safe use of the devices [70]. While there is little data on what users of medical devices learn, there is even less on how it was learned. According to McConnell and colleagues [83], most nurses were told about infusion pumps by another staff member (72.3%) or learned by trial and error (59.7%). Almost half of the nurses received instruction in nurse training (44.7%). Only 39.9% used the user manual. The Australian data cannot simply be transferred to Germany, and to the best of our knowledge there is no published research on how German users of medical devices learn to operate them. An indication that user manuals are rarely used in Germany is given by a survey by Matern and colleagues [79] focusing on "safety in surgery", administered to a convenience sample of attendees of the "Deutscher Chirurgenkongress" in 2004 and 2005. Only a few of the questioned attendees (6.7% of the surgeons, 23.4% of the surgical nurses) declared that they read the user manuals of all devices in the operating room. With the growing availability of computers and new media in recent years, it is to be expected that how medical device users learn, and where they search for information concerning the devices, has changed since the studies of McConnell and colleagues.

It is safe to assume that device knowledge is one prerequisite for error-free use of a device, although we are not aware of any existing study investigating the association between users' knowledge and errors in device configuration. Issues like "What is taught and how?" and "What do operators know in the end?" have been addressed by individual studies, but potential associations have not been investigated so far. A related question is which factors influence the application of knowledge in real working situations, where users have to operate several different devices and work under physical and mental stress.

Impact of the device on the user: perception and interpretation of alarms

So far, we have considered the human contribution to guaranteeing that the device sounds an alarm when appropriate. However, correctly configuring the device is necessary but not sufficient: what counts in the end is how the information provided by the alarm is used, i.e., whether the appropriate response is performed. To be able to trigger the appropriate response, the alarm needs to catch attention and provide information. To perform the appropriate response, the user needs to detect the alarm, discriminate it from concurrent signals, identify its meaning, evaluate its response relevance (urgency), and select the appropriate response (see also Figure 3). In addition, the user needs to feel responsible for responding and to possess the skills and resources (e.g., time, means) to execute the response. In short, to be able to assess the risk associated with device alarms, one needs to consider many processes, some of which are only loosely related to the device itself. In the following, we will focus on the question of whether device alarms (or the implementation thereof) are indeed successful at catching the users' attention and informing them of the underlying alarm condition and its associated urgency.

[Figure 3 Overview (not exhaustive) of the central processing steps involved in perceiving and responding to an alarm: onset detection, segregation, feature analysis, localization, identification, evaluation, response selection, and response execution.]


Alarm-related risk is, however, not limited to the possibility of missing an alarm and, as a consequence, to impairments in monitoring. We will conclude by addressing potential negative side effects of the occurrence of alarms on performance in other, concurrent tasks. A recent survey assessed clinicians' subjective appraisal of current alarm implementation in US hospitals [49], see also [39, 64]. The respondents of the survey deemed most important the frequent false alarms and the resulting behavioral consequences: "alarm fatigue" (e.g., [23, 39, 43, 66, 128]), followed by difficulties in identifying the source and priority of an alarm (ranked second and third, respectively), and difficulties in hearing the alarm at all. Additionally, in the very same survey, more than 70% of the respondents agreed that "nuisance alarms disrupt patient care". In a similar vein, frequent false alarms and the resulting "alarm fatigue" are among the major issues addressed by the current literature (for reviews see, e.g., [12, 23, 33, 128], and see also [114]). However, because objective data on users' responses to clinical alarms are rare, the precise relation between characteristics of the alarm or the alarm system and user behavior is far from clear. Survey data provide important insight into the subjective opinions of clinical personnel. However, these data need to be complemented by objective evidence to achieve a thorough understanding of the problem.

Detection of alarms

Given the current focus on the insufficient validity of alarms and the potentially resulting "alarm fatigue" [12, 23, 31, 124, 128], the detection of alarms (or alarm audibility) seems less of a problem. In line with this notion, when asked whether there were "frequent instances where alarms could not be heard and were missed", the majority of respondents (54%) to the survey of the Healthcare Technology Foundation [49] dissented. On the other hand, because alarms are often seen as too numerous, loud, and irritating [33, 49], it is tempting to assume that alarms are so obtrusive that they cannot be missed at all. However, in the survey of the Healthcare Technology Foundation [49], a significant number of participants (29%) agreed that they had experienced "frequent instances where alarms could not be heard and were missed". Moreover, some authors briefly address the problem of missed (rather than missing) alarms. For instance, Edworthy and Hellier [33] state that current alarms are suboptimal, e.g., because they consist of pure tones, which are easily masked by concurrent sounds.

The Joint Commission [136] identified "alarms not being audible in all areas" as among the major factors contributing to alarm-related events and suggests assessing the (room) acoustics in the relevant hospital areas to ensure audibility. Examples of alarms being presented but not being audible were also found among the 233 reports of "missing alarms" to the German BfArM analyzed by Lange et al. [70]: in two cases, the alarm volume was set too low for the alarm to be heard in the specific surroundings. In an additional report, an external speaker had fallen down and emitted only dampened sounds that were inaudible to the caregivers. Hence, alarms that appear to have been missing may, in fact, have been missed. Alarm detection depends on different factors related to the signal/stimulus, the listener, and the listener's current task. These will be addressed in turn in the following (see also Figure 4).

External/stimulus-related factors

The detectability of clinical alarms has not been systematically assessed in applied contexts (but see [132]). Generally, stimulus factors affecting sound detection (i.e., the subjective impression that a sound is present) are assessed by measuring the absolute threshold (stimulus presented alone) or the masked threshold (stimulus presented in the presence of noise) while varying particular features of the signal. As the absolute threshold is defined as the minimum energy necessary to evoke a perception, sound pressure can be regarded as essential. Other factors include frequency and duration and – for complex sounds – spectrum and bandwidth (e.g., [112] for an overview of hearing; see also [78] for an overview of stimulus detection). In the typical hospital setting, alarm signals do not typically occur in isolation, but rather in the presence of other, extraneous sounds. Therefore, it is the relation of the alarm to the noise (i.e., the masked threshold) that is important, rather than the absolute values of any individual sound feature. The reason to consider the prevailing noise when assessing the detectability of alarms is the possibility of masking. Masking refers to the phenomenon whereby the presence of one sound (the masker) renders another sound (the signal) inaudible (see, e.g., [105]). Masking varies as a function of frequency overlap and intensity, i.e., the closer in frequency the (potential) masker and the signal, the more intense the signal needs to be in order to be detected. In addition to increasing sound pressure, one may also increase detectability by including additional frequency components, in part because the neural responses evoked by individual frequency components will be integrated (for a discussion of non-energy cues that influence sound detection see, e.g., [105]).


[Figure 4 Overview (not exhaustive) of stimulus-, task- and person-related factors potentially influencing auditory stimulus detection. Sound: level, frequency, signal-to-noise ratio, envelope. User: threshold/audiogram, working memory capacity, current workload, expectancy, attention. Current task: perceptual demands, cognitive demands, modality of task-related stimuli, number of concurrent tasks.]

In summary, it is important to consider both level and frequency information (spectrum, bandwidth) of both the alarm sound and the background noise to estimate whether or not an alarm can be detected (see also [142]). Existing studies do not satisfyingly address all of these issues simultaneously: several studies have investigated the level of hospital noise, finding equivalent sound levels for ICUs and EDs between 50 and 60 dB(A) [16, 100, 122] or even higher [104, 121]; see also [117]. Some of these studies also measured frequency spectra [16, 100, 104, 121], but did not systematically assess individual sound sources. Momtahan et al. [87] analyzed the interference between different clinical alarms and between clinical alarms and device sounds and found a significant risk of masking. However, in their study, background noise was not taken into account. Other studies conducted comprehensive soundscape analyses with systematic annotation of sound sources, but did not analyze frequency data [107]. Therefore, future studies should systematically measure the intensity and frequency information of individual alarms and of hospital background noise to provide basic data for estimating the stimulus-related risk of a user missing an auditory alarm.
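As a minimal worked example of the level arithmetic involved, the Python sketch below combines several incoherent background sources into an overall level on an energy basis and performs a crude broadband audibility check. The 10 dB margin and the example levels are illustrative assumptions, not normative values, and a single broadband comparison deliberately ignores the frequency-specific masking discussed above; a frequency-resolved version would apply the same comparison per critical band.

```python
import math

def combined_level_db(levels_db):
    """Sum incoherent sound sources on an energy basis:
    L_total = 10 * log10(sum(10^(L_i / 10)))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

def alarm_likely_audible(alarm_db, background_levels_db, margin_db=10.0):
    """Crude broadband check: is the alarm at least margin_db above the
    combined background? (Illustrative threshold only; real masking
    depends on the frequency spectra of alarm and noise.)"""
    background = combined_level_db(background_levels_db)
    return alarm_db - background >= margin_db, background

# Example background sources (values chosen for illustration only):
audible, noise = alarm_likely_audible(
    alarm_db=70.0,
    background_levels_db=[55.0, 52.0, 50.0],  # e.g., ventilation, talk, devices
)
print(f"combined background: {noise:.1f} dB(A), alarm audible: {audible}")
# -> combined background: 57.6 dB(A), alarm audible: True
```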

Person- and task-related factors affecting sound detection

So far, we have considered only the most basic stimulus-related factors affecting sound detection. However, the detection of alarms, or more generally of signals in noise, is not determined by stimulus factors alone. Obviously, properties of the listener need to be taken into account as well, above all their hearing threshold. In an attempt to estimate clinicians' ability to detect alarms, Wallace et al. [139] found abnormal audiograms in more than 65% of the participating anesthetists, 37% of whom were not even aware of their impairment. For as many as 7% of the participants, one or more alarms of a typical OR would have been below threshold. Even for participants with normal audiograms, inter-individual differences affect the ability to successfully process a sound in the presence of similar competing sounds. There is evidence that these differences depend heavily on the listener's ability to selectively focus their auditory attention [119]. Notably, the ability to use attention to maintain or suppress information is tightly coupled to working memory capacity (for reviews see [36, 40, 59], and see also [5, 20, 21]). Working memory is a core function involved in many cognitive tasks. The role of working memory capacity becomes particularly evident when conditions are demanding, i.e., when the listener is not completely devoted to detecting the alarm, but simultaneously performs other tasks, as is the case in most clinical situations.

In such multiple-task conditions, processing resources are distributed between the different tasks. Resource allocation depends on the relative priorities of the tasks and on their perceptual and cognitive demands – a process referred to as attentional orienting. Notably, the more challenging a task (and the higher its priority), the more resources will be dedicated to processing the stimuli related to this task – and the fewer resources will be left for detecting and processing other, distracting signals; see also [71, 72]. Eventually, such a dual-task (or multiple-task) condition may lead to a phenomenon called "inattentional deafness" [24, 25, 41, 76], but see [91]. In inattentional deafness, an auditory stimulus that should be clearly audible actually goes unnoticed, particularly when this stimulus resembles distractors falling outside the focus of attention [24]; see also [131] for a description of the better-known phenomenon of inattentional blindness. The results of a recent study illustrate the practical significance of this phenomenon: Dehais et al. [25] simulated a landing procedure in the cockpit of a flight simulator. They found that almost 60% of the participants did not notice a critical alarm (landing gear failure) while facing another critical condition (windshear) that required their attention. The theoretical importance of this finding is not only that participants failed to notice a physically salient event. Rather, contrary to most studies on inattentional blindness or deafness, participants missed a physically salient event that should have been part of their attentional set: a "landing gear failure" alarm is, in principle, relevant information when performing a landing procedure. In other words, participants could not have filtered out this alarm in advance. Suggesting that there is "no hearing without listening" [41], the findings cited above imply that a minimum amount of spare attention is required for even salient (auditory) stimuli to be detected. This has to be considered when staffing and organizing task assignments. Performing multiple tasks simultaneously is also common in clinical settings, giving rise to the potential for inattentional deafness to occur. However, it remains to be investigated whether this is a problem in clinical settings as well. Among the cases analyzed by Lange et al. [70] were several in which a missing monitor alarm was originally reported, but no explanation could be identified by subsequent analyses. These cases might be candidates for perfectly audible alarms being missed because the caregivers were occupied with demanding, concurrent tasks.

Meaning and urgency

To immediately trigger the appropriate response, the user needs to decipher the meaning of the alarm (i.e., its source or its triggering condition) and the associated risk. Obviously, the two components are not independent: the condition triggering the alarm partly determines the associated risk (e.g., a low-battery alarm on a respirator versus a low-battery alarm on a monitor). However, the perceived risk is affected by additional factors, for instance background information on the patient (e.g., the a priori chance of particular physiological changes) or the overall validity of the particular alarm (e.g., many "false alarms" because of motion artifacts).

Meaning

Auditory device alarms serve as symbols to indicate a critical status of a particular vital or device function. The alarms used by a particular device or on a particular ward constitute a system of (auditory) symbols indicating particular critical conditions. For device alarms, artificial sounds are typically used. Moreover, the mapping between these sounds and the associated devices and/or alarm conditions is arbitrary. This results in auditory alarms not inherently resembling their underlying triggers. As a consequence, personnel have to encode and remember the sound-to-meaning mappings. The underlying process resembles learning a new language: the acoustics of single words only rarely bear physical similarity to the objects or concepts they represent, i.e., the meaning of words is learned by remembering the arbitrary connections between the symbol and the referent [2], a process referred to as paired-associate learning [95]. Common experience tells us that paired-associate learning works well for language, but what about the arbitrary sound-to-referent mappings between device alarms and their underlying conditions? Few studies have empirically investigated how successfully device alarms are learned [22, 87, 123, 140]. Basically, all of these studies come to the conclusion that learning the meaning of alarms is far from perfect. Two studies investigated clinicians' identification of the alarms used in their particular workplace [22, 87]. Momtahan et al. [87] presented ICU and OR personnel with alarms used in their area of work. Participants were asked to indicate for each alarm the type of equipment and the underlying alarm condition. While there were differences between specialties, no subgroup was able to relate more than 68.5% of alarm sounds to the correct type of equipment or more than 53.8% of alarm sounds to the underlying condition.


Similar data are provided by Cropp and colleagues [22]. In this study, even the best-performing subgroup (respiratory therapists) identified only 62% of the critical alarm signals correctly. More recent work [67, 123, 140, 143] addressed the learnability of alarm melodies composed following the suggestions of the International Standard IEC 60601-1-8 [55] on device alarms for medical electrical equipment, see also [11]. These suggestions are based on ergonomic considerations [109]. Nevertheless, neither medically lay participants [123, 143] nor experienced nurses [67, 140] showed acceptable performance levels in associating alarm labels with melodies within two sessions of learning. In the study of Williams and Beatty [143], participants identified only 48.4% of the melodies after two learning sessions, and Sanderson et al. [123] found
