

Improving the Driver–Automation Interaction: An Approach Using Automation Uncertainty

Johannes Beller and Matthias Heesen, German Aerospace Center, Braunschweig, Germany, and Mark Vollrath, Technische Universität Braunschweig, Braunschweig, Germany

Objective: The aim of this study was to evaluate whether communicating automation uncertainty improves the driver–automation interaction.

Background: A false system understanding of infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap.

Method: We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely.

Results: Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance.

Conclusion: The presentation of automation uncertainty through a symbol improves overall driver–automation cooperation.

Application: Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver–automation cooperation.

Keywords: driving, reliability, cooperation, automation, symbol

Address correspondence to Johannes Beller, German Aerospace Center, Institute of Transportation Systems, Lilienthalplatz 7, 38108 Braunschweig, Germany; e-mail: [email protected]. HUMAN FACTORS, Vol. 55, No. 6, December 2013, pp. 1130–1141. DOI: 10.1177/0018720813482327. Copyright © 2013, Human Factors and Ergonomics Society.

Introduction

Automation, defined as technology that actively selects data, transforms information, makes decisions, or controls processes (Lee & See, 2004), fundamentally changes the way drivers interact with their vehicle. To promote safety, the automation must be intuitive and comprehensible (Flemisch et al., 2012; Hoc, Young, & Blosseville, 2009; Inagaki, 2008). However, research shows that drivers often have difficulties developing accurate knowledge of the capabilities and limitations of the automation (see, e.g., Seppelt & Lee, 2007). This discrepancy can result in automation surprises (Sarter, Woods, & Billings, 1997), reduced situation awareness (Endsley, 1995), complacency as well as reliance (Parasuraman & Manzey, 2010), and numerous other instances of misuse and disuse of automation (Parasuraman & Riley, 1997). In the aviation and military domains, one approach to matching the operator's expectations with the actual automation capabilities has been to display automation uncertainty through, for example, line graphs or degraded icons (see, for example, Finger & Bisantz, 2002; McGuirl & Sarter, 2006). The current study differs from those existing approaches in that (a) we suggest an uncertain face as a natural indicator of automation uncertainty in light of the framework of human–automation cooperation (Hoc, 2001; Hoc et al., 2009) and, mainly, in that (b) the uncertainty information is examined in the domain of driving. Communicating uncertainty during driving could, among other potential benefits, reduce perception-response time, increase situation awareness, and help drivers develop more accurate knowledge about the functioning of their automation. Thus, to match drivers' expectations with the actual automation capabilities, in the current study we evaluate a symbol displaying automation uncertainty using experimental methodology.



Representing Uncertainty Information

Every automated system makes errors. For example, it performs poorly or eventually fails if the information it receives from its sensors is degraded or misinterpreted. An adaptive cruise control (ACC) system might not properly detect lead cars during snowfall, and a lane-keeping assistance system could incorrectly adjust to lane markings in a construction site and cause false emergency behavior. These errors may lead to numerous consequences, such as automation surprises, if people do not expect incorrect behavior of the automation. Two human factors solutions for reducing these kinds of problems are the classical warnings approach and the use of uncertainty information. The classical warnings approach establishes warning systems and consequently warns the driver whenever certain parameters are violated. Clear situations and reliable sensor data then make it possible to transfer control to the driver. However, unjustified and repeatedly issued warnings attributable to unclear situations with unreliable sensor data lead to serious problems. This situation has been discussed in the literature as the cry-wolf effect (Breznitz, 1983). In contrast to this classical warnings approach, and building on the seminal work on likelihood alarm displays (Sorkin, Kantowitz, & Kantowitz, 1988), several displays of uncertainty information for imperfect diagnostic automation have been explored (e.g., Finger & Bisantz, 2002). Wang, Jamieson, and Hollands (2009), for example, investigated whether informing participants of an aid's reliability improved performance in a binary combat identification task. Results showed that informing participants about the system reliability level allowed them to rely more appropriately on the aid, leading to improved performance. Further research confirms that uncertainty information can improve performance in binary decision-making tasks (Bisantz, Marsiglio, & Munch, 2005; Neyedli, Hollands, & Jamieson, 2011). Relatively few studies, however, have evaluated the presentation of uncertainty in a non-decision-making task.

Figure 1. The uncertainty symbol used in the current study showing a face with an uncertain expression and hand gestures.

In the context of aviation, McGuirl and Sarter (2006) used a continuously updated line graph display to show system confidence about in-flight icing encounters and investigated whether specific confidence information, as opposed to overall confidence, could improve the operator's trust in and use of the system. The results showed that the specific confidence information improved the appropriateness of the pilots' trust and generally improved performance. To our knowledge, only Seppelt and Lee (2007) have investigated uncertainty representations in the context of driving. They used a graphical shape of fading color together with a complex status display to show decreasing reliability of a sensor. However, there were no statistically significant differences from the control display. Thus, it remains to be answered whether the concept of uncertainty presentation proves useful in driver–automation interactions. In the current study, we strive to answer this question. The approach used here differs from the existing ones in two ways: First, the uncertainty information is evaluated dynamically in the context of driving; second, the information itself is a relatively simple symbol compared with the one used by Seppelt and Lee (2007). Thus, a schematic uncertain face was developed as an uncertainty symbol (Figure 1). The uncertainty symbol primarily notifies the user that the automation is uncertain in an unclear situation, such as when system limits occur. The main motivation for choosing this symbol is that faces with emotional expressions have been found to possess several advantages compared with other potential stimuli.


Compton (2003), for example, suggests that the primary way to determine the importance of possible attention allocations is to evaluate the emotional significance of a stimulus. As faces are arguably among the most important visual stimuli in the human environment, facial stimuli are in most situations processed preattentively, rapidly, and nearly capacity-free (for a review, see Frischen, Eastwood, & Smilek, 2008; for an application, see Pak, Fink, Price, Brass, & Sturre, 2012). This approach to uncertainty communication builds on the concept of cooperative automation proposed by Hoc (2001) and Hoc et al. (2009). These authors emphasize that human–automation interaction is, in essence, cooperation. Hoc et al. (2009) define cooperation as a situation in which two agents have the same overarching goals, such as driving safely, but the actions and subgoals of these two agents can interfere. In terms of cooperation, classical warnings tend to criticize the user. Informational warnings or displays, such as the uncertainty symbol, on the other hand, inform the user about the state of the automation. The uncertainty symbol might improve cooperation because it signals situational capability to the driver, which in turn enables a dynamic control distribution between the two agents (Flemisch et al., 2012). In Hoc's terms, the uncertainty symbol supports interference management. The symbol therefore enables the driver–automation team to let the most competent agent be in control, which is a prerequisite for successful human–automation cooperation (Flemisch et al., 2012). For successful cooperation, the driver must trust the automation (Lee & See, 2004; Muir, 1994; Muir & Moray, 1996). Trust may be defined as the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability (Lee & See, 2004). Many studies have shown that trust toward the automation influences the human–machine interaction (e.g., Lee & Moray, 1994; Lewandowsky, Mundy, & Tan, 2000). On the other hand, the behavior of the automation also influences how much humans trust the automation. For example, an automation that fails to accomplish common goals (e.g., an unreliable automation) usually results in a breakdown of trust, which in turn may lead to automation disuse (Parasuraman & Riley, 1997).

However, recent research suggests that displaying information about the status of the automation may increase the trustworthiness of automation (Verberne, Ham, & Midden, 2012), thus underscoring the importance of evaluating adequate driver–automation interfaces in terms of trust. The presentation of uncertainty information follows this line of thought. By presenting uncertainty information, and thus communicating that the automation is fallible, a false system understanding of infallibility can be prevented. Drivers might therefore rely less on the system. Automation surprises might not be as surprising, especially when uncertainty feedback lets drivers consider possible incorrect automation behavior. Consequently, the trust breakdown following automation failures might be less severe.

Method

To assess whether uncertainty information improves the appropriateness of human behavior in cooperation with highly automated vehicles, we conducted a driving simulator experiment. Participants interacted with a highly automated driving system (the automation), which supported longitudinal and lateral control of the vehicle in the form of ACC and run-off-road prevention. Furthermore, to demonstrate situation awareness, participants engaged in secondary tasks but remained responsible for cooperating with the automation to drive safely (Schömig & Metz, 2012).

Participants

A total of 28 participants (10 female) with an average age of 28.3 years (SD = 10.0) took part in the experiment. All participants had normal or corrected-to-normal sight and hearing and held a valid driver's license, with an average of 10.6 years of driving experience (SD = 10.2). Participants were recruited through the participant pool of the German Aerospace Center (DLR) and were paid €8 per hour for their participation in the experiment.

Design

A 2 (between) × 2 (within) mixed design was used. The order of the factors was fully balanced between and within participants.



Table 1: Experimental Design

                               Within-Subjects Factor (sequence was counterbalanced)
Between-Subjects Factor        Reliable Automation                       Unreliable Automation
Uncertainty information        22 situations; 10 uncertainty             22 situations; 10 uncertainty
                               responses, 0 incorrect                    responses, 6 incorrect
                               automation responses                      automation responses
No uncertainty information     22 situations; 0 uncertainty              22 situations; 0 uncertainty
                               responses, 0 incorrect                    responses, 6 incorrect
                               automation responses                      automation responses

Figure 2. Four possible positions for the lead vehicle to appear. From left to right: (1) The automation drove past. In the unreliable condition, the automation sometimes produced a false braking response and had to be overruled. (2) The automation braked. In the unreliable condition, the automation sometimes did not brake, so consequently the participant himself or herself had to brake. (3) The automation elicited a brake response. (4) The automation did not intervene in the driving process.

The first factor was the presented symbol and had two levels (0, no symbol; 1, uncertainty symbol). The second factor was the reliability of the automation and also had two levels (0, reliable automation; 1, unreliable automation; Table 1). In the reliable condition, the automation produced correct responses (e.g., braking when it was necessary) to all situations shown in Figure 2. In the unreliable condition, the automation did not act correctly when the lead car was neither clearly in the participant's own lane nor clearly in the other lane (the two situations shown in the left half of Figure 2). Here the vehicle would either decelerate tremendously without necessity (leftmost situation) or collide with the lead car (left-center situation). In each level of the reliability factor, the participants encountered 22 slower lead cars. In 10 of the 22 situations per reliability level, the lead car was neither clearly in the participant's own lane nor clearly in the other lane. This situation resulted in the presentation of the uncertainty symbol. In the unreliable condition, 6 of those 10 unclear situations led to an incorrect automation response: In 3 of them, the automation did not brake; in the other 3, the automation braked unnecessarily.

Driving Scenario

Each participant drove through a fogged two-lane highway scenario. When the suggested speed of 100 km/h was maintained, the drivers could stop within the visible area. During driving, the participants approached lead vehicles, which drove at 50 km/h. A lead vehicle could appear in one of four positions (Figure 2). When the lead car was neither clearly in the participant's own lane nor clearly in the other lane (Figure 2, the two left situations) and the participants were in the unreliable condition, the automation sometimes misbehaved (braking when it was not necessary; not braking when it would have been necessary). Thus, in those situations, the driver had to judge the situation correctly and act accordingly, either by braking or by overruling the brake response of the automation, to maintain safety. The whole driving scenario lasted approximately 60 min.
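To make the safety margins in this scenario concrete, the time-to-collision (TTC) measure reported in the Results can be sketched as below. This is our illustration, not part of the original study: the 70 m gap is a hypothetical value chosen only for the example, whereas the speeds (ego 100 km/h, lead 50 km/h) are those of the scenario.

```python
# Minimal sketch (illustration only): TTC for a constant-speed approach,
# using the scenario speeds reported above; the gap value is hypothetical.

def time_to_collision(gap_m: float, ego_speed_kmh: float, lead_speed_kmh: float) -> float:
    """Return TTC in seconds; infinite if the ego vehicle is not closing the gap."""
    closing_speed_ms = (ego_speed_kmh - lead_speed_kmh) / 3.6  # km/h -> m/s
    if closing_speed_ms <= 0:
        return float("inf")
    return gap_m / closing_speed_ms

# Example: a 70 m gap at the scenario speeds gives a TTC of roughly 5 s.
print(round(time_to_collision(gap_m=70.0, ego_speed_kmh=100.0, lead_speed_kmh=50.0), 1))
```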

Apparatus

A static driving simulator at the DLR was used in the present study. The driving scene was projected on a 2 m × 1.5 m plane, which was located approximately 2 m in front of the seat.


Figure 3. The modular display used in the current study.

Modular displays were used to show the automation-active symbol, the tachometer, and the dedicated uncertainty display in the front console behind the steering wheel (Figure 3). Additionally, a visual search task (the surrogate reference task; Mattes, 2003) was used as a secondary task and was displayed approximately 30 cm to the right of the steering wheel. In this task, participants had to detect a slightly larger circle within a field of smaller circles and respond by indicating the location of the circle via a touch pad.

Procedure

After filling out a demographic survey and a consent form, the participants became acquainted with the simulator and with the ACC and run-off-road prevention through a 15-min training phase. In the training, the participants learned how to activate the automation (pulling the drop arm) and how to deactivate it (pushing the brake pedal). Every participant was informed about the possibility of automation misbehavior and was instructed to solve the secondary tasks without compromising safety. Each participant drove through the same 60-min scenario. The participants were instructed to activate the automation shortly after the drive started and to leave it activated as long as possible, but they could deactivate it whenever they considered it necessary. They were also instructed to reactivate the automation as soon as possible after deactivating it by braking, if the situation allowed it. Approximately every 90 s, a slower lead vehicle appeared.

Overall, every participant encountered 44 lead cars, divided into two reliability segments with 22 vehicles in each segment. In the unclear situations, the uncertainty symbol was shown to the experimental group for 3 s before the lead car became visible. The order of the situations was fixed for every participant and counterbalanced in such a way that the same number of lead cars in each of the four positions in Figure 2 was experienced in the two reliability segments. After every situation, general trust toward the automation was measured by asking participants, "To what extent did you trust the automation in the situation just experienced?" The participants responded verbally on a 10-point Likert-type scale ranging from 1 (not at all) to 10 (completely), which was documented by the experimenter. Furthermore, knowledge of fallibility was measured after the two reliability segments. After the experimental run, the participants were asked to choose whether they wanted to activate or deactivate the automation, thus measuring acceptance. At the end of the experiment, the participants were debriefed, interviewed about their opinion of the experiment, paid, and thanked for their participation.

Results

Multilevel models (e.g., Gelman & Hill, 2007; Hoffman & Rovine, 2007; for an introduction, see Twisk, 2006) were used to analyze the data. Multilevel models are appropriate when the data include several levels. For example, in repeated-measures designs, individuals are measured several times; the measurements can therefore be seen as being nested within the individual, forming a multilevel structure. Multilevel models account for this structure in their inferences. As multilevel modeling is a generalization of regression methods, the use of multilevel models offers several advantages compared with, for example, traditional least squares ANOVA in the case of data dependency (although in our case, the use of ANOVA does not alter the general conclusions obtained). There are several approaches to determine the significance of factors in multilevel modeling. We employed the analysis of deviance and the corresponding χ2 statistic. Quantile regression was used to further explore the effects of the uncertainty representation on the whole distribution of time to collision (TTC).
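To make the analysis strategy concrete, the following sketch shows the general form of the multilevel model, the analysis of deviance, and the quantile regressions described above. It is an illustration under stated assumptions, not the authors' analysis code: the data file, the variable names (ttc, uncertainty, reliability, participant), and the 0/1 coding of the factors are hypothetical.

```python
# Illustrative sketch only; the article does not publish analysis code.
# Assumed long-format data: one row per participant x situation, with a TTC
# value, 0/1-coded uncertainty-display and reliability factors, and a
# participant identifier (all names hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("ttc_long_format.csv")

# Multilevel (mixed-effects) model: repeated measurements are nested in
# participants, modeled here with a random intercept per participant.
mixed = smf.mixedlm("ttc ~ uncertainty * reliability", data,
                    groups=data["participant"]).fit()
print(mixed.summary())

# Analysis of deviance: a likelihood-ratio chi-square comparing the full model
# with a reduced model, both fit by maximum likelihood (reml=False).
# The degrees of freedom equal the difference in fixed-effect parameters.
full = smf.mixedlm("ttc ~ uncertainty * reliability", data,
                   groups=data["participant"]).fit(reml=False)
reduced = smf.mixedlm("ttc ~ reliability", data,
                      groups=data["participant"]).fit(reml=False)
print("LR chi-square:", 2 * (full.llf - reduced.llf))

# Quantile regression: estimate the effect of the uncertainty display on
# several quantiles of the TTC distribution rather than only on its mean.
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    qr = smf.quantreg("ttc ~ uncertainty", data).fit(q=q)
    print(f"quantile {q}: uncertainty effect = {qr.params['uncertainty']:.2f}")
```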



Figure 4. Time-to-collision (TTC) distributions for the uncertainty and the control group.
