This article was downloaded by: [FU Berlin] On: 20 October 2014, At: 03:48 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Ergonomics Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/terg20

Trust, control strategies and allocation of function in human-machine systems

JOHN LEE & NEVILLE MORAY


Department of Mechanical and Industrial Engineering, University of Illinois at Urbana-Champaign, 1206 West Green Street, Urbana, IL 61801, USA. Published online: 31 May 2007.

To cite this article: JOHN LEE & NEVILLE MORAY (1992) Trust, control strategies and allocation of function in human-machine systems, Ergonomics, 35:10, 1243-1270, DOI: 10.1080/00140139208967392 To link to this article: http://dx.doi.org/10.1080/00140139208967392


Trust, control strategies and allocation of function in human-machine systems JOHN LEE and NEVILLE MORAY Department of Mechanical and Industrial Engineering, University of Illinois at Urbana-Champaign, 1206 West Green Street, Urbana, IL 61801, USA Keywords: Trust; Allocation of function; Human-machine system; Supervisory control.


As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a 'trust transfer function' is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.


1. Introduction
The primary motivation for this paper is to add to our understanding of how people trust complex systems with which they interact, and the effect that trust may have on how they allocate function between themselves and automatic controllers. As systems become increasingly complex the role of the human operator has evolved from direct manual control to supervisory control. As supervisory controllers, operators can choose to interact with the system at different levels of manual and automatic control (Sheridan and Johannsen 1976). Commonly they are responsible for the allocation of function between automatic and manual control. Whether operators control the system automatically or manually can have a dramatic influence on the performance of the overall system. Inappropriate reliance on automatic controllers may lead to circumstances in which operators depend upon an automatic system to perform in ways for which it was not designed, or when the system is faulty. On the other hand, complete manual control may lead to excessive operator workload, compromising the overall performance of the system. Many factors may guide operators' allocation of function, including a variety of subjective performance criteria, experience, and trust. Although rarely studied, operators' trust in automatic controllers may have a large influence on choosing automatic over manual control. For example, highly trusted automatic controllers may be used frequently, while operators may choose to control the system manually rather than engage untrusted automatic controllers (Muir 1989). The importance of trust in the relationship between operators and the systems



they work with is underscored by Zuboff (1988), who has documented operators' trust in and use of automation following its introduction into the workplace. Her case studies revealed two interesting phenomena. First, operators' lack of trust in the new technology often formed a barrier, thwarting the potential that the new technology offered. Second, operators sometimes placed too much trust in the new technology, becoming complacent, and failing to intervene when the technology failed. The operators' comments, reported by Zuboff, reflect three aspects of trust: trial-and-error experience, understanding of the technology, and 'faith'. Each of these dimensions plays an important role in both the development and changes of trust in the new technology. For example, Zuboff states, 'Many believe that as their intellective skill improved they would learn to trust the computer, or that trial-and-error experience over a period of time would teach them to trust.' (1988: 69). This comment reflects the importance of both the operators' understanding of the technology, and their experience with the technology over time. Other comments reflect the operators' 'leap of faith' associated with adopting new technology. In contrast to the case-study approach of Zuboff, Moray and Muir developed the first laboratory-based study to investigate the role of trust in mediating human-machine relationships in a supervisory control situation (Muir 1989). Because Muir's work is at present unpublished we begin with a short discussion of her work.

1.2. Muir's theory of trust in machines
Muir's study of trust included a model of trust, as it applied to human-machine relationships. In particular, she adapted Barber's (1983) and Rempel et al.'s (1985) sociological definitions of trust to describe trust between humans and machines. Barber (1983) defined trust as the subjective expectation of future performance and described three types of expectation related to the three dimensions of trust: persistence of natural and moral laws, technically competent performance, and fiduciary responsibility. According to Barber, persistence of natural laws provides the basis for all other forms of trust. This dimension provides a foundation of trust by establishing a constancy in the fundamental moral and natural laws. Persistence of natural and moral laws reflects the belief that '. . . the heavens will not fall', and that '. . . my fellow man is good, kind, and decent' (Barber 1983: 9). These expectations provide the basic conditions for social and physical interactions. Technically competent performance, on the other hand, supports expectations of future performance based on capabilities, knowledge, or expertise. This dimension of trust refers to the ability of the other partner to produce consistent and desirable performance and can be subdivided to include three types of expertise: everyday routine performance, technical facility, and expert knowledge. Muir (1989) identified these aspects respectively with the skill-based, rule-based, and knowledge-based behaviour of Rasmussen (1983). Barber's third dimension of trust, fiduciary responsibility, concerns the expectations that people have moral and social obligations to hold the interests of others above their own. Fiduciary responsibility extends the idea of trust beyond that based on performance, to one based on moral obligations and intentions.
This dimension becomes important when agents cannot be evaluated because their expertise is not understood, or in unforeseen situations where performance cannot be




predicted. Here expectations depend upon an assessment of the intentions and motivations of the partner, rather than past performance, or perceived capabilities. In addition to the dimensions of trust proposed by Barber (persistence of natural laws, technically competent performance, and fiduciary responsibility), Muir incorporated three dimensions of trust (predictability, dependability, and faith) from Rempel et al. (1985). According to Muir's interpretation, these three dimensions represent the dynamic nature of trust, where the basis of trust 'undergoes predictable changes as a result of experience in the relationship' (Muir 1989: 22). For example, predictability, which represents the consistency and desirability of past behaviour, forms the basis of trust early in the relationship. With further experience in the relationship, dependability, which represents an understanding of the stable dispositions that guide the partner's behaviour, becomes an important basis of trust. In a mature relationship, faith, which is a reflection of the partner's underlying motives, or intentions, forms the basis for trust. In particular, faith is crucial in novel situations, where the belief in the expected behaviour must go beyond the available evidence. Muir interpreted the model of Rempel et al. (1985) as a hierarchical stage model, where trust develops over time, first depending upon predictability, then dependability, and finally faith. As such, it provided an orthogonal counterpart to the definition suggested by Barber (1983). Table 1 summarizes Muir's conception of trust, with the dimensions identified by Barber crossed with those identified by Rempel et al. (1985), producing 15 distinct aspects of trust.

Table 1. Muir's framework for studying trust in supervisory control environments, produced by crossing Barber's (1983) taxonomy of trust (rows) with Rempel et al.'s (1985) taxonomy of the development of trust (columns); from Muir (1989).

                             Basis of expectation at different levels of experience
Expectation                  Predictability               Dependability                Faith
                             (of acts)                    (of dispositions)            (in motives)

Persistence
  Natural physical           Events conform to            Nature is lawful             Natural laws are
                             natural laws                                              constant
  Natural biological         Human life has survived      Human survival is lawful     Human life will survive
  Moral social               Humans and computers         Humans and computers are     Humans and computers will
                             act 'decent'                 by nature 'good' and         continue to be 'good' and
                                                          'decent'                     'decent' in the future

Technical competence         j's behaviour is             j has a dependable           j will continue to be
                             predictable                  nature                       dependable in the future

Fiduciary responsibility     j's behaviour is             j has a responsible          j will continue to be
                             consistently responsible     nature                       responsible in the future

Considering the account of operators' trust described by Zuboff (1988), as well as the original descriptions of trust in Barber (1983) and Rempel et al. (1985), another interpretation might be possible. In particular, it seems that while Rempel et al.




introduce the idea of a dynamic nature of trust, the dimensions of predictability, dependability, and faith might be more complementary to the dimensions of Barber than orthogonal, as Muir describes them. While Muir describes Rempel et al.'s dimensions of trust as corresponding to the dynamic nature of trust, they form a developmental progression only because of the level of attributional abstraction that each demands. This has a very direct implication for trust between humans and machines, because one of the first things that a human might learn about a machine is its intended use (corresponding to the dimension of faith). In human-human relations, by contrast, it may take years to develop a relationship where a human partner understands the intents of the other, thereby developing a basis for faith. In other words, the dimension of faith corresponds very closely to the idea of fiduciary responsibility. Barber and Rempel use very similar language to describe both the role and basis of faith and fiduciary responsibility. In both cases this dimension forms the basis of trust in ill-defined, novel situations, where the expertise is poorly understood, and both faith and fiduciary responsibility are based on an expectation of underlying motives and intentions. Similarly, the relationship between predictability and technically competent performance seems to be more complementary than orthogonal. Again, the descriptions of the dimensions are very similar, describing the basis of this dimension as stable and desirable behaviour or performance. As with faith and fiduciary responsibility, the primary difference between predictability and technically competent performance lies in the time-dependent aspect that Rempel et al. have suggested. Not only do the dimensions of trust described by both Barber (1983) and Rempel et al. (1985) seem to coincide, they are similar to the aspects of trust in Zuboff (1988).
According to Zuboff's account of operators' trust in new technology, trial-and-error experience and understanding seem very closely related to predictability and dependability. Like predictability, the basis of trial-and-error experience is behaviour or performance over time. Understanding, on the other hand, seems to correspond to dependability, where future behaviour is anticipated through an understanding of the partner's stable dispositions or characteristics. Likewise, faith as described by Rempel et al. (1985) seems similar to the 'leap of faith' that Zuboff describes. These ideas about trust are summarized in Table 2. Trust depends upon four dimensions. The first dimension is the foundation of trust, representing the fundamental assumption of natural and social order that makes the other levels of trust possible. This level corresponds exactly to the persistence of natural laws described by Barber (1983). The second dimension of trust, performance, rests on the expectation of consistent, stable, and desirable performance or behaviour. The third dimension, process, depends on an understanding of the underlying qualities or characteristics that govern behaviour. With humans these might be stable dispositions or character traits. With machines this might represent the data reduction methods, rule bases, or control algorithms that govern how the system behaves. The final dimension of trust, purpose, rests on the underlying motives or intents. With humans this might represent motivations and responsibilities. With machines, on the other hand, purpose reflects the designer's intention in creating the system. The progression from foundation to purpose reflects increasing levels of attributional abstraction, as in Rempel et al. (1985). In addition to providing a broad theoretical framework for studying trust, Muir (1989) also conducted two experiments which contribute to an understanding of trust



Table 2. Proposed relationship between the different dimensions of trust.

                 Barber (1983)                        Rempel, Holmes, and Zanna (1985)     Zuboff (1988)

Purpose          Fiduciary responsibility             Faith                                Leap of faith
Process                                               Dependability                        Understanding
Performance      Technically competent performance    Predictability                       Trial-and-error experience
Foundation       Persistence of natural laws

between humans and machines. Both her experiments studied operators controlling a simulated pasteurization plant. In the first experiment she demonstrated that operators could generate meaningful subjective ratings of trust in machines. Furthermore, she found that while some aspects of operators' trust developed according to the progression specified by Rempel et al. (1985), the development differed in several important ways. In particular, faith was the most important at the start of the relationship, while Muir's interpretation of the theory suggests that it should become important only after extended experience. While Muir's first experiment failed to show a strong relationship between trust (overall trust in a feedstock pump) and percentage of time spent using the automatic control, her second experiment demonstrated a strong correlation between trust in, and use of, the automatic controller of the feedstock pump, and a strong inverse relation between trust and monitoring. There seem to be two plausible reasons for the differences between her experiments. In her first experiment the nature of the simulation, the reward structure, and the effectiveness of the automatic controller discouraged the use of the automatic feedstock pump, resulting in almost complete manual control. This ceiling effect may have reduced the correlation between trust and use of the automatic controllers. Second, operators estimated their trust in the pump, while in the second experiment operators estimated their trust in the automatic controller of the pump. The increased specificity of this rating may have led to a higher correlation between pump use and trust. The purpose of this research is to develop a better understanding of trust between humans and machines, and to investigate its influence on the operators' allocation of function in a supervisory control situation. Our research addresses four issues.
First, we hope to identify the factors that influence trust, and determine how trust, and the dimensions of trust (predictability, dependability, and faith), change in response to variations in these factors. Second, we hope to relate this description of trust to the more general problem of understanding operators' allocation of automatic and manual control. Third, we wish to extend Muir's work. She trained her operators to steady state performance over many hours using a completely reliable system, and only then asked them to deal with faults. We will investigate the acquisition of trust and the impact both of a transient loss of reliability (an 'intermittent' fault in everyday terms) and a chronic loss of reliability on the development of trust, to see whether a transient change alters the way in which trust develops. Fourth, we


increase the complexity of the operators' task by allowing them more freedom than did Muir in the number of subsystems which can be switched between manual and automatic control.


2. Method
2.1. Participants
A total of 19 (male and female) undergraduate students participated in this study. Each operator completed three 2 h sessions and was paid $3.50 for each hour, with an additional bonus based on performance. Of these 19 participants, the data from three were not analysed because their ability to operate the plant and perform the subjective ratings was questionable, suggesting a lack of motivation or interest in the experiment.

2.2. The simulation
The experiment required operators to control the simulated orange juice pasteurization plant shown in figure 1. A computer program running on a Macintosh II computer generates a medium fidelity simulation of the process. Realistic thermodynamic and heat transfer equations govern the dynamics of the process, giving the simulation a reasonable degree of complexity. The dynamics of the simulation incorporate some of the complexities of actual process control systems, such as time lags and feedback loops. For example, when either the operators or the automatic controller called for a change in the pump rate, the change took about 10-20 s to occur as the pump accelerated or decelerated. These dynamics make the seemingly simple system a challenging control problem. The simulated plant included provisions for both automatic and manual control. The operators could control feedstock pump rates, steam pump rates and heater settings either by manually entering commands or by engaging the automatic controllers. Manual changes to the pump and heater settings could be entered from the keyboard. Likewise, operators could request automatic control for the pumps and heaters from the keyboard. Because any of the three sub-systems could be controlled either by using automatic or manual control, a wide variety of control strategies were available to the operators. (In Muir's experiment only the feedstock pump could be manually controlled.) After the training trials operators could use any combination of automatic or manual control that they wished. The alternatives of automatic or manual control, combined with the relatively complex dynamics, make the simulation a plausible abstraction of an actual semi-automated continuous process plant. In addition, the wide variety of control strategies available to the operators reflects some of the diversity of control options available in many process control situations.
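The pump behaviour described above (a commanded rate change taking roughly 10-20 s to complete) is characteristic of a first-order lag. The sketch below is our own illustration of such dynamics, not the simulation's actual equations: the time constant `tau` is an assumed value, and only the settling behaviour of 10-20 s comes from the text.

```python
import math

def pump_step(rate, target, dt=1.8, tau=6.0):
    # One update of a first-order lag: the pump rate moves a fixed
    # fraction of the remaining distance toward the target each step.
    # Both dt (update interval, s) and tau (time constant, s) are
    # assumed values for illustration, not taken from the paper.
    return rate + (target - rate) * (1.0 - math.exp(-dt / tau))

# Starting from rest with a commanded rate of 50, the pump closes
# most of the gap over roughly 18 s of simulated time.
rate = 0.0
for _ in range(10):  # 10 updates of 1.8 s each
    rate = pump_step(rate, 50.0)
```

With these assumed constants the rate after 18 s is about 47.5, i.e. most of the gap to the target has closed, consistent with the 10-20 s settling time reported in the text.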
Using naive operators facilitated an analysis of how trust and control strategies develop as operators are trained with new equipment. While the operators were initially naive, the stable control strategies they developed by the second hour indicate an understanding and pattern of controlling the system that might be comparable to trained operators. The model which we develop takes account of the learning curve, so that training to steady state as done by Muir is neither necessary nor appropriate in this experiment. Figure 1 shows the mimic diagram which appeared on the VDU and with which the operators interacted. Raw orange juice entered through a pipe in the upper left of the screen. The juice flowed from the inflow pipe into the input vat. The feedstock pump in the lower left portion of the diagram drew the juice from the input vat and

Figure 1. The simulated pasteurization plant.


sent it through passive and active heat exchangers. The two heat exchangers raised the temperature of the juice. Changing the heater settings and the steam pump rate of the heater subsystem (shown in the upper right of figure 1) modulated the temperature of the juice passing through the heat exchangers. Following the heat exchangers, an automatic three-way valve routed the flow of the juice. If the temperature of the juice leaving the active heat exchanger was too high it was considered burnt and was directed to the waste vat. Juice with too low a temperature needed further pasteurization and flowed back to the input vat. Juice in the proper temperature range (between 75° and 85°) flowed through the passive heat exchanger to the output vat. The flows and temperature changes in the system approximated continuous variables, being updated every 1.8 s. The level of the input vat was illustrated graphically as the height of the shaded area of the input vat on the mimic diagram. In addition to illustrating the volume of the input vat graphically, changes in the flows through pipes were displayed as changes in the colour of the pipe. Normally the pipes were black; when juice flowed through them they changed colour to indicate flow. The graphical representation of the plant linked the state variables of the system to a mimic diagram of the plant to facilitate an unambiguous perception of the plant state.

2.3. Experimental task
Like actual operators, operators in this experiment balanced the competing goals of safety and performance, using automatic control, manual control, or any combination of the two. Performance was measured by the amount of input flow which was successfully pasteurized divided by the total input. Operators received bonuses (10 cents/trial) for achieving a performance above 90%. Safety, on the other hand, depended on the operator maintaining sufficient volume in the input vat.
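The three-way valve described above reduces to a simple threshold rule on the temperature of the juice leaving the active heat exchanger. A minimal sketch (the function name and return labels are ours; only the 75-85 degree band and the three destinations come from the text):

```python
def route_juice(temp):
    # Threshold routing rule: juice hotter than 85 degrees is burnt
    # and goes to the waste vat; juice cooler than 75 degrees is
    # recirculated to the input vat for further pasteurization;
    # juice within the 75-85 degree band goes to the output vat.
    if temp > 85:
        return "waste"
    if temp < 75:
        return "input vat"
    return "output vat"
```

Juice at exactly 75° or 85° is treated here as in range; the text says only 'between 75° and 85°', so the treatment of the exact boundaries is an assumption.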
If the vat emptied (volume = 0.0) the plant shut down and the operator lost all of the


accrued rewards. If the vat overflowed, that juice was classified as waste, reducing the performance of the system. This wasted juice incurred a penalty in calculating the operators' payments. At the end of each trial the computer calculated the performance of the pasteurization system and displayed it to the operator. Each of the 19 subjects operated the plant for 3 days, 2 h a day. Before operating the system, the operators received an extensive written description of their objectives in controlling the plant, the possibility of faults, and the thermo-hydraulic processes involved in the control of the plant. During the first hour of the first day each operator spent 10 trials learning to control the plant. During these trials the operators controlled the plant on alternate trials using only manual control or only automated control. After the training trials the operators were able to control the plant as they liked, switching between automatic or manual control of the three subsystems whenever desired. Each of the first 10 trials was 3 min long. Following the 10 training trials, operators had 10 trials that lasted 6 min. On the second and third days operators had 20 trials, each 6 min long. After the training trials, operators could control the three subsystems with manual control, automatic control, or any combination of automatic and manual control, to manipulate the flow rates and temperatures of the juice and steam to maximize the performance of the plant. Operators could use each of the automatic controllers for the whole trial or any part of a trial, switching between automatic and manual control as they wished. Complete reliance on the automatic controllers of all the subsystems (feedstock pump, steam pump and steam heater) produced juice at an efficiency of 75% to 80%. In addition to the demands involved in controlling the simulation, the operators were responsible for a second task, logging data about the process.
The data logging task required operators to record three system variables every 15 s. The purpose of this task was to replicate some of the other responsibilities of a process control task. At the same time, this task was meant to encourage the operators to use the automatic controllers to cope with the workload, as Muir (1989) reported a tendency for subjects to adopt complete manual control. As will be seen, data logging was not entirely successful in this respect.

2.4. Fault conditions
To investigate the influence of faults on the level of trust and performance, the feedstock pump failed to respond correctly on the sixth trial of the second day (the transient fault), and for all trials on day 3. Figure 2 illustrates the occurrence of faults during the course of the experiment. The operators were divided into four groups, with each group experiencing a fault of one of the following magnitudes: 15%, 20%, 30%, and 35%. The magnitude of the fault corresponded to the difference between the actual and the target pump rate. When the faults occurred the actual pump rate failed to converge to the target pump rate, whether chosen by the operator or the automatic controller. Instead, it converged to the selected value ± the percentage of the fault. That is, if a value of 50 were selected, and a 20% fault was present, the pump would converge to either 40 or 60. The positive or negative value was selected at random, with equal probability, each time a new target rate was selected. This occurred whether the command was issued by the operator or by the automatic controller. Therefore, the faults not only influenced the manual control of the feedstock pump, but also degraded the performance of the automatic feedstock pump.
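The fault mechanism can be sketched directly: the achieved pump rate converges to the target offset by the fault percentage, with the sign drawn at random, with equal probability, each time a new target is commanded. The function and argument names below are our own; only the rule itself comes from the text.

```python
import random

def faulty_converged_rate(target, fault_fraction, rng=random):
    # Rate the feedstock pump actually settles at under a fault:
    # the target offset by target * fault_fraction, with + or -
    # chosen with equal probability each time a new target is set.
    # This applies whether the target was commanded manually or by
    # the automatic controller.
    sign = rng.choice([1, -1])
    return target + sign * fault_fraction * target

# Example from the text: with a 20% fault and a target of 50,
# the pump converges to either 40 or 60.
```

Because the sign is redrawn on every new target, repeated commands under the same fault scatter symmetrically about the target, degrading manual and automatic control alike.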

Figure 2. The design of the experiment, showing the training trials and the occurrence of faults.


There was no necessary difference between the impact of the fault on manual or automatic control. The relative severity of the effect of a fault of a particular size on manual and automatic control depended upon the strategy adopted by the controller in response to the fault. If the manual controller used the same control strategy as the automatic, the effect would be identical. If the manual controller adopted different strategies, the effects of the fault could be more or less severe than the effect under automatic control. This experimental design reveals the effect of fault size on both the loss of trust and the recovery of trust in response to both continuous and transient faults.

2.5. Subjective rating scales
The operators' levels of trust in the system were measured with a subjective rating scale modelled on those used by Muir (1989). After each trial ended, and the computer had displayed the efficiency of the system, the computer displayed a series of questions to establish the operators' subjective feelings about the plant. Operators evaluated the predictability and dependability of the overall system, as well as their faith and trust in the system. The measures of predictability and dependability of the overall system, as well as faith, correspond to different dimensions of trust hypothesized by Muir (1989). Operators responded to the queries in figure 3 on a computer-generated ten point scale. The '1' extreme of the scale was marked with 'NOT AT ALL'. At the other extreme, the '10' was marked with 'COMPLETELY'. The operators received detailed instructions to ensure that they had a clear conception of the meanings of their subjective ratings. These instructions included a description of trust and how it applies to inanimate objects. They are shown in Appendix 1. Following these instructions, the operators received a series of four questions about their trust in everyday objects.
Operators responded to these questions with rating scales identical to the ones used throughout the experiment. We believe that the instructions and practice using the subjective scales ensured a uniform conception of trust between subjects, and stressed the importance of the subjective measures as a part of the operators' task. Since the operators responded to 60 sets of rating scales, fatigue and loss of motivation effects might have occurred. However, we believe the operators made a conscientious effort to provide accurate ratings in response to our stressing the importance of the rating scales. As we shall see, the results provide evidence to this effect. The performance curves rise steadily along a learning curve which requires only one equation throughout the experiment. Trust follows a similar curve during the steady state portion of the experiment. Both trust and performance show

J. Lee and h! Moray

1252

I

T o what extent can you cwnt on h e system to dD izsjob?'

"What dcgnc of faith & you have that t k systan will bt able to oope with a l l system 'states in the future?'

Downloaded by [FU Berlin] at 03:48 20 October 2014

"OvcraU how much do you aust the systtm?"

Figure 3. The questions used to evaluate the operators' trust in the system. An example of the scales, as they appeared on the computer screen is also shown. responses to faults which are qualitatively what would be expected. There is no evidence for major fatigue or loss of motivation. 3. Results 3.1. Performance and tmr Operators quickly became accustomed to the plant and had tittle trouble maintaining a stable system. With the help of the automatic controllers, they were able adequately to control the system, producing an average of 79.9% efficiency by the end of the 10 training trials. Performance, as measured by the percent efficiency, increased as operators learned to control the system. Additionally, when the system contained a fault, (in the last 20 trials), operators' performance initially dropped and then recovered as they learned to accommodate the fault. Trust, predictability, and dependability followed a similar pattern. As the operators became familiar with the system, trust increased. When faults occurred, trust decreased but then recovered. Figures 4 and 5 illustrate the fluctuations of performance and trust over the three days of the experiment. In these figures the data are averaged over the four operators in each condition. From figures 4 and 5 it is clear that there are two main effects on the dependent variable: (1) both trust and performance show prominent learning curves; and (2) plant failures have a marked impact. Both effects are orderly, but several features deserve comment. We turn first to the dynamics of performance.

3.2. The dynamics of performance
The violent oscillations during trials 1-10 should be disregarded. During this period the operators were undergoing forced training, and were compelled to use only manual and only automatic control on alternate trials. After trial 10 they could use any strategy and tactic they wished. The effect of the transient fault on trial 26 is clearly visible, and the magnitude of the change in performance appears roughly proportional to the magnitude of the fault. The overall learning curve, shown in figure 6, appears to be continuous from trial 11 to 40, apart from the disruption on trial 26. (The reason for the deviant point on trial 20 is not known.)

Figure 4. The fluctuation in performance over the course of the experiment. The score is the juice pasteurized as a percentage of the maximum possible amount which could have been pasteurized.
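The efficiency score plotted in figure 4 is a simple ratio of output achieved to the maximum achievable. A minimal sketch of the computation (the function and variable names are ours, for illustration only):

```python
def efficiency(pasteurized, max_possible):
    """Performance score: juice pasteurized as a percentage of the
    maximum possible amount that could have been pasteurized."""
    return 100.0 * pasteurized / max_possible

# E.g. an operator who pasteurizes 160 of a possible 200 litres
# scores 80%, close to the 79.9% mean after the 10 training trials.
print(efficiency(160.0, 200.0))
```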

Figure 5. The fluctuation in trust over the course of the experiment. Trust is a subjective judgement with a maximum possible score of 10, meaning complete trust in the system. (Separate traces are plotted for each of the four fault-size conditions, the largest being 35%.)

During the permanently faulty condition in trials 41-60 there is an immediate drop in performance, followed by a steady recovery, with the equation fitting the recovery similar to the initial learning curve. Figures 6 and 7 show the similarity of the two learning curves. For trials 11 to 40 the slope of the learning curve is 0.0055,

Performance = 4.3 + 0.0055 · Trial, R² = 0.93

Figure 6. Logarithmic plot of performance for trials 11 to 40 (excluding trial 26).

Performance = 4.3 + 0.0087 · Trial, R² = 0.72

Figure 7. Logarithmic plot of performance for trials 41 to 60.

while the slope of the learning curve for the trials with the fault (trials 41-60) is 0.0087. This indicates that the operators' performance increases slightly faster during their recovery from the fault than at the start of the experiment. But in essence, the transient fault has only a transient effect on performance, and the chronic fault does not seem to cause a different learning strategy to appear, neither helping nor hindering learning. What is being learned is an overall strategy of operation, a combination of manual and automatic control distributed over all subsystems which is acceptable to the operators in fulfilling the demands of the task. Superimposed on the performance learning curves is the effect of the faults: performance drops instantly with the occurrence of a fault and, after the transient fault, recovers completely, not exhibiting any lasting effect of the transient fault. The
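The learning-curve slopes discussed above come from fitting a straight line to performance plotted on a logarithmic scale. As an illustration only (the raw data are not reproduced here, and the function names are ours), the sketch below fits ln(performance) against trial number by ordinary least squares, using synthetic noise-free data generated from the reported trials 11-40 relation:

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Synthetic performance generated from the trials 11-40 equation,
# reading it as ln(performance) = 4.3 + 0.0055 * trial.
trials = list(range(11, 41))
performance = [math.exp(4.3 + 0.0055 * t) for t in trials]

intercept, slope = fit_line(trials, [math.log(p) for p in performance])
# Recovers intercept ~ 4.3 and slope ~ 0.0055.
```

This reading is consistent with the other results: exp(4.3 + 0.0055 × 40) ≈ 92%, which agrees with the 91.7% mean performance reported for trials 35-40 in table 3.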

effect on trust, however, is different and will be discussed in the next section. A striking feature of trials 41-60 is how small a lasting effect even a severe fault has on performance. After trial 46 performance recovers, and production from trial 46 onwards is only 8.1% below that of trials 35-40. Interestingly, the recovery of trust begins on about trial 46. Although trust recovers during this period, its recovery is not as rapid as the recovery of performance. During the same period trust has declined by 41.9%, compared to the average on trials 35-40. Table 3 summarizes these results.

Table 3. The initial effect on trust and performance of the chronic fault, and the subsequent recovery during the latter part of the third day. The values are means pooled over all operators.

                 Trials 35-40   Trials 41-46   Change as a % of   Trials 47-60   Change as a % of
                 (SD)           (SD)           trials 35-40       (SD)           trials 35-40
Performance      91.7 (4.5)     76.5 (13.2)    -16.6%             84.3 (11.8)    -8.11%
Trust            8.7 (1.0)      4.5 (1.9)      -47.9%             5.0 (2.7)      -41.9%
3.3. The dynamics of trust
The data for trust show a somewhat similar pattern, with several differences in detail. The loss of trust caused by the transient fault appears to be approximately proportional to the magnitude of the fault. There appears to be little after-effect when the transient fault disappears: the overall learning curve appears to be adequately fitted by a single function from trial 11 to 39. However, close inspection of the trials following trial 26 suggests that, at least for severe faults, recovery of trust is not instantaneous. The effect of the transient seems to last for several trials, being detectable at least out to trial 30 by visual inspection. This effect can be seen more prominently on day 3, from trial 40 onwards, where the level of trust fades for about six trials before reaching its lowest level. After that trial, although the faulty pump is constantly present until the experiment ends at trial 60, trust begins to recover along a curve which bears a marked similarity to the curve following trial 26. The effect of the different magnitude faults on performance and trust for trial 26 was not statistically significant, with F(3,12)=1.665 for trust and F(3,12)=1.548 for performance. The fault magnitude significantly affected the level of trust, but did not significantly affect performance, for trials 41-60. For these trials the effect on trust was significant with F(3,323)=7.905 and p
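The F ratios reported above come from analyses of variance across the four fault-size conditions. As a sketch only, the pure-Python one-way ANOVA below uses made-up data for four groups of four observations, which reproduces the F(3,12) degrees of freedom of the trial 26 analysis (the data and function names are ours, not the authors'):

```python
def one_way_anova_f(groups):
    """F ratio for a one-way ANOVA: between-group MS / within-group MS."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    means = [sum(g) / len(g) for g in groups]

    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)

    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Made-up ratings for four fault-size groups, four observations each.
groups = [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6], [4, 5, 6, 7]]
f, df_b, df_w = one_way_anova_f(groups)
print(f, df_b, df_w)  # 4.0 3 12
```

With 4 groups of 4 observations the degrees of freedom are 3 and 12, matching the F(3,12) values in the text; the larger within-subject analysis over trials 41-60 yields the F(3,323) value quoted above.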
