Visualizing the Relationship between Human Error and Organizational Failure
Chris Johnson
Department of Computing Science,
University of Glasgow, Glasgow G12 8QQ.
Fax: 0141 330 4913, Telephone: 0141 330 6053,
Email:
johnson@dcs.gla.ac.uk, http://www.dcs.gla.ac.uk/~johnson

Abstract: Managerial ‘failure’ plays an important role in major accidents and incidents. Operators have been authorized to deliberately remove safety mechanisms. They have also been instructed to guide application processes into dangerous operating environments. Given the consequences of such interventions, it is surprising that so little attention is paid to the relationship between organizational failure and operator error. One explanation for this is that the tools and techniques that have been developed to analyze human and system failures cannot easily be applied to reason about organizational problems. This paper argues that fault trees help to visualize the ways in which organizational failures create the necessary preconditions for human failure. The approach is also used to focus on the organizational problems that can exacerbate the consequences of those errors in the aftermath of an accident. It is argued that too much attention has been paid to human errors among the causes of accidents and not enough attention has been paid to organizational failure in post-accident events. A collision between a Maryland Commuter train and an American National Railroad Passenger Corporation train on February 16th, 1996 is used to illustrate this argument.
Keywords: human ‘error’; organizational failure; fault trees; safety; accidents.
1. Introduction
Many accident reports cite human failure as a primary cause (Johnson, 1998). The workers at the Bhopal chemical plant pumped Methyl-isocyanate into a leaking tank (Morehouse and Subamaniam, 1986). The officers and crew of the Herald of Free Enterprise set to sea with their bow doors open (Sheen, 1987). The pilot and co-pilot throttled back their one working engine rather than the failed engine during the Kegworth air crash (AAIB, 1990). The first line of fire-fighting operations was delayed by communications failures between various operators during the Channel Tunnel Fire (Department of the Environment, 1996). This paper explains why operator error is such a prevalent cause of major accidents: insufficient attention is being paid to the managerial weaknesses that make systems vulnerable to human ‘failure’.

There has been a considerable amount of research into the causes of human error. Much of this has stressed the psychological and physiological influences that shape operator performance. Some work has examined the effects of high workload on a user’s ability to respond to warning messages (Woods, 1994). Other research has focussed on the impact of noise and vibration on an operator’s decision-making processes, or on individual attitudes to risk taking (Stanton and Glendon, 1996). Unfortunately, much of this work addresses the symptoms of human error rather than its underlying causes (
van Vuuren, Shea and van der Schaaf, 1997). Relatively little work has focussed on the underlying organizational and regulatory weaknesses that lead to high workload or noise in an operator’s environment (Hale, Wilpert and Freitag, 1997). These less direct forms of human failure help to establish the working practices that result in operator ‘error’ (Reason, 1997). Unless we understand the managerial and regulatory causes of human failure, there is little prospect that we will ever reduce the number of accidents and incidents that are blamed on operator ‘error’. Unfortunately, there are few techniques that can be used to reason about the interaction between organizational failure and human error. Cognitive models cannot easily be extended to represent regulatory requirements. Conversely, it is difficult to reason about individual responses to particular system failures using optimization models from operations research (Johnson, 1995). This paper, therefore, shows how fault trees help to visualize the relationship between organizational problems and human failure. This notation is appropriate because it reveals the ways in which organizational failures combine to create the necessary preconditions for operator ‘error’. Fault trees can also be integrated into other analytical tools that support process improvement, such as Management Oversight and Risk Trees (MORT). A further benefit is that the fault tree notation is well understood by existing generations of engineers.
There are, however, a number of reasons why the standard fault tree notation cannot be applied directly to this task. These issues are discussed in more detail in a previous paper (Love and Johnson, 1997). The remainder of this paper, in contrast, demonstrates that the approach can be used to analyze the complex, "messy" blend of operator errors and organizational failures that characterize real-world accidents. A collision between a Maryland Commuter train and an American National Railroad Passenger Corporation train on February 16th, 1996 is used to illustrate this argument.
1.1 The MARC 286 Case Study
The National Transportation Safety Board (NTSB) report provides the following executive summary of the collision that forms the case study for this paper:
"About 5:39 pm on February 16, 1996, Maryland Rail Commuter (MARC) train 286 collided with National Railroad Passenger Corporation (Amtrak) passenger train 29 near Silver Spring, Maryland. En route from Brunswick Maryland to Union Station, Washington DC, MARC train 286 was travelling under CSX Transportation Inc (CSXT) operation and control on CSXT tracks. MARC train 286 passed an APPROACH signal before making a station stop at Kensington, Maryland; proceeded as if the signal had been CLEAR; and then, could not stop for the STOP signal at Georgetown Junction, where it collided with Amtrak train 29. All 3 CSXT operating crew members and 8 of the 20 passengers on MARC train 286 were killed in the derailment and subsequent fire. Eleven passengers on MARC train 286 and 15 of the 182 crewmembers and passengers on AMTRAC train 29 were injured. Estimated damages exceeded $7.5 million" (page vii, NTSB, 1997).
This accident provides an appropriate case study because at first sight it appears to have been caused by a relatively simple instance of operator error. The engineer on board MARC train 286 forgot that the previous signal had been APPROACH rather than CLEAR. They, therefore, reached Georgetown Junction at a speed that prevented them from stopping in time to avoid Amtrak train 29. Later sections will argue, however, that such a superficial analysis ignores the managerial and regulatory factors that contributed to this accident. These organisational factors helped to create a system that relied upon the crewmember’s memory of a signal that they had seen minutes before an unscheduled stop.
1.2 Using Fault Trees to Support Accident Analysis
This paper uses fault trees to represent and reason about the relationship between organisational failure and human error. This notation provides a simple graphical syntax based around circuit diagrams. Figure 1 presents a brief overview of this approach. Andrews and Moss (1993) provide a more detailed introduction.
Figure 1: Fault tree components
Fault trees are typically used pre hoc to analyse potential errors in a design. They have not been widely used to support post hoc accident analysis. They do, however, offer considerable benefits for this purpose. The leaves of the tree can be used to represent the initial causes of the accident (Leplat, 1987). The gates in Figure 1 can be used to represent the ways in which those causes combine. For example, the combination of operator mistakes, hardware/software failures and managerial problems might be represented using an AND gate. Conversely, where there is a lack of evidence about user behaviour or system performance, alternative hypotheses about the course of events might be represented using an OR gate. Basic events can be used to represent the underlying failures that lead to an accident (Hollnagel, 1993).
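To make these gate semantics concrete, the following Python sketch represents a fault tree as nested AND/OR gates over basic events and then evaluates the top event. The classes, event names and truth values are hypothetical illustrations introduced purely for exposition; they are not drawn from Figure 1 or from the NTSB report.

# Minimal sketch of a fault tree as nested AND/OR gates over basic events.
# All names and values below are hypothetical illustrations.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Event:
    """A basic event (a leaf of the tree)."""
    name: str
    occurred: bool

@dataclass
class Gate:
    """An AND or OR gate combining sub-trees."""
    kind: str                                   # "AND" or "OR"
    children: List[Union["Gate", Event]]

def holds(node: Union[Gate, Event]) -> bool:
    """Evaluate whether the event represented by a node is true."""
    if isinstance(node, Event):
        return node.occurred
    values = [holds(child) for child in node.children]
    return all(values) if node.kind == "AND" else any(values)

# A top event that requires an operator mistake AND at least one latent failure.
tree = Gate("AND", [
    Event("operator mistake", True),
    Gate("OR", [
        Event("hardware failure", False),
        Event("managerial problem", True),
    ]),
])

print(holds(tree))    # True: both inputs to the AND gate hold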
Figure 2 uses a fault tree to represent some of the findings in the NTSB report:
"The MARC train 286 engineer apparently forgot the signal aspect, which required him to be prepared to stop at Georgetown Junction, due to interference caused by various events, including performing an unscheduled station stop, that occurred between the presentation of the APPROACH aspect at signal 1124-2 and the STOP signal at Georgetown Junction." (NTSB, 1997, Conclusion 4, page 73)
"Neither the conductor nor the assistant conductor while in the cab control compartment appeared to have effectively monitored the engineer’s operation of MARC train 286 and taken action to ensure the safety of the train" (NTSB, 1997, Conclusion 5, page 73)
Figure 2 also shows that there is no direct means of translating from natural language into the structures of a fault tree. Analysts must identify key events from the prose. These events must then be structured using the gates that were introduced in the previous paragraph. Later sections will integrate the first three conclusions from the NTSB report into the fault tree shown in Figure 2.
Figure 2: An example of a fault tree representing part of the MARC accident
A number of important differences distinguish this use of fault trees from their more conventional application. The output from an AND gate is true if and only if all of its inputs are true. It is difficult to analyse an accident in this way. For example, Figure 2 shows that the collision was the result of four events. The derailment would have been avoided if any one of these events had not occurred. In accident analysis, however, it is difficult to be certain that an event would actually have been avoided in this way. The derailment may or may not have been avoided if the Conductor had intervened. This potential conflict between the pre hoc use of fault trees to support risk assessment and their post hoc use to support accident analysis can be resolved. In the post hoc application of fault trees we are building our model upon a known set of events. Any inferences that depend upon events that are not part of that set must be regarded as speculation unless further evidence can be provided. In the previous example, this might involve empirical or observational studies of the interaction between Conductors and Engineers on MARC trains.
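The counterfactual reading of an AND gate can be sketched as follows. The four contributing events are paraphrased here for illustration rather than copied from Figure 2; the point is simply that negating any single input removes the top event, and that such inferences remain speculative unless further evidence can be provided.

# Self-contained counterfactual sketch: the top event (the collision) is modelled
# as an AND of four contributing events. Event names are paraphrased for
# illustration only; they are not a transcription of Figure 2.
events = {
    "MARC 286 treats signal 1124-2 as CLEAR": True,
    "unscheduled stop at Kensington interrupts the crew": True,
    "conductor and assistant conductor do not intervene": True,
    "Amtrak 29 occupies Georgetown Junction": True,
}

def top_event(state: dict) -> bool:
    """AND gate: the top event only holds if every contributing event holds."""
    return all(state.values())

# Negating any single contributing event removes the top event.
for name in events:
    counterfactual = dict(events)
    counterfactual[name] = False
    print(f"without '{name}': top event holds = {top_event(counterfactual)}")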
2.0 Immediate Causes
Figure 2 focussed on the immediate causes of the MARC collision as they were described in the concluding section of the NTSB report. These findings focus upon the operator ‘errors’ that directly caused the derailment. This analysis can be extended to provide a more complete overview of the events that led to the accident. Figure 3 provides an overview of the findings in the NTSB report. It accurately reflects the direction and focus of the argument in the concluding section. Most of the findings relate to the Engineer’s error rather than to the monitoring activities of the Conductor or their assistant. This is shown by the way in which the left-hand branch of the tree is developed from the intermediate event in which MARC 286 approaches Georgetown Junction as if signal 1124-2 was set to CLEAR. The lack of intervention by the Conductor and Assistant Conductor is less of a focus than the Engineer’s error.
Figure 3: An Extended Fault Tree Showing Events Leading to the MARC Collision
The previous fault tree provides a graphical representation of the focus in the NTSB report. This offers a number of important benefits: in particular, it enables readers to gain an overview of the complex, interconnected events that led to the collision.
Figure 4 focuses on part of the fault tree presented in Figure 3. In particular, it represents some of the non-contributory factors that were identified in the NTSB report:
"Neither the three MARC train 286 crewmembers nor the two Amtrak train 29 locomotive crewmembers were impaired by alcohol or drugs. All train crewmembers were in good health, had no evidence of fatigue, and were experienced in and qualified for their duties." (NTSB, 1997, Conclusion 1, page 73)
These non-contributory factors are represented as house events. They can be turned "on" or "off" during the analysis of a fault tree. The NTSB report indicated that neither drugs nor illness affected the Engineer in the MARC collision. Technically, this can be represented by assigning a probability of 1 to the two house events in Figure 4. However, the ability to switch events on and off also provides analysts with a means of exploring alternative hypotheses about the course of an accident. For instance, a house event can be turned off by assigning it a probability of 0. This can be used to explore what might have happened if the Engineer’s performance had been impaired by drugs or by alcohol. The OR gate would then indicate that the Engineer could forget the APPROACH aspect of signal 1124-2 even if their memory were not impaired by the unscheduled stop. This specific example illustrates how the non-contributory factors in the NTSB report help readers to identify alternative scenarios or hypotheses about the events that might have led to the accident. The level of analysis presented in Figure 4 might seem simplistic. However, it is important to point out that this is the level at which the NTSB report was written. Fault trees simply help to reason about the consequences of the alternative scenarios that were implicit in the conclusions of the report.
Figure 4: Using House Events to Represent Alternative Scenarios
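As one way of making the house-event mechanism explicit, the following sketch treats house events as switches that are assigned a probability of 1 ("on") or 0 ("off"), in line with the discussion above. The event names and the simple OR-gate structure are illustrative assumptions rather than a transcription of Figure 4.

# House events modelled as 0/1 switches (illustrative assumptions only).
house_events = {
    "engineer not impaired by drugs or alcohol": 1,
    "engineer not impaired by illness": 1,
}

def forgets_approach_aspect(unscheduled_stop: bool) -> bool:
    """OR gate: the APPROACH aspect can be forgotten through memory interference
    from the unscheduled stop, or because a house event has been switched off
    to explore an alternative scenario."""
    impairment_assumed = any(value == 0 for value in house_events.values())
    return unscheduled_stop or impairment_assumed

print(forgets_approach_aspect(unscheduled_stop=True))    # events as reported: True

# Alternative hypothesis: turn a house event off (probability 0) to explore
# what might have happened if the Engineer's performance had been impaired.
house_events["engineer not impaired by drugs or alcohol"] = 0
print(forgets_approach_aspect(unscheduled_stop=False))   # still True via the OR gate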
Inhibit gates provide an extension to the approach discussed in the previous paragraph. Rather than assigning probabilities of 0 or 1 to house events, these gates can be associated with a wider range of probabilities. Figure 5 exploits inhibit gates as a means of describing further hypotheses about the potential impact of non-contributory factors in future accidents and incidents. In this case the hypotheses relate to the effects of bad weather and signal failure on the course of the collision:
"The weather conditions did not impair the ability of the MARC train 286 crewmembers to distinguish the indication of the Kensington signal 1124-2." (NTSB, 1997, Conclusion 2, page 73)
"The signal system functioned as designed." (NTSB, 1997, Conclusion 3, page 73)
Figure 5 uses probabilistic inhibit gates because the Engineer’s ability to view signal 1124-2 need not be impaired every time that there was bad weather. Similarly, a signalling failure need not always lead to an incorrect indication for 1124-2. This ability to assign probabilities to representations of human error should not be underestimated. It provides the opportunity for Monte Carlo simulation techniques in which analysts can investigate probable and improbable, frequent or infrequent, traces of interaction. The obvious pitfall is that there must be some means of validating the statistics that are used to prime models such as that shown in Figure 5. The most appropriate means of obtaining these figures after an accident is through reconstruction and empirical tests with other operators. Of course, these studies are inevitably biased by the individual’s knowledge that their performance is being monitored in the aftermath of an accident. These studies have, however, been widely used in previous accident reports (AAIB, 1990).
Figure 5: Using Inhibit Gates to Represent Alternative Scenarios
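The Monte Carlo techniques mentioned above can be sketched in a few lines. The conditional probabilities attached to the inhibit conditions below are illustrative placeholders only; as argued in the previous paragraph, real figures would have to be validated through reconstruction or empirical studies with other operators.

# Monte Carlo sketch of the inhibit-gate reading of Figure 5.
# The conditional probabilities are illustrative placeholders only.
import random

P_SIGNAL_MISREAD_GIVEN_BAD_WEATHER = 0.3   # inhibit condition on the weather branch
P_WRONG_ASPECT_GIVEN_SIGNAL_FAULT = 0.1    # inhibit condition on the signalling branch

def misleading_indication(bad_weather: bool, signal_fault: bool) -> bool:
    """One trial: does either branch of the OR gate fire in this simulation?"""
    misread = bad_weather and random.random() < P_SIGNAL_MISREAD_GIVEN_BAD_WEATHER
    wrong_aspect = signal_fault and random.random() < P_WRONG_ASPECT_GIVEN_SIGNAL_FAULT
    return misread or wrong_aspect

def estimate(trials: int = 100_000) -> float:
    hits = sum(misleading_indication(bad_weather=True, signal_fault=True)
               for _ in range(trials))
    return hits / trials

print(f"Estimated probability of a misleading indication: {estimate():.3f}")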
The previous paragraphs have argued that fault trees can be used to provide an overview of the immediate human errors that contribute to accidents. House events and inhibit gates can also be used to analyse the non-contributory factors that did not play a part in past failures but which might lead to similar errors during the future operation of the system. The following section builds on this analysis and shows how our fault tree model can be extended to capture the managerial and regulatory factors that created the potential for the direct human error in Figures 3 to 5.
3.0 The Organisational Origins of Direct Human Error
The first five findings in the NTSB report focused on non-contributory factors and the interaction between the Engineer, the Conductor and the Assistant Conductor. The remaining twenty-two findings centred on the organisational factors that contributed to the accident. These organisational problems involve both managerial and regulatory failure. Many of them stemmed from a failure to review the human factors implications of increasing the capacity on the Brunswick line. Increasing the capacity implied reducing the train headway to 15 minutes during dense scheduling periods. This, in turn, implied changes to the signalling system. In consequence, the Engineer had to remember the aspect of signal 1124-2 both before and after any stop at Kensington station:
"Additionally, signal 100 was less than the 11,000 feet minimum braking requirement from the EAS-2 signals at CP Georgetown Junction. As a result of the signal modifications, signal 100 was replaced by signal 1124-2, which was now the last automatic wayside signal before EAS-2 for Georgetown Junction and was west of the Kensington station platform" (NTSB, 1997, page 43).
The NTSB report summarised the managerial and regulatory failures that created this situation in the following conclusions from their accident report:
"Had the Federal railroad Administration and the Federal Transit Administration required the CSX Transportation Inc. to perform a total signal system review of the proposed signal changes that included a human factors analysis with comprehensive failure modes and effects analyses, this accident may have been prevented". (NTSB, 1997, Conclusion 7, page 73)
"Federal funds granted for the signal modifications on the CSXT Brunswick Line to accommodate an increase in the number of Maryland Rail Commuter trains did not ensure that the safety of the public was adequately addressed" (NTSB, 1997, Conclusion 8, page 73)
"The Federal Railroad Administration relied on the need for increased vigilance of wayside signals and special actions in operating rules, such as the crew communication rule of emergency order 20, does not adequately safeguard the public" (NTSB, 1997, Conclusion 10, page 73)
"Had a train control system that could utilise the cab signal equipment on the Maryland Rail Commuter cab control car been a part of the signal system on the Brunswick Line, this accident may not have occurred." (NTSB, 1997, Conclusion 11, page 73)
Figure 6 illustrates how these findings can be integrated into the fault tree model. A cursory inspection reveals the additional complexity that is introduced when investigators consider the deeper sources of organisational failure that contribute to major accidents and incidents.
Figure 6: Using Fault Trees to Represent the Organisational Precursors to Human Error
Previous sections have introduced non-contributory causes into our fault tree model of human error and organisational failure. Drugs and illness were ruled out as influences on the operators’ behaviour. By considering these causes, analysts can identify alternative scenarios that might lead to similar accidents in the future. Conversely, accident reports often speculate about events that might have prevented the accident, or that might prevent similar accidents in the future:
"A fully implemented positive train separation control system might have prevented this accident by recognising that MARC train 286 was not being operated within allowable parameters, based on other authorised train operations, and would have stopped the train before it could enter into the unauthorised track area" (NTSB, 1997, Conclusion 12, page 73)
Figure 7 captures this finding. The diagram again illustrates the relationship between the natural language comments of the NTSB report and the formal analysis techniques that can be applied to fault tree diagrams. In this case, the house events are assigned a probability of 1 to simulate the events leading to the collision: the Federal Railroad Administration and the Federal Transit Administration allowed the proposed signal changes without a comprehensive failure modes and effects analysis. According to Figure 7, if the house events had been false and an analysis had been conducted, then a positive train separation control system would have been introduced. In consequence, the highest-level conjunction would not have been true and the derailment would not have taken place.
Figure 7: Using Fault Trees to Represent the Organisational Precursors to Human Error
The previous analysis raises many questions about the role of organisational failure in major accidents. It is not certain that a failure modes and effects analysis would have led to the introduction of a positive train separation system, as suggested in Figure 7. This objection can be represented by replacing the house events with an inhibit gate. Analysts could then assign a probability to the introduction of a train separation system given that a failure modes and effects analysis had been conducted. This approach, in turn, raises further questions about quantified approaches to group decision making. It is unclear how reliable data might be obtained and validated for such an analysis. What figure would an accident investigator be justified in assigning to a particular outcome of the failure modes and effects analysis? Such questions point towards a need to explore the relationship between economics or management theory and cognitive science (Johnson, 1995). The former approaches provide accounts of group decision making under uncertainty. The latter provides more qualitative insights into individual instances of human ‘error’.
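One way of making these questions concrete is a simple sensitivity sweep over the probability that would be attached to such an inhibit gate. Every figure in the following sketch is an illustrative assumption rather than a validated estimate; it is precisely the justification of such figures that remains an open problem.

# Sensitivity sketch for the inhibit-gate reading of Figure 7. All probabilities
# below are illustrative assumptions, not validated estimates.
def p_collision(p_fmea_conducted: float, p_pts_given_fmea: float) -> float:
    """The collision is assumed to be prevented whenever positive train
    separation is in place; otherwise the remaining events are assumed to
    unfold as they were reported."""
    p_pts_in_place = p_fmea_conducted * p_pts_given_fmea
    return 1.0 - p_pts_in_place

for p_pts_given_fmea in (0.25, 0.50, 0.75, 1.00):
    print(f"P(separation system | analysis) = {p_pts_given_fmea:.2f} -> "
          f"P(collision) = {p_collision(1.0, p_pts_given_fmea):.2f}")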
4.0 The Aftermath
Previous sections have shown that fault trees provide one means of integrating observations about organisational failure into an analysis of more direct forms of human error. Previous diagrams have, however, suffered from a weakness that is common to many human factors investigations. There is a preoccupation with the causes rather than the consequences of an incident. This is a significant limitation because more lives may be lost in the aftermath of an accident than through its immediate effects. In the MARC case study, at least eight of the eleven fatalities were caused by events that occurred after the immediate collision. Figure 8 alters the perspective of the previous fault trees by focussing on the immediate aftermath of the collision. It represents the following conclusions from the NTSB report:
"The emergency egress of passengers was impeded because the passenger cars lacked readily accessible and identifiable quick-release mechanisms for the exterior doors, removable windows or kick panels in the side doors, and adequate emergency instruction signage." (NTSB, 1997, Conclusion 13, page 73)
"The catastrophic rupture of the Amtrak unit 255 fuel tank in the collision with the MARC cab control car 7752 released fuel, which sprayed into the interior of the cab control car, and resulted in the fire and at least 8 of the 11 fatalities." (NTSB, 1997, Conclusion 18, page 74)
"Even though the Montgomery County Fire and Rescue Service personnel responded promptly to the emergency, they could do nothing to save the lives of the accident victims because passenger coach cab control car 7752 was already completely engulfed in flames when the fire fighters arrived on the scene." (NTSB, 1997, Conclusion 21, page 74)
Figure 8: Fault Tree Showing Events Following the MARC Collision
As can be seen, there is an even greater emphasis on organisational problems in the aftermath of an accident than in its immediate causes. This analysis highlights an important distinction between two different forms of organisational failure: managerial failure on the part of the operating companies and regulatory failure on the part of the bodies that oversee them.
The previous diagram focussed on managerial failure. The train operators failed to provide adequate emergency instructions or escape mechanisms in their passenger car. However, the NTSB report also found regulatory problems that exacerbated the consequences of this accident:
"The absence of comprehensive Federal passenger car safety standards resulted in the inadequate emergency egress conditions." (NTSB, 1997, Conclusion 14, page 73).
"A need exists for Federal standards requiring passenger cars be equipped with reliable emergency lighting fixtures with a self-contained independent power source when the main power supply has been disrupted to ensure passengers can safely egress." (NTSB, 1997, Conclusion 15, page 74)
"Prescribed inspection and maintenance test cycles are needed to ensure the reliable operation of emergency windows in all long-distance and commuter rail passenger cars." (NTSB, 1997, Conclusion 16, page 74)
Figure 9 shows how these regulatory weaknesses contributed to the managerial failures shown in Figure 8. Such an integrated approach is critical if readers are to gain an overview of the relationship between operational practices and the regulatory structures that guide those practices.
Figure 9: Representing Regulatory Problems in an Extended Fault Tree
The previous fault tree illustrates how the lack of Federal standards for reliable emergency lighting combined with the lack of suitable signs in the MARC car to prevent passengers from finding the emergency exits. Similarly, the lack of more general Federal safety standards combined with the inadequate maintenance cycles of the MARC operators to prevent passengers from using the exits once they had located them. Even this level of analysis simplifies the organisational problems that were uncovered by the NTSB:
"Because other commuter passenger cars may also have interior materials that may not meet specified performance criteria for flammability and smoke emission characteristics, the safety of passengers in those cars could be at risk." (NTSB, 1997, Conclusion 19, page 74).
"The Federal guidelines on the flammability and smoke emission characteristics and the testing of interior materials do not provide for the integrated use of passenger car interior materials and, as a result, are not useful in predicting the safety of the interior environment of a passenger car in a fire." (NTSB, 1997, Conclusion 20, page 74)
Each stage of this analysis takes us further and further away from the Engineer’s initial error. It also moves us further and further away from most of the analysis techniques that are being developed within human factors and systems engineering. The NTSB also identified a range of further management problems:
"The CSX Transportation Inc. personnel operating Maryland Rail Commuter passenger trains are not adequately trained to understand and, therefore, execute their responsibilities for passengers in emergencies." (NTSB, 1997, Conclusion 24, page 74)
Figure 10 extends the previous diagrams to introduce these additional findings. This fault tree provides graphic evidence of both the complexity and the diversity of the organisational factors that can exacerbate the effects of an accident and frustrate a co-ordinated response to any incident.
Figure 10: Fault Tree Showing Organisational Failures and Direct Operator Error.
5.0 Conclusion
This paper has extended the application of fault trees beyond their normal use in systems engineering. This notation provides important visualisation properties that enable readers to gain an overview of complex, interconnected events. For example, the previous diagram shows how the lack of Federal guidelines contributed to the passengers’ difficulties in escaping during the fire. Our use of fault trees also provides visual support for the argument that post accident events are just as important, if not more important, than the more immediate human errors that lead to many accidents. Most of the fatalities in the MARC collision could have been avoided if managerial and regulatory structures had ensured the provision of effective escape mechanisms.
The previous argument introduces the second focus of this paper. It has been argued that too little attention has been paid to the role of organisational failure in major accidents. The role of managerial and regulatory practices as pre-conditions for human error has been particularly neglected. There are some exceptions to this criticism (Reason, 1997; Hale, Wilpert and Freitag, 1997; van Vuuren, Shea and van der Schaaf, 1997). However, most human factors and systems engineering research has focused on the immediate causes of operator error and system failure rather than on the organisational context of those errors.
Distributed cognition, situation awareness, high workload and mode confusion have become part of a mantra that is repeated with increasing frequency in accident reports. Their prominence as causal factors in these documents often obscures wider issues to do with workplace organisation and industrial regulation. The NTSB case study is an exception to this general criticism. It is rare to find an official report that is so candid in its analysis of managerial and regulatory practice. This is a result of the NTSB’s position outside the Federal regulatory mechanisms that protect the railroads. Many other reporting agencies lack this independence. This reduces the likelihood that they will examine the managerial and regulatory practices that create the context for individual human errors.
Acknowledgements
Thanks go to the members of the Glasgow Interactive Systems Group (GIST) and to the Glasgow Accident Analysis Group. This work is supported by the UK Engineering and Physical Sciences Research Council, grants GR/JO7686, GR/K69148 and GR/K55040.
References
Air Accidents Investigation Branch, Department of Transport, Report on the Accident to Boeing 737-400 G-OBME near Kegworth, Leicestershire on 8th January 1989, number 4/90, Her Majesty's Stationery Office, London, United Kingdom, 1990.
J.D. Andrews and T.R. Moss, Reliability and Risk Assessment, Longman Scientific and Technical, Harlow, United Kingdom, 1993.
Department of the Environment, Transport and the Regions, Inquiry into the Fire on a Heavy Goods Vehicle Shuttle 7539 on 18 November 1996.
D. Busse and C.W. Johnson, Modelling Human Error within a Cognitive Theoretical Framework. In F.E. Ritter and R.M. Young (eds.), The Second European Conference on Cognitive Modelling, Nottingham University Press, 90-97, 1998.
A. Hale, B. Wilpert and M. Freitag, After the Event: From Accident to Organisational Learning, Pergamon Press, New York, United States of America, 1997.
E. Hollnagel, The Phenotype Of Erroneous Actions, International Journal Of Man-Machine Studies, 39:1-32, 1993.
C.W. Johnson, Decision Theory And Safety-Critical Interfaces. In K. Nordby, P.H. Helmersen, D. Gilmore and S. A. Arnesen (eds.), Interact '95, Chapman and Hall, London, United Kingdom, 127-132, 1995.
J. Leplat. Accidents and Incidents Production: Methods of Analysis. In J. Rasmussen, K. Duncan and J. Leplat (eds.), New Technology and Human Error. John Wiley and Sons Ltd, 1987.
L. Love and C.W. Johnson, Using Diagrams to Support the Analysis of System 'Failure' and Operator 'Error'. In H. Thimbleby, B. O'Conaill and P. Thomas (eds.), People and Computers XII: Proceedings of HCI'97, Springer Verlag, London, United Kingdom, 245-262, 1997.
W. Morehouse and M.A. Subamaniam, The Bhopal Tragedy. Technical Report. Council for International and Public Affairs, New York, United States of America, 1986.
National Transportation Safety Board, Collision and Derailment of Maryland Rail Commuter MARC Train 286 and National Railroad Passenger Corporation AMTRAK Train 29, Near Silver Spring, Maryland on February 16, 1996. NTSB Report RAR-97/02. Washington, United States of America, 1997.
J. Reason, Managing the Risks of Organisational Accidents, Ashgate, Aldershot, United Kingdom, 1997.
Sheen, Formal Investigation into the Sinking of the mv Herald of Free Enterprise, UK Department of Transport, Report of court 8074, Her Majesty’s Stationery Office, 1987.
N. Stanton and I. Glendon, Risk Homeostasis and Risk Assessment, Safety Science, 22(1-3):1-13, 1996.
W. van Vuuren, C.E. Shea, T.W. van der Schaaf, The Development of an Incident Analysis Tool for the Medical Field, Technical Report EUT/BDK/85, Eindhoven University of Technology, Faculty of Technology Management, 1997.
D. Woods, Cognitive Demands and Activities in Dynamic Fault Management, In N. Stanton (ed.), Human Factors of Alarm Design, Taylor and Francis, London, United Kingdom, 1994.