In this short series, I highlight seven foes and seven friends of system safety, both for explanation and intervention. Each is a concept, meme, or device used in thinking, language, and intervention (reinforced by more fundamental foes that act as barriers to thinking). They are not the only foes or friends, of course, but they are significant ones that either crop up regularly in discussions and writings about safety, or else – in the case of friends – should do.
In this post, I outline seven foes of explanation. To keep it short (because I usually intend that, but rarely succeed), I limit each explanation to an arbitrary 100 words.
In this series:
- Seven foes of explanation in system safety (this post)
- Seven foes of intervention in system safety
- Seven friends of explanation in system safety
- Seven friends of intervention in system safety
1. ‘Human error’
‘Human error’ is a vague, ambiguous and poorly defined bucket concept that tends to combine psychological variables (such as intention and expectation) and outcome variables (unwanted, by someone). From a psychological perspective (concerning departures from one’s own intentions or expectations), the concept is less problematic, but focuses on the head at the expense of the world. As an explanation in a complex system, the concept is widely misused and abused, especially to infer causation.
2. Causal chains
The idea of causal chains, including domino or ‘5 Whys’ approaches, implies a linear ordering of cause and effect. Causal chains force people to think of complex socio-technical systems as if they were ordered technical systems, with clear, linear cause-effect relationships between components. In reality, complex socio-technical systems are defined more by non-linearity, temporariness of influence, and emergence.
3. Root cause(s)
At face value, the idea of a root cause “that, if removed, prevents recurrence” is obviously nonsensical; why-because arguments can go on ad infinitum. The oft-used stopping point “that management can control” is convenient and easily abused. The concept encourages the idea of a single root cause, ignoring causal loops, emergent, synergistic or holistic effects, and often even multiple, jointly necessary, contributory causes. It is, of course, an efficiency-thoroughness trade-off by the analyst, but hidden behind the illusion that going ‘down and in’ will get you to the ‘real cause’, which is actually a social construction.
4. Causes, generally
The way we think of ‘causes’ in the analysis of complex work situations is often at odds with the conceptual and theoretical basis of causation. While the concept may seem unproblematic when it comes to physical cause-effect relationships, such as a hand pressing a button or pulling a lever, the same cannot be said for relationships involving less visible, less tangible system components. As one goes up and out into the system and context or environment, or – at a psychological level – down and in to the human mind, it is more advisable to refer to interaction and influence.
5. Loss of situation(al) awareness/crew resource management
‘Situation(al) awareness’ is an aggressive concept that emerged from the pilot community, and subsequently human factors engineering, before taking on a life of its own, gobbling up more useful and theoretically valid concepts with long histories in psychological research, and which better define and specify the cognitive processes. CRM has a similar heritage. Both are often used counterfactually as a proxy for ‘human error’, individually or collectively. In the case of loss of SA, it refers to the ‘loss’ of awareness – of past, present, or future (!) – with implications for individuals and system safety.
6. Violations
The term ‘violation’ has an intensity of connotation and implication that – especially in the context of its more common uses – makes it inherently violent. It is one of a few terms in safety that tends to prejudge and label work behaviour without really understanding why work-as-done is not always in accordance with work-as-prescribed, and very often is not and cannot be completely so. Rather than truly understanding these differences, we tend to classify the violations. The term itself acts as a barrier to discussion and reporting of messy reality situations.
7. Monolithic explanations, generally
Monolithic explanations act as proxies for real understanding, in the form of big ideas wrapped in simple labels. The labels are ill-defined and come in and out of fashion – poor/lack of safety culture, lack of CRM, human error, loss of situation awareness – but tend to give some reassurance and allow the problem to be passed on and ‘managed’, for instance via training and safety campaigns. Often, the same term in reverse may be used to ‘explain’ success, meaning that almost all wanted/unwanted outcomes are due to the same one thing, absent or present.
If you want to learn more about how complex systems fail, in the most concise and easy-to-digest form, read this by Richard Cook.