System Safety: Seven Friends of Explanation

In this short series, I highlight seven foes and seven friends of system safety, both for explanation and intervention. Each is a concept, meme, or device used in thinking, language, and intervention (reinforced by more fundamental foes that act as barriers to thinking).  They are not the only foes or friends, of course, but they are significant ones that either crop up regularly in discussions and writings about safety, or else – in the case of friends – should do.

In this post, I outline seven friends of explanation. To keep it short (because I usually intend that, but rarely succeed), I limit each explanation to an arbitrary 100 words.

In this series:

  1. Seven foes of explanation in system safety
  2. Seven foes of intervention in system safety
  3. Seven friends of explanation in system safety (this post)
  4. Seven friends of intervention in system safety
[Image: Heathrow. Photo: Tim Caynes, CC BY-NC 2.0, https://flic.kr/p/6BNpAf]

1. The [degraded] system

The ‘system’ in system safety does not operate as designed or as prescribed. It is neither fully understood nor fully understandable, and only slivers of performance can be measured. There are degraded resources (staffing, competencies, equipment, procedures, time) and – often – inappropriate constraints, punishments and incentives, whose effects are not as imagined. There are also gaps between these elements of the system, and people – the flexible system element – have to stretch to bridge these gaps, resolving the unforeseen pressures and dilemmas that result. While we are mostly successful, sometimes the reality of the system surfaces in unwanted ways.

2. Goal conflicts 

Safety is just one of several goals, alongside cost-efficiency, productivity, capacity, security, and environmental factors such as noise and emissions. Safety is rarely of highest priority in any permanent sense. Instead, there are almost always conflicts or tensions between goals, presenting stakeholders with dilemmas. As situations change over time, different goals and relative priorities will be perceived differently by different individuals and groups. Goal conflicts will also look different in hindsight, when one has access to more information, including the outcome. While the solutions to goal conflicts may seem ‘obvious’ looking back, they were gambles when looking forward.

3. Work-as-done

We often base safety-related work on work-as-imagined, -prescribed, and -disclosed. In doing so, we often neglect the real thing – work-as-done. Work-as-done is what people do to meet their goals during expected and unexpected situations. It is characterised by patterns of activity to achieve a particular purpose in a particular context. It may look messy, but in fact it is the environment that is messy. The work is adaptive. Work-as-done varies between people and situations, and much of it is in the head. So any understanding – gained via listening, observing, recording, and modelling – will only ever be partial and approximate.

4. Trade-offs and adjustments

People work not in order to ‘be safe’, but to meet demands. Constant performance adjustments and trade-offs are required in order to meet variable, unpredictable demands, and to resolve goal conflicts. When we look at human performance, even when simply walking in a crowd, all we do is adjust and adapt to a dynamic, uncertain environment. We have to make trade-offs and choose among (often sub-optimal) courses of action, and make adjustments to our plans and responses as situations unfold. This is mostly very successful, and needs to be understood from an inside perspective, whether the outcome is as expected or not.

5. Local rationality

Work-as-done is guided by the local rationality principle: people do things that make sense to them given their goals, the evolving situation, and their understanding of it at a particular moment. Our rationality is not only bounded by human limitations, complexity, and time available, but local to the situation and our experience. Everyone has their own local rationality. We need to understand how people make sense of situations and how they choose to act. This requires empathy and careful discussion and observation to understand work-as-done (in the head and the world) and what helped and hindered it.

6. Interactions and patterns

In a system, everything is connected to something. While we often attend to components, it is the nature of interactions, along with goals, that characterises the system. These interactions – between human, social, organisational, regulatory, political, technical, economic, procedural, informational, and temporal components – should be a focus of attention, whether considering the past, present or future. When we view the system as a whole, emergent patterns of activity – including flows of activity and information – become evident. These wanted and unwanted patterns can be understood using systems methods, which help to reveal influence in the system, and possibilities for intervention.

7. Strengths and assets

Systems operate successfully, for the most part, because of strengths and assets in the system (especially human strengths such as flexibility, creativity, learning, collaboration, pattern recognition, curiosity, insight, and perspective-shifting). Yet strengths and assets are largely missing from system safety research and practice. In any discussion or analysis, we should start with what’s strong, not what’s wrong. Instead of focusing only on perceived deficiencies, we must find out what capacities ensure that system goals are balanced appropriately. If we don’t explicitly appreciate what we have, how can we know whether interventions – including efficiency-focused cutbacks – are wise?


If you want to learn more about how complex systems fail, in the most concise and easy-to-digest form, read this by Richard Cook.
