In this short series, I highlight seven foes and seven friends of system safety, both for explanation and intervention. Each is a concept, meme, or device used in thinking, language, and intervention (reinforced by more fundamental foes that act as barriers to thinking). They are not the only foes or friends, of course, but they are significant ones that either crop up regularly in discussions and writings about safety, or else – in the case of friends – should do.
In this post, I outline seven friends of intervention. To keep it short (because I usually intend that, but rarely succeed), I limit each explanation to an arbitrary 100 words.
In this series:
- Seven foes of explanation in system safety
- Seven foes of intervention in system safety
- Seven friends of explanation in system safety
- Seven friends of intervention in system safety (this post)

1. Acceptance of uncertainty
Whether one is intervening* to try to understand a situation or intentionally to bring about change, it is important to accept that one probably does not and cannot fully understand a complex situation or sociotechnical system. Once one accepts this, unwarranted confidence reduces, and the need for competency, time, and information becomes clearer. With competency, time, and information, the form of practical arrangements for understanding the system at all stages of its lifecycle becomes clearer, including during implementation, where surprises resulting from intervention will tend to emerge.
2. Competency, expertise and involvement
If you want to intervene in a system, you need expertise in system safety. It is astonishing how often this simple fact is neglected. Suitably qualified and experienced persons (SQEPs) are needed, with recognised multidisciplinary competencies and perspectives from safety science, safety engineering, human factors/ergonomics, psychology, anthropology, and related disciplines. Such expertise is often missing (e.g., HF/E competency in healthcare). And of course competency is also needed from those who do the work. Learning teams and action research are examples of the use of competency in intervention.
3. Research
When intervening in a system for understanding or intentional change, an important initial step is to gather knowledge. For system safety, this may include original research for new knowledge, or summaries, reviews or syntheses of existing sources of knowledge. The knowledge may relate to a topic within a safety-related discipline (e.g., in scientific journals), a sector (e.g., aviation, healthcare), or an organisation (e.g., its history of interventions). In system safety, this step is often missing in practice, resulting in ineffective interventions. Greater attention to research provides data, concepts, theories and methods to guide practice, benefiting safety and effectiveness more generally.
4. Listening and observing
Two fundamental methods for understanding systems are observing people at work and listening to people talk about their practice – how and why they intentionally make and transform the world – including the context of practice. These activities, while often lacking in practice, are vital to increase congruence between work-as-imagined and work-as-done, via appropriate alignment rather than simple compliance. Accepting the equivalence of failure and success in terms of their origins in ordinary work, we try to understand not only unusual events, but work in all its forms, whether the outcome is expected or unexpected, wanted or unwanted.
5. Human-centred, activity-focused design
Human-centred design (HCD, e.g. ISO 9241-210) is a design philosophy and process that aims to align systems with human needs. It is relevant to anyone involved in the design or modification of procedures, equipment, or other artefacts. HCD requires that stakeholders are involved throughout design and development, which is based on an explicit understanding of people, activities, tools, and contexts. The process is refined by iterative user-centred evaluations and learning cycles. A strong focus on activities helps us to understand not only how the world should adapt to people, but how people adapt to the world.
6. Multiple perspectives and thick descriptions
There tend to be multiple perspectives on situations, events, problems and opportunities. Each may be partial, but together can give a more complete picture. Shifting between different perspectives illuminates different experiences, perceptions and understandings, and how these interact. Different aspects of systems and situations come to light, along with the trade-offs, adjustments and adaptations that are or were locally rational. Multiple perspectives help generate thick descriptions of human behaviour. Facts, along with commentary and interpretations, explain work-as-done in context, such that it becomes more meaningful to an outsider, and possible implications of situations and proposed ‘solutions’ come to the surface.
7. Systems methods
Systems methods help to understand system boundaries, system structure, and system interactions across time and scale. They can make patterns of system behaviour visible, and can reveal previously unknown or unforeseen influences and interactions between parts of the system. Methods can be used for describing, analysing, changing, and learning about situations and systems. Common methods include system maps, influence diagrams, causal loop diagrams, multiple cause diagrams, stock and flow diagrams, activity theory/systems, FRAM, AcciMaps, and STAMP, among others. Such methods can help to go ‘up and out’ to the system context instead of just ‘down and in’ to components, ‘causes’, or events.
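As a purely illustrative aside for readers unfamiliar with the stock-and-flow idea mentioned above: the sketch below is a minimal, hypothetical Python simulation of a single 'stock' (a backlog of unresolved safety issues) with one inflow and one capacity-limited outflow. The scenario, names and numbers are my own assumptions for illustration; none of the named methods (FRAM, AcciMaps, STAMP, etc.) are implemented here.

```python
# Minimal stock-and-flow sketch (illustrative only, hypothetical numbers):
# a single 'stock' of unresolved safety issues, with an inflow of newly
# reported issues and an outflow limited by resolution capacity.

def simulate(weeks=52, reporting_rate=10.0, resolution_capacity=8.0, backlog=0.0):
    """Simple week-by-week update: backlog(t+1) = backlog(t) + inflow - outflow."""
    history = []
    for _ in range(weeks):
        inflow = reporting_rate                                 # new issues reported this week
        outflow = min(backlog + inflow, resolution_capacity)    # resolution is capacity-limited
        backlog = backlog + inflow - outflow                    # the stock accumulates the difference
        history.append(backlog)
    return history

if __name__ == "__main__":
    trajectory = simulate()
    print(f"Backlog after one year: {trajectory[-1]:.0f} issues")  # grows when inflow exceeds capacity
```

Even this toy model makes one systemic pattern explicit: when inflow persistently exceeds capacity, the stock grows without any single 'cause' or event to blame, which is exactly the kind of 'up and out' view such methods encourage.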
* A note on intervention: The term intervene comes from the Latin intervenire, from inter- ‘between’ + venire ‘come’. To intentionally try to understand a situation, or take action to change it (e.g., improve it or prevent it from getting worse), is to intervene. While there may be no intention to change a situation when observing or measuring, change is very often an unintended consequence.
If you want to learn more about how complex systems fail, in the most concise and easy-to-digest form, read this by Richard Cook.