HindSight 30 on Wellbeing is out now

HindSight Issue 30 on Wellbeing is now online at SKYbrary. You can download the full issue and individual articles. HindSight magazine is free and published twice a year, reaching tens of thousands of readers in aviation and other sectors worldwide. You will find an introduction to this Issue below, along with links to the magazine and the individual articles.


Welcome to Issue 30 of HindSight magazine – the EUROCONTROL magazine on the safety of air traffic management. The theme of this Issue is ‘Wellbeing’, which has an undeniable link to safe operations, though this is not often spoken about. 

This Issue coincides with the COVID-19 pandemic. The authors of the articles in this Issue were considering wellbeing in the context of aviation and other industries. But the articles touch on topics that are deeply relevant to the pandemic. The spread of the virus and its effect on our everyday lives have brought the biological, psychological, social, environmental, and economic aspects of wellbeing into clear view in a way we have never seen before. 

With HindSight, we hope to help support conversations about wellbeing, not only now during the coronavirus pandemic, but also after it. Please let your operational and non-operational colleagues know about this Issue of HindSight.

The next Issue is on ‘Learning from Everyday Work’. What have you, your peers and your organisation learned by paying attention to what goes on in everyday work, whether things go well or not so well? Let us know, in a few words or more, for Issue 31 of HindSight magazine. See inside back cover for details.

HindSight 30 Articles



Why should your wellbeing matter to anyone else? by Suzanne Shale.

The Long Read

Views from the Ground

Views from the Air

Organisational and Professional Initiatives

Views from Elsewhere

In Conversation

Research Showcase

HindSight 30 On-line Supplement

See all editions of HindSight magazine


PTS(D) and Me

Jim White (sports broadcaster): What baffled me – and I don’t know if you’ll just give me a couple of sentences on it – you’re a good-looking man, you played at the highest level, you were a good player, you’re a good husband, a good football administrator, you’re an intelligent fella. So why? 

Clarke Carlisle (professional footballer): Well, here is part of the problem, Jim. Because all of that is an irrelevance. It’s an illness. So that’d be like applying that and saying, why have you got diabetes? It’s all an irrelevance. That circumstantial stuff is irrelevant. When you’ve got an illness, and it takes hold, and it’s not diagnosed or it’s not treated correctly, it will get to that disaster stage. Now that’s why we’re here today. Fellas, we talk to each other. We do talk to each other, but we can be very blasé or flippant. “What’s going on?” “Aw, you know, she’s doing my head in, or he’s doing my head in, they’re doing my head in.” Then we’re like, “Oh OK, let’s pop off.” Do you know what? Ask again. “What can I do for you? Can I help you?” You know, it’s not for me to fix your problems, but what it is for me to do is listen to you, Jim. Because sometimes, as a guy, all you need is to be listened to and acknowledged. You feel dismissed in this generation, in this 24/7. Everyone wants a piece of you. Just listen to me for once. When guys are going through tough times, there’s often that thought that no-one wants to listen. Nobody’s going to help me. You’ve got to ask yourself, have you given someone the opportunity to help you? You know, in the first ten years of my suffering, I didn’t let anyone help me because I thought I had to deal with it.

A good player in deep distress

This is a video-recorded conversation between Jim White, a well-known Scottish sports broadcaster, and Clarke Carlisle, an English former professional footballer and former Chairman of the Professional Footballers’ Association. Carlisle made over 500 appearances during his 17-year career, playing for nine clubs across all four English divisions. At 1.91 m (6 ft 3 in), he was an imposing centre-back and also known to be a highly intelligent footballer (and with a clean sweep of A grades at the end of high school).

In December 2014, Clarke stepped out in front of a lorry in North Yorkshire. He survived physically relatively unscathed, but his mental health deteriorated and he disappeared in 2017, again considering taking his own life. A year later, in the video, he described himself as “very, very content today”.

A 2015 study by FIFPRO (Fédération Internationale des Associations de Footballeurs Professionnels), the worldwide representative organisation for 65,000 professional footballers, found 38% of 607 active professional players and 35% of 219 former players reported suffering from symptoms of depression or anxiety, or both. In a typical 25-person squad, this equates to nine members of the team. Sleep disturbance affected one in four, overall. How much do we talk about mental health, especially in professions where a large majority of staff are male?

Boys don’t cry

Kevin Dooley https://flic.kr/p/5RsnP7 CC BY 2.0

Two lessons seem to be learned by many males growing up in this world, perhaps even by most of us. One lesson, learned from a young age through parenting and early socialisation, is “Boys don’t cry”. The second, learned and reinforced in social groups and via the media, is “Men don’t talk about feelings”. These phrases don’t need to be said, as such. Observing and interacting with others is powerful enough to embed these rules in our psyches. Yet these are perhaps two of the worst lessons in human development, and contribute significantly to many problems of mental wellbeing throughout life. 

My paternal grandfather died in 2001 at 92 years of age. A few years before he died, he disclosed to my father some hitherto unknown family history: in World War II, my grandfather served as a sniper in the British army. Most families know about a relative’s history in military service, but we never did. My father asked his father, “Why did you never tell me?” My grandfather said, “Son, if you saw what I saw, you’d never want to talk about it.” His remark perhaps betrays a common male attitude to disclosure. That day, he did disclose some of his experiences of life as a sniper, but not many. What was clear is that he experienced post-traumatic stress (PTS) as a young man, and he didn’t talk about it.

A journey into PTS(D)

Kevin Dooley https://flic.kr/p/q1ecMj CC BY 2.0

I was not in the armed forces, but throughout my late teens and in my mid-20s I experienced what might be referred to flippantly as ‘a series of unfortunate events’. Many of the events centred around death and near-death experiences. This is not the place to describe them, but they were the sort that many people experience once in their life – loss of a parent at a young age, a high-speed car crash, violence (real and threatened), and vicarious experience of life-threatening health problems, among other events, some of which are described in poetry here. I just happened to experience a series of such events in rapid succession over a few years at a young age. Some of the traumatic events occurred against a backdrop of a generally stressful day-to-day environment, growing up in a family business. 

On leaving home to study psychology, I had already experienced post-traumatic stress symptoms but further events were to occur during and after university studies, which led to reactivation and new symptoms. Some of these stuck for years. I’d heard of ‘post-traumatic stress disorder’ (PTSD), but associated that with soldiers like my grandfather, and didn’t consider it further. Ironically, that is how I felt, like a soldier under persistent attack. The symptoms as a cluster lasted around 12 years overall. Some lay dormant thereafter and re-emerged occasionally.

Throughout the whole experience, I was high-achieving in education and subsequently in employment, and was known to have a very high capacity for hard work. I probably channelled much of my energy into this as a distraction from what I was feeling, while also trying other means to avoid certain thoughts and feelings, because when I opened the door to all of this, it was very difficult to close. And you can’t simply close the door on PTS and PTSD. If you try, it will get in through the windows. 

For the rest of the article, I’ll refer to PTS(D) to cover both PTS and PTSD. The difference in practice is a set of diagnostic criteria and a diagnosis, but PTSD often remains undiagnosed. Post-traumatic stress (PTS) is a normal and generally adaptive response to experiencing a traumatic or stressful event, such as an accident or assault. PTS is very common, and most people will experience some of its signs at some point. If symptoms persist for months or years, they may fit the diagnosis of post-traumatic stress disorder (PTSD), a clinically diagnosed condition listed in the Diagnostic and Statistical Manual of Mental Disorders (fifth edition, May 2013). According to the National Institute of Mental Health, PTSD will affect 6.8% of U.S. adults in their lifetime. 

My primary experience of PTS(D) was in my teens and 20s. But in later adulthood, pre-empting a major life change, I experienced another episode of PTS(D), after a very long hiatus. In this case, the thoughts and feelings mostly did not relate to memories of the original events (which no longer triggered the same responses), but rather the memories of the PTS(D) symptoms from my younger days. 

The experience of PTS(D)

Kevin Dooley https://flic.kr/p/5SHesX CC BY 2.0

Symptoms of PTS(D) will depend on the person, but below are some characteristic symptoms which I experienced over many years. Each has psychobiological explanations. For instance, there are physical changes to brain structures. The amygdala, which helps control emotion, memories, and behaviour, can become enlarged in its right hemisphere, which controls fear and aversion to unpleasant stimuli. The hippocampus, which helps to consolidate the transfer of information from short-term memory to long-term memory, can become smaller. Brain signals are affected, as are hormone levels, with higher noradrenaline/norepinephrine levels and (counterintuitively) lower cortisol levels. There are also psychological, socio-interpersonal, and cultural explanations. Each kind of explanation may be of more or less help to different people. But I’ll not go into those further here. Rather, I’ll describe the symptoms from an experiential point of view, along with some general information. 


Flashbacks

Flashbacks are perhaps the most well-known symptoms of PTS(D) in popular culture. Traumatic events are reexperienced from memory, as if you are back in the scene, with the emotions and often physical sensations that were present at the time. The flashbacks can involve all of the senses, or just one. They can also include feelings of guilt and shame, even if others would see no justification for them. Flashbacks are the first thing that I remember about PTS(D) and were linked to a few traumatic events. Avoiding a trigger or waiting for the scene to play out was often the only thing I could do, but it required a great effort not to get sucked into the vortex of reexperiencing. Flashbacks dissipated over the years, but this took a very long time because I didn’t get proper help at the time, either from friends or professionals. Ultimately, psychotherapy did help, but I waited far too long to get it. 

Nightmares and night sweats

Imagine waking up in the middle of the night, night after night, experiencing the death of a loved one – someone different each night – until almost everyone you love has died. They’ve not all died in reality, but rather in your dreams. It hardly bears thinking about, and I didn’t consciously think about it. My unconscious mind did that for me. I would also wake full of sweat – the bed sheets soaking – and this could happen with or without any nightmare. Nightmares can be one of the most acutely distressing symptoms of PTS(D). As they reoccur, and you come to expect them, sleep can become further affected. This occurred over a number of weeks, several years into the experience, until I sought help from a clinical hypnotherapist. Only two or three sessions of clinical hypnotherapy later, and the nightmares stopped. They returned many years later, but only temporarily, as a sign that all was not well. 

Fight, flight and freeze

So-called hypervigilance and heightened startle reactions are well-known symptoms of PTS(D). For me, they were the most tiring and debilitating. Everyday things and situations can become potential threats. I became hypersensitive to noise and situations where people were gathered together, especially if I sensed trouble. In some senses, this could be useful: I became an expert at spotting potential disorder or violence. But in most ways, it wasn’t useful. A dropped object, slightly raised voices, unexpected claps, or a demonstration elicited the fight, flight or freeze response. Flight was the usual action – escaping the situation, such as a crowd. Occasionally the fight response prevailed – the body and mind ready to fight (usually a non-existent threat). But sometimes, freeze – the most debilitating of the three – would take over instead. With the freeze response, the body literally becomes immobile, stuck like glue. Fight, flight, and freeze is also characteristic of anxiety more generally, even without PTS(D), and can be visible to others, even though observers probably won’t understand what’s going on for the person.

Dissociation: depersonalisation and derealisation 

With PTS(D), the person can begin to experience dissociation, including depersonalisation (feeling as if the self is not real) and derealisation (feeling as if the world is not real). Dissociation is a common and normal response to trauma. Like fight, flight and freeze, it stems from a survival instinct. But if PTS(D) is left to its own devices, it can become more persistent and problematic, when survival is no longer the issue. It is more common among those who have experienced repeated traumas. I experienced derealisation. It is, for me, the hardest symptom to explain, partly because it is hard to explain something that feels so unreal. It is like seeing the world as an observer through a frosted glass window, or as a dream. The episodes – somewhat like flashbacks but without the imagery of past experiences – didn’t seem to last long. The most common sign to others may be an appearance of being spaced out or frozen, or both. Memory loss is also common. 

Physical manifestations and illnesses  

In my teens, in the context of trauma, I experienced tics (involuntary body movements) for the first time. I had no idea what they were, though I later came to learn that tics and Tourette syndrome run in my family. I learned to suppress them by what I thought of as keeping a ‘clean internal environment’ – a sort of mindfulness mixed with avoiding known emotional triggers. In my experience of PTS(D) in later life, the tics returned, but this time I was unable to prevent or suppress them. Having forgotten that I ever had them as a teen, I had to adjust to a totally new self-concept – an adult with tics. This was intensely upsetting to me, even though few probably even noticed the tics, and those who did were certainly not bothered by them. Depending on circumstances, they still come and go, to the extent that I can maintain this internal environment. Deliberate suppression is possible at a price due to the energy required, and when this energy is depleted, it is not possible at all – like trying to resist a sneeze.

Many who experience PTS(D) also experience physical illnesses, often associated with changes to the immune system. I developed shingles (herpes zoster) during my second experience of PTS(D) in adulthood. (As if the emotional pain of PTS(D) symptoms were not enough, shingles is like a Greatest Hits of Physical Pain!) 

Depression and low mood

It is not surprising that depression and PTS(D) often co-occur. The day-to-day toll of PTS(D) is enormous. The flashbacks, the nightmares, the sleep disruption, the fight/flight/freeze state, the physical manifestations and illnesses. Any of these alone can be enough to trigger depression. And yet none of these is even necessary. The event(s) that triggered PTS(D) can be enough. But in combination, the symptoms can be overwhelming. While it is different for everyone, for me it resulted in a combination of numbness, being less interested in doing things I used to enjoy (I could no longer paint or draw, for example), and trouble feeling positive emotions. It was a sort of death of part of my emotional life. There were also bouts of crying, which were often unexpected, and returned later in life as a sort of PTS(D) about PTS(D). Perhaps the most distressing aspect of depression, though, was a loss of sense of self, a kind of detachment from the world, and an inability to envision a positive future. Depression is a very common condition. Each year, 25% of the European population suffer from depression or anxiety (WHO, 2020). Unfortunately, about 50% of major depressions are untreated (WHO, 2020). Thankfully, as I found, help and hope can be found in counselling and psychotherapy, nature, exercise, mindfulness, hobbies, friendship, community and personal development. 

Healing and growth 

Kevin Dooley https://flic.kr/p/7n2tGK CC BY 2.0

It is not for me to give any advice on recovering from PTS(D). This is the job of an experienced doctor, psychologist or psychotherapist trained in PTS(D), not a psychologist who has experienced it. Still, it feels appropriate to share some things that did help me and that seem like fairly low risk things to suggest. What I want to emphasise first though is that PTS(D) is not just a ‘mental’ condition. PTS(D) is profoundly physical. It feels like it is stored in the body and seems to live in you like a virus. Research also indicates a variety of biological changes.

Any effort to recover may need as much focus on the physical as the mental, as well as the interpersonal and the spiritual. For me this included exercise, tai chi, yoga, walking in nature, meditation, and plenty of sleep.

Talking to trusted friends who are able to listen without interruption, judgement or advice is essential. Seeing a professional is especially important if symptoms last for a long period of time – more than a few days or weeks. (For a person to be diagnosed with PTSD, symptoms must last for more than a month, but they often persist for months and sometimes years, as was the case for me.) A general practitioner may be a good place to start, but contact with a psychologist or psychotherapist with experience of traumatic stress may well be a necessary step. 

And then there are the spiritual aspects of PTS(D) and recovery. In my late 20s, I explored these via recovery groups, Buddhist meditation, and Quakerism. Most surprisingly for me, in later life, when I reexperienced PTS(D), I found myself writing poetry. Not having written a poem since high school, and previously having no interest in poetry whatsoever, I wrote dozens of poems within weeks, and published some at https://ptsdays.home.blog/poems/. There is something about writing poetry that is very different to talking or writing prose. I still can’t explain it, but it seemed to help process and wrap up certain events. Perhaps, for me, poetry is most connected to the spiritual aspects of PTSD and recovery. 

“Son, if you saw what I saw, you’d never want to talk about it”

Kevin Dooley https://flic.kr/p/7Me5Yc CC BY 2.0

I thought for a long time before writing this article, and throughout writing it, I have wondered whether I should publish it. The sentiment expressed by my grandfather lives within me. Indeed, this article is very different to every other article that I have ever written. Many of my articles concern risk, but there is no risk to me in writing them. This one does concern me and feels very risky, even though there is – in reality – very little risk. 

But I’m writing this article for a few reasons. The first is for me. Writing is the way that I make sense of the world and in this case, my experience. PTS(D) is the bull in the china shop, and it’s hard to make sense of what’s going on from within the china shop. Writing prose or poetry is like watching a film of the bull from outside of the window. Getting out of the china shop in order to take this perspective is difficult, and writing does not come naturally for everyone. I never imagined writing poetry either. Letting go of perfectionism and just writing from experience seems to be the gentle way. There are other ways out of the china shop, such as via art, talking, and therapy.

The second reason is that, while I can write about PTS(D), and can now talk about the events, I cannot yet talk about the symptoms and experience of PTS(D), except one-to-one in a safe environment with people I trust. I have given talks in front of over 1000 people, but could not (yet) talk about PTS(D) even in a small group. It may be something to do with a profound lingering sadness at being affected for so long in my young years. So I guess this is the talk I cannot give. 

The third reason is to help normalise PTS(D). Any one of the symptoms of PTS(D) can feel stigmatising to some. While an understanding of the cluster of symptoms that constitute the diagnosis of ‘PTSD’ may well have encouraged me to seek help in my teens, one word would probably have discouraged me: ‘disorder’. As retired U.S. Army General Peter Chiarelli said, “no 19-year-old kid wants to be told he’s got a disorder.” (There is now pressure from outside of the psychiatric profession to ‘drop the disorder’ from such labels.) I didn’t want to think of myself as having a ‘disorder’. It seemed stigmatising. It also didn’t fit my self-concept of being in control; PTS(D) actually heightened a need for control, which dissipated significantly during recovery. But it should be understood that post-traumatic stress is a normal response to traumatic experiences. Most people will experience trauma, and people will have different perspectives on what constitutes trauma for them. If you try to manage it alone, the cluster of symptoms known as PTSD is more likely.

The fourth reason is to encourage others to do what I didn’t do enough – give someone the opportunity to help you. Coupled with this is a hope that those who know someone with the signs and symptoms of PTS(D) might ask twice when the person says they are “fine”. You are not there to cure the person, but can give time, a safe space, and a non-judgemental listening ear, perhaps along with gentle encouragement to seek professional help.

The fifth reason is to give hope. From post-traumatic stress can come post-traumatic growth. I could not have imagined what I have been able to do in the years after PTS(D), including moving around the world and getting through some of the most stressful as well as joyful life events. While the bull does occasionally return to the china shop, it has mostly been relaxed or asleep in the field, which is where it ought to be. 

Overcoming denial 

Kevin Dooley https://flic.kr/p/8YFKs2 CC BY 2.0

Traumatic experiences are so common that you, or someone close to you, are likely to experience them at some point in life, and many of you will experience PTS, which for some will progress to PTSD. Yet trauma is so ubiquitous that we don’t acknowledge it or talk about it. PTS(D) symptoms can remain hidden for months or years after a triggering event, and many will never come to understand or accept that their cluster of experiences really does amount to PTS(D). 

Going back to Clarke Carlisle’s observation, Dr Peter A. Levine, author of the book Waking the Tiger: Healing Trauma, wrote that “Because the symptoms and emotions associated with trauma can be extreme, most of us (and those close to us) will recoil and attempt to repress these intense reactions. Unfortunately, this mutual denial can prevent us from healing. In our culture there is a lack of tolerance for the emotional vulnerability that traumatized people experience. Little time is allotted for the working through of emotional events. We are routinely pressured into adjusting too quickly in the aftermath of an overwhelming situation. Denial is so common in our culture that it has become a cliché.” 

It seems that many of us, and especially men, deny our experiences or else try to fight them alone. This prolongs a struggle that is already too much for any of us individually. If you think you may be experiencing PTS(D), or depression, anxiety, or indeed any mental health problem or experience of distress, ask yourself, have you given someone the opportunity to help you?

Getting Help

The following are some English-language resources.

  • TED-Ed. An educational video about PTSD.
  • PTSD UK. PTSD UK is currently the only charity in the UK dedicated to raising awareness of post-traumatic stress disorder – no matter the trauma that caused it.
  • Mind. A UK-based mental health charity.
  • US Department of Veteran Affairs, PTSD: National Center for PTSD. A research and educational center of excellence on PTSD and traumatic stress.
  • Helpguide. HelpGuide is a nonprofit mental health and wellness website whose mission is to provide empowering, evidence-based information that you can use to help yourself and your loved ones.
  • Black Dog Institute. The Black Dog Institute is dedicated to understanding, preventing and treating mental illness.
  • PTSD Association of Canada. A non-profit organization dedicated to those who suffer from post-traumatic stress disorder (PTSD), those at risk for PTSD, and those who care for traumatized individuals.

Some books that may be helpful:

Haig, M. (2015). Reasons to stay alive. Canongate Books Ltd.

Levine, P. A. (1997). Waking the tiger: Healing trauma. North Atlantic Books.

van der Kolk, B. (2015). The body keeps the score: Mind, brain and body in the transformation of trauma. Penguin.

Sincere thanks to the friends who read this article prior to publishing.


Four Kinds of Thinking: 2. Systems Thinking

Several fields of study and spheres of professional activity aim to improve system performance or human wellbeing. Some focus on both objectives (e.g., human factors and ergonomics, organisational psychology), while others focus significantly on one or the other. Disciplines and professions operating in these areas have a focus on both understanding and intervention. For each discipline, the focus of understanding and method of intervention will differ. For instance, for human factors and ergonomics, understanding is focused on system interactions, while intervention is via design. Understanding alone, when intervention is required, may be interesting, but not terribly useful. Intervening without understanding may have unintended consequences (and indeed it often does). With appropriate understanding and intervention, both system performance and human wellbeing have a chance of being improved.

Understanding and intervention for system performance and human wellbeing are rooted – to some extent – in four kinds of thinking. In this short series, I outline these.

  1. Humanistic thinking
  2. Systems thinking (this post)
  3. Scientific thinking (forthcoming)
  4. Design thinking

Unless we engage in the right kinds of thinking, it is likely that our understanding will be too flawed, partial, or skewed. In this case, intervention will be ineffective or even counterproductive. Integrating all four kinds of thinking involves compromises and trade-offs, as the kinds of thinking can conflict, presenting dilemmas that we must resolve.

NASA Goddard Space Flight Center https://flic.kr/p/vj7kj2 CC BY 2.0

1. Systems Thinking

“In systems thinking, increases in understanding are believed to be obtainable by expanding the systems to be understood, not by reducing them to their elements.”

Russell L. Ackoff, in “Creating the corporate future”, in Understanding Business: Environments, edited by Michael Lucas and Vivek Suneja


It should be self-evident that thinking about systems, or thinking in systems, is important to improving system performance and human wellbeing. Systems cannot be understood, let alone improved, without thinking. These ‘systems’, however, are social constructs, defined for a particular purpose with respect to a boundary of interest. This explains Ackoff’s remark about ‘expanding’ the systems to be understood. He and others (e.g., Checkland, Meadows) are referring to expanding our perspective on the system boundary that we define (zooming out). And when it comes to wellbeing, the broader system in which people work and live has the lion’s share of influence on wellbeing: social conditions, working hours, pressure, opportunities to rest, exposure to hazardous substances and conditions, nutrition, housing, and so on.

Systems thinking is an alternative to analytical thinking – taking things apart, conceptually or physically, and trying to infer the behaviour of the whole from the behaviour of the parts (reductionism). This approach takes us down a never-ending path that, often in business, leads to the individual, even their brains (or other organs), down to microscopic units of analysis. This strips out the context that is so vital to understanding and intervention. Ignorance of systems leads to interventions based on poor understanding, which are therefore ineffective (e.g., unsustainable), have unintended consequences, and are often counterproductive.

The best (e.g., most efficient, safest) parts, designed and managed separately, will not result in the best system. It may well result in a system that works very badly, or not at all. Similarly, an organisation broken down into parts (departments, measures), without significant attention to how it functions as a whole, will not function effectively. Frequently, the result is separate silos and activities that run at cross purposes and are in competition.

Systems thinking is also an alternative to linear cause-effect thinking – considering one thing to be the cause of another if it is necessary and sufficient to produce the behaviour (determinism). Neither reductionism nor determinism has a practical stopping point; it is always possible to go further (though the stopping point is, in practice, often defined by disciplinary boundaries). Nor does either allow for the humanistic principles of choice and intentionality.


When asking ‘what is systems thinking?’, it is helpful to ask ‘what is a system?’ In her book Thinking in Systems, Donella Meadows described a system as “a set of elements or parts that is coherently organized and interconnected in a pattern or structure that produces a characteristic set of behaviours, often classified as its ‘function’ or ‘purpose’”. Russell Ackoff, meanwhile, defined a system as two or more elements that satisfy the following conditions: 1. The behaviour of each element has an effect on the behaviour of the whole; 2. The behaviour of the elements and their effects on the whole are interdependent; 3. However subgroups of elements are formed, each has an effect on the behaviour of the whole and none has an independent effect on it.

Therefore, a system cannot be understood via reductionism and determinism – our dominant modes of thinking. Systems thinking offers a complementary approach that works with the following axioms, among others:

  1. A ‘system’ is a social construct. Systems are not out there waiting to be found, but in us waiting to be identified.
  2. System boundaries are not fixed and are often permeable. Systems exist alongside and within other systems. Boundaries are social constructions.
  3. Systems have a purpose, which can be seen in what the system does. Some systems have purposes of their own, and their parts have purposes of their own. Other systems have purposes that we ascribe to them, or purposes that belong to a bigger system. Purposes at different system levels interact, and often conflict.
  4. A system does something that none of its parts can do, so the essential properties of any system cannot be inferred from its parts (holism). The performance of a system depends more on how its parts interact than how they function independently.
  5. Influence is more important to systems thinking than cause-effect (determinism). Patterns of system behaviour allow us to observe influence. Where cause-effect relations can be ascertained in complex systems, they are often non-linear; small changes can produce disproportionately large (and unpredictable) effects. Effects usually have multiple causes, however. These causes may not be traceable and are socially constructed. 
  6. Complex systems have a history and have evolved irreversibly over time with the environment. Apparent order and tractability is often an artefact of hindsight. 
  7. There will be different assumptions about the ‘system’ under consideration. Systems-as-imagined rarely correspond fully to entities in the world.
  8. Synthesis and analysis are critical to systems thinking, but synthesis distinguishes systems thinking from reductionist thinking.
  9. Understanding can only ever be partial, and can only be approached via interdisciplinary efforts. No single discipline is sufficient and understanding is multi-layered.
  10. There are multiple perspectives on a system from different stakeholders. These multiple perspectives are not a weakness; they are necessary for understanding.
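Axiom 5, on non-linearity, can be made concrete with a toy model. The sketch below is my own illustration, not a method from the text: it uses the logistic map – a classic minimal example of a non-linear system – to show two trajectories that start almost identically but soon bear no resemblance to each other.

```python
def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate the logistic map x -> r * x * (1 - x), a minimal non-linear system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2000)
b = logistic_trajectory(0.2001)  # starting condition differs by 0.05%

# Early on, the trajectories track each other closely...
early_gap = abs(a[1] - b[1])
# ...but after a couple of dozen steps, the tiny initial difference has
# grown to the same order as the values themselves.
late_gap = max(abs(x - y) for x, y in zip(a[20:], b[20:]))
print(f"early gap: {early_gap:.5f}, late gap: {late_gap:.2f}")
```

A cause-effect account of either trajectory alone would miss the point: the behaviour of interest is a property of the system’s structure (the feedback of each value on the next), not of any single step.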


Ackoff explains that systems thinking reverses the three steps of ‘machine age’ thinking: we move from reductionist thinking to systems thinking. The three steps are therefore:

  1. Identify a containing whole (system) of which the thing to be explained is a part;
  2. Explain the behaviour or properties of the containing whole;
  3. Then explain the behaviour or properties of the thing to be explained in terms of its role(s) or function(s) within its containing whole.

The process of synthesis therefore involves zooming out, while analysis involves zooming in. For instance, I work predominantly in aviation, and in particular in air traffic management. I work with air traffic controllers and pilots, and most other professions involved in air traffic management. But in understanding the behaviour of air traffic controllers, I do not go first down to the cognitive, the neuropsychological and the biological. I go up – in the case of air traffic controllers – to the working position, the sector, the control unit, the airspace, the organisation, the regulatory environment, the transport system, the government, the judiciary, and the press, for example. Where I choose to draw the system boundary will depend on my purpose, my understanding, and perhaps my scope for intervention.

So let’s say we seek to understand and intervene with respect to something as local as occurrence reporting. Analysis may allow us to describe certain phenomena, such as low rates of reporting and perhaps self-protective reporting behaviour (e.g., basic description of an outcome). That does not give us understanding. For that, we need to go up and out, to the organisation and perhaps to the judicial system, for both will influence reporting behaviour. Effectively, we are zooming out before zooming in, in order to understand why things work in the way that they work. Now, we can approach an understanding, but never attain one. We therefore remain humble and anchored by uncertainty – a friend of intervention. Now we find that our ability to influence the judicial system is severely limited by hard constraints (e.g., penal codes), but with some room for manoeuvre (e.g., interpretation). So we have some possibilities (e.g., education of the judiciary), but otherwise must now go down and in. With more understanding of the whole, we may be able to intervene where this will do some good (and more good than harm).

There are a number of methods and tools that can help us along the way, such as stakeholder maps, system maps, influence diagrams, multiple cause diagrams, rich pictures, stock and flow diagrams, system archetypes, and so on. But the insights, understandings and perspectives that emerge along the way (e.g., via conversation) are always more important than the outputs.
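One of these tools can be sketched in code. Below is a deliberately minimal stock and flow model; the scenario and all figures are invented for illustration, not taken from the text: a backlog of occurrence reports as the ‘stock’, reporting as the inflow, and capacity-limited investigation as the outflow.

```python
def simulate_backlog(months=12, backlog=50.0,
                     reports_per_month=30.0, capacity=25.0):
    """Toy stock-and-flow model: backlog (stock), reporting (inflow),
    investigation (outflow, limited by capacity). All figures are invented."""
    history = []
    for _ in range(months):
        investigated = min(capacity, backlog)  # outflow cannot exceed capacity
        backlog += reports_per_month - investigated
        history.append(backlog)
    return history

print(simulate_backlog())  # backlog grows steadily while inflow exceeds capacity
```

Even a sketch like this shifts the conversation from single events (‘why was this report late?’) to system structure (‘what happens to any backlog when inflow persistently exceeds capacity?’).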

This kind of thinking might prompt questions such as:

  • What is the purpose of the system, evident from system behaviour?
  • What goal conflicts does the system produce?
  • What is the system boundary?
  • What are the elements of the system?
  • How do these elements interact, and what kinds of patterns emerge?
  • Who are the people that contribute to, influence, and are affected by the system?
  • What is the mental model that affects system structure and patterns of system behaviour?
  • What kind of outcomes emerge from system behaviour?
  • What patterns or archetypes may help explain the system behaviour that we are seeing?
  • How might a proposed intervention influence system behaviour?
  • What perspectives might we take?

Shadow side

The shadow sides of systems thinking are less about systems thinking than about the way in which we typically think, especially in a business context. But this kind of thinking is arguably the most difficult of the four. We are trained to think analytically from schooling (where learning is divided into subjects, which are further reduced to topics), and throughout working life. We also, as Ackoff remarked, think analytically quite intuitively – taking things apart in order to understand how they work. We therefore tend to think in terms of parts and linear cause-effect relations, within a restricted system boundary. This kind of thinking is more suited to simple or obvious systems than to the complex world in which we now live. Systems thinking is difficult, tiring, and we are not trained to do it.

Systems thinking also often frustrates our objectives. The problem here is not systems thinking, though, but our objectives. For instance, we may wish to introduce a performance target, league table, or new incentive. Systems thinking may well suggest unintended consequences, as these interventions sub-optimise the system, perhaps introducing internal competition and unwanted adaptive behaviours. Even the addition of three words to radiotelephony phraseology for clearances to pilots, or seemingly small changes to communication with drivers crossing a runway, may have significant unwanted consequences, not predicted by analytical methods. Systems thinking exposes problems with our understanding and intervention. Sometimes, scientific evidence is demanded for these objections, even though the same was not provided for the intervention itself (and the underlying understanding), or else the evidence was so constrained by reductionism and determinism as to be meaningless in messy, real-world settings.

Additionally, the tools that are routinely in use tend to be reductive, rather than synthetic, aimed at analysing components, not understanding interactions. Where interactions are modelled, they are typically assumed to be fixed (unchanging) and linear (lacking feedback loops), assuming direct cause-effect relationships, with no consideration of emergence. Such tools may also focus only on certain outcomes (e.g., failures), thus giving a partial view of system performance. There is an important distinction between thinking systemically (thinking in an ordered or structured way) and systems thinking (thinking about systems). The former may reinforce doing the wrong thing right (e.g., consistently), making our efforts ever more problematic.

Systems thinking, like scientific thinking, can be depersonalising. The person can seem to be an anonymous system component, less interesting than system interaction. To counter this, systems thinking must be combined with humanistic thinking.

Finally, this kind of thinking can make issues of responsibility and accountability difficult to ascertain. Responsibility for system outcomes now appears to be distributed among complex system interactions, which change over time and space. Outcomes in complex sociotechnical systems are increasingly seen as emergent, arising from the nature of complex non-linear interactions across scale. But when something goes wrong, we as people, and our laws, demand that accountability be located. The nature of accountability often means that this must be held by one person or body. People at all levels – minister, regulator, CEO, manager, supervisor, front line operator – have choice. With that choice comes responsibility and accountability, but choices are also made in a context, including conflicts between goals, production pressure, and degraded resources, including restricted or contradictory information. While it is simplest to ignore aspects of systems that influence decisions and behaviour, it is also unfair, as well as counterproductive in our efforts to improve systems performance and human wellbeing.

Posted in Human Factors/Ergonomics, Safety, Systems Thinking | Tagged , | 1 Comment

Four Kinds Of Thinking: 1. Humanistic Thinking

Several fields of study and spheres of professional activity aim to improve system performance or human wellbeing. Some focus on both objectives (e.g., human factors and ergonomics, organisational psychology), while others focus significantly on one or the other. Disciplines and professions operating in these areas have a focus on both understanding and intervention. For each discipline, the focus of understanding and method of intervention will differ. For instance, for human factors and ergonomics, understanding is focused on system interactions, while intervention is via design. Understanding alone, when intervention is required, may be interesting, but not terribly useful. Intervening without understanding may have unintended consequences (and indeed it often does). With appropriate understanding and intervention, both system performance and human wellbeing have a chance of being improved.

Understanding and intervention for system performance and human wellbeing are rooted – to some extent – in four kinds of thinking. In this short series, I outline these.

  1. Humanistic thinking (this post)
  2. Systems thinking
  3. Scientific thinking (forthcoming)
  4. Design thinking

Unless we engage in the right kinds of thinking, it is likely that our understanding will be too flawed, partial, or skewed. In this case, intervention will be ineffective or even counterproductive. Integrating all four kinds of thinking involves compromises and trade-offs, as the kinds of thinking can conflict, presenting dilemmas that we must resolve.

Steven Shorrock https://flic.kr/p/aBYyUH CC BY-NC-SA 2.0

1. Humanistic thinking

“It is not enough that you should understand about applied science in order that your work may increase man’s blessings. Concern for the man himself and his fate must always form the chief interest of all technical endeavors; concern for the great unsolved problems of the organization of labor and the distribution of goods in order that the creations of our mind shall be a blessing and not a curse to mankind. Never forget this in the midst of your diagrams and equations.”

Albert Einstein, from a speech to students at the California Institute of Technology (in “Einstein Sees Lack in Applying Science“, The New York Times, 16 February 1931)


There are several reasons why humanistic thinking is important to (sociotechnical) system performance and human wellbeing. One reason relates to human wellbeing as something more than an absence of disease, illness or injury, encompassing the body, mind and spirit (or whatever term one wishes to use), individually and collectively. Having worked in psychopathology, Abraham Maslow, the father of humanistic psychology, found that its concepts, language, tools and methods did not serve the needs of the mass of relatively healthy and well-functioning people. According to Maslow, psychology was being viewed from the wrong end of the lens (pathology). The same might still be said of health, and of safety – akin to psychiatry when it comes to organisational functioning. Humanistic thinking encourages us to think more in terms of what we want than what we don’t want; what works than what doesn’t; and assets and human potential than deficits and constraints. Human wellbeing is ultimately linked to human flourishing, individually and collectively, or the idea of ‘actualisation’ in humanistic psychology.

A second reason for the need for humanistic thinking relates to holism. A reductionist focus (typical of science and engineering, including HF/E) tends to mask the person and their unique context. This is exacerbated by the industrial context, characterised by reductionism in the design, analysis, measurement and evaluation of work. The humanistic perspective provides another way of thinking about human beings and human work.

Another reason for the importance of humanistic thinking concerns the relationships through which our work flows. Any practitioner will be aware of the constraining or facilitating influence of relationships, regardless of the technical nature of the work. HF/E, for instance, is officially seen as a “scientific discipline” by the International Ergonomics Association but is more properly described as a blend of elements of science (to explain and predict), engineering (to design for improved performance), and craft (to implement and evaluate) (Wilson, 2000). Humanistic thinking helps to avoid scientism and hard engineering thinking. It also steers us away from ‘technical rationality’ (Schön, 1983) and its assumptions about ‘research application’, recognising that strong theories and inflexible methods can break down in messy situations, requiring reflection-in-action. Humanistic thinking orients our practice so that craft – including reflective practice – is properly valued.


While typically associated with counsellors and psychotherapists, humanistic practitioners (or practitioners of anything, working humanistically) may work in fields such as medicine, education, and social work. Humanistic theory has also been applied to human work and organisational functioning. But what does it mean, exactly? The Association for Humanistic Psychology in Britain usefully summarise five basic postulates, which focus on a view of human beings rather than a discipline or profession.

  1. Human beings, as human, supersede the sum of their parts. They cannot be reduced to components.
  2. Human beings have their existence in a uniquely human context, as well as in a cosmic ecology.
  3. Human beings are aware and aware of being aware – i.e., they are conscious. Human consciousness always includes an awareness of oneself in the context of other people.
  4. Human beings have some choice and, with that, responsibility.
  5. Human beings are intentional, aim at goals, are aware that they cause future events, and seek meaning, value and creativity.

There has been relatively little direct cross-pollination of humanistic psychology into many disciplines associated with human wellbeing. But humanistic thinking is ultimately concerned with people, relationships and contexts, and it has this in common with several disciplines (especially HF/E).


Integrating humanistic thinking means going beyond the ‘tools and methods’ that can be found in so many textbooks (including those of HF/E). One of the essential approaches of humanistic psychology is known simply as ‘listening’: hanging out with people, relating to them, trying to empathise with them and their unique situation and perspective.

Empathy is a rich and complex concept; it may be viewed as a trait or state of a person, a process, and a skill. Empathy is colloquially seen as ‘walking in another’s shoes’. In this sense, empathy can be thought of as ‘perspective taking’: the ability to perceive accurately the frame of reference of another person while maintaining a sense of separateness – the ‘as if’ quality.

Humanistic thinking, in the context of improving system performance and human wellbeing, may involve empathising emotionally, cognitively, physically, and socially. But effective empathy is not as intuitive as we may think. Bohart et al. suggest that we can distinguish between three modes of empathy:

  • Empathic rapport: using empathy to build rapport and support the person.
  • Process empathy: a moment-by moment empathy for the person’s experience, cognitively, emotionally, and physically.
  • Person empathy: an ‘experience-near’ understanding of the person’s world, also known as ‘background empathy’.

Time spent training in fundamental counselling skills is particularly helpful. Also helpful is time spent understanding and practising ethnographic approaches, which are likely to be of more value to humanistic thinking than reductionist scientific and engineering methods. Combined, these can give an insight into the person that analytical methods, even systems thinking methods, cannot.

This kind of thinking might prompt questions such as:

  • What is this person’s story?
  • What does this (situation, job, etc) mean to this person, within the broader context of her or his life?
  • To what degree does the work context respect the person’s autonomy?
  • What tensions exist between freedom and constraint, and how does the person and other people address these?
  • What does a good job look like to this person?
  • How do people perceive their self and their situation, and how might this differ from their ideals?
  • How can work create space for greater flexibility and creativity?
  • How might work contribute to growth, and also to suffering?

Shadow side

Methods that come more from the biological, psychological, and engineering sciences tend to focus on reliability and validity. Humanistic approaches may seem to lack the same rigour, in that they admit different perspectives on people and situations that cannot be controlled by method. While reductionism makes measurement and diagnosis easier (but also less meaningful), holism questions the idea of measurement and diagnosis, at least as it is often applied (e.g., as can be seen in person-centred counselling vs psychiatry). Still, humanistic thinking may be accused of lacking ‘validity’ when viewed from a traditional scientific frame of reference.

Empathy is core to humanistic thinking, but the term has – especially recently – been misunderstood and misused in some quarters. In UX, quick and dirty ’empathy’ has sometimes become a proxy for research (and even a proxy for people, in some persona development). Another problem in practice is the line between empathy and sympathy. Sympathy involves losing a sense of separateness and impartiality or losing the ‘as if’ quality. Sympathy can block the capacity for empathy and so can be counterproductive, blurring the boundaries between ‘me’ and ‘you’, and associated issues (e.g., choice and responsibility).

Another shadow-side consideration is that humanistic thinking, when considered at an individual level, may seem to result in unsustainable solutions, from a systems thinking perspective. In many cases, however, this conflict arises not from humanistic thinking per se (the five basic postulates above), but from sympathy and charity.


How To Do Safety-II

Safety-II, its cousin Resilience Engineering (and offshoots such as resilient healthcare), as well as predecessor concepts and theories, have attracted great interest among organisations and their staff. People, especially front-line staff, recognise the need to understand all outcomes – wanted and unwanted – and the systems and associated patterns of system behaviour that generate these outcomes. The trouble is, people are not sure where to start with ‘doing Safety-II’. Some methods and seemingly complicated words and ideas might seem off-putting. They don’t need to be. In this post, I will provide some initial ideas and inspiration for getting started. The ideas are in plain language, without reference to any specific techniques.

Steven Shorrock https://flic.kr/p/qaBiNp CC BY-NC-SA 2.0

Idea 1: Collaborate

Safety-II and Resilience Engineering are not solo efforts. You can do little of practical benefit alone. In fact, going it alone will almost guarantee a miserable work life. You will start to see the reality of how patterns, system structures and mental models are connected to produce events, both wanted and unwanted. But you will have to stand back and watch how this complexity is boiled down to mechanistic thinking and methods that don’t describe how safety is created, or even how unsafe events really occur. You will also have to observe foes of intervention in action, which almost guarantee unintended consequences. For the sake of sanity, it is almost better not to know how complex systems fail, let alone how they work on a day-to-day basis. Finding a small number of open-minded people who are willing to expand their thinking and listen to ideas and experiences without prejudgement, and who are not hamstrung by personal barriers, is a good place to start. A diverse group that traverses organisational silos is helpful.

Idea 2: Read

If you want to do Safety-II, you have to read. At least a bit. You might find that you don’t have enough time to read technical books. You don’t have to, though you may well want to, at some point. Start by reading some short articles on Safety-II, and associated concepts, by authors with a pedigree in this area. You might want to expand your search terms to ‘systems thinking‘, ‘resilience engineering‘, ‘systems ergonomics and human factors‘. From here you might start to explore methods from social science (e.g., action research, practice theory, ethnography). See where the search takes you, from blog posts (search this blog for a few, as a start), through to White Papers, articles (email the author if you can’t access them), and books. A couple of short articles a week and you’ll be on your way to understanding the key ideas. Be mindful that some of what is written may be way off the mark (what Safety-II isn’t), as Safety-II, like anything else, is subject to the bandwagon effect.

Idea 3: Think

It might seem strange to suggest thinking as a way to do Safety-II or Resilience Engineering. But in many lines of work, we somehow manage to avoid taking a step back to think more holistically about outcomes, work, systems, and the mental models that give rise to all of this. I teach a systems thinking course which is about…thinking. At the end of the most recent course, one participant said that it was the first course they had participated in where they actually had to think, and not just learn content or follow a process. The course doesn’t provide a process, but rather a space to think and to challenge one’s own assumptions. The thinking required involves going up and out to the system as a whole, switching perspectives (stakeholders and situations), and generally questioning how things go. Thinking through situated examples is especially useful, so long as there are links to theory.

Idea 4: Listen and Talk

From the above, and the below, prepare some topics or questions on concepts, methods and everyday work for discussion. Find a room, get some drinks and snacks, and arrange some chairs in a circle. Try to get rid of tables and anything else that gets between you. The questions may emerge from your reading or from your experience…preferably both. For example: If you had to explain to a neighbour why your organisation operated safely, what would you say? What do we do well? What dilemmas do we face? What surprises do we experience? How do we handle them? What unintended consequences have we experienced from interventions? What factors are at play when things go right and wrong? What is the role of designed artefacts and processes versus adaptive performance in creating safety? A good discussion will harvest new insights, including multiple perspectives and thick descriptions.

Idea 5: Write and Draw

Write about your experiences of work in the frame of Safety-II or Resilience Engineering. Think deeply about your own work and the situations you encounter and write in a way that you would explain it to a neighbour. Start to think about patterns of interactions inside and outside of your organisation – micro, meso, and macro. But keep it concrete. How do things influence each other at technical, individual, team, organisational, regulatory, governmental, media, and economic levels, to create patterns and associated wanted and unwanted outcomes? Put the concepts that you read about into the context of your practice and experience of the systems that you are a part of, or interact with. The concepts you encounter will make sense not only from the points of view of what you observe in others’ work, but in what you experience in your own. Keep it short and snappy. Think short vignettes, not a treatise. Sketch out the images that come to mind (e.g., rich pictures) and start to map out some influences that you come across. Remember, thinking is more important than method, and should always precede it.

Idea 6: Observe

Arrange to observe ordinary work. It is best to observe work that you are not intimately involved with, but that you can understand well enough to know what’s going on. This might be another hospital or ward, or another air traffic control room or sector, for instance. It is essential that you have the right attitude – apprentice, not master. It is also essential that the people you are observing consent, and understand the purpose of the observation. If you have another role that may conflict with learning how things work (e.g., competency assessor) then you have some work to do to deconflict these roles and the mindsets and perceptions that may be associated with them. Don’t go with a checklist. Just hang out. Notice how people resolve the dilemmas created by goal conflicts, what trade-offs and compromises are necessary, how people work around a degraded environment (staffing and competency gaps, equipment problems, procedural complexity, etc.), and how – despite the context – things work reasonably well most of the time.

Idea 7: Design

At this point, you may well have ideas about improving the system structure and patterns of system behaviour (including work), to help create the conditions for success to emerge. This effort will always start with understanding the system. You’ll need to understand interactions between people, their activities, their tools, and the contexts of work (micro, meso and macro). It is advisable to avoid major initiatives and ‘campaigns’. Small designed interventions are a good way forward. You may wish, for instance, to: a) make small changes to work-as-done that help balance multiple goals; b) review procedures to remove or reconcile those that are problematic (e.g., conflicting, defunct, over-specified); c) help managers and support staff to become familiar with how the work works; d) adjust buffers or margins for performance; e) review whether onerous analyses of events could be better directed at patterns (e.g., onerous safety analysis of multiple events outside of one’s control); f) create a means of getting regular outside perspectives on your work (perhaps an observer swap arrangement); g) create a means to simulate unusual circumstances and allow experimental performance (not a competency check). The interventions may aim at reducing unhelpful gaps between the varieties of human work (e.g., the ignorance and fantasy, taboo, PR and subterfuge, and defunct archetypes). After designing, iterate through the previous ideas.


The Reality of Goal Conflicts and Trade-offs

by Steven Shorrock

This article is the Editorial published in HindSight 29, October 2019, by EUROCONTROL (available soon at SKYbrary)

Jesper Sehested https://flic.kr/p/EHKy4a CC BY 2.0

“Safety is our number 1 priority!” It’s a phrase that’s sometimes used by trade and staff associations alike, and occasionally by pilots when we are encouraged to listen to the safety briefing, or when a departure is delayed for technical reasons. But I’ve noticed something. Over the last couple of decades that I’ve worked in aviation, I have been hearing the phrase less and less.

Perhaps this is something to do with the so-called ‘rhetoric-reality gap’. There are two kinds of goals, which relate to individuals and organisations. On the one hand, we have stated, declared goals. On the other, we have the goals that are evident from behaviour. In other words, ‘the purpose of a system is what it does’ (POSIWID) – a phrase coined by business professor Stafford Beer. The purpose of aviation is not to be safe per se, but to transport people and goods. In doing so, there are a number of goals. So how can we focus on what the system does and why it does what it does, in the way that it does? What a system does is subject to demand and pressure, resources, constraints, and expected consequences. 

So let’s look at the situation now. Demand is rising faster than at any time in history. According to Airbus, the number of commercial aircraft in operation will more than double in the next 20 years to 48,000 planes worldwide. And according to Boeing, 790,000 new pilots will be needed by 2037 to meet growing demand. But capacity is a critical concern. While average delays in Europe are down, capacity and staffing take the lion’s share of delays, according to EUROCONTROL data. Airports are another major part of the capacity problem. IATA chief Alexandre de Juniac said last year, “We are in a capacity crisis. And we don’t see the required airport infrastructure investment to solve it.”

Growing demand and increased capacity conflict with environmental pressures. At a local level, this can be seen in the ongoing third runway saga at Heathrow, the busiest airport in Europe by passenger traffic. Despite receiving approval from Members of Parliament, expansion is opposed by local and climate groups. In Sweden, the word ‘flygskam’ or flight shame is becoming more than just a buzzword. Fewer passengers are flying to or from Swedavia’s ten airports. At a global level, Greta Thunberg recently headlined the UN Climate summit. She was photographed arriving not by plane, but by yacht, fitted with solar panels and underwater turbines.

While aviation is particularly newsworthy with regard to climate change, the Intergovernmental Panel on Climate Change has estimated that aviation is responsible for around 3.5 percent of anthropogenic climate change, including both CO2- and non-CO2-induced effects. Nevertheless, media and public interest in aviation creates significant pressure. In 2008, aviation sector leaders signed a declaration committing to carbon-neutral growth from 2020, and by 2050 a cut in net emissions to half of 2005 levels.

As well as capacity and environmental demands and pressures, there are increasing concerns about cybersecurity (e.g., GNSS spoofing) and drones. Then there are more familiar financial pressures. At the time of writing, Thomas Cook, the world’s oldest travel company, had collapsed, and Adria Airways had suspended flights.

And now we come to safety. Accidents remain few in number, and flying continues to be the safest form of long-distance travel. But 2018 was a bad year for aviation safety, with 523 on-board fatalities, compared to 19 in 2017, according to IATA. Accidents involving B737 MAX aircraft raised new questions about safety at all levels. Unlike most goals, safety is a ‘background goal’ that tends to come into the foreground only when things suddenly go very badly wrong, or ‘miraculously’ right.

This is only one way in which goals differ. Some goals have a short-term focus, while others are longer term. Some goals are externally imposed, while others are internally motivated. Some goals concern production, others concern protection. Some goals relate well to quantitative measures, while others don’t. Some goals are more reactive, while others are more proactive. Sometimes, goals are compatible and can work together, while at other times they conflict and compete for resources and attention. 

Goal conflicts create dilemmas at all levels, from front line to senior management, regulation and government. Dilemmas create a need for trade-offs and compromises. These decisions are influenced by how we perceive capability, opportunities, and motivation. There are many kinds of trade-off decisions. A familiar trade-off to everyone is between thoroughness and efficiency. Too much focus on either can be a problem. Day-to-day pressures tend to push us toward greater efficiency, but when things go wrong, we realise (and are told) that more thoroughness was required. Another familiar trade-off is between the short- and long-term – the acute-chronic trade-off. Combined with pressure on efficiency, short-term goals tend to get the most attention. And we trade off individual and collective needs and wants, or a focus on components and the whole system. All of these trade-offs have implications for goals relating to safety, security, capacity, cost-efficiency, and the environment. To understand them, we need to understand five truths. 

Five Truths about Trade-offs 

1. Trade-offs occur at all levels of systems. Trade-offs occur in every layer of decision-making, from international and national policy-making to front-line staff. They occur over years and seconds. They occur in the development of strategy, targets, measures, policies, procedures, technology, and in operation. They are often invisible from afar. 

2. Trade-offs trickle down. Trade-offs at the top, especially concerning resources, constraints, incentives and disincentives, trickle down. If training is reduced for cost or staffing reasons, then staff will be less able to make effective trade-offs. If user needs are not met in a commercial-off-the-shelf system, staff will have to perform workarounds. 

3. Trade-offs combine in unexpected ways. Trade-offs made strategically, tactically and opportunistically combine to create both wanted and unwanted outcomes that were not foreseen or intended. We often treat this simplistically.

4. Trade-offs are necessary for systems to work. Trade-offs are neither good nor bad. They are necessary for systems – transport, health, education, even families – to work. And most trade-off decisions can only be made and enacted by people. 

5. Trade-offs require expertise. Trade-off decision-making often cannot be prescribed in procedures or programmed into computers. Decision-making therefore requires diverse expertise, which in turn needs time and support for development. In effect, expertise is about our ability to make effective trade-offs. 

An interesting thing about trade-offs is that they are tacitly accepted, but rarely discussed. Might ‘Safety first!’ risk making us complacent about safety? Reality always beats rhetoric in the end. So we have to talk about goal conflicts and trade-offs. Let us bring reality into the open.

Posted in Human Factors/Ergonomics, Safety, Systems Thinking | Tagged , , , , | 1 Comment

Shorrock’s Law of Limits

Last year, I noticed a tweet from the European Cockpit Association (ECA) on EU flight time limitations (FTLs) (Commission Regulation (EU) 83/2014, applicable from 18 February 2016). The FTLs have been controversial since their inception. The ECA’s ‘Dead Tired’ campaign website lists a number of stories from 2012-13, often concerning the scientific integrity of the proposals, and goal conflicts between working conditions and passenger safety versus commercial considerations. Consecutive disruptive schedules, night-time operations and inadequate standby rules have been highlighted as problems by the ECA. Didier Moraine, an ECA FTL expert, stated that “basic compliance with EASA FTL rules does not necessarily ensure safe rosters. They may actually build unsafe rosters.”

In May 2018, the ECA Twitter account reported that EASA’s Flight Standards Director Jesper Rasmussen reminded a workshop audience that FTLs are to be seen as hard limits, not as targets.

A February 2019 study published by the European Union Aviation Safety Agency (EASA) found that prescriptive limits alone are not sufficient to prevent high fatigue during night flights.

“When you put a limit on a measure, if that measure relates to efficiency, the limit will be used as a target.”

This relates to Goodhart’s Law, expressed succinctly by anthropologist Marilyn Strathern as follows: “When a measure becomes a target, it ceases to be a good measure.” It also relates to The Law of Stretched Systems, expressed as follows by David Woods: “Every system is stretched to operate at its capacity; as soon as there is some improvement, for example in the form of new technology, it will be exploited to achieve a new intensity and tempo of activity.” Woods also notes that this law “captures the co-adaptive dynamic that human leaders under pressure for higher and more efficient levels of performance will exploit new capabilities to demand more complex forms of work.” But this particular aspect of system behaviour concerning limits, simple as it is, is not quite expressed by either.

An everyday example of the Law of Limits can be found in driving. As in most countries, British roads have speed limits, depending on the road type. In 2015, on roads with a 30 mph limit, the average free-flow speed observed at sampled automatic traffic counter (ATC) locations was 31 mph for cars and light goods vehicles. (The figure was 30 mph for rigid and articulated heavy goods vehicles [HGVs], and 28 mph for buses.) In the same year, on motorways with a 70 mph limit for cars and light goods vehicles, the average speed was 68 mph for cars and 69 mph for light goods vehicles. Most drivers will be familiar with the activity of driving as close to the limit as possible. Many things contribute to this, primarily a drive for efficiency coupled with a fear of the consequences of exceeding the limit. Many more examples can be found in everyday life, where limits relating to any measure are imposed, and treated as targets when efficiency gains can be made.

The following is a post on Medium by David Manheim, a researcher and catastrophist focusing on risk analysis and decision theory, including existential risk mitigation, computational modelling, and epidemiology. It is reproduced here with kind permission.

Shorrock’s Law of Limits

Written by David Manheim, 25 May 2018

I recently saw an interesting new insight into the dynamics of over-optimization failures stated by Steven Shorrock: “When you put a limit on a measure, if that measure relates to efficiency, the limit will be used as a target.” This seems to be a combination of several dynamics that can co-occur in at least a couple of ways, and despite my extensive earlier discussion of related issues, I think it’s worth laying out these dynamics along with a few examples to illustrate them.

When limits become targets

First, there is a general fact about constrained optimization that, in simple terms, says that for certain types of systems the best solution to a problem is going to involve hitting one of the limits. This was formally shown in a lemma by Dantzig about the simplex method, where for any convex function the maximum must lie at an extreme point in the space. (Convexity is important, but we’ll get back to it later.)

When a regulator imposes a limit on a system, it’s usually because they see a problem with exceeding that limit. If the limit is a binding constraint — that is, if you limit something critical to the process, and require a lower level of the metric than is currently being produced, the best response is to hug the limit as closely as possible. If we limit how many hours a pilot can fly (the initial prompt for Shorrock’s law), or that a trucker can drive, the best way to comply with the limit is to get as close to the limit as possible, which minimizes how much it impacts overall efficiency.
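The binding-constraint dynamic can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the 100-hour cap and the revenue figure are invented for the example): when the objective rises monotonically with the limited quantity, the optimal feasible choice sits exactly on the cap — the limit becomes the target.

```python
# A minimal sketch of a binding constraint: a linear 'efficiency'
# objective under a hard regulatory cap. All figures are illustrative.

LIMIT = 100             # e.g., maximum flight hours per month (hypothetical)
REVENUE_PER_HOUR = 300  # hypothetical revenue per flight hour

def best_schedule(candidate_hours):
    """Pick the feasible schedule (hours <= LIMIT) that maximises revenue."""
    feasible = [h for h in candidate_hours if h <= LIMIT]
    # Revenue grows with hours, so the maximum is always at the cap itself.
    return max(feasible, key=lambda h: h * REVENUE_PER_HOUR)

print(best_schedule(range(0, 161, 5)))  # -> 100: the optimum hugs the limit
```

Nothing here depends on the specific numbers: any monotonically increasing objective capped by a constraint produces the same hugging behaviour.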

There are often good reasons not to track a given metric, when it is unclear how to measure it, or when it is expensive to measure. A large part of the reason that companies don’t optimize for certain factors is because they aren’t tracked. What isn’t measured isn’t managed — but once there is a legal requirement to measure it, it’s much cheaper to start using that data to manage it. The companies now have something they must track, and once they are tracking hours, it would be wasteful not to also optimize for them.

Even when the limit is only sometimes reached in practice before the regulation is put in place, formalizing the metric and the limitation means that it becomes more explicit — leading to reification of the metric. This isn’t only because of the newly required cost of tracking the metric; it’s also because what used to be a difficult-to-conceptualize factor like “tiredness” now has a newly available, albeit imperfect, metric.

Lastly, there is the motivation to cheat. Before fuel efficiency standards, there was no incentive for companies to explicitly target the metric. Once the limit was put into place, companies needed to pay attention — and paying attention to a specific feature means that decisions are made with this new factor in mind. The newly reified metric gets gamed, and suddenly there is a ton of money at stake. And sometimes the easiest way to perform better is to cheat.

So there are a lot of reasons that regulators should worry about creating targets, and ignoring second-order effects caused by these rules is naive at best. If we expect the benefits to just exceed the costs, we should adjust those expectations sharply downward, and if we haven’t given fairly concrete and explicit consideration to how the rule will be gamed, we should expect to be unpleasantly surprised. That doesn’t imply that metrics can’t improve things, and it doesn’t even imply that regulations aren’t often justifiable. But it does mean that the burden of proof for justifying new regulation needs to be higher than we might previously have assumed.

Posted in Systems Thinking | Tagged , , , , | Comments Off on Shorrock’s Law of Limits

What Human Factors isn’t: 4. A Cause of Accidents

‘Human Factors’ (or Ergonomics) is often presented as something that it’s not, or as something that is only a small part of the whole. Rather than just explain what Human Factors is, in this sporadic series of short posts I will explain what it isn’t. The posts outline a number of myths, misunderstandings, and false equivalencies.

In this series:

  1. What Human Factors isn’t: 1. Common Sense
  2. What Human Factors isn’t: 2. Courtesy and Civility at Work
  3. What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training
  4. What Human Factors isn’t: 4. A Cause of Accidents (this post)

Royal Navy Media Archive CC BY-NC 2.0 https://flic.kr/p/NqZrz5

Human Factors Isn’t a Cause of Accidents

An unfortunate use of the term ‘human factors’ in industry, and in the media, is as an explanation for failure. Through this lens, human factors is (or ‘are’, since the phrase tends to be used as a plural in this context) seen as a cause of accidents or other unwanted events. This immediately confuses the discipline and profession of Human Factors with a narrow, unsystemic view of factors of humans – human factors in the vernacular. (Much as I dislike capitalisation, I will use it here to separate the two.) While human limitations are relevant to accident analysis (and the analysis of work more generally), and indeed form part of many analytical methods, neither the vernacular ‘human factors’ nor the discipline of Human Factors is an explanation for failure. Below, I outline a few problems with this all-too-common perspective.

‘Failure’ means not achieving planned objectives. Since people set objectives, make plans and execute actions to achieve objectives, almost all failure is associated with humans, unless there is some chance agency or natural phenomenon involved (e.g., weather). Even then, one could take a counterfactual perspective, as is often done in accident analysis, and say that humans could have or should have predicted and planned for this.

Logically, ‘success’ has the same characteristics. Humans set objectives, make plans, and execute actions at all levels of system functioning, from law-making to front-line performance. So if failure is down to ‘human factors’ then so is success, which arguably accounts for the majority of outcomes in day-to-day work.

By this reasoning, ‘human factors’ as a cause of accidents is a monolithic explanation – even more so than ‘safety culture’. ‘Human factors’ as a cause of accidents explains both everything and nothing. Having said this, ‘human factors’ is often seen more specifically as a set of factors of humans (humans being unreliable and unpredictable elements of an otherwise well-designed and well-managed system) that are proximal to accidents.

This interpretation has been reinforced by the use of the word ‘organisational’ alongside ‘human’ in some quarters. For instance, the UK Health and Safety Executive used the term ‘Human and Organisational Factors‘ to broaden out the perceived scope of the ‘HOF’ contribution (to incidents and accidents), and there is a growing ‘Human and Organisational Performance’ movement, which has grown from ‘Human Performance‘. This is curious to many Human Factors professionals, because organisations – being created by, comprised of, and run by humans – were always within the scope of Human Factors (sometimes called ‘macro ergonomics‘) from the beginning.

The proximalisation and narrowing of ‘human factors’ becomes especially important with the post hoc ergo propter hoc fallacy, that because an event happened after something (an action or omission) then it happened because of that something. This is especially problematic in complex, high-hazard systems that are highly regulated and where systems are required to account for performance variability, in terms of design, management, and operation.

An example of proximalisation can be seen in the aftermath of the train crash at Santiago de Compostela in July 2013. Human error was immediately reported as the cause. A safety investigation by CIAF (here in Spanish), published in June 2014, found that “driving staff failed to follow the regulations contained in the train timetable and the route plan”. Subsequently, the European Railway Agency (now the European Union Agency for Railways) found that “the emphasis of the CIAF report is put on the direct cause (one human error) and on the driver’s (non-) compliance with rules, rather [than] on the underlying and root causes. Those causes are not reported as part of the conclusions of the report and typically are the most likely to include the organisational actions of Adif and Renfe.” As reported here, “many survivors, campaigners and rail analysts…questioned why rail officers in charge of the train and rail network had not factored in the possibility of human error – particularly at a bend as potentially dangerous as the Angrois curve – and had failed to put in place technology that could mitigate it”.

The safety investigation seemed to mirror a view of causation that allows for counterfactual reasoning only in the proximate sense – who touched it or failed to touch it last. In this case, and many others, it seemed that omissions are only causal when they occur at the sharp-end, even though sharp-end omissions typically occur over the course of seconds and minutes, not months and years.

In the case of Santiago de Compostela, the driver, Francisco José Garzón Amo, was for much of the time since July 2013 the only person facing trial. However, several officials have been named in, and dropped from, judicial proceedings over the years. Their causal contributions seem to be harder to ascertain. At the time of writing, Andrés María Cortabitarte López, Director of Traffic Safety at ADIF, is also facing charges for disconnecting the ERTMS (European Railway Traffic Management System) without having previously assessed the risk of that decision. (Ignacio Jorge Iglesias Díaz, director of the Laboratory of Railway Interoperability at Cedex, said that ERTMS has a failure every billion hours, while part of the safety provided by the ASFA system “rests on the human factor”.) As yet, over seven years later, there is no date set for the oral trial to determine whether the accused are convicted of 80 counts of involuntary manslaughter and 144 counts of serious professional imprudence.

All of this is to say that there are consequences for both safety and justice of the framing of ‘human factors’ as a cause of accidents, and the scope of ‘human factors’ that is expressed or implied in discourse also has consequences. By framing people as the unreliable components of an otherwise well-designed and well-managed system, ‘human factors as a cause of accidents’ encourages brittle strategies in response to design problems – reminders, re-training, more procedures. But this is not all. This perspective, focusing on ‘human factors’ as the source of failure, but not the overwhelming source of success, encourages technological solutionism – more automation. This changes the nature of human involvement, rather than ‘reducing the human factor‘, and comes with ironies that are even less well understood.

So ‘human factors’ isn’t an explanation, but Human Factors theory and method can help to explain failure and, moreover, everyday work. ‘Human factors’ isn’t a reason for failure, but Human Factors helps to reason about failure and, moreover, about everyday work.

Unfortunately, some Human Factors methods that have emerged from a Safety-I mindset (curiously different to the progressive mindset that created the discipline) may have encouraged a negative frame of understanding. The Human Factors Analysis and Classification System (HFACS), for instance, classifies accidents according to ‘unsafe acts’ (errors and violations), ‘preconditions for unsafe acts’, ‘unsafe supervision’, and ‘organizational influences’. The word ‘unsafe’ here is driven by outcome and hindsight biases. Arguably, it should not be attached to other words, since safety in complex sociotechnical systems is emergent, not resultant. Such Human Factors analysis tools typically classify ‘error’ (difficult as it is, to define) and ‘violation’ only at the sharp end (blunt end equivalents are seen as ‘performance shaping factors’ or in the case of HFACS – influences). So, inadvertently, Safety-I Human Factors may have encouraged proximalisation to some degree, linguistically and analytically, since errors are only errors when they can be conveniently bound, and everything else is a condition or influence – ever weakening with more time and distance from the outcomes. Again, this has implications for explanation and intervention.

Still, in the main, Human Factors is interested primarily in normal work, and sociotechnical system interaction is the primary focus of study, not accidents. Within this frame is the total influence of human involvement on system performance, and the effects of system performance on human wellbeing. Even within safety research and practice, there is an increasing emphasis in Human Factors on human involvement in how things go right, or just how things go – Safety-II.

But the term ‘human factors’ will probably be used in the vernacular for some time yet. My best advice for those who use the term ‘human factors’ in their work is to think very carefully before using the term as a cause of, or explanation for, failure. Doing so is not only meaningless, but has potential consequences for safety and justice, and even the future of work, which may be hard to imagine.

Posted in Human Factors/Ergonomics | Tagged , , , , , , , | Comments Off on What Human Factors isn’t: 4. A Cause of Accidents

What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training

‘Human Factors’ (or Ergonomics) is often presented as something that it’s not, or as something that is only a small part of the whole. Rather than just explain what Human Factors is, in this sporadic series of short posts I will explain what it isn’t. The posts outline a number of myths, misunderstandings, and false equivalencies.

In this series:

  1. What Human Factors isn’t: 1. Common Sense
  2. What Human Factors isn’t: 2. Courtesy and Civility at Work
  3. What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training (this post)
  4. What Human Factors isn’t: 4. A Cause of Accidents

Royal Navy Media Archive CC BY-NC 2.0 https://flic.kr/p/CCkksw

Human Factors Isn’t Off-the-shelf Behaviour Modification Training

Human Factors and behaviour modification training have a somewhat complicated relationship. It is not easy to explain, especially in a way with which everyone would agree. I will start by saying that one thing is certain: Human Factors and training-based behaviour modification are not equivalent. But, in my view, training-based behaviour modification can be an application of Human Factors. In other words, the two are not equivalent, but one can be an application of the other. I’ll try to explain.

Human Factors has a core focus that can be described in a few words as ‘fitting the work to the people’ or ‘designing for human use’. It does this in the context of the system as a whole. More formally, there are a number of definitions that help to make the point, but they tend to include two foci: understanding system interactions as the method of understanding, and design as the method of intervention. These foci are not contentious: they are core to many definitions and are the foci of Human Factors textbooks and degrees. My preferred definition was offered by my late PhD supervisor, Prof. John Wilson:

“Understanding the interactions between people and all other elements within a system, and design in light of this understanding.” (Wilson, 2014, p.12)

The word that is sometimes subject to discussion is the word ‘design’. In the context of Human Factors, it can be described as a process for solving problems and realising opportunities relating to interactions between people and all other elements within a system. Some definitions flesh this out a little more, including also the goals of Human Factors, e.g.:

“Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.” (International Ergonomics Association)

(Note that the terms ‘Human Factors’ and ‘Ergonomics’, which originate in the US and Europe respectively, are usually treated as synonymous within the discipline, but often one is chosen over the other in the profession, and in practice more generally, depending also on the country.)

Going back to the origins of Human Factors in WWII aviation, it began with observations around the lack of fit or compatibility between designed artefacts on the one hand, and human capabilities, limitations and needs on the other. While the intention of early researchers was not to create a new discipline, that is effectively what happened, as is the case, I suspect, with many disciplines.

In 1977, the Tenerife runway accident occurred. This led to a renewed focus on behaviour, especially communication and teamwork, and ultimately to the development of crew resource management (CRM). The term CRM was coined by American aviation psychologist John Lauber, who defined it as “using all the available resources – information, equipment, and people – to achieve safe and efficient flight operations”. The concept was further developed and tested by other applied psychologists, such as Bob Helmreich, drawing especially from social psychology.

It is worth saying here that Human Factors and Applied Psychology are closely related, sometimes indistinguishably so in practice. Applied Psychology is one of several core disciplines of HF, and a large proportion of HF specialists, including those involved in the initial development of CRM and TEM, are psychologists. But the fields are also distinct disciplines and professions. In terms of the ICAO SHELL model, psychology tends to focus on the ‘liveware’ and ‘liveware-liveware’ interactions – people, individually and collectively. Human Factors tends to focus on the patterns of interactions between ‘liveware’, ‘software’ (including policies and procedures), ‘hardware’ and the ‘environment’ – the relationships between elements are more interesting and relevant than the elements themselves. Psychology is a human science focused on mind and behaviour. Human Factors is a design discipline focused on system interactions.


Since ‘Human Factors’ was better embedded as a term in aviation, CRM was soon associated with Human Factors in a cockpit and crew context. In a sense, it is an aspect of ‘Human Factors in Operations’, though even then, it is only one aspect – one application. CRM training typically comprises a training course and subsequent monitoring of CRM skills during simulator flights (line-oriented flight training, or LOFT). CRM training is now a regulatory requirement for commercial pilots under most regulatory bodies.

Threat and error management (TEM) also emerged; it is often seen as another application of HF, used alongside the normal operations safety survey (NOSS) in aviation. Interestingly, ICAO notes in a circular on TEM that,

“It must be made clear from the outset that TEM and NOSS are neither human performance/Human Factors research tools, nor human performance evaluation/assessment tools. TEM and NOSS are operational tools designed to be primarily, but not exclusively, used by safety managers in their endeavours to identify and manage safety issues as they may affect safety and efficiency of aviation operations.”

CRM, and to a lesser extent TEM, has since become widespread not only in aviation, but also in rail, shipping, healthcare, and other sectors.

The downside of this success is the perceived (but false) equivalence of ‘Human Factors’ and ‘training-based behaviour modification’. This perception is more prevalent among those who have received such training (e.g., pilots and clinicians), and where there are no or few Human Factors practitioners working more systemically. Unfortunately, the perception has spread to managers, who have come to see Human Factors as ‘done’ once training has been delivered. This creates a moral hazard. If there are now inadequate funds available to address wider systems problems, and if failure is seen as focused on individual and team performance, then failure is both more likely and more punishable.

So it is fair to say that Human Factors researchers and practitioners are uncomfortable with training as an intervention for problems that are not fundamentally associated with competency, at least in the first instance. Since training is about modifying people – fitting people to tasks – it seems to go against the philosophy of Human Factors. If interaction problems are rooted more in the design of activities, tools, and contexts of work, then those are the first ports of call when it comes to modification. “It’s easier to bend metal than twist arms”, wrote Sanders and McCormick (1993), while James Reason wrote “You cannot change the human condition, but you can change the conditions in which humans work” (2000).

From a practical point of view, training to modify behaviour is expensive and often ineffective in the short or long term, unless done in a way that integrates a thorough understanding of Human Factors. More is said on this in Russ et al.’s ‘The science of human factors: separating fact from fiction’, an excellent paper written by Human Factors specialists from psychological, engineering and clinical backgrounds.

But to put it into perspective, consider the National Health Service in England, which employs around 1.5 million people (1.1 million full-time equivalent). Around half a million of these are doctors, nurses, midwives and ambulance staff. Training is essential for all staff in order to do their jobs. But imagine training 500,000 staff to modify their behaviour in order to address problems. You’d still be left with inadequate staffing, poor rosters, confusing medicine packaging, badly designed equipment and facilities, too many policies and guidelines, shallow investigations, and stressful jobs and tasks, to pick just a few remaining problems. (And you’d still have to train the 140,000 or so pharmacists, radiographers, operating theatre practitioners and other scientific, therapeutic and technical staff.) During this training process, many staff would also have left, and new staff would have joined. And after a year or so, training would need to be refreshed. Training staff in behaviour modification can make painting the Forth Bridge look easy.

Ultimately, all training aims to modify behaviour or practice, but it would be nonsensical to call all training ‘Human Factors’. ‘Human Factors’ is often invoked for so-called ‘non-technical’ skills rather than ‘technical skills’ – a false dichotomy on both theoretical and practical grounds, with unfortunate unintended consequences.

Still, I would argue that, if done well, behaviour modification training can be an application of Human Factors. If you’ve read this far, then you might be wondering how. One argument can be seen in the example of CRM, which can be found in Human Factors journals and in some textbooks. However, to reinforce the point about non-equivalence, training-based behaviour modification approaches make up a minority of articles. Given the number of pages of journals and textbooks on Human Factors, I would estimate that training-based behaviour modification solutions are mentioned in fewer than 1 in every 100 pages.

So what might make training-based behaviour modification a ‘Human Factors’ intervention, since all training aims to modify behaviour? The conditions might involve the following sorts of activities, laid out below in a process.

  1. A problem or opportunity relating to the interaction between humans and other elements of a system has been identified and investigated.
  2. The interactions between people, activities, contexts and tools/technologies are analysed and understood using Human Factors methods.
  3. Needs arising from 1 and 2 above are analysed and understood, considering both system performance and human wellbeing criteria.
  4. A range of solutions is considered, as ways of meeting these needs.
  5. Training is identified as an appropriate solution (typically, along with others).
  6. Training requirements are defined.
  7. A prototype training solution is developed (typically in conjunction with other prototype solutions).
  8. The prototype training solution is implemented and evaluated, ideally in conditions that are reasonably reflective of real working conditions.
  9. If the needs are not met, then the process returns to any of the steps 1 to 7 (the activities may need to be done more thoroughly, perhaps, or the problem or context may have changed).
  10. If the needs are met, then the training solution is implemented and sustained.

With such a process, we can say that training is a well-designed solution to a well-understood problem or opportunity. Training, in this context, is part of the work context, and must be designed. Where training is simply provided en masse without these steps (accepting that there will be compromises – the above is intended as a fairly robust process), then we would have to question whether training is a well-designed solution to a well-understood problem or opportunity.

What about simply teaching people about ‘factors of humans’ – memory, attention, decision making, fatigue, and the like? Again, if something like the process above is followed, then one can be confident that this is a ‘Human Factors solution’. If the process is heavily compromised, or not followed at all, then there may well be too many assumptions about:

  • the problem or opportunity
  • the people, activities, contexts and tools (PACT) that are exposed to the problem or opportunity
  • the suitability of training as a solution
  • the adequacy of the development, evaluation and implementation of training
  • competing systems and behaviours that affect the behaviour targeted by training, and
  • the sustainability of training as a solution.

So how can you know training-based behaviour modification is a Human Factors intervention, or…just training? If a training-based behaviour modification solution is offered off the shelf, without following something like the 10 steps above, then it is probably fair to say that it isn’t a Human Factors intervention. One quick test is to check how soon training is proposed in response to an identified problem or opportunity. If any of steps 1 to 4 have been missed in any significant way (regarding the understanding of the problem/opportunity, context and possible solutions), then it’s probably not a Human Factors intervention, and it would be more appropriate (and helpful) to describe such training as something else (much ‘Human Factors Training’ would be better described as something more contextual and specific). If any of steps 6 to 9 have been missed (regarding the development, evaluation and implementation of training), then the training solution may not be well-designed, no matter how it is branded.

Posted in Human Factors/Ergonomics | Tagged , , , , , | Comments Off on What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training

What Human Factors isn’t: 2. Courtesy and Civility at Work

‘Human Factors’ (or Ergonomics) is often presented as something that it’s not, or as something that is only a small part of the whole. Rather than just explain what Human Factors is, in this sporadic series of short posts I will explain what it isn’t. The posts outline a number of myths, misunderstandings, and false equivalencies.

In this series:

  1. What Human Factors Isn’t: 1. Common Sense
  2. What Human Factors isn’t: 2. Courtesy and Civility at Work
  3. What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training
  4. What Human Factors isn’t: 4. A cause of accidents

ResoluteSupportMedia CC BY 2.0 https://flic.kr/p/anoN2L

Human Factors Isn’t Courtesy and Civility at Work

Some myths about Human Factors are just plain wrong, such as the common sense myth. Others are more subtly wrong. One of these is the false equivalence of ‘Human Factors’ with ‘good behaviour at work’. Courtesy and civility are fundamental human values, expressed differently in different cultures, and as such may be seen as ‘factors of humans’ in the vernacular sense. They are themes that are increasingly common in healthcare in particular. These are undoubtedly important aspects of life, including work-life. Research reported in healthcare journals has shown that rudeness has adverse consequences for the diagnostic and procedural performance of clinical team members, and for staff satisfaction and retention, among other outcomes. It is the focus of campaigns such as Civility Saves Lives. Research on social media indicates that incivility is a growing problem: it seems to be perceived as the norm of online interaction, rather than the exception. So courtesy and civility may also be seen as specific ‘factors affecting humans’, and important aspects of professionalism, in a work context.

But to equate these values with Human Factors as a discipline or field of study (and practice) is erroneous. The terms rarely come up in the Human Factors and Ergonomics literature. I was unable to find either in the title, keywords or abstracts of any article published in ‘Human Factors’, ‘Ergonomics’ or ‘Applied Ergonomics’ – the top three journals in the discipline. Nor are the terms listed in any indices of Human Factors textbooks (at least the ones that I have). Human Factors practitioners are unlikely to have specific expertise in the topic, though those working in healthcare may well be aware of some of the related healthcare literature. They would probably see these topics as a better fit with other disciplines.

So this wouldn’t be a surprise to researchers and practitioners of Human Factors, since the terms seem not to fit the scope of Human Factors:

Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.

Practitioners of ergonomics and ergonomists contribute to the design and evaluation of tasks, jobs, products, environments and systems in order to make them compatible with the needs, abilities and limitations of people.

International Ergonomics Association (2019)

Courtesy and civility are critically important, and crop up in disciplines such as psychology, sociology, anthropology, organisational behaviour and human resources management, as well as professional studies, interdisciplinary studies and healthcare in particular. But an association with the term ‘Human Factors’ is unhelpful. First, it detracts from the essential focus of Human Factors on design (of work), though one might argue that courteous and civil interactions can be designed and reinforced, for instance through teamwork training. (That being the case, courtesy and civility are aspects of a specific application of Human Factors, but should not be equated with the term.) Second, the terms distort the focus of Human Factors on ‘fitting the task to the person’. Third, a false equivalence with Human Factors may reinforce the myth that Human Factors is (or that human factors are) ‘common sense’; most people would understand their importance, and how to be courteous and civil in everyday life, even if these behaviours lapse from time to time.

Courtesy and civility should be an important topic for social dialogue in all aspects of life. They are also important aspects of training – fitting the person to the job. But we should be careful not to overemphasise courtesy and civility in conversations about ‘Human Factors’. The false equivalence of courtesy and civility with Human Factors risks diluting its scope to ‘everything human’ – all humanities – along with its essential focus on designing for system performance and human wellbeing.

Posted in Human Factors/Ergonomics | Tagged , , , , | Comments Off on What Human Factors isn’t: 2. Courtesy and Civility at Work