Human Factors at the Fringe: Lemons Lemons Lemons Lemons Lemons

Walrus’ award-winning show returns to Edinburgh in Paines Plough’s Roundabout. ‘Let’s just talk until it goes.’ The average person will speak 123,205,750 words in a lifetime. But what if there were a limit? Oliver and Bernadette are about to find out. This two-person show imagines a world where we’re forced to say less. It’s about what we say and how we say it; about the things we can only hear in the silence; about dead cats, activism, eye contact and lemons, lemons, lemons, lemons, lemons. ‘About as promising as debuts get.’ (Time Out).

Lemons Lemons Lemons Lemons Lemons by Sam Steiner, 28 August, Roundabout @ Summerhall, Edinburgh


What if we had a daily limit on the number of words we could speak? That is the premise of this experimental political fantasy, which focuses on the relationship between Bernadette (a lawyer) and Oliver (a musician) in the context of a new ‘hush law’. The law is introduced by the government to ration citizens to 140 words each per day. Oliver campaigns against the law, while Bernadette seems not to believe it will actually be voted into effect. Ultimately, for unexplained reasons, it is.

The play is essentially about the dynamics of Bernadette and Oliver’s relationship and how the prospect and reality of the hush law affects their communication. It skips between the couple’s conversations in the past, when they could speak freely, and the present, when they are restricted.

The couple struggle to manage their lexical allowances. On the first day of the hush law, Bernadette wastes nearly half of her allowance ordering a smoothie. Inconsistent use of the quota between the pair causes tension and raises questions about how much each values the other, and the relationship. When Oliver uses up his daily limit before returning home, Bernadette is frustrated and reels off random words to spend the rest of her allowance, using up her last five with “lemons, lemons, lemons, lemons, lemons”. With varying degrees of success, they learn to monitor their word quotas over the course of each day, greeting one another with a number reflecting their available words. We are left to consider a number of questions. What words would we use and leave out, when every word counts? Who would we save our words for? How might we learn to communicate without words?

But the backdrop is a restriction of freedom of speech, with all its social and political implications. The law has some strange effects on society. Songs gradually lose their words, because singing one would use more than a day’s allowance for artists and listeners alike. Perhaps most intriguing to me was Oliver’s exclamation that the law is inherently discriminatory: even if everyone has the same limit, those with less power need more words, while those in positions of power already have the influence they need, and so have less need for words. This was a thought that lingered after the play.

In organisations, and in society, words are already funnelled and filtered. The ‘140’ limit is obviously borrowed from Twitter, which has today excluded quoted tweets, photos, GIFs, videos and polls from its famous 140-character limit. And between the various strata of organisations and society, the possibility of communicating upwards diminishes with altitude. A front-line worker usually has little or no direct access to the Board, for instance. If they want to express anything upwards, they may have a small quota of words with which to do so, if they are lucky. On matters of safety, individuals may indeed have a limit of around 140 words to pass a concern to senior management, perhaps through a reporting scheme.

When we cannot speak out adequately in organisations and society, the concerns and messages do not go away. They take on new forms: learned helplessness, revolt, or anything in between. In Lemons, the characters learn new workarounds: more efficient words, blends and portmanteaus, rudimentary Morse code (as on Twitter, where people post images of text to fit in many more words, or string together a series of tweets). Some of this is probably not what the lawmakers imagined. In organisations and societies, competing means of communication emerge in response to limits on communication, including behaviours (e.g., facial expressions, postures, whistleblowing, demonstrations, strikes, riots) or other outcomes (e.g., accidents). As I often say to people in positions of power in organisations, people’s concerns and needs remain whether or not we listen to them. But by spending more time listening – allowing time for more words – everyone’s needs can be met, to some extent at least, before it is too late.

 

See also:

Human Factors at The Fringe

Human Factors at The Fringe: The Girl in the Machine

Human Factors at The Fringe: My Eyes Went Dark

Human Factors at The Fringe: Nuclear Family


Human Factors at The Fringe: Nuclear Family

Nuclear Family is a gripping piece of interactive theatre which follows Joe and Ellen, nuclear plant workers and siblings, faced with an imminent disaster. Audience members will be privy to what could possibly be their last hours as they struggle with the biggest decisions of their lives. In a heated round table discussion, the audience will experience the pressure of making life and death decisions.

Nuclear Family, 3-29 Aug, Assembly, Edinburgh


To have any chance of understanding why people do the things they do, you have to put yourself in their shoes. Nuclear Family is immersive, interactive theatre that requires you to do just that. The play begins with an introduction by a suited convenor, who explains that you are part of a board of inquiry into an explosion at the Ashtown nuclear power plant in 1996. For the next hour, your decisions are linked to those of two security guards, siblings Joe and Ellen Lynum, who work in the plant. Audience members are seated around the cast and the set – a grim, bunker-esque security office with a desk, some 1990s PCs, telephones, and other peripherals. Joe and Ellen are at the sharp end of the unfolding disaster and the focus is on their decisions, which happen to be yours.

The audience members were taken ‘inside the tunnel’ as events unfolded, watching the ‘video footage’ – the acted scenes. After each superbly acted scene leading up to a critical decision point, we were given short audio recordings of interviews and some documentation, such as police and employment records. We had two minutes to make a binary decision: what would be a reasonable or appropriate thing to do next, given the information available and the desired outcome? As an audience, we had to vote on a collective decision. These decisions – four or five in all – were moral dilemmas. Questions of rule-breaking, relationships and competence arose, and each decision had implications, for liberty and loss of life, for instance. Each decision contributed to an unfolding disaster, but the decisions were set against poor management – under-resourcing and reported problems that had never been acted on.

As the audience made each decision, we could not know the consequences until they arose. It was clear that there were various routes through the mess, and because of this we probably forgot that the ending was actually certain: an explosion. It became a choose-your-own-disaster, but one where we were fooled into counterfactually thinking we could mitigate the outcome and maybe prevent it. We felt the regret and anger of each decision in real time as the next scene unfolded.

This is innovative theatre that teaches the audience about local rationality. The audience, like Joe and Ellen, do what seems reasonable at the time. In hindsight, each decision seems like a bad decision, but at the time each decision is just that: a decision. The decisions seemed reasonable to most people, though there was minority dissent on some decisions, which was not explored. Interestingly, the minority could feel some anger that their preferred option was not taken: even though the consequences of neither option were known at the time, the unknown consequences of the unmade decision seemed better.

The division of the storyline into decision points was reminiscent of the method within Sidney Dekker’s Field Guide to Understanding ‘Human Error’, which suggests breaking down a detailed timeline into critical junctures. But there are crucial differences between an accident investigation, Nuclear Family, and real-time operations. In an accident investigation, you have much of the information and you have knowledge of the final outcome and the outcomes of each decision. You construct the critical junctures (based on the knowledge you now have) and you have many hours or days available to analyse them. In Nuclear Family, you have some background information and you have knowledge of the final outcome, but not the outcome of each decision. You are told the critical junctures and you can pause for a couple of minutes while you make a decision. In real-time operations – in control rooms, cockpits, operating theatres – you don’t have all the information and you don’t know the final outcome, nor for sure the outcome of the decision you are about to take. You may not know in advance that a juncture or decision point is critical, and you can’t necessarily pause for long to make a decision.

Understanding local rationality demands a level of empathy, and Nuclear Family cultivated both background empathy (or person empathy) with the characters, and process empathy for their moment-to-moment experience – cognitive, emotional, and social. It is hard to think of a medium through which to experience this more efficiently than interactive theatre.

See also:

Human Factors at The Fringe

Human Factors at The Fringe: The Girl in the Machine

Human Factors at The Fringe: My Eyes Went Dark


Human Factors at The Fringe: My Eyes Went Dark

Written and directed by Matthew Wilkinson. A thrilling modern tragedy about a Russian architect driven to revenge after losing his family in a plane crash. Cal MacAninch and Thusitha Jayasundera give electrifying performances in this searing new play about the human impulse to strike back. Inspired by real events. Nominated for three Off West End Theatre Awards.

My Eyes Went Dark by Matthew Wilkinson, 28 Aug, Traverse Theatre, Edinburgh


(See Human Factors at The Fringe for an introduction to this post.)

In 2002, a Bashkirian Airlines Tupolev passenger jet en route to Barcelona collided with a DHL Boeing 757 cargo jet over Überlingen, southern Germany, while under air traffic control. Seventy-one people died, 52 of whom were children. The controller on duty instructed the Russian jet to descend, after noticing that the planes were on a collision course. Unbeknown to him, the onboard traffic collision avoidance systems (TCAS) on the aircraft issued instructions that contradicted his own. The Russian pilots acted on the controller’s instruction, while the DHL pilots acted on that of TCAS (see here for a description of the accident and the aftermath).

This play is essentially a tragedy, inspired by real events, and concerns the aftermath of that accident. It takes place over the course of five years in Switzerland, Germany, and Ossetia. ‘Nikolai Koslov’ lost his two children and his wife in the accident. Koslov was a Russian architect working in France on a major new hotel build.

Koslov is consumed with grief, seething with a quiet anger at how such an accident could have occurred. He runs through the possible causes with a search team co-ordinator on the night of the accident, at the scene. He asks about the age and condition of the plane, about who was responsible for maintenance. He wonders about terrorism. But the coordinator offers a mundane reason for the accident.

CO-ORDINATOR: (Gentle) My opinion is, and I know how stupid this must sound, but it could well have been a … a simple mistake.

KOSLOV: A mistake? But who made the mistake?

Koslov presses the co-ordinator on the perpetrator of the mistake. Getting no answer, he turns to the mistake itself.

KOSLOV: But what mistake? What sort of mistake?

CO-ORDINATOR: I don’t know, really.

KOSLOV: Then why do you say that?

CO-ORDINATOR: Because – isn’t that usually the reason?

Koslov is incredulous.

KOSLOV: … You cannot put people up there, in aeroplanes, high up there, and then make simple mistakes… it’s completely unheard of.

Koslov’s late wife’s sister, Lizka, comes to a granite memorial to find him. He’s been there for days. While Koslov is full of anger for Olsen, the air traffic controller on duty that night, Lizka has compassion.

LIZKA: I heard him interviewed. He was crying. He said it was his duty and responsibility to prevent such accidents happening. I remember that clearly. He sounded at a total loss. He sounded terrible.

Koslov is angry and yet numb to the world, turning to ultra-dark chocolate to get a sense of something external.

While Koslov cannot understand how a ‘simple mistake’ could happen, Lizka cannot understand how the context for it could exist. Koslov focuses on the actions of the controller. Lizka focuses on the context of work. She starts to recount the ‘second story’.

LIZKA: He said he was left all alone on duty that night. I just can’t understand that. He was all by himself, flitting between two screens. … Why would they allow that? He said he wasn’t even aware that the Russian plane’s warning system had told it to go up. When clearly it should have gone up. Just kept on going. If it had kept on going everything would have been OK. The other plane would have missed it completely.

KOSLOV: Yes

LIZKA: But they’re saying all his phone lines were down. So no one could call anyone. Then, then maintenance men came in as well…

KOSLOV: I know, I heard him describing it.

LIZKA: It sounds horrific … like some crazy soap opera … like they were there to fix the telly!

KOSLOV: I know.

LIZKA: I mean he couldn’t know what was going on! And he had another plane to land in Germany at the same time! Five minutes before. It was complete confusion! My God, his colleague was outside in the hall fast asleep!

KOSLOV: Lizka –

LIZKA: He was all by himself…

KOSLOV: Lizka –

LIZKA: No. No. I don’t understand.

KOSLOV: Lizka –

LIZKA: You don’t let people fall asleep in halls when there are planes flying around do you? Do you? What for? It doesn’t make sense…

KOSLOV: It was common policy.

LIZKA: To sleep in halls?

KOSLOV: To take it in turns. When traffic was slow.

LIZKA: Really? Was it? Really? But traffic wasn’t slow!

Silence.

From her outside perspective, the conditions of work don’t seem reasonable.

But Koslov cannot escape the feeling that Olsen is culpable. In a phone call he talks about the statements given to the German and Swiss accident investigation authorities.

KOSLOV: It’s an inescapable fact he did do it. I’m not saying he wasn’t put in a dreadful position. I’m saying he did it. … He commanded those pilots to dive. To ignore their screens and fly into each other. Yes? OK? Whatever the reasons. …

Koslov believes that someone must be held accountable but Thomas Olsen is acquitted by the courts. A representative of Skyways is in court:

WEITNER: In hindsight, you always ask yourself, could I have done more? More to anticipate, more to prepare, more to … mitigate. More.

Two officials received suspended sentences and a fine of twelve thousand euros. Koslov is offered compensation for his wife and children ($60,000 and $50,000 respectively). For Koslov, this defiles the name of his family. For him, justice has not been done. What justice can there be?

WEITNER: From the trial, did you really think someone was going to be prosecuted? Sent to prison? For an accident? Nobody was going to prison. It’s not how it works. Can you imagine? Private employees, in public service, sent to prison – for making mistakes? Who would be willing to take their place?

WEITNER: I know how difficult this must be for you.

KOSLOV: You can’t even say sorry.

Koslov tracks Olsen down at his family home and murders him. He is sent to prison.

In his region of Russia, blood feuds were traditionally an accepted means of justice. His counsellor proposes that this might explain his actions.

GEISINGER: We know it wasn’t so long ago, perhaps only fifty years or so, that feuds in your country were decided in this way

… You belong to a history, a cultural history, of resolving trauma this way.

Koslov denies this, and denies planning to kill Olsen, or even remembering what happened.

Koslov is released part way through his sentence. On return to Russia, he receives a hero’s welcome. He is given an official post for architecture and construction and designs an Olympic-standard ski resort in Ossetia.

The play ends with Olsen’s daughter, Helena, arriving unexpectedly at a party for Koslov, seeking answers on why he did what he did, and restorative justice for her mother, who has made multiple requests to speak with Koslov. He has never responded.

HELENA: Speak to her. Please. It must mean something to you. It must do. You were a father. You had children.

KOSLOV: And your father murdered them.

HELENA: No! No! My father was a man, a good man! Who made a mistake!

KOSLOV: He is a murderer.

HELENA: (Screams) You are a murderer!!

My Eyes Went Dark raises questions about causation, culpability, justice, revenge and forgiveness. The first story of ‘human error’ and individual responsibility is set out alongside the second story of system conditions and collective and corporate responsibility. Human error, “a simple mistake” (famously cited as the ’cause’ of 70% or so of accidents), is the first assumption of the co-ordinator. But a mistake is not innocent in the eyes of Koslov (nor in the eyes of many judicial systems around the world). The system as a whole is the focus for Lizka. She describes how degraded modes of operation stack on top of one another and become accepted as normal as an organisation drifts into failure. She feels compassion for the controller, who was put in this position, and who ultimately lost his own life.

A mistake and an individual perpetrator give Koslov a clear reason for the event and an identifiable target for his anger. As recalled by Lizka, the controller said it was his “duty and responsibility to prevent such accidents happening”. An organisation provides neither a clear reason for the event, nor a clearly identifiable target for Koslov’s anger.

How would we react to such an event? Would a progressive understanding of human factors and system safety help or hinder forgiveness? Would an understanding of complexity actually make it easier or harder for us to channel our grief, and to get restorative justice? Would our understanding of ‘just culture’ save us from our darkest urges? We hope we’ll never know.

 

Script: Wilkinson, M. (2015). My eyes went dark. Oberon Books. https://www.oberonbooks.com/eyes-went-dark.html


 

See also:

Human Factors at The Fringe

Human Factors at The Fringe: The Girl in the Machine

Human Factors at The Fringe: Nuclear Family

Human Factors at the Fringe: Lemons Lemons Lemons Lemons Lemons


Human Factors at The Fringe: The Girl in the Machine

Polly is a professional, a high achiever and an addict. Her drug of choice is a grade A, top of the range smart phone. She clicks and scrolls for minutes, hours and days at a time. When Polly discovers an app that uses algorithms to create brand new music by long-dead musicians, the line between human and computer begins to blur, and the downloads become increasingly dangerous. A play about networks, nerve endings and Nirvana.

The Girl in the Machine by Stef Smith, 19, 24 & 28 Aug, Traverse Theatre, Edinburgh

(See Human Factors at The Fringe for an introduction to this post.)

This script-in-hand, rehearsed-only-once play for early risers at Edinburgh’s Traverse Theatre was one of a series on the same theme: “Tech will tear us apart (?)” The play features a corporate IP lawyer – Polly – and her tech designer husband – Owen. Polly is addicted to her device, and will spend hours clicking and swiping through apps and the internet. Out of the blue, a new app appears that can create new music by dead artists, based on aspects of their existing body of work. This is a problem because, in her new position, it is Polly’s job to prevent, and now deal with, this legal quagmire.

The app is downloaded by legions of users, and it has a much darker hidden feature. It includes an aural code via which – it is promised – users can leave their bodies and upload their consciousness to the internet, sending messages to those on the other side. Hundreds of lives are lost as people seek to escape the stress of a hyper-connected, information-overloaded life, ironically putting their faith in everlasting life in a high-tech heaven, as pure information. Polly is blamed for the viral suicides and is sacked for failing to spot the emerging threat. She spirals into depression.

As the pair sit, Polly is consumed by her phone much like so many of us today. They grow further apart – physically and emotionally – and the phone becomes a love/hate object in the marriage. Society breaks down as attempts are made to stop the cultish phenomenon. Polly uses the last of her battery to upload her consciousness to the net, or so she thinks.

This sad but riveting play sheds light on our addiction to technology while playing on our fears. It also exposes our faith in technological solutions to socio-technical and even spiritual problems. “When did life get so complicated?” Polly asks. “When we tried to make it simple”, Owen responds.

Does technology simplify life, or make it even more intractable?

 

See also:

Human Factors at The Fringe

Human Factors at The Fringe: My Eyes Went Dark

Human Factors at The Fringe: Nuclear Family

Human Factors at the Fringe: Lemons Lemons Lemons Lemons Lemons


Human Factors at The Fringe


There have been many debates in human factors about its status as science or art or both, and the scientific literature has recorded some of the issues spanning back over 50 years (e.g., de Moraes, 2000; Moray, 1994; Wilson, 2000; Sharples and Buckle, 2015; Spielrein, 1968). Human factors (or ergonomics) is formally defined by the International Ergonomics Association as a “scientific discipline”, and we use scientific communication to try to get our message across, even in practitioner reports and outputs: Introduction, Method, Results, Discussion, Conclusion. This has become the de facto method of communication, though other media, such as natural storytelling, can be found in some talks (e.g., TED and TEDx) and a number of books (such as Steven Casey’s Set Phasers on Stun).

There are several problems with our default approach to communication in human factors and other disciplines. These problems apply especially to disciplines that are not just the preserve of a small number of scientists, but that concern a wide range of stakeholders: citizens, front-line or shop-floor workers, specialists of many kinds (designers, HR, occupational health, safety, etc), managers, CEOs, regulators and policy makers…

A first problem is this: more evidence, more proof, more detail does not necessarily convince or trigger a change in thinking. In fact, it can backfire. Deeply held convictions can be strengthened further by contradictory evidence: the so-called backfire effect (observers of the Brexit debate may have noted the group polarisation that ramped up as the campaigns progressed, leading some – rather politically – to invent phrases such as “war on truth” and “post-truth politics”).

A second problem, for which I have only the evidence of my own and others’ reported experiences, is that people in industry just don’t like reading reports, let alone scientific journal articles (which are almost completely ignored in some sectors outside a minuscule number of stakeholders, i.e., researchers). Too many articles, too much contradiction, too technical, too boring, too time-consuming, too overwhelming. I remember a comment made by a UX practitioner in a survey about scientific journal articles that I conducted with Amy Chung: “I think over time I’ve just learned to ignore them” (the work was reported in a scientific journal article [Chung and Shorrock, 2011]). The comment struck me.

A third problem is that scientific communication, and its variants, tend not to connect with, for want of a better word, ‘feeling’. By this I mean an emotional connection, an internal reaction, a realisation or change of opinion, insight. Scientific communication uses a story approach, but a formal, stilted, exclusionary story format that pushes most people away. We should really be drawing people in, especially those who make decisions about work, products and services, and those who are affected by their decisions.

Art may not bring ‘scientific evidence’ and ‘proof’, but it does tend to connect with feeling. It can bring about an instant insight or dawning realisation that is hard to put into words. I experience this every summer at the Edinburgh Fringe festival. I’ve found that theatre – in particular – gives me insights into work and life that I don’t get from scientific articles. The productions I have seen have had no input from human factors specialists, and most have had no direct input from any scientific discipline (with some exceptions, e.g., The Happiness Project, which featured scientists from a variety of disciplines). And yet I have found that one theatrical production can be worth far more to me than a day (or a week) at a conference. The message from a play can stick with us for years, even if we did not think much of it at the time.

Perhaps theatre, and other art forms, present an opportunity for conveying and discussing themes in human factors (and related disciplines). Imagine an industry conference that included a powerful play on the themes of just culture and ethics, or hindsight and local rationality, or automation and technological solutionism. Or imagine more involvement from researchers and practitioners to help bring about new productions. In 2016, the Edinburgh Fringe featured over 50,000 performances of over 3,000 shows in nearly 300 venues. What an audience. In the posts that follow I’ll reflect on four productions from Edinburgh Fringe 2016 that somehow relate to human relationships with technology, and that connected with me. Perhaps by paying more attention to the fringe, we can move out of the fringe as a discipline.

References

Casey, S.M. 1998. Set phasers on stun: and other true tales of design, technology and human error (2nd ed.). Aegean Publishing Company.

Chung, A.Z.Q., and Shorrock, S.T. 2011. The research-practice relationship in ergonomics and human factors – surveying and bridging the gap. Ergonomics. 54(5), 413-429.

de Moraes, A. 2000. Theoretical aspects of ergonomics: art, science or technology – substantive or operative. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. July 2000, 44 (33), 264-267.

Moray, N. 1994. ‘De maximis non curat lex’ or how context reduces science to art in the practice of human factors. In: Human Factors and Ergonomics Society 38th annual meeting, 24–28 October 1994, Stouffer Nashville, Nashville, Tennessee. Santa Monica, CA: HFES, 526–530.

Sharples, S. and Buckle, P. 2015. Ergonomics/human factors – art, craft or science? A workshop and debate inspired by the thinking of Professor John Wilson. In: Sharples, S., Shorrock, S. and Waterson, P. eds. Contemporary ergonomics and human factors 2015. London: Taylor & Francis, 132.

Spielrein, R.E. 1968. Ergonomics: an art or a science. Australian Occupational Therapy Journal. 15(2), 19-21.

Wilson, J.R. 2000. Fundamentals of ergonomics in theory and practice. Applied Ergonomics. 31(6), 557-567.


 

See also:

Human Factors at The Fringe: The Girl in the Machine

Human Factors at The Fringe: My Eyes Went Dark

Human Factors at The Fringe: Nuclear Family

Human Factors at the Fringe: Lemons Lemons Lemons Lemons Lemons


Adjusting to a Messy World: Donald Broadbent Lecture 2016

Just posted at http://www.HFEinPractice.wordpress.com – Human Factors and Ergonomics in Practice: ‘Adjusting to a Messy World’, the Donald Broadbent Lecture 2016, given with Claire Williams at CIEHF Ergonomics and Human Factors 2016.

Human Factors and Ergonomics in Practice

On 21 April 2016, we co-presented the Donald Broadbent lecture at Ergonomics and Human Factors 2016 (Daventry, UK) summarising some of the themes in ‘Human Factors and Ergonomics in Practice’. In this post, we summarise aspects of the book, slide by slide, with a grateful acknowledgement to every author who collaborated.


Slide 1 – Welcome

This lecture is about the messy world in which we live, and how this affects human factors and ergonomics in practice. The lecture material is derived from a book that we have edited called Human Factors and Ergonomics in Practice: Improving Performance and Wellbeing in the Real World.



Slide 2 – How do practitioners really work?

We met about ten years ago, at this conference, when we were presenting papers on issues of practice. We had both been practitioners for about ten years at that time, and were beginning to realise that HF/E professionals did not talk…



Never/zero thinking


“God save us from people who mean well.”
― Vikram Seth, A Suitable Boy

There has been much talk in recent years about ‘never events’ and ‘zero harm’, similar to talk in the safety community about ‘zero accidents’. ‘Never events’, as defined by NHS England, are “serious incidents that are wholly preventable as guidance or safety recommendations that provide strong systemic protective barriers are available at a national level and should have been implemented by all healthcare providers”. The zero accident vision, on the other hand, is a philosophy that states that nobody should be injured due to an accident, that all accidents can be prevented (OSHA). It sounds obvious: no one would want an accident. And we all wish that serious harm would not result from accidents. But as expressed and implemented top-down, never/zero is problematic for many reasons. In this post, I shall outline just a few, as I see them.

1. Never/zero is not SMART

We all know that objectives should be SMART </sarcasm>:

  • Specific – target a specific area for improvement.
  • Measurable – quantify or at least suggest an indicator of progress.
  • Assignable – specify who will do it.
  • Realistic – state what results can realistically be achieved, given available resources.
  • Time-related – specify when the result(s) can be achieved. (Wikipedia)

Never/zero fails on more than one SMART criterion. You could say ‘harm’ and ‘accidents’ are specific. ‘Never events’ are so specific that there are lists – long and ever-changing lists (in NHS England, apparently beginning at eight, growing to 25, and recently shrinking to 14, with an original target of two or three). This in itself may become a problem. Someone can always think of another. So what about the ones not on the list? And when thinking about zero harm, what about the harm that professionals might need to do in the short term to improve outcomes in the longer term?

You could say ‘harm’ or ‘accidents’ are measurable, and that never/zero is the target. There are of course problems with measures that become goals. One problem is expressed in Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” A measure-as-goal ceases to be a good measure for a variety of reasons, explained elsewhere, but (among other factors) targets encourage blame and bullying, distort reporting and encourage under-reporting, and sub-optimise the system, introducing competition and conflict within it. There is much evidence for each of these claims. Even if never/zero is seen by some as a way of thinking, it is inevitably treated as a numerical goal, and inevitably generates a bureaucratic burden.
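
To make the under-reporting mechanism concrete, here is a minimal toy simulation in Python. Every number in it is an assumption for illustration, not data from any real reporting scheme: the true incident rate is held constant, while the willingness to report declines once reported incidents become a target.

```python
# Toy simulation of Goodhart's Law in incident reporting. Illustrative only:
# all rates and volumes below are assumptions, not data from any real scheme.
import random

random.seed(1)

TRUE_INCIDENT_RATE = 0.02  # assumed chance of a reportable incident per shift
SHIFTS_PER_YEAR = 50_000   # assumed activity level; held constant throughout

def simulate_year(reporting_prob):
    """Return (true incidents, reported incidents) for one simulated year."""
    true_incidents = sum(random.random() < TRUE_INCIDENT_RATE
                         for _ in range(SHIFTS_PER_YEAR))
    reported = sum(random.random() < reporting_prob
                   for _ in range(true_incidents))
    return true_incidents, reported

# Year 1: no target, so most incidents are reported. From year 2, 'zero
# reported incidents' is a target, so people report less readily each year.
# Note that the reporting probability falls; the incident rate never changes.
for year, reporting_prob in enumerate([0.9, 0.6, 0.4, 0.25], start=1):
    true_n, reported_n = simulate_year(reporting_prob)
    print(f"Year {year}: true incidents = {true_n}, reported = {reported_n}")
```

The reported count trends toward zero while the true count stays roughly constant: the measure has become the target, and has ceased to be a good measure.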

As for assignability, well, you could assign the never/zero goal to a safety/quality department, or the CEO, or the front-line staff… or everyone (but is that really assigning?). But what are we assigning exactly? Are we assigning not having an accident to individuals, or never/zero to the organisation as a whole? Or perhaps the putting in place of specified safeguards? (And if so, can they always be implemented as specified?) What are the consequences for those to whom never/zero is assigned when an accident does occur, aside from the immediate physical and emotional consequences (see Point 6)?

By now we can see that never/zero is not realistic given available resources (it has probably never been achieved in any safety-related industry), and probably not given any resources, unless all activity were to stop (e.g., no flying, no surgical procedures). But then other harms result, as we saw following 9/11 with the increase in road deaths associated with reduced flying. If never/zero is unrealistic, then the time factor is neither here nor there, but knowing that it is unrealistic, people usually avoid specifying when never/zero must be achieved. And if they do, it is demotivating when it does not happen (see Point 8 below).

2. Never/zero is unachievable

This is obvious from the above, but it is worth repeating because it is not all that obvious to those removed from the front-line. There will never be never. There are, at present, several ‘never events’ a week in English hospitals. The chance of zero (by <no time>) is zero. It is a dream, a wish. For some, it is a utopia, but perhaps it is lost on those people that utopia comes from the Greek οὐ (“not”) and τόπος (“place”), and means “no-place”. Never/zero is nowhere. In no place does it exist.
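
As a back-of-the-envelope illustration, suppose each operation carries a small, independent probability of a serious incident. Both figures below are invented for the sketch (real operations are neither independent nor constant-risk), but the arithmetic makes the point: the probability of a ‘zero’ period shrinks exponentially with activity.

```python
# Illustrative arithmetic only: p and N are assumptions, not industry data.
from math import exp

p = 1e-7        # assumed probability of a serious incident per operation
N = 10_000_000  # assumed operations per year

# Probability of a 'zero' year is (1 - p)^N, roughly e^(-pN) for small p.
p_zero_year = (1 - p) ** N
print(f"P(zero incidents in a year)   ~ {p_zero_year:.3f}")        # ~0.368
print(f"P(zero incidents in a decade) ~ {p_zero_year ** 10:.6f}")  # ~0.000045
print(f"Poisson approximation         ~ {exp(-p * N):.3f}")        # ~0.368
```

Even at this extraordinarily low per-operation risk, a ‘zero’ decade is roughly a one-in-22,000 event. At realistic levels of risk and activity, ‘never’ has effectively zero probability.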

3. Never/zero is avoidant

Leaving aside the counterfactual inherent in never/zero definitions, never/zero focuses our attention on an anti-goal (harm, accidents, ‘avoidable deaths’). We may wish for a never/zero utopia, but with a focus on anti-goals the strategy obviously becomes avoidance. The anti-goal itself gives no information on how to go about this, and a focus on avoidance may, paradoxically, lead you into the path of another anti-goal as you run up against another constraint or complication with a limited focus of attention.

There are many potential ways to avoid an anti-goal, each of which may take you in a slightly different direction, toward different things that may or may not be desirable. In air traffic control, controllers do not train primarily by thinking of all the things not to do, and do not work by practising avoiding all the things that should be avoided. The focus of training is to learn to think about what to do (goal), and how to do it (strategy and tactics). It is well known, for instance, that thinking of a flight level (e.g. FL270 – 27,000ft) that is occupied by another aircraft or otherwise unavailable can lead you to issue that very flight level in an instruction. Thoughts lead to actions, even thoughts about what not to do. To part-quote Gandhi: “Your thoughts become your words, your words become your actions”. It does not necessarily follow that focusing on not doing something will result in that thing not being done.

4. Never/zero is someone else’s agenda

No-one wants to have an accident, by definition. If they did, it wouldn’t be an accident. But the sum of individual wishes does not equal consensus on an agenda. Staff have usually not come together and decided on a never/zero agenda. It is usually decided from another place.

There are a variety of goals, there are complications inherent in every goal, and there are difficulties in balancing conflicting goals, especially in real-time, and at the sharp-end of operations. Compromises and trade-offs have to be made, strategically and tactically. None of these can be simplified to never/zero.

5. Never/zero ignores ‘always conditions’

All human work activity is characterised by patterns of interactions between system elements (people, tools, machinery, software, materials, procedures, and so on). These patterns of interactions achieve some purpose under certain conditions in a particular environment over a particular period of time. Most interactions involving human agency are intentional but some are not, or else the consequences are not intended. At the sharp-end, in the minutes or seconds of an adverse event as it unfolds, things do not always go as planned or intended. But nobody ever intended for things to go wrong.

We tend to use labels such as ‘human error‘ (and various synonyms) for these sorts of system interactions, but there is nearly always more to it than just a human. For instance, there may be confusing and incompatible interfaces, similar or hard-to-read labels, unserviceable equipment, missing tools, time pressure, a lack of staff, fatiguing hours of work, high levels of stress, variable levels of competence, different professional cultures, and so on. In other words, operating conditions are nearly always degraded. We ask for never/zero, and yet we ask for it in degraded ‘always conditions‘. Perhaps a new vision of ‘never conditions‘ (never degraded) or ‘always conditions’ (always optimal) would focus the minds of policy-makers closer to home, since it would bring the trade-offs and compromises closer to their own doorstep.

It makes sense to detect and understand patterns in unwanted events, and to examine, test and implement ways to prevent such events (the basic idea behind never events), with the field experts who do the work. The problem comes with a never/zero expression and all of the implications of that.

6. Never/zero leads to blaming and shaming

It is inevitable. As soon as you label something ‘never/zero’ – as soon as you specify never/zero outcomes that are closely tied in time or space to front-line professionals – those professionals will be blamed and shamed, either directly or indirectly, by individuals, teams, the organisation, the judiciary, the media, or the public. The shame may be systematised; someone will have the bright idea to publish de-contextualised data, create a league table of never/zero failures, ‘out’ individuals, etc. The associated unintended consequences of these sorts of interventions are now well-known. So we have to acknowledge that simultaneous talk of never/zero and ‘just culture’ is naive at best. It is at odds with our understanding of systems thinking, human factors and social science. This understanding is lacking among the public, and this is sadly evident in the language of the media and, for instance, the Patients’ Association’s latest press release, which attached terms such as “disgrace”, “utter carelessness”, “unforgivable” to never events. Never/zero adds to the psychology of fear in organisations (see here for a good overview). Nobody goes to work to have an accident, but never/zero treats people as if they do.

7. Never/zero makes safety language even more negative

These emotive words illustrate how words matter, especially when lives are involved. Never/zero adds to an already negative safety nomenclature, which limits our thinking about work, and our ability to learn. Inevitably, this language, even if intended in a technical sense, is used in the media and judiciary in a very different sense: ‘human error’ is used, then abused. Error becomes inattention. Inattention becomes carelessness. Carelessness becomes recklessness. Recklessness becomes negligence. Negligence becomes gross negligence. Gross negligence becomes manslaughter. If that sounds dramatic, it is more or less the semantic sequence that has ensnared Spanish train driver Francisco José Garzón Amo, who – over two years on – is still facing 80 charges of manslaughter by professional recklessness after the accident at Santiago de Compostela in July 2013.

8. Never/zero cultivates cynicism

It is obvious to those on the front-line of services that never/zero is unachievable, and sadly it inspires cynicism. There are probably a few reasons for this. Aside from ignorance of ‘always conditions’ (Point 5), it illustrates a profound misunderstanding of human motivation. Never/zero is the worst kind of safety poster message (along with ‘Safety is our primary goal’), not only because it is unrealistic or unachievable, but because it assumes that people’s hearts and minds are not in the job, so they need to be reminded to ‘be careful’. Yet any ‘accident’ would almost inevitably harm the front-line workers who were there, at least emotionally, and at least for a time (hence why some organisations have implemented critical incident stress management, CISM).

I know an organisation that set a never/zero goal for a certain type of safety incident. It was widely publicised, and the incident occurred in the first few weeks. So then what? Is the goal null and void, or do we reset the clock? Never/zero can confirm what front-line staff always knew: that never/zero is unachievable (Point 2).

9. Never/zero will probably lead to burnout

It’s tiring, chasing rainbows. And because never/zero is unachievable, because it is a negative, because it cultivates blame, shame and cynicism, because it is someone else’s agenda, it is more likely to lead to burnout of professionals. If not the chronic stress variant, then the burning out of one’s capacity, willingness and motivation to take the goal seriously and to pursue the goal any more. Try never/zero thinking as a public health practitioner (they already did). Burnout is inevitable.

What then for safety? Is safety just about never/zero? And if never/zero is unachievable, then is safety worth pursuing at all? There is precious little enthusiasm for traditional safety management (outside of those whose salary depends on it), so is it wise to extinguish the flame altogether with a never/zero blanket?

10. Never/zero does not equal good safety

What’s the difference between a near miss and a mid-air collision? A little piece of blue sky, and there’s a lot of blue sky out there. So, what if an organisation has lots of near misses but zero collisions? Never/zero focuses on outcomes that can be counted and measured instead of the messy ‘always conditions’ that shape performance but cannot be measured, and rare outcomes are statistically almost silent. Never/zero, then, is a trade-off after all, but a blunt-end trade-off. Because it is easier to set a never/zero goal than to understand how things really work.
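
Here is a minimal sketch of that statistical silence, with every number invented for illustration: one organisation has ten times the near-miss rate of another, yet both will very probably record zero collisions in a year, so the collision count alone cannot tell them apart.

```python
# Illustrative only: all rates and volumes are assumptions, not real data.
import random

random.seed(7)

FLIGHTS = 200_000  # assumed movements per year, for each organisation
P_NEAR_MISS = {"Org A": 1e-4, "Org B": 1e-3}  # assumed near-miss risk/flight
P_COLLISION_GIVEN_NM = 1e-3  # assumed: that 'little piece of blue sky'

for org, p_nm in P_NEAR_MISS.items():
    near_misses = sum(random.random() < p_nm for _ in range(FLIGHTS))
    collisions = sum(random.random() < P_COLLISION_GIVEN_NM
                     for _ in range(near_misses))
    print(f"{org}: near misses = {near_misses}, collisions = {collisions}")

# Typical run: Org A records ~20 near misses, Org B ~200, and both record
# zero collisions - so a never/zero outcome measure reveals almost nothing.
```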

If not never/zero, then what?

No-one wants an accident or never event. That’s obvious. It’s not a useful goal though, and it’s not a useful way of thinking either. Never/zero is the stuff of never-never land. You can’t swear off accidents.

There are alternative ways of thinking. There is of course harm reduction, long preferred in public health. There is as low as reasonably achievable (ALARA) or practicable (ALARP) in safety-critical industries where there are major accident hazards.

And then there is Safety-II and resilience (e.g., resilient healthcare). Rather than thinking only about counterfactuals and seeking only to prevent things from going wrong, Safety-II involves thinking about work-as-actually-done, and how to ensure that things go right. This means we have to ask “what is right for us (at this time)?”, i.e., what matters to us and what are our goals? Goals, especially when not imposed externally, promote attraction instead of simple avoidance, and imply trade-offs, since goals are obviously not all compatible. A focus on goals means that we must think about effectiveness, which includes safety (safe operations) among other things such as demand, capacity, flow, sustainability, and so on. A focus on a goal makes us think of ways toward the goal, not just ways to avoid an anti-goal.

So perhaps instead of a never/zero focus, we should think of the goals that we would like to achieve, the conditions and opportunities that are necessary to achieve those goals, and the assets that we have which may help us to achieve them. ‘Always’ is probably as unachievable as ‘never’, but we can always try, knowing that we will not always achieve.
