Four Kinds of ‘Human Factors’: 2. Factors of Humans

In the first post in this series, I reflected on the popularisation of the term ‘human factors’ and discussion about the topic. This has brought into focus various differences in the meanings ascribed to ‘human factors’, both within and outside the discipline and profession itself. The first post explored human factors as ‘the human factor’. This second post explores another kind of human factors: Factors of Humans.


Ear by Simon James CC BY-SA 2.0 https://flic.kr/p/58bycz

What is it?

This kind of human factors focuses primarily on human characteristics, understood primarily via reductionism. Factors of humans include, for example:

  • cognitive functions (such as attention, detection, perception, memory, judgement and reasoning (including heuristics and biases), decision making – each of these is further divided into sub-categories)
  • cognitive systems (such as Kahneman’s dual process theory, or System 1 and System 2)
  • types of performance (such as Rasmussen’s skill-based, rule-based, and knowledge-based performance)
  • error types (such as Reason’s slips, lapses, and mistakes, and hundreds of other taxonomies, including my own)
  • physical functions and qualities (such as strength, speed, accuracy, balance and reach)
  • behaviours and skills (such as situation awareness, decision making, teamwork, and other ‘non-technical skills’)
  • learning domains (such as Bloom’s learning taxonomy) and
  • physical, cognitive and emotional states (such as stress and fatigue).

These factors of humans may be seen as limitations and capabilities. As with human-factors-as-the-human-factor, the main emphasis of human-factors-as-factors-of-humans is on the human – but on general constituent human characteristics, not the person as an individual. The factors of humans approach acts like a prism, splitting human experience into conceptual categories.

This kind of human factors is emphasised in a definition provided by human factors pioneer Alphonse Chapanis (1991):

“Human Factors is a body of knowledge about human abilities, human limitations, and other human characteristics that are relevant to design.”

But Chapanis went on to say that “Human factors engineering is the application of human factors information to the design of tools, machines, systems, tasks, jobs, and environments for safe, comfortable, and effective human use.” He therefore distinguished between ‘human factors’ and ‘human factors engineering’. The two would probably be indivisible to most human factors practitioners today (certainly those who identify as ‘ergonomists’, i.e., designers), and knowledge and application come together as parts of many definitions of human factors (or ergonomics). Human factors is interested in these factors of humans, then, to the extent that they are relevant to design, at least in theory (in practice, the sheer volume of literature on these factors suggests otherwise!).

Who uses it?

Factors of humans have been researched extensively, by psychologists (especially cognitive psychologists, and increasingly neuropsychologists), physiologists and anatomists, and ergonomists/human factors specialists. Human abilities, limitations and characteristics are therefore the emphasis of many academic books and scientific articles concerning human performance, applied cognitive psychology, cognitive neuropsychology, and human factors/ergonomics, and are the standard fare of courses in these fields.

This kind of human factors is also of interest to front-line professionals in non-technical skills training, where skilled performance is seen through the lenses of decision making, situational awareness, teamwork, and communication.

The Good

Factors of humans – abilities, limitations, and other characteristics – must be understood, at least at a basic level, for effective design and management. Decades of scientific research have produced a plethora of empirical data and theories on factors of humans, along with a sizeable corpus of measures. Arguably, the literature is far more voluminous for this kind of human factors than for any other kind. We therefore have a sophisticated understanding of these factors. Much is now known from psychology and related disciplines (including human factors/ergonomics) about sustained attention (vigilance), divided attention, selective attention, working memory, long-term memory, skilled performance, ‘human error’, fatigue, stress, and so on. Much is also known about physiological and physical characteristics. These are relevant to the way we think about, design, perform, talk about, record and describe human work: work-as-imagined, work-as-prescribed, work-as-done and work-as-disclosed. Various design guidelines (such as the FAA Human Factors Design Standard, HF-STD-001) have been produced on the basis of this research, as have hundreds of HF/E methods.

This kind of human factors may also help people, such as front-line professionals, to understand their own performance in terms of inherent human limitations. While humanistic psychology emphasises the whole person, and resists reducing the person into parts, cognitive psychology emphasises functions and processes, and resists seeing the whole person. So while reductionism often comes in for attack among humanistic and systems practitioners, knowledge of limits to sustained attention, memory, judgement, and so on, may be helpful to better understand failure, alleviating the embarrassment or shame that often comes with so-called ‘human error’. Knowledge of social and cultural resistance to speaking up can help to bring barriers out into the open for discussion and resolution. So perhaps reductionism can help to demystify experience, help to manage problems by going down and in to our cognitive and physical make-up, and help to reduce the stigma of failure.

The Bad

Focusing on human abilities, human limitations, and other human characteristics, at the expense of the whole person, the context, and system interactions, comes with several problems, but only a few will be outlined here.

One problem relates to the descriptions and understandings that emerge from the reductive ‘factors of humans’ approach. Conceptually, human experience (e.g., of performance) is understood through one or more conceptual lenses (e.g., situation awareness, mental workload), which offer only partial and fragmented views of experience. Furthermore, measurement relating to these concepts often favours quantification. So one’s experience may be reduced to workload, which is reduced further to a number on a 10-point scale. The result is a fragmented, partial and quantified account of experience, and these numbers have special power in decision making. However, as humanistic psychology and systems thinking remind us, the whole is greater than the sum of its parts; measures of parts (such as cognitive functions, which are not objectively identifiable) may be misleading, and will not add up to form a good understanding of the whole. Understanding the person’s experience is likely to require qualitative approaches, which may be more difficult to gain, more difficult to publish, and more difficult for decision-makers to digest.
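To illustrate the point about quantification, here is a minimal sketch with invented numbers (not data from any study): two quite different experiences of the same task can collapse to exactly the same summary score.

```python
# Hypothetical subscale ratings on a 10-point scale (invented for illustration).
# Two very different experiences of the same task produce the same mean score.
ratings_a = {"time pressure": 9, "frustration": 9, "effort": 2, "fatigue": 4}
ratings_b = {"time pressure": 5, "frustration": 5, "effort": 7, "fatigue": 7}

mean_a = sum(ratings_a.values()) / len(ratings_a)  # 6.0
mean_b = sum(ratings_b.values()) / len(ratings_b)  # 6.0

print(mean_a, mean_b)  # the single number hides where the difficulty actually lay
```

The summary number travels easily into reports and decisions; the profile behind it, and the experience behind the profile, usually does not.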

Related to this, analytical and conceptual accounts of performance with respect to factors of humans can seem alien to those who actually do the work. This was pointed out to me by an air traffic controller friend, who said that the concepts and language of such human factors descriptions do not match her way of thinking about her work. Human factors has inherited and integrated some of the language of cognitive psychology (which, for instance, talks about ‘encoding, storing and retrieving’ instead of ‘remembering’; cognitive neuropsychology obfuscates further still). So while reductionism may help to demystify performance issues, it can also backfire: the language in use can mystify, leaving the person feeling that their experience has been described in an unnatural and decontextualised way. Going further, the factors of humans approach is often used to feed databases of incident data. ‘Human errors’ are analysed, decomposed, and entered into databases to be displayed as graphs. In the end, there is little trace of the person’s lived experience, as their understandings are reduced to an analytical melting pot.

By fragmenting performance problems down to cognitive functions (e.g., attention, decision-making), systems (e.g., System 1), error types (e.g., slips, mistakes), etc, this kind of human factors struggles with questions of responsibility. At what point does performance become unacceptable (e.g., negligent)? On the one hand, many human factors specialists would avoid this question, arguing that it is a matter for management, professional associations, and the judicial system. On the other hand, many human factors specialists use terms such as ‘violation’ (often further divided into sub-types: situational violation, routine violation, etc) to categorise decisions post hoc. (Various algorithms are available to assist with this process.) To those caught up in situations involving harm (e.g., practitioners, patients, families), this kind of analysis, reductionism and labelling may be seen as sidestepping or paying lip service to issues of responsibility.

While fundamental knowledge on factors of humans is critical to understanding, influencing and designing for performance, reductionist (including cognitivist) approaches fail to shed much light on context. By going down and in to physical and cognitive architecture, but not up and out to context and the complex human-in-system interactions, this kind of human factors fails to understand performance in context, including the physical, ambient, informational, temporal, social, organisational, legal and cultural influences on performance. This problem stems partly from the experimental paradigm that is the foundation for most of the fundamental ‘factors of humans’ knowledge. This deliberately strips away most of the richness and messiness of real context, and also tends to isolate factors from one another.

Because this kind of human factors does not understand performance in context, it may fail to deal with performance problems effectively or sustainably. For instance, simple design patterns (general reusable solutions to commonly occurring problems) are often used to counter specific cognitive limitations. These can backfire when designed artefacts are used in natural environments, and the design pattern is seen as a hindrance to be overcome or bypassed (problems with the design and implementation of checklists in hospitals are an example). Another example may be found in so-called ‘human factors training’ (which, often, should be called ‘human performance training’). This aims to improve human performance by improving knowledge and skills concerning human cognitive, social and physical limitations and capabilities. While in some areas this has had success (e.g., teamwork), in others we remain severely constrained by our limited abilities to stretch and mitigate our native capacities and overcome system conditions (e.g., staffing constraints). Of course, in the absence of design change, training may also be the only feasible option.

A final issue worth mentioning here is that, more than any other kind of human factors, the ‘factors of humans’ kind has arguably been over-researched. Factors of humans are relatively straightforward to measure in laboratory settings, and related research seems to attract funding and journal publications. Accordingly, there are many thousands of research papers on factors of humans. The relative impact of this huge body of research on the design of real systems in real industry (e.g., road transport, healthcare, maritime) is dubious, but that is another discussion for another time.

References

Chapanis, A. (1991). To communicate the human factors message, you have to know what the message is and how to communicate it. Bulletin of the Human Factors Society, 34, 1-4.


Four Kinds of ‘Human Factors’: 1. The Human Factor

Over the last decade or so, the term ‘human factors’ has gained currency with an increasing range of people, professions, organisations and industries. It is a significant development, bringing what might seem like a niche discipline into the open, to a wider set of stakeholders. But as with any such development, there are inevitable differences in the meanings that people attach to the term, the mindsets that they bring or develop, and their communication with others. It is useful to ask, then, what kind of ‘human factors’ we are talking about. At least four kinds seem to exist in our minds, each with somewhat different meanings and – perhaps – implications. These will be outlined in this short blog post series, beginning with the first: The Human Factor.

 


Steph Kelly, Air Traffic Controller at Heathrow Airport. NATS UK Air Traffic Control CC BY-NC-ND 2.0 https://flic.kr/p/eM7JHU

What is it?

The first kind of human factors is the most colloquial: ‘the human factor’. Human-factors-as-the-human-factor enters discussions about human and system performance, usually in relation to unwanted events such as accidents and – increasingly – cybersecurity risks and breaches. It is rarely defined explicitly.

Who uses it?

As a colloquial term, ‘the human factor’ seems to be most often used by those with an applied interest in (their own or others’) performance. The term was the title of an early text on human factors in aviation (see David Beaty’s ‘The Human Factor in Aircraft Accidents’, originally published in 1969, now ‘The Naked Pilot: The Human Factor in Aircraft Accidents‘). It can be found in magazine articles concerning human performance by aviators (e.g., this series by Jay Hopkins in Flying magazine) and information security specialists (e.g., Kaspersky, Proofpoint). Journalists tend to use the term in a vague way to refer to any adverse human involvement. Aside from occasional books and reports on human factors (e.g., Kim Vicente’s excellent ‘The Human Factor: Revolutionizing the Way People Live with Technology‘), the term is rarely used by human factors specialists.

The Good

In a sense, ‘the human factor’ is more intuitively appealing than the term ‘human factors’, which implies plurality. It seems to point to something concrete – a person, a human being with intention and agency. And yet it also hints at something vague – mystery, ‘human nature’. Human-factors-as-the-human-factor might therefore be seen in the frame of humanistic psychology, reminding us that:

  1. Human beings, as human, supersede the sum of their parts. They cannot be reduced to components.
  2. Human beings have their existence in a uniquely human context, as well as in a cosmic ecology.
  3. Human beings are aware and aware of being aware – i.e., they are conscious. Human consciousness always includes an awareness of oneself in the context of other people.
  4. Human beings have some choice and, with that, responsibility.
  5. Human beings are intentional, aim at goals, are aware that they cause future events, and seek meaning, value and creativity. (Association for Humanistic Psychology in Britain)

The individual, and her life and experience, is something that cannot be reduced to ‘factors’ in the same way as a machine can be reduced to its parts, nor isolated from her context. The individual cannot be fully generalised, explained or predicted, since every person is quite different, even if we have broadly similar capabilities, limitations, and needs. Importantly, we also have responsibility, borne out of our goals, intentions and choices. This responsibility is something that professional human factors scientists and practitioners are often nervous about approaching, and may deploy reductionism, externalisation or obfuscation to put responsibility ‘in context’ (this is sometimes at odds with others such as front-line practitioners, patients and their families, management and the judiciary, who perceive these narratives as absolving or sidestepping individual responsibility; see also just culture regulation).

Unfortunately, these possible upsides to human-factors-as-the-human-factor are more imaginary than real, since the term itself is rarely used in this way in practice.

The Bad

In use, ‘the human factor’ is loaded with simplistic and negative connotations about people, almost always people at the sharp end. ‘The human factor’ usually frames the person as a source of trouble – an unreliable and unpredictable element of an otherwise (imagined to be) well-designed and well-managed system. It comes with a suggestion that safety problems – and causes of accidents – can be located in individuals; safety (or rather, unsafety) is an individual behaviour issue. For example, Kaspersky’s blog post ‘The Human Factor in IT Security: How Employees are Making Businesses Vulnerable from Within’ repeatedly uses adjectives such as ‘irresponsible’ and ‘careless’ to describe users. That is not to say that people are never careless or irresponsible, since we observe countless examples in everyday life, and the courts deal with many in judicial proceedings, but the question is whether this is a useful way to frame human interaction with systems in a work context. In the press, ‘the human factor’ is often used as a catch-all ‘explanation’ for accidents and breaches. It is a throwaway cause.

The human-factors-as-the-human-factor mindset tends to generate behaviour modification solutions to reduce mistakes – psychology, not ergonomics – via fear (threats of punishment or sanctions), surveillance (monitoring and supervision), or awareness raising and training (information campaigns, posters, training). The mindset may lead to sacking perceived ‘bad apples’, or removing people altogether (by automating particular functions). In some cases, each of these is an appropriate response (especially training, for issues requiring knowledge and skill), but they will tend not to be effective (or fair) without considering the system as a whole, including the design of artefacts, equipment, tasks, jobs and environments.

 


Invitation, Participation, Connection

The text in this post is from the Editorial of HindSight magazine, Issue 25, on Work-as-Imagined and Work-as-Done, available for download here.



Image: Nathan CC BY-SA 2.0 https://flic.kr/p/7BWCTs

If a friend asked you what makes your organisation and industry so safe, what would you say? Our industry is often considered ‘ultra-safe’, and yet we rarely ask ourselves what keeps it safe. What are the ingredients of safe operations?

When we put this question to operational controllers as part of the EUROCONTROL safety culture programme, it is revealing to hear how far outside of the ops room the answers extend. Operational work is of course done by operational people, but it is supported by a diverse range of people outside of the ops room: engineers and technicians, AIS and meteo staff, safety and quality specialists, technology and airspace designers, HR and legal specialists, procedure writers and training specialists, auditors and inspectors, senior and middle managers, regulators and policy makers.

Each of the above has an imagination about operational work – as they think it is, as they think it should be, and as they think it could be. (Operational people also have some imagination about non-operational work!) We call this work-as-imagined. It is not the same as the reality of work activity: work-as-done. The degree of overlap depends on the effectiveness of interaction between operational and non-operational worlds.

This is important because non-operational imaginations produce regulations, policies, procedures, technology, training courses, airspace, airports, buildings, and so on. These need to be ‘designed for work-as-done’.

Designing for work-as-done requires that we bring together those who do the work and those who design and make decisions about the work. We have talked with over a thousand people, in hundreds of workshops, in over 30 ANSPs, to discuss work and safety. While there are some excellent examples of interaction and cooperation (e.g., new systems, procedures and airspace), there are also many examples of disconnects between work-as-imagined and work-as-done. Where this is the case, people have said to us that operational and non-operational staff rarely get together to talk about operational work.

With this issue of HindSight, we wish to encourage more conversations. But how? In their book Abundant Community, John McKnight and Peter Block suggest three ingredients of a recipe that can be used to bring people together.

Invitation

Think of the boundaries of your work community and your workplace. Is there a ‘welcome’ mat at the door, or a ‘keep out’ sign? Several barriers keep us apart:

  • Organisational barriers: Goals, structures, systems and processes that define and separate functions, departments and organisations.
  • Social barriers: ‘In-groups’ (us) and ‘out-groups’ (them), defined by shared values, attitudes, beliefs, interests and ways of doing things.
  • Personal barriers: Individual choices and circumstances.
  • Physical barriers: The design of buildings and environments.

We must look honestly at these barriers because by separating us they widen the gap between work-as-imagined and work-as-done. According to McKnight and Block, “The challenge is to keep expanding the limits of our hospitality. Our willingness to welcome strangers. This welcome is the sign of a community confident in itself.” Hospitality is the bedrock of collaboration.

How can we reduce the separating effects of organisational, social, personal and physical barriers, and extend an invitation to others, inside and outside our ‘community’?

Participation

The second ingredient is participation, of those at the ‘sharp end’ in work-as-imagined, and of those at the ‘blunt end’ in work-as-done. This requires:

  • Capability (useful knowledge, skills, and abilities);
  • Opportunity (the time, place and authorisation to participate); and
  • Motivation (the desire to participate and a constructive attitude) (C-O-M).

Together, we try to understand People, Activities, Contexts and Tools (P-A-C-T) – ‘as-found’ now, and ‘as-imagined’ in the future (C-O-M-P-A-C-T).

The capability lies within two forms of expertise. The first is field expertise, held by experts in their own work – controllers, pilots, designers, etc. The second is emergent expertise. It is more than the sum of its parts and only emerges when we get together and interact.

But who are ‘we’? In his book The Difference, Scott Page of the University of Michigan’s Center for the Study of Complex Systems reviews evidence about how groups with diverse perspectives outperform groups of like-minded experts. Not only does diversity help to prevent groups from being blindsided by their own mindsets; diverse and inclusive organisations and teams are also more innovative and generate better ideas. This diversity refers not only to inherited differences such as gender and nationality, but also to diversity of thought, experience and approach. Multiple perspectives, including outside perspectives, are a source of resilience. If you are a controller, imagine a supervisor from another ANSP’s tower or centre observing your unit’s work for a day or so, and discussing this with you, perhaps questioning some practices. They would likely see things that you cannot.

How can we increase diverse participation in the development of policies, procedures, and technology, and in the understanding of work-as-done?

Connection

Among your colleagues, you can probably pick out a small number who are exceptionally good at connecting people. According to McKnight and Block, these connectors typically: are well connected themselves; see the ‘half-full’ in everyone; create trusting relationships; believe in their community; and get joy from connecting, convening and inviting people to come together.

Connectors know about people’s gifts, skills, passions – their capabilities – even those at the edge of the community. They know how to connect them to allow something bigger to emerge. They have an outlook based on opportunities. They have a deep motivation to improve things. They can sometimes be found at the heart of professional associations. People turn to them for support. Connectors are as valuable as the most distinguished experts.

Some people naturally have a capacity for making connections, but each of us can discover our own connecting possibility to help improve work-as-imagined and work-as-done.

Who are the connectors in your community, and how can they and you help to improve and connect work-as-imagined with work-as-done?

In this issue, you will read about work-as-imagined and work-as-done from many perspectives. In reading the articles, we invite you to reflect on how we might work together to bridge the gaps that we find.


Shorrock, S. (2017). Editorial: Invitation, participation, connection. HindSight, Issue 25, Summer 2017, EUROCONTROL: Brussels.


Just Culture in La La Land

Photo: Steven Shorrock CC BY-NC-SA 2.0 https://flic.kr/p/Rpf4za

It was always going to happen.

The wrong Best Picture winner was read out live on air at The Oscars. Someone had to take the blame. Attention first turned to Warren Beatty and Faye Dunaway. They, after all, ‘touched it last’. But they had mitigating circumstances; they were given the wrong envelope. In any case, and perhaps more to the point, they are unsackable.

And so we go back a step, and ask who gave the wrong envelope? Now we find our answer: the PricewaterhouseCoopers auditors Brian Cullinan and Martha Ruiz. Both were sacked from the role of overseer shortly after the mistake.

Three key charges are levelled against Cullinan. First, he gave the wrong envelope, confusing the right envelope and the spare envelope for an award just given. Second, Cullinan posted a photo of Emma Stone to his Twitter account just before the fatal mistake. Third, when the wrong Best Picture winner was read out, he didn’t immediately jump into action. And neither did Ruiz.

They had one job to do. They had one job! And they messed up.

So what should be the response? The relevant concept here is ‘just culture’. In his book ‘Just Culture‘, Sidney Dekker says that “A just culture is a culture of trust, learning and accountability“.  He outlines two kinds of just culture.

Retributive Just Culture

The first kind of just culture is a retributive just culture. According to Dekker, this asks:

  • Which rule is broken?
  • Who did it?
  • How bad was the breach, and what should the consequences be?
  • Who gets to decide this?

This is the typical form of just culture found in societies around the world, and has been for thousands of years. Most of us are familiar with this from being small children.

Dekker explains that with retributive just culture, we have three scenarios:

  • Honest mistake, you can stay.
  • Risk-taking, you get a warning.
  • Negligence, you are let go.

There are even commercialised algorithms to help organisations with this distinction and the appropriate response. David Marx’s Just Culture Algorithm advises to console true human errors, coach against risky behaviours, and ultimately discipline reckless behaviour.
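As a rough illustration of how mechanical this categorical logic becomes once a category has been chosen, here is a minimal sketch of the three-outcome scheme described above (console / coach / discipline). It is not David Marx’s actual algorithm; the categories and responses are paraphrased from the text.

```python
# Minimal sketch of the three-outcome retributive scheme described above.
# NOT David Marx's actual Just Culture Algorithm; categories and responses
# are paraphrased from the text of this post.
RESPONSES = {
    "honest mistake": "console - you can stay",
    "risk-taking": "coach - you get a warning",
    "negligence/recklessness": "discipline - you are let go",
}

def retributive_response(behaviour_category: str) -> str:
    """Map a post-hoc behaviour category to its prescribed response."""
    return RESPONSES.get(behaviour_category, "category not recognised")

print(retributive_response("honest mistake"))  # console - you can stay
```

The lookup is trivial; the contested, consequential work lies in assigning the category in the first place, as the rest of this post illustrates.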

If we look at the Oscars scenario, we can address the three charges made against Cullinan and Ruiz.

On the first charge – giving the wrong envelope – we can conclude that this falls into the ‘honest mistake’ category. This ‘honest mistake’ was influenced by a confusable envelope. In human factors and psychology, we have researched and catalogued such actions-not-as-planned for decades through diary studies, experiments, report analysis, interviews and naturalistic observation. We have many terms for such an error, common terms including ‘slip’ and ‘skill-based error’. In doctoral research that I began 20 years ago in the context of air traffic control, I developed a technique called ‘technique for the retrospective and predictive analysis of cognitive error’ (‘TRACEr’, download here). With TRACEr, we would probably classify this kind of error as Right action on wrong object associated with Selection error involving Perceptual confusion and Spatial confusion, which would be associated with a variety of performance shaping factors – aspects of the context at the time such as design, procedure, pressure and distraction. We’ve all done it, like when you pick up the wrong set of near-identical keys from the kitchen drawer, or the wrong identical suitcase from the airport luggage carousel. In stressful, loud, distracting environments and with confusable artefacts, the chances of such simple actions-not-as-planned increase dramatically.
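Purely as an illustration of what such a classification can look like once it is structured for a database, the envelope mix-up might be recorded something like this. This is a sketch only: the field names are my own shorthand, not TRACEr’s formal schema.

```python
# Illustrative sketch only: one way to record a TRACEr-style classification
# of the envelope mix-up. Field names are shorthand, not TRACEr's own schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ErrorClassification:
    error_mode: str                      # the observable form of the error
    error_type: str                      # the broad cognitive category
    psychological_mechanisms: List[str]  # why it plausibly happened
    performance_shaping_factors: List[str] = field(default_factory=list)

envelope_mix_up = ErrorClassification(
    error_mode="Right action on wrong object",
    error_type="Selection error",
    psychological_mechanisms=["Perceptual confusion", "Spatial confusion"],
    performance_shaping_factors=["design", "procedure", "pressure", "distraction"],
)

print(envelope_mix_up)
```

Such a record is convenient for aggregation and graphs, though – as argued elsewhere on this blog – little of the lived experience survives the translation.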

On the second charge, some might argue that posing for a photograph and sending tweets just prior to handing out the ‘Best Picture’ envelope is risk-taking, or even negligence. The TMZ gossip site wrote, “Brian was tweeting like crazy during the ceremony, posting photos … so he may have been distracted. Brian has since deleted the tweets.” Meanwhile, People reported an anonymous source who claimed that “Brian was asked not to tweet or use social media during the show. He was fine to tweet before he arrived at the red carpet but once he was under the auspices of the Oscar night job, that was to be his only focus.” The source reportedly continued, “Tweeting right before the Best Picture category was announced was not something that should have happened.” We can’t verify whether this is true and if so, who asked him not to use social media during the show. It is certainly sensible advice, bearing in mind what we know about distraction in safety critical industries and its role in accidents such as the 2003 train crash at Santiago de Compostela.

But perhaps the acid test for this assertion is whether people would have said anything about that photograph or tweet had everything gone according to plan. Just culture requires that we isolate the outcome from the behaviour. Applying the definition and principles of retributive just culture, what we are interested in is the behaviour. If the right envelope had been given, then the photo on Twitter would likely have been retweeted hundreds or thousands of times, and reported on various gossip websites and magazines, with no judgement from the press and public about the wisdom of such an activity. Instead, the photo would have been celebrated, and any deviation from alleged instructions ‘not to tweet or use social media during the show‘ would have been laughed away.

The third charge, levelled at both accountants, was that they failed to respond in a timely manner on hearing “La La Land”. The prospect of an erroneous announcement was clearly imaginable to Cullinan and Ruiz, who spoke to The Huffington Post about this scenario just a week or so before that fateful night: “We would make sure that the correct person was known very quickly,” Cullinan said. “Whether that entails stopping the show, us walking onstage, us signaling to the stage manager — that’s really a game-time decision, if something like that were to happen. Again, it’s so unlikely.” But could it be that, live on the night of the biggest show on earth, with the eyes of tens of millions upon them, they froze? Again, TRACEr might classify this as Omission – No decision – Decision freeze, with a variety of performance shaping factors such as stress and perhaps a lack of training (e.g., simulation or practice).

The ‘freeze’ response is the neglected sibling of ‘fight’ and ‘flight’, and occurs in traumatic situations. It’s the rabbit-in-the-headlights response. Many people involved in accidents and traumatic events have been known to freeze, including in aircraft accidents. It is a psychophysiological response and few of us can claim immunity. If we take this as an example of freeze, associated with confusion, shock and fear, then can we say this is an ‘honest mistake’? Even this seems not to fit well, but for the sake of retributive just culture process, let’s classify this omission as such (since it would seem hideously harsh to judge a psychophysiological response as ‘risk taking’ or ‘gross negligence’).

Now we have two counts of ‘honest mistake’ for Cullinan, and one for Ruiz, and one count for Cullinan where we are unsure of its classification. But if the tweet would not have been seen as a problem had the error not occurred, then no harsh personal responses are justified.

But they had one job! And such an important job (by Hollywood standards)! And it’s not as if they are losing their actual jobs or their liberty. It’s hard to feel sorry for two well-paid accountants, mingling with Hollywood celebs during one of the biggest shows on earth. And remember that the consequences for PwC are not insignificant. An unnamed source told ‘People’ that “The Academy has launched a full-scale review of its relationship with PwC but it is very complicated.” So surely cancelling their involvement is justified, along with a few stories in the media?

Put aside for one moment that the pair are celeb-mingling accountants, and think of them as Brian and Martha – two human beings with families and feelings and ordinary lives outside of this extraordinary day. Most of us have experienced some kind of humiliation in life. It is deeply unpleasant and the memory can resonate for months, years, or a lifetime. Most of us, though, have not felt this humiliation in front of tens of millions of people on live TV, played back by hundreds of millions afterwards. Most of us have not been the subject of thousands of global news stories – and over a million web pages – with front-page stories labelling us a ‘loser’ and a ‘twit’, and a ‘bungling bean counter’, with press hounding us and our families. Most of us have not been subject to hundreds of thousands of comments and memes on social media, nor have we needed bodyguards due to death threats. This is the reality for Brian Cullinan and Martha Ruiz.

Restorative Just Culture

There is another way, and according to Dekker this is restorative just culture. Dekker says that a restorative just culture asks:

  • Who is hurt?
  • What do they need?
  • Whose obligation is it to meet that need?
  • How do you involve the community in this conversation?

Here we might say that those hurt might include the producers of La La Land and Moonlight, though neither have given that impression since the event. We might also list The Academy and PwC, in terms of reputational damage.

But the individuals most hurt are surely Brian and Martha. What do they need? That we don’t know, but it is certain that their needs are not met by the response so far. Whose obligation is it to meet that need? Here one might say it is the obligation of The Academy and PwC, but we all have an obligation at least not to cause further harm.

The event may live on as an example to individuals and organisations in safety-critical, security-critical and business-critical industries when ordinary front-line workers get caught up in accidents that they never wanted to happen. Should we scapegoat pilots and air traffic controllers, or doctors and nurses, for good-will actions and decisions with unintended consequences? Or should we seek to understand and redesign the system to increase the chances of success in the future? The choice will influence whether front-line workers disclose their ‘honest mistakes’, or cover them up. In his book Black Box Thinking, Matthew Syed explains that “Failure is rich in learning opportunities for a simple reason: in many of its guises, it represents a violation of expectation. It is showing us that the world is in some sense different from the way we imagined it to be.”

The event is also a challenge to us, to society. Syed notes that “Society, as a whole, has a deeply contradictory attitude to failure. Even as we find excuses for our own failings, we are quick to blame others who mess up.” He continues, “We have a deep instinct to find scapegoats.” We are deeply hypocritical in our response to failure. He describes examples from healthcare and aviation, where, on reading or hearing about an accident, we feel “a spike of indignation“, “fury“, and a desire to stigmatise.

Paradoxically, the families of victims of accidents often have empathy for the front-line workers involved, and have a far more systemic view of the events than the general public, politicians, or – in many cases – official accident reports. This can be seen in the case of Martin Bromiley, whose wife died during a routine operation. Martin Bromiley went on to set up the Clinical Human Factors Group, and campaigns for just culture (see this video). It can also be seen in the families of those who died in the train crash at Santiago de Compostela in 2013, which was blamed on ‘human error’ both in the press and in the official accident report (Spanish version). Following a review of the official accident report by the European Railway Agency, Jesús Domínguez, chairman of the Alvia victims’ association, told The Spain Report that “it confirms that the sole cause is not human error and that the root causes of the accident still need to be investigated“. On 28 July 2013, Garzón Amo was charged with 79 counts of homicide by professional recklessness and an undetermined number of counts of causing injury by professional recklessness. The charges still stand today. (See Schultz, et al, 2016 for a more detailed treatment of the accident.)

Of course, we cannot compare the outcome of The Oscars with any event involving loss of life. But the point is that our corporate and societal responses are similar, and have recursive effects, as Syed explains:

It is partly because we are so willing to blame others for their mistakes that we are so keen to conceal our own. We anticipate, with remarkable clarity, how people will react, how they will point the finger, how little time they will take to put themselves in the tough, high-pressure situation in which the error occurred. The net effect is simple: it obliterates openness and spawns cover-ups. It destroys the vital information we need in order to learn.

A scapegoat, or safer systems? We can’t have both

So we have two options available to us. According to Dekker, retributive justice asks who was responsible, and sets an example where those responsible have crossed the line. Restorative justice asks what is responsible, then changes what led up to the incident, and meets the needs of those involved. Both are necessary, and both can work and result in fair outcomes for individuals and society, and better learning. But – especially outside of the judiciary – perhaps the latter is more effective and humane. If we want to learn and improve outcomes in organisations and society, we should focus on human needs and on improving the system.

The Just Culture in La La Land approach takes the retributive route, but gets it badly wrong. Blaming individuals for their actions-not-as-planned in messy environments has destructive and long-lasting effects on individuals, families, professions, organisations, industries and society as a whole.

In the end, we all have one job. Our job is to learn.

See also

Human Factors at The Oscars

Just culture: Who are we really afraid of?

Safety-II and Just Culture: Where Now?

Human Factors at The Fringe: My Eyes Went Dark

Never/zero thinking

‘Human error’ in the headlines: Press reporting on Virgin Galactic

Life After ‘Human Error’ – Velocity Europe 2014

‘Human error’: The handicap of human factors, safety and justice


Human Factors at The Oscars


Photo: Craig Piersma CC BY-NC-ND 2.0 https://flic.kr/p/8NyHL6

“An extraordinary blunder”

It has variously been described as “an incredible and almost unbelievable gaffe” (Radio Times), “the greatest mistake in Academy Awards history” (Telegraph), “an extraordinary blunder…an unprecedented error” (ITV News), “the most spectacular blunder in the history of the starry ceremony” and “the most awkward, embarrassing Oscar moment of all time: an extraordinary failure” (Guardian).

It was, of course, the Grand Finale of the Oscars 2017.

Faye Dunaway and Warren Beatty are all set to announce the Best Picture winner. Beatty begins to read out the winner’s card, but looks visibly puzzled, pausing and looking in the envelope to see if there is anything else that he’s missed. “And the Academy Award…”. He pauses and looks in the envelope again. “…for Best Picture“. He looks at Dunaway, who laughs “You’re impossible!”, then hands the card to her. Dunaway, perhaps assuming this is all for effect, simply reads out what she sees, and announces “La La Land!“.

Music sounds and a narrator gives a 17-second spiel about the film: “La La Land has fourteen Oscar nominations this year, and is tied for the most nominated movie in Oscar history, winning seven Oscars…

The La La Land team exchange embraces and walk to the stage. Jordan Horowitz, a producer, delivers the first thank-you speech. Everything looks normal. But as the second and third thank-you speeches are being delivered, there is visible commotion. A member of the Oscars production team takes back the envelope that has been given to the La La Land producers.

The winner’s envelope is, in fact, the envelope for Best Actress, just given to La La Land’s Emma Stone. Behind the producers, the PricewaterhouseCoopers overseers – Brian Cullinan and Martha Ruiz – are on stage, examining the envelopes.

At the end of his speech, Producer Fred Berger says nervously: “We lost, by the way”. Horowitz takes over, “I’m sorry, there’s a mistake. Moonlight, you guys won Best Picture“. Confused claps and cries ensue. “This is not a joke“, Horowitz continues. Beatty now has the right card, but Horowitz takes it out of Beatty’s hand and holds it up to show the names of the winning producers.

Beatty tries to explain, and is interrupted by host Jimmy Kimmel: “Warren what did you do?!“. Beatty continues, “I want to tell you what happened. I opened the envelope and it said, ‘Emma Stone – La La Land’. That’s why I took such a long look at Faye and at you. I wasn’t trying to be funny.” Horowitz hands his Oscar to Barry Jenkins, Moonlight’s director.

It was “the first time in living memory that such a major mistake had been made” (Reuters). The accountancy firm PricewaterhouseCoopers has apologised and promised an investigation. In a statement, they said, “The presenters had mistakenly been given the wrong category envelope and when discovered, was immediately corrected. We are currently investigating how this could have happened, and deeply regret that this occurred. We appreciate the grace with which the nominees, the Academy, ABC, and Jimmy Kimmel handled the situation”.

Such a mistake, in an ordinary setting, is usually quite uneventful. Similar sorts of things happen every day. The only thing that is “incredible“, “spectacular” and “extraordinary” is the context. It is worth, then, looking a little deeper at this extraordinary event, and considering how similar sorts of  events are played out in many ordinary, but critical, contexts.

Design

The design of the envelopes for the various Oscar awards is identical. The only difference between the envelopes is the text that indicates the category. There is no other means of coding (e.g., colour, pattern) to indicate any difference. Several industries have realised the problem with this approach, and in some ways this can be considered the beginnings of the discipline of human factors and ergonomics: “A seminal study that set the agenda for the scientific discipline of human factors was by the experimental psychologists, Fitts and Jones (1947), who adapted their laboratory techniques to study the applied problem of ‘pilot error’ during WWII. The problem they faced was that pilots of one aircraft type frequently retracted the gear instead of the flaps after landing. This incident hardly ever occurred to pilots of other aircraft types. They noticed that the gear and flap controls could easily be confused: the nearly identical levers were located right next to each other in an obscure part of the cockpit” (van Winsen and Dekker, 2016) .

This problem still exists today in settings far more important than The Oscars, but far less newsworthy…until disaster strikes. A notable example is medicine packaging, where packages and labels for different drugs or doses can look very similar. Many packages and labels require users to force attention onto small details of text, perhaps with the addition of a small area of colour which, on its own, is quite inconspicuous. It is asking a lot of people to make critical – sometimes life-and-death-critical – decisions based on small design features. This is in addition to drug names that look alike or sound alike, such as Aminophylline and Amitriptyline, or Carbamazepine and Chlorpromazine, or Vinblastine and Vincristine.
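As a rough illustration of how one might begin to screen for confusable names, here is a minimal sketch using simple string similarity. This is not a validated method: real look-alike/sound-alike assessment would also consider phonetics, packaging, dose strings and clinical context.

```python
# A minimal sketch (not a validated method): comparing drug names with simple
# string similarity as one crude input to a look-alike/sound-alike review.
from difflib import SequenceMatcher
from itertools import combinations

drug_names = [
    "Aminophylline", "Amitriptyline",
    "Carbamazepine", "Chlorpromazine",
    "Vinblastine", "Vincristine",
]

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two names (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Print all pairs, most similar first; a human reviewer decides what counts as risky.
pairs = sorted(combinations(drug_names, 2), key=lambda p: similarity(*p), reverse=True)
for a, b in pairs:
    print(f"{a} / {b}: {similarity(a, b):.2f}")
```

Even a crude screen like this makes the design point: the differences that matter can be a few characters long, which is exactly why coding by shape, colour and size is needed.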

Experience of human factors suggests a number of coding methods (e.g., shape, colour, size) that, used appropriately, can help to make vital distinctions. There are also several design guidelines for medicines by NHS NPSA (2007) and the European Medicines Agency (2015). In human factors/ergonomics, these are used as part of an iterative human-centred design method that understands stakeholders and context, identifies user needs, specifies design requirements, produces prototypes, and tests them.

In the absence of this process, what is amazing is not that such errors occur, but that they do not occur much more often than they do. Because it happens fairly infrequently, when it does happen it is often (and unhelpfully) branded ‘human error’. But this is not simply a problem of ‘human error’. It is a problem of design, where form (such as branding and aesthetics) so often trumps function. As Hollnagel (2016) states, “The bottom line is that the artefacts that we use, and in many cases must use, should be designed to fit the activity they are intended for“. Form-over-function design places the human in a position where they have to bridge the gap between form and function every time they use an artefact.

Safeguards

For the Oscars, two identical sets of the winners cards are made for ‘safety purposes’. These duplicate envelopes are held in the wings in case anything should go wrong with a presenter or an envelope. In this case, it may be that the duplicate of the Best Actress award, which had just been announced, was handed to Beatty as he walked out to announce the Best Picture winner.

Safeguards feature in most safety critical industries, and are often the result of a risk assessment that specifies a risk control for an identified risk. But the risk assessment process is often a linear cause-effect process, and it often stops at the risk control. And risk controls can have unintended consequences and introduce new risks. Consider this example in the context of aviation and air traffic control:

In early 2014, the UK experienced a prolonged period of low atmospheric pressure. At the same time, there was an unusual cluster of level busts [where aircraft go above or below the flight level or altitude instructed by ATC] at the transition altitude, which were thought to be linked to incorrect altimeter setting on departure into the London TMA [London airspace].

Level busts have been, and remain, a key risk in NATS operation. Longer-term strategic projects, such as the redesign of the London TMA and the raising of the Transition Altitude, are expected to provide some mitigation. However, to respond tactically to the perceived trend in the short-term, it was decided to issue a Temporary Operating Instruction (TOI) to controllers.

The TOI required the inclusion of additional phraseology when an aircraft was cleared from an altitude to a Flight Level during low pressure days. The additional phraseology was “standard pressure setting” e.g. “BigJet123, climb now FL80, standard pressure setting”. The change was designed to remind pilots to set the altimeter to the standard pressure setting (1013 hPa) and so reduce level busts associated with altimeter setting. As this phrase was deemed to be an instruction, it was mandatory for flight crews to read back this phrase.

The TOI was subject to the usual procedural hazard assessment processes and implemented on 20 February 2014 on a trial basis, with a planned end date of 20 May 2014, after which the trial results would be evaluated. The change was detailed in Notices to Airmen (NOTAMs).

During the first day of implementation, several occurrence reports were received from controllers, who noted that flight crews did not understand the meaning of the phraseology, and did not read back as required. This led to additional radio telephony to explain the instruction, and therefore additional workload and other unintended consequences.

Extract from case study by Foster, et al, in EUROCONTROL (2014). 

Every industry has many examples of ‘safeguards gone bad’. We often fail to understand how such changes change the context and introduce secondary problems.

Decision making under uncertainty

Beatty is standing there, with the eyes of tens of millions of viewers upon him. He is being recorded for perpetuity, for viewing by hundreds of millions more. He has to make a decision about an announcement, which will feel like a gold Olympic medal to a few producers. But he isn’t sure what’s going on. As Beatty  explained, “I opened the envelope and it said, ‘Emma Stone – La La Land’. That’s why I took such a long look at Faye and at you. I wasn’t trying to be funny“.

Here we cannot be certain what was going through Beatty’s mind, but could it be that – live on one of the most important TV events in the world – Beatty did not want to voice his confusion and uncertainty? He appeared visibly puzzled and gave the envelope to Dunaway to read out the ‘winner’. Dunaway could not have known about Beatty’s thoughts, since his behaviour could easily have been a time-filler or fumbling joke, and of course it made sense to her to simply read what she saw: “La La Land“.

When under pressure, any delay can have associated costs. For Beatty, asking for clarification would have meant an awkward period of filler, a clumsy live-on-air check of envelopes, perhaps a loss of advertising time. In a state of confusion and self-doubt, perhaps it made sense to say nothing and pass the confusing artefact to someone else.

In many safety-critical activities, decisions are made under uncertainty. The information and situation may be vague, conflicting or unexpected. In some cases, there is a need to signal confusion or uncertainty, perhaps to get a check, or to ask for more time. It can seem hard for us to give voice to our uncertainty in this way, especially under pressure. When someone has a command position – in an operating theatre, cockpit, or at the Oscars – it can be difficult for that person to indicate that they are not sure what is going on. This has played out in several accidents, and indeed in everyday life. But sometimes, the most powerful phrase may be something along the lines of, “I do not understand what is going on”. This identifies a problematic situation and opens the door for other members of the team to help problem-solve. This kind of intervention is part of many training programmes for ‘team resource management’ (by whatever name), and can help everyone involved – no matter what their formal position – to voice and resolve their doubts, uncertainties and concerns.

It’s just an awards show

The events of Oscars 2017 will be emblazoned forever on the minds of participants and aficionados. But it will also soon be a feature of a trivia game or TV show. As host Jimmy Kimmel said “Let’s remember, it’s just an awards show.” But for those who have to put up with the same sorts of problems every day, it’s much more than that. In many industries, people help to ensure that things go well despite other aspects of the system and environment in which they work. For the most part, the human in the system is less like a golden Oscar, and more like a Mister Fantastic, using abilities of mind and body to connect parts of systems that only work because people make them work. This aspect of human performance in the wild is usually taken for granted. But in the real world, people create safety. And for that, they deserve an Oscar.

References

EUROCONTROL (2014). Systems Thinking for Safety: Ten Principles. A White Paper. Brussels: EUROCONTROL Network Manager, August 2014. Authors: Shorrock. S., Leonhardt, J., Licu, T. and Peters, C.

Hollnagel, E. (2016). The Nitty-Gritty of Human Factors (Chapter 4). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

van Winsen, R. and Dekker, S. (2016). Human Factors and the Ethics of Explaining Failure (Chapter 5). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

See also

Just Culture in La La Land

‘Human error’ in the headlines: Press reporting on Virgin Galactic

Life After ‘Human Error’ – Velocity Europe 2014

‘Human error’: The handicap of human factors, safety and justice
The HAL 9000 explanation: “It can only be attributable to human error”
Occupational Overuse Syndrome – Human Error Variant (OOS-HEV)
‘Human error’: Still undefined after all these years


The Archetypes of Human Work: 7. Defunct

This is the seventh and last in a series of posts on The Archetypes of Human Work, which are based on the interactions or relationships between The Varieties of Human Work. For an introduction, see here.

The seven archetypes are:

  1. The Messy Reality
  2. Congruence
  3. Taboo
  4. Ignorance and Fantasy
  5. Projection
  6. P.R. and Subterfuge
  7. Defunct (this Archetype)

Each archetype includes a number of examples (currently healthcare-related). If you have further examples – from any industry – please provide an example as a comment or get in touch. More examples will be added over time.

Archetype 7: Defunct


Composition: work-as-prescribed but not as-done. May or may not be as-imagined or as-disclosed.

Short description: Some forms of prescribed work are not enacted, or else drift into disuse, but are still officially in place. Some will imagine that these are in place, while others know or think they are not. However, the existence of the Defunct work may be used to judge actual activity.

What is it? 

Much human work exists in prescribed form, such as regulations, management systems, policies, procedures, guidelines, checklists, good practice, user interface dialogues, etc. Sometimes, this work-as-prescribed does not reflect the reality of work-as-done, which might be characterised as The Messy Reality. The prescribed work still exists, but in a form which is Defunct. Sometimes, this is just a temporary matter, where work-as-prescribed for some reason does not apply. Other times, work-as-prescribed may be permanently Defunct. Work-as-prescribed may even seem quite irrelevant; few would even think about it or discuss it, let alone follow it, especially at the front line of work, or even throughout an organisation or industry sector.

Why does it exist? 

It is often the case that Defunct designed work has been prescribed without adequate attention to the design process, often an efficiency-thoroughness trade-off at the blunt end. A thorough approach to design (of interfaces, procedures, checklists, etc) would require that: 1) the stakeholders (especially the users), system, activities and context are understood; 2) stakeholder needs are investigated and design requirements specified; 3) prototypes are developed; and then 4) prototypes are tested. The testing would reveal any flaws in the implementation of this process, and thus there would be iterative loops back to each stage. If the prototype (e.g., checklist) meets the users’ and other stakeholders’ needs, then we have a final step: 5) implementation. The whole process would be planned with appropriate resources allocated (expertise, time, etc). This is a thorough approach, known as human-centred design (or ergonomics).

The ‘efficient’ approach, which is more common, is to go straight to step 5 (implementation), perhaps with some perfunctory consideration of step 1. Commercial off-the-shelf or pre-designed systems and artefacts are often purchased, which is understandable and often completely necessary. The problem is that neither the developer nor the purchaser may have completed the first four steps. Even if the developer has used some kind of human-centred design process, the new context and stakeholders (and therefore the stakeholders’ and users’ needs and design requirements) may well be very different. Since there is no testing, feedback is gathered in real operations, by which time it is too late. Local adaptation of the artefact (e.g., checklist, user interface dialogue) to the users’ needs may be impossible, prohibitively expensive or impractical.

People at the sharp end are now faced with a Catch-22. Either they comply with work-as-prescribed (Congruence) or they find another unprescribed solution (The Messy Reality) and the work-as-prescribed is Defunct. In either case, work-as-done may have unintended and unforeseen consequences.

Even with human-centred design, work-as-prescribed may fall into disuse. Such cases are often a mystery to those at the blunt end, and even to many at the sharp end. This tends to happen when the work-as-prescribed is not understood, either in its details or its purpose. In such cases, continuous monitoring and discussion of work-as-done is likely to be helpful, with appropriate adjustment and education where necessary.

There may also be cases where work-as-prescribed is simply not annulled or abolished when it should be. Many organisations and governments have numerous policies, procedures, regulations, laws and so on that remain officially in place, but that no-one imagines are in use. (British law is replete with such laws. For instance, Section 54 of the Metropolitan Police Act 1839 makes it an offence to carry a plank of wood on a pavement.)

Shadow side

Many of the problems associated with the Defunct archetype concern the nature of work-as-done and work-as-imagined, and so are associated with other archetypes, especially The Messy Reality and Ignorance and Fantasy.

In some cases, work-as-prescribed is Defunct only in particular circumstances. This was the case with the QF32 engine failure. The Airbus A380 ECAM checklists could not be followed as prescribed. In such cases, the people in control are deep into The Messy Reality and have to use their judgement and experience to find alternative solutions to the problems that they face. If appropriate training is not provided to help deal with such exceptional events, then the assumption that work-as-prescribed is universally safe becomes a particular liability.

In other cases, work-as-prescribed is more or less permanently Defunct. This presents some different problems, again mostly associated with other archetypes. A particular problem concerns the consequences of not working to rule. Gaps between work-as-prescribed and work-as-done may be the basis for disciplinary and regulatory or legal action against individuals and organisations. In some cases, such action may be unfair and vindictive, for instance when Defunct rules are used as a tool for workplace bullying.

Finally, an obvious problem with this archetype is that the Defunct work might actually represent good practice, with benefits for safety, health, or other goals. In this case, we need to try to understand why the work-as-prescribed failed to make it into the reality of work-as-done.

Examples (Healthcare)


Of the 2184 policies, procedures and guidelines (PPGs) in my organisation, 28% are currently out of date and may therefore not reflect current practice. More interesting still are the nearly 19% of PPGs that have been opened fewer than five times in total, including by their authors. These documents are often written to meet the requirements of external agencies, with the idea that not having a policy leaves the organisation vulnerable to criticism. They remain unopened, unused and unrelated to daily work, but may be used after incidents as a form of organisational protection: “yes, we had a policy for that”.

Carl Horsley, Intensivist, @horsleycarl


In operating theatres that use lasers, certain precautions, rules and safety measures have to be in place. Part of this is to have a risk assessment and a standard written laser protection policy. This risk assessment is normally carried out by a laser protection supervisor from a distant site who has no knowledge of local practice. In addition, it tends to be written when a new laser is purchased and is then never updated. While work-as-imagined would be following the policy to the letter, if the policy is impractical for the local use of the laser, the local team will tend to develop workarounds (The Messy Reality). When there is a site visit by the laser protection supervisor, however, work-as-disclosed will follow work-as-imagined, as the supervisor is reassured that everyone follows all the rules to the letter (P.R. and Subterfuge). If a laser protection incident does occur, however, the local team will be held to account against the Defunct laser protection rules.

Craig McIlhenny, Consultant Urological Surgeon, @CMcIlhenny


When the surgical team book a patient for theatre, they are supposed to discuss this with the anaesthetic team, to explain the indication for surgery, the degree of urgency and any medical conditions the patient has. The anaesthetic team should therefore be a central point of contact, aware of all the patients waiting for theatre, to help with appropriate prioritisation. In reality, this only happens if the surgical team happen to see an anaesthetist when they book the case. More often than not, cases are “booked” with no discussion with the anaesthetist, and often the cases are not ready for theatre (they may need scans first, for example) or may not even need an operation. This only becomes obvious when the anaesthetist goes to review the patient, or perhaps even later. Despite many organisations having guidelines about this, it still seems to happen.

Emma Plunkett, Anaesthetist, @emmaplunkett


 


The Archetypes of Human Work: 6. P.R. and Subterfuge

This is the sixth in a series of posts on The Archetypes of Human Work, which are based on the interactions or relationships between The Varieties of Human Work. For an introduction, see here.

The seven archetypes are:

  1. The Messy Reality
  2. Congruence
  3. Taboo
  4. Ignorance and Fantasy
  5. Projection
  6. P.R. and Subterfuge (this Archetype)
  7. Defunct

Each archetype includes a number of examples (currently healthcare-related). If you have further examples – from any industry – please share them as a comment or get in touch. More examples will be added over time.

Archetype 6: P.R. and Subterfuge

[Figure: Archetype 6 – P.R. and Subterfuge]

Composition: work-as-disclosed and often as-prescribed, but not as-done. May or may not be as-imagined by the discloser. 

Short description: This is what people say happens or has happened, when this does not reflect the reality of what happens or happened. What is disclosed will often relate to what ‘should’ happen according to policies, procedures, standards, guidelines, or expected norms, or else will shift blame for problems elsewhere. What is disclosed may be based on deliberate deceit (by commission or omission), or on Ignorance and Fantasy, or something in between… The focus of P.R. and Subterfuge is therefore on disclosure, to influence what others think.

What is it?

Work-as-disclosed is what people say (in verbal or written form) about work-as-done by themselves or others, and is the dominant variety of human work in the P.R. and Subterfuge archetype. ‘P.R.’, in this context, could stand for ‘Public Relations’ or ‘Press Release’, which focus on disclosure but not necessarily reality. P.R. could also mean ‘Pre-Reality’ (disclosing that something is real before it really is real) or ‘Post-Reality’ (where “words don’t matter nearly as much as the intent, the emotion, the subtext…”, Seth’s Blog). It might also be seen as what is now called ‘alternative facts’ and fake news. P.R. and Subterfuge is commonly associated with politicians, spin doctors, lawyers, lobbyists, reporters, public relations specialists, sales people, and advertisers, but will be familiar to most, to some degree.

P.R. and Subterfuge tends to concern what in-group members say about work-as-done to out-group members. It is especially evident when people have to disclose the circumstances of failures or compliance with regulations, management systems, policies, procedures, guidelines, checklists, good practice, etc. to internal specialists (e.g., auditors, investigators, competency assessors, doctors, HR, senior managers) or outside agencies, organisations or individuals (e.g., regulators, supervisory bodies, professional associations, judiciary, journalists, citizens, interfacing organisations). It includes what is said or written, and what is not, in audits, investigations, inquiries, press releases, interviews, freedom of information requests, corporate communications, social media, etc.

P.R. and Subterfuge may involve varying levels of deception. Generally, where the consequences of disclosure matter, and unless the other party is trusted, people will tend to describe the work that they do in a way that accords with work-as-prescribed or with (what is thought to be) work-as-imagined by the other party. In some cases, the difference between work-as-disclosed and work-as-done in P.R. and Subterfuge is very much deliberate, from minor omission to large-scale cover-up. In such cases, a partner archetype will often be found in Taboo; the aspects of work-as-done that cannot be discussed openly will be omitted from P.R. and Subterfuge. In other cases, there may be no intentional deceit on the part of the discloser, but what is disclosed may be fed by subterfuge by others.

Why does it exist?

There is often a need to describe or explain performance, both within organisations and outside them. What is said (work-as-disclosed) will clearly influence the work-as-imagined of these others, and this is the primary purpose of P.R. and Subterfuge. Because work-as-disclosed does not align with work-as-done, P.R. and Subterfuge will tend to feed the archetype Ignorance and Fantasy in others, inadvertently or deliberately.

The reasons for P.R. and Subterfuge are varied, but many can be grouped into two major categories: ignorance and fear. Often, those who are distant from work-as-done talk about it on the basis of Ignorance and Fantasy. Such individuals are reliant on their work-as-imagined, knowledge of work-as-prescribed, and work-as-disclosed by others. For instance, a corporate communications specialist, press officer, or senior manager will tend to know little about the specifics of how front-line workers actually work, and will rely on others for this information.

P.R. and Subterfuge can also be motivated by fear of the possible consequences should the reality of work-as-done be revealed. These consequences for individuals and organisations may relate to legal action, bad publicity, journalistic inquiry, regulatory investigation or sanctions, fines, cutbacks to funding or resources (e.g., staff, training), loss of reputation or status (individual or organisational), loss of profession, operating/professional licence or livelihood, and, in extreme cases, loss of liberty. The perceived risk of such consequences will tend to shape what is disclosed, what is not, and what else is said.

It may seem like P.R. and Subterfuge is the product of dishonest organisations and individuals, but a number of systemic features of organisations and industries can cultivate the archetype. Examples include aspects of regulatory practice, management control measures, procedural constraints, information flows, performance targets, incentive systems, punishments, and goals (especially goal conflicts). In the face of conditions or interventions that get in the way of the work (and potentially make it unsafe or otherwise ineffective), individuals and groups may justify P.R. and Subterfuge via a perceived higher purpose or goal. An illusion of Congruence may be created for out-groups, perhaps in response to the Defunct archetype, or to try to see off damaging interventions based on a superficial and inaccurate perception of work-as-done, such as cutbacks to resources (e.g., cutbacks to staff based on observation of a quiet period) or inappropriate constraints (e.g., procedural diktats based on one incident). P.R. and Subterfuge may therefore offer perceived benefits, protecting people from unwanted and potentially damaging outside influence or intervention that does not recognise the reality of work.

Shadow side

P.R. and Subterfuge, especially in its more deceptive form, involves a variety of ethical problems and dilemmas. More generally, it further increases the distance between work-as-imagined and work-as-done. Work-as-prescribed may become increasingly detached from reality, perhaps Defunct, thus invalidating many organisational and regulatory control measures, which are tied to work-as-prescribed. Work-as-done (and the associated risks) remains unknown to most stakeholder groups. This creates problems of safety, accountability and liability.

In many industries, organisations have been known to cover up work-as-done (especially The Messy Reality) when things have gone wrong (see this reported decades-long cover-up by DuPont, which has long promoted itself as a “world class safety leader”). In explaining failure, the activity of an organisation may be Taboo, and what is disclosed may differ markedly from what is found by an independent inquiry. In 2014, four DuPont workers died in a toxic gas leak (see here). U.S. Chemical Safety Board inspectors said the reasons for the accident related to the corporate safety culture nationwide, citing design flaws in DuPont’s complex pesticide production unit, inadequate gas detectors, outdated alarms and broken ventilation fans. DuPont, founded by chemist and industrialist Éleuthère Irénée du Pont de Nemours (1771-1834), the originator of the ‘zero injury’ philosophy, attributed the cause of the disaster to actions by rank-and-file employees. The tendency of organisations to point the finger at sharp-end workers is an example of P.R. and Subterfuge, which in turn perpetuates P.R. and Subterfuge among rank-and-file employees as they seek to protect themselves from blame: a spiral of subterfuge.

Examples (Healthcare)


Commissioners often use CQUINs (Commissioning for Quality and Innovation payments framework) to drive innovation and quality improvement in the NHS. In theory, the metrics relating to individual CQUINs are agreed between commissioners and clinicians. In practice, some CQUINs focus on meaningless metrics. A hypothetical example: a CQUIN target for treating all patients with a certain diagnosis within an hour of diagnosis is flawed due to a failure of existing coding systems to identify the relevant patients. Clinicians inform the commissioners of this major limitation and offer suggested improvements to the metrics. These suggested improvements are not deemed appropriate by the commissioning team because they deviate significantly from previously agreed definitions for the CQUIN. The clinicians are demotivated by the process of collecting meaningless data and are tempted to use gaming solutions to report the best performance. This situation is exacerbated by pressure from the management team within the NHS Trust, who recognise that failure to demonstrate adherence to the CQUIN key performance indicators is associated with a financial penalty. The management team listen to the clinicians and understand that the data collection is clinically meaningless, but insist that the clinical team collect the data anyway. The motivational driver to improve performance has moved from a desire to improve clinical outcomes to a desire to reduce financial penalties. The additional burden is carried by the clinical team, who are expected to collect meaningless data without any additional administrative or job plan support.

Anonymous, NHS paediatrician


It is one thing when you find out that your local hospital has suffered serious failures of care resulting in numerous preventable deaths, it is another when you find that hospital is involved, if not in blatant cover-up, in obscuring the extent of the problems. But when you find the organisations responsible for regulating hospitals have not only failed to maintain standards but are complicit in their own cover-ups then you can begin to despair whether you will ever get to the bottom of just how and why these tragedies occurred. [Extract from Joshua’s Story, by James Titcombe – used with permission.]

James Titcombe, Father of Joshua Titcombe, who died nine days after his birth at Furness General Hospital in Barrow in October 2008, @JamesTitcombe.


Healthcare staff often have to complete mandatory online modules, e.g. in fire safety, manual handling, blood transfusion. The modules have a pass rate (e.g. 80%) and sometimes a maximum number of attempts before the healthcare worker is locked out and has to discuss their poor performance with their line manager. Healthcare workers may then sit down in groups to share the correct answers and therefore pass the module.

Anonymous


The use of checklists for the prevention of Central Line Associated Bacteraemia (CLAB) is well described and has been taken up widely in the healthcare system. The purported benefits of the checklist include ensuring all steps are followed, as well as opening up communication between team members. After introducing the CLAB bundle into our Intensive Care Unit, we saw very high levels of reported checklist compliance, followed by the expected drop in our rates of infection, confirming the previously reported benefits. However, when we observed our staff, it became apparent that they were actually filling in the checklist retrospectively without watching the procedure, as they were busy with other tasks. The fall in the CLAB rate could therefore not have been due to the use of a checklist, and instead appears to be due to the use of “CLAB packs”. These put all required items for central line insertion into a single pack, thereby making it easier for staff to perform the procedure correctly.

Carl Horsley, Intensivist, @horsleycarl.


 
