Four Kinds of Human Factors: 4. Socio-Technical System Interaction

This is the fourth in a series of posts on different ‘kinds’ of human factors, as understood both within and outside the discipline and profession of human factors and ergonomics itself. The first post explored human factors as ‘the human factor’. The second post explored human factors as ‘factors of humans’. The third post explored human factors as ‘factors affecting humans’. This post explores a fourth kind of human factors: Socio-technical system interaction.


Polycom Practitioner Cart in Action by Andy G CC BY-SA 2.0 https://flic.kr/p/89hXG8

What is it?

This kind of ‘human factors’ aims to understand and design or influence purposive interaction between people and all other elements of socio-technical systems, concrete and abstract. For industrial applications, a good shorthand for this is ‘work’. The following definition, from the International Ergonomics Association, and adopted by the Human Factors and Ergonomics Society and Chartered Institute of Ergonomics and Human Factors and other societies and associations, characterises this view of human factors.

“Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.”

Note from this definition that ‘human factors’ is formally indistinguishable from ‘ergonomics’. While some people attempt to make a distinction between the terms, the relevant professional societies and associations do not, and typically instead recognise that the two terms have different origins (in the US and Europe, respectively). The terms are often used interchangeably by HF/E specialists, akin to ‘counselling’ and ‘psychotherapy’, with scientific journals (e.g., Ergonomics, Human Factors, Applied Ergonomics) using one term or the other but with the same scope. (The equivalence of the terms is sometimes a surprise to those who are not formally trained in human factors and ergonomics, especially those from anglophone backgrounds, since many languages use translations of ‘ergonomics’: ergonomia, ergonomie, ergonomija, eirgeanamaíocht, ergonoomika, ergonomika…)

It is relevant that ‘ergonomics’ derives from the Greek ergon (‘work’) and nomos (‘laws’). There are, in fact, very few accepted laws in human factors/ergonomics (aside from familiar laws such as Fitts’ Law and Hick’s Law), but many would acknowledge and agree on certain ‘principles’. It is also relevant that the origin of human factors and ergonomics was in the study of interaction between people and equipment, and how the design of this equipment influenced performance. Notably, Fitts and Jones (1947) analysed ‘pilot error’ accidents and found that these were really symptoms of interaction with aircraft cockpit design features. For instance, flap and gear controls looked and felt alike and were co-located (a problem that has largely been solved in cockpits but remains in pharmacy, where many medicines look alike).
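For reference, the two laws just mentioned have simple and widely used formulations, with a and b as empirically fitted constants: Fitts’ Law (in its Shannon formulation) gives the time to move to a target as MT = a + b log₂(D/W + 1), where D is the distance to the target and W is its width; Hick’s Law (or the Hick–Hyman Law) gives choice reaction time as RT = a + b log₂(n + 1), where n is the number of equally likely alternatives.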

The beginnings of human factors and ergonomics, then, focused not on the human or the factors that affect the human per se, but on interaction, and how context shapes that interaction. If we ignore context, ‘factors of humans’ and ‘factors that affect humans’ become less problematic. If I turn on the wrong burner on my stove (which I do, about 30-40% of the time), it is not a problem. I simply turn it off, and now I know the correct dial to turn. If I want to be sure, I can bend down to look at the little diagram, but often I can’t be bothered. If an anaesthetist presses the wrong button, however, she might inadvertently turn off the power to a continuous-flow anaesthetic machine because of a badly positioned power switch. If the consequence of my turning the wrong dial were more severe, I would bother to check the little diagram more often, but I would still make mistakes, mostly because the layout of the burners is incompatible with the layout of the dials, which look identical and are co-located.

This fourth kind of human factors is a scientific discipline, especially from an academic point of view, and a design discipline, especially from an applied point of view. But what we are designing is not so much an artefact or procedure, as the interactions between people, tools, and environments, in particular contexts. This design involves science, engineering and craft.

Human-factors-as-sociotechnical-interaction has a dual purpose to improve system performance and human wellbeing. System performance includes all system goals (e.g., production, efficiency, safety, capacity, security, environment). Human wellbeing, meanwhile, includes human needs and values (e.g., health, safety, meaning, satisfaction, comfort, pleasure, joy).

Who uses it?

This perspective – more nuanced than the other three – is most prevalent among professional human factors specialists/ergonomists, who are accredited, certified, registered or chartered by relevant societies and associations. However, it is also a natural fit with the work of systems engineers, interaction designers, and even anthropologists.

The Good

This kind of human factors takes account of human limitations and capabilities, influences on human performance, and human influences on system performance. It is rooted in:

  • systems thinking, including an understanding of system goals, system structure, system boundaries, system dynamics and system outcomes;
  • design thinking, and the principles and processes of designing for human use; and,
  • scientific understanding of people and the nature of human performance, and empirical study of activity.

This kind of human factors also makes system interaction and influence visible. It uses systems methods to understand and map this interaction, and how interaction propagates across scale, over time, as non-linear interactions within and between systems: legal, regulatory, organisational, social, individual, informational, technical, etc. While the ‘factors affecting humans’ perspective tends to be restricted to linear ‘resultant’ causation, the systems interaction perspective is alert to emergence.

As an example, what can seem like a simple and common-sense intervention from one perspective (e.g., a performance target, such as the four-hour accident and emergency target in UK hospitals) can create complex non-linear interactions and emergent phenomena across almost all aspects of the wider context noted above. (See the example from General Practitioner Dr Margaret McCartney in this post, concerning targets for dementia screening [examples are at the bottom of the post].)

Human factors as system interaction considers all stakeholders’ needs and system/design requirements, in the context of all relevant systems, including an intervention (or designed solution) as a system (e.g., a sat nav), the context as a system (e.g., vehicles, drivers, pedestrians, roads, buildings), competing systems (e.g., smartphone apps, signs), and systems that collaborate with the intervention system to deliver a function (e.g., satellites, power sources). Most failed interventions can be traced to a failed understanding of one or more of these systems, especially the context as a system. (See the example from surgeon Craig McIlhenny in this post on the installation of a fully computerised system for ordering tests [radiology requests, lab requests, etc.])

This kind of human factors is the only kind that really recognises the world as it is: complex interaction and interdependency across micro, meso, and macro scales. Also unlike the other three kinds of human factors, at least in terms of their connotations, human-factors-as-sociotechnical-interaction has a clear dual purpose: improved system performance and human well-being. It is one of the only disciplines to have this dual focus.

The Bad

This kind of human factors is the least intuitive of the four. It is much easier to restrict ourselves to discussion of ‘the human factor’, ‘factors of humans’ and ‘factors affecting humans’, since these tend to restrict us to isolated factors and linear cause-effect thinking, usually within a restricted system boundary. This kind of human factors is therefore the perspective that tends to be neglected in favour of simplistic approaches to ‘human factors’.

It is also the most difficult of the four kinds of human factors to address in practice. In safety management, for instance, the tools that are routinely in use tend not to address system interactions. Taxonomies focus on ‘factors of humans’ and ‘factors affecting humans’, but do not model system interactions. Fault and event trees map interactions but only in the context of failure, and the interactions typically are fixed (unchanging), linear (lacking feedback loops), and assume direct cause-effect relationships, with no consideration of emergence. There is an important distinction here between thinking systemically (thinking in an ordered or structured way) and systems thinking (thinking about the nature and functioning of systems).

When human factors is approached as the study and design or influence of system interaction, it is rare that simple, straightforward answers can be given to questions. The reason that “it depends” is so often the answer (usually an unwanted one) is that the answer to a question, the solution to a problem, or the realisation of an opportunity in a sociotechnical system does depend on many factors: the stakeholders (and their skills, knowledge, experience, etc), their activities, the artefacts that they interact with, the demand and pressure, resources and constraints, incentives and punishments, and other aspects of the wider context – informational, temporal, technical, operational, natural, social, financial, organisational, political, cultural, and judicial. Not all of these will always be relevant, but they need to be considered in the context of interactions across scale and over time.

It is fair to say that this kind of human factors is depersonalising. As we study, map and design system interaction, the person (‘the human factor’) can seem to be an anonymous system component, certainly less interesting than system interaction. Even tools that we use to try to capture this in design – such as personas – tend to depict imaginary people. So this kind of human factors can feel more like an engineering discipline than a human discipline. It is important that this be addressed in the way that human factors is practised, both in general interpersonal approach and via qualitative methods that aim at understanding personal needs, assets and experience. Systems thinking and design thinking must be combined with humanistic thinking.

Finally, as with the second and third kinds of human factors, this kind struggles with issues of responsibility and accountability (concepts that are subtly different in English, though not distinguished at all in many languages). Responsibility for system outcomes now appears to be distributed among complex system interactions, which change over time and space. Outcomes in complex sociotechnical systems are increasingly seen as emergent, arising from the nature of complex non-linear interactions across scale. But when something goes wrong, we as people, and our laws, demand that accountability be located. The nature of accountability often means that this must be held by one person or body. People at all levels – minister, regulator, CEO, manager, supervisor, front line operator – have choice. With that choice comes responsibility and accountability. A police officer chooses to drag a woman by the hair for trying to vote. A senior nurse chooses whether to bully junior nurses. A professional cyclist chooses to take prohibited drugs. A driver chooses whether to drink before driving, to drive without insurance, to drive at 60mph in a 30mph zone, or to send text messages while driving. There may well be contextual influences on all of these behaviours, but we make choices in our behaviour. In these kinds of cases, it is important that ‘systems thinking’ is not used to scatter such choices into the ether of ‘the system’, stripping people of responsibility and accountability. That would be the ruin of both systems thinking and justice.


Four Kinds of Human Factors: 3. Factors Affecting Humans

In the first post in this series, I reflected on the popularisation of the term ‘human factors’ and discussion about the topic. This has brought into focus various differences in the meanings ascribed to ‘human factors’, both within and outside the discipline and profession itself. The first post explored human factors as ‘the human factor’. The second post explored human factors as ‘factors of humans’. This third post explores another kind of human factors: Factors Affecting Humans.


17/52 : Tchernobyl – Chernobyl by Eric Constantine CC BY-NC 2.0 https://flic.kr/p/9C1C8N

What is it?

This kind of ‘human factors’ turns to the factors – external and internal to humans – that affect human performance: equipment, procedures, supervision, training, culture, as well as aspects of human nature, such as our capabilities and limitations. Factors affecting humans tend to include:

  • aspects of planned organisational activity (e.g., supervision, training, regulation, handover, communication, scheduling)
  • organisational artefacts (e.g., equipment, procedures, policy)
  • emergent aspects of organisations and groups (e.g., culture, workload, trust, teamwork, relationships)
  • aspects of the designed environment (e.g., airport layout, airspace design, hospital design, signage, lighting)
  • aspects of the natural environment (e.g., weather, terrain, flora, fauna)
  • aspects of transient situations (e.g., emergencies, blockages, delays, congestion, temporary activities)
  • aspects of work and job design (e.g., pacing, timing, sequencing, variety, rostering)
  • aspects of stakeholders (e.g., language, role)
  • aspects of human functions, qualities and states that affect performance (e.g.,
    • cognitive functions such as attention, detection, perception, memory, judgement and reasoning, decision making, motor control, speech;
    • physical functions and qualities such as strength, speed, accuracy, balance and reach;
    • physical, cognitive and emotional states such as stress and fatigue).

The following well-known definition from the UK Health and Safety Executive (1999) seems to emphasise the ‘factors that affect humans’ kind of human factors:

“Human factors refer to environmental, organisational and job factors, and human and individual characteristics, which influence behaviour at work in a way which can affect health and safety” (Health and Safety Executive, Reducing error and influencing behaviour HSG48)

Who uses it?

This kind of human factors is the most traditional in human factors guidance and courses, and so is familiar to human factors specialists. It naturally fits courses on human factors (as modules), texts on human factors (as chapters), and studies on human factors (which might consider specific factors as independent variables).

This kind of human factors is also of interest to safety specialists, who might use taxonomies to classify the ‘causal factors’ of incidents and accidents, or select ‘performance shaping factors’ as part of human reliability assessments.

It also suits the way that organisations tend to be organised (functionally, e.g. training, procedures, engineering) and so tends to make natural sense in an organisational context; it is obvious that the various factors affect behaviour. It is just not obvious how.

The Good

Some of the positive aspects of this kind of human factors are shared with the ‘factors of humans’ kind. One is the great body of knowledge available to help understand, classify, and predict or imagine these effects. The design of artefacts such as equipment, tools and procedures, as well as tasks, jobs and work systems, affects human performance in different ways. This understanding can therefore be applied to and integrated in the design of equipment, procedures, tools, regulations, roles, jobs, management systems, and so on.

The ‘factors affecting humans’ kind of human factors is also relatively easy to understand at a basic level. Most people seem to know that the design of artefacts (even simple ones, such as door handles, or more complicated ones such as self-assembly furniture instructions) affects our behaviour. The details of the effects are not obvious, but the existence of some effect is fairly obvious.

While the ‘factors of humans’ perspective goes down and in to the cognitive, emotional and physical aspects of human nature, the ‘factors affecting humans’ perspective extends also up and out into the system, environment and context of work. This acknowledges the influence of factors outside of humans on human performance, and therefore helps to explain it. ‘Human error’ is not usually ‘simple carelessness’, but a symptom of various aspects of the work situation. This acknowledges an important reality for any of us; our performance is subject to many factors, and many of these are beyond our direct control.

This kind of human factors therefore more clearly points to design as a primary means to influence performance and wellbeing, as well as instruction, training and supervision. The view of factors affecting humans also mirrors to some degree the way that organisations are designed and operated, as functional specialisms (e.g., training, procedures, design).

Together, ‘factors affecting humans’ and ‘factors of humans’ comprise what many would think of as ‘human factors’, especially staff and managers in organisations.

The Bad

Many of the downsides of the ‘factors of humans’ perspective on human factors are addressed by the ‘factors affecting humans’ perspective. But some other issues remain. One concerns the difficulty in understanding the influence of multiple, interacting factors affecting humans in the real work context. How do factors affect performance when those factors interact dynamically and in concert in the real environment, which is probably far messier than imagined?

In trying to understand performance, we tend to dislike the mess of complexity and instead prefer single-factor explanations. This can be seen in organisations, media, the judiciary, and even in science, which is one facet of human factors. But the effects of multiple interacting factors in messy environments are hard to extrapolate from experiments. Experiments tend to focus on each variable of interest (e.g., a new interface or shift system or a checklist; ‘independent variables’) while controlling, removing or ignoring myriad other factors that are relevant to work-as-done (e.g., readiness for change, culture, supervision, staffing pressures, unusual demand, history of similar interventions, resources available for implementation; ‘confounding variables’), in order to measure things of interest (e.g., time, satisfaction, errors; ‘dependent variables’). Even where we go beyond single-factor explanations, the effects of multiple, interacting factors affecting humans in real environments are hard to understand from reading about these factors or from factorial tools such as taxonomic safety databases. They are also hard or impossible to estimate with predictive tools, such as human-reliability assessments or safety risk assessments.
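As a rough illustration of how such predictive tools typically treat factors – one at a time, multiplicatively, and without interactions – here is a minimal sketch loosely modelled on a HEART-style adjustment of a nominal human error probability. The factor names and numbers are invented for illustration; real assessments draw on published generic task unreliabilities and error-producing conditions.

    # Minimal sketch of a HEART-style adjustment of a nominal human error
    # probability (HEP). The values below are invented for illustration.
    def adjusted_hep(nominal_hep, factors):
        # Each factor is a (max_effect, assessed_proportion) pair; its
        # multiplier is ((max_effect - 1) * assessed_proportion) + 1.
        # Note the built-in assumption: factors act independently and
        # combine multiplicatively, with no interaction between them.
        hep = nominal_hep
        for max_effect, assessed_proportion in factors:
            hep *= ((max_effect - 1) * assessed_proportion) + 1
        return min(hep, 1.0)  # cap at 1.0, since this is a probability

    # A routine task (nominal HEP 0.003) with two conditions, e.g. time
    # pressure and an unfamiliar interface (values invented):
    print(adjusted_hep(0.003, [(11, 0.4), (4, 0.2)]))  # ≈ 0.024

Whatever the merits of such a calculation, its structure assumes that factors combine independently and linearly – precisely the simplification that the messy, interacting reality of work-as-done tends to defy.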

A reductionist, factorial approach can hide system-wide patterns of influence and emergent effects. Factors can appear disconnected, when in reality they are interconnected. Influence appears linear, when it is non-linear. Effects appear resultant, when they are emergent. Wholes are split into parts. Information is analysed but not synthesised.

Hence, when a change is introduced, in the full richness of the real environment, surprises are encountered. The air traffic control flight data interface is fine in standard conditions but not for complex re-routings at short notice under high traffic load. The new individual roster system is good for staff availability but adversely affects teamwork. The checklist is completed but before the task steps have actually been completed. Interventions on factors affecting humans are designed and implemented but don’t work as imagined; they are less effective than predicted, have unintended consequences or create new unforeseen influences, changing the context in unexpected ways.

The direction of influence of ‘factors affecting humans’ is often assumed to be one-way (linear), as per the HSE definition above. But people also influence these influencing ‘factors’ in the context of a sociotechnical system. So the design of a shift system influences behaviour, but people also influence shift patterns (e.g., via shift swapping). Interfaces influence people, but people use interfaces outside of design intent. Feedback loops are hard to see with a fragmented and linear approach to human factors. These might sound like rather abstract or theoretical problems, but the examples above are just the first real ones that come to mind; there are many cases of interventions that fail in large part because factors are considered in a non-systemic and decontextualised way that is too far from the messy reality of work.

Additionally, when applied in a safety management context, the ‘factors affecting humans’ perspective is almost entirely negative. From a safety perspective, the positive influence of ‘factors affecting humans’ (and indeed ‘factors of humans’ and ‘the human factor’) is mostly ignored. What is it that makes people and organisations perform effectively to ensure that things go right? Safety management has little idea. Only the contribution of ‘factors’ to unwanted outcomes (real or potential) is usually considered. This can give human factors in safety a negative tone, reducing human activity to ‘causal factors’. Human factors (or ergonomics) is really about something much broader: improving performance and wellbeing, especially by design.

There can be something unintuitive and distancing about human factors viewed from a reductionist, factorial point of view. Perhaps it is partly that the narrative of real experience is lost amid the analysis. Consider textbooks, the initial source material for anyone learning human factors (or ergonomics) as a discipline. Relatively few human factors texts are organised around narrative. Instead, they are usually organised around ‘factors’. One of the rare examples of the narrative approach is Set Phasers on Stun by Steven Casey, while an example of the factorial approach is Human Performance: Cognition, Stress and Individual Differences, by Gerald Matthews, Stephen Westerman and Rob Stammers. Both are excellent in their own ways, but the latter is the default (and happens to be far less interesting to the wider audience). Rich narrative tries to recreate or bring to life lived experience and context, while a factorial or analytical approach deconstructs experience and context into concepts. (Again, an example is incident databases, which analyse factors extracted from multiple situations, partly with the intention of understanding factor prevalence across scale.)

Finally, but related to all of the above, this kind of human factors struggles with questions of responsibility (as with the ‘factors of humans’ perspective). At what point does performance become unacceptable (e.g., negligent)? How do we locate responsibility and accountability amid the ‘factors’? And if top management is responsible for those ‘factors’, then what happens when they move on? The ‘human factor’ perspective, while much misused, at least seems to acknowledge that human beings have some choice and, with that, responsibility. To those affected by situations involving harm (e.g., harmed patients and families, local communities affected by chemical exposure and oil spills), deconstructing the influences on behaviour, in an attempt to explain, may be seen as excusing unacceptable behaviour, sidestepping issues of responsibility and turning a blind eye to the dark sides of organisations, and even human nature.


Four Kinds of ‘Human Factors’: 2. Factors of Humans

In the first post in this series, I reflected on the popularisation of the term ‘human factors’ and discussion about the topic. This has brought into focus various differences in the meanings ascribed to ‘human factors’, both within and outside the discipline and profession itself. The first post explored human factors as ‘the human factor’. This second post explores another kind of human factors: Factors of Humans.


Ear by Simon James CC BY-SA 2.0 https://flic.kr/p/58bycz

What is it?

This kind of human factors focuses primarily on human characteristics, understood primarily via reductionism. Factors of humans include, for example:

  • cognitive functions (such as attention, detection, perception, memory, judgement and reasoning (including heuristics and biases), decision making – each of these is further divided into sub-categories)
  • cognitive systems (such as Kahneman’s dual process theory, or System 1 and System 2)
  • types of performance (such as Rasmussen’s skill-based, rule-based, and knowledge-based performance)
  • error types (such as Reason’s slips, lapses, and mistakes, and hundreds of other taxonomies, including my own)
  • physical functions and qualities (such as strength, speed, accuracy, balance and reach)
  • behaviours and skills (such as situation awareness, decision making, teamwork, and other ‘non-technical skills’)
  • learning domains (such as Bloom’s learning taxonomy) and
  • physical, cognitive and emotional states (such as stress and fatigue).

These factors of humans may be seen as limitations and capabilities. As with human-factors-as-the-human-factor, the main emphasis of human-factors-as-factors-of-humans is on the human – but on general constituent human characteristics, not on the person as an individual. The factors of humans approach acts like a prism, splitting human experience into conceptual categories.

This kind of human factors is emphasised in a definition provided by human factors pioneer Alphonse Chapanis (1991):

“Human Factors is a body of knowledge about human abilities, human limitations, and other human characteristics that are relevant to design.”

But Chapanis went on to say that “Human factors engineering is the application of human factors information to the design of tools, machines, systems, tasks, jobs, and environments for safe, comfortable, and effective human use.” He therefore distinguished between ‘human factors’ and ‘human factors engineering’. The two would probably be indivisible to most human factors practitioners today (certainly those who identify as ‘ergonomists’, i.e., designers), and knowledge and application come together as parts of many definitions of human factors (or ergonomics). Human factors is interested in these factors of humans, then, to the extent that they are relevant to design, at least in theory (in practice, the sheer volume of literature on these factors suggests otherwise!).

Who uses it?

Factors of humans have been researched extensively by psychologists (especially cognitive psychologists, and increasingly neuropsychologists), physiologists and anatomists, and ergonomists/human factors specialists. Human abilities, limitations and characteristics are therefore the emphasis of many academic books and scientific articles concerning human performance, applied cognitive psychology, cognitive neuropsychology, and human factors/ergonomics, and are the standard fare of such courses.

This kind of human factors is also of interest to front-line professionals in non-technical skills training, where skilled performance is seen through the lenses of decision making, situational awareness, teamwork, and communication.

The Good

Factors of humans – abilities, limitations, and other characteristics – must be understood, at least at a basic level, for effective design and management. Decades of scientific research have produced a plethora of empirical data and theories on factors of humans, along with a sizeable corpus of measures. Arguably, literature is far more voluminous for this kind of human factors than any other kind. We therefore have a sophisticated understanding of these factors. Much is now known from psychology and related disciplines (including human factors/ergonomics) about sustained attention (vigilance), divided attention, selective attention, working memory, long term memory, skilled performance, ‘human error’, fatigue, stress, and so on. Much is also known about physiological and physical characteristics. These are relevant to the way we think about, design, perform, and talk about, record or describe human work: work-as-imagined, work-as-prescribed, work-as-done and work-as-disclosed. Various design guidelines (such as the FAA Human Factors Design Standard, HF-STD-001) have been produced on the basis of this research, along with hundreds of HF/E methods.

This kind of human factors may also help people, such as front-line professionals, to understand their own performance in terms of inherent human limitations. While humanistic psychology emphasises the whole person, and resists reducing the person into parts, cognitive psychology emphasises functions and processes, and resists seeing the whole person. So while reductionism often comes in for attack among humanistic and systems practitioners, knowledge of limits to sustained attention, memory, judgement, and so on, may be helpful to better understand failure, alleviating the embarrassment or shame that often comes with so-called ‘human error’. Knowledge of social and cultural resistance to speaking up can help to bring barriers out into the open for discussion and resolution. So perhaps reductionism can help to demystify experience, help to manage problems by going down and in to our cognitive and physical make-up, and help to reduce the stigma of failure.

The Bad

Focusing on human abilities, human limitations, and other human characteristics, at the expense of the whole person, the context, and system interactions, comes with several problems, but only a few will be outlined here.

One problem relates to the descriptions and understandings that emerge from the reductive ‘factors of humans’ approach. Conceptually, human experience (e.g., of performance) is understood through one or more conceptual lenses (e.g., situation awareness, mental workload), which offer only partial and fragmented reflections of experience. Furthermore, measurement relating to these concepts often favours quantification. So one’s experience may be reduced to workload, which is reduced further to a number on a 10-point scale. The result is a fragmented, partial and quantified account of experience, and these numbers have special power in decision making. However, as humanistic psychology and systems thinking remind us, the whole is greater than the sum of its parts; measures of parts (such as cognitive functions, which are not objectively identifiable) may be misleading, and will not add up to form a good understanding of the whole. Understanding the person’s experience is likely to require qualitative approaches, which may be more difficult to gain, more difficult to publish, and more difficult for decision-makers to digest.

Related to this, analytical and conceptual accounts of performance with respect to factors of humans can seem alien to those who actually do the work. This was pointed out to me by an air traffic controller friend, who said that the concepts and language of such human factors descriptions do not match her way of thinking about her work. Human factors has inherited and integrated some of the language of cognitive psychology (which, for instance, talks about ‘encoding, storing and retrieving’, instead of ‘remembering’; cognitive neuropsychology obfuscates further still). So while reductionism may help to demystify performance issues, this starts to backfire, and the language in use can mystify, leaving the person feeling that their experience has been described in an unnatural and decontextualised way. Going further, the factors of humans approach is often used to feed databases of incident data. ‘Human errors’ are analysed, decomposed, and entered into databases to be displayed as graphs. In the end, there is little trace of the person’s lived experience, as their understandings are reduced to an analytical melting pot.

By fragmenting performance problems down to cognitive functions (e.g., attention, decision-making), systems (e.g., System 1), error types (e.g., slips, mistakes), etc, this kind of human factors struggles with questions of responsibility. At what point does performance become unacceptable (e.g., negligent)? On the one hand, many human factors specialists would avoid this question, arguing that this is a matter for management, professional associations, and the judicial system. On the other hand, many human factors specialists use terms such as ‘violation’ (often further divided into sub-types; situational violation, routine violation, etc) to categorise decisions post hoc. (Various algorithms are available to assist with this process.) To those caught up in situations involving harm (e.g., practitioners, patients, families), this kind of analysis, reductionism and labelling may be seen as sidestepping or paying lip service to issues of responsibility.

While fundamental knowledge on factors of humans is critical to understanding, influencing and designing for performance, reductionist (including cognitivist) approaches fail to shed much light on context. By going down and in to physical and cognitive architecture, but not up and out to context and the complex human-in-system interactions, this kind of human factors fails to understand performance in context, including the physical, ambient, informational, temporal, social, organisational, legal and cultural influences on performance. This problem stems partly from the experimental paradigm that is the foundation for most of the fundamental ‘factors of humans’ knowledge. This deliberately strips away most of the richness and messiness of real context, and also tends to isolate factors from one another.

Because this kind of human factors does not understand performance in context, it may fail to deal with performance problems effectively or sustainably. For instance, simple design patterns (general reusable solutions to commonly occurring problems) are often used to counter specific cognitive limitations. These can backfire when designed artefacts are used in natural environments, and the design pattern is seen as a hindrance to be overcome or bypassed (problems with the design and implementation of checklists in hospitals are an example). Another example may be found in so-called ‘human factors training’ (which, often, should be called ‘human performance training’). This aims to improve human performance by improving knowledge and skills concerning human cognitive, social and physical limitations and capabilities. While in some areas this has had success (e.g., teamwork), in others we remain severely constrained by our limited abilities to stretch and mitigate our native capacities and overcome system conditions (e.g., staffing constraints). Of course, in the absence of design change, training may also be the only feasible option.

A final issue worth mentioning here is that, more than any other kind of human factors, the ‘factors of humans’ kind has arguably been over-researched. Factors of humans are relatively straightforward to measure in laboratory settings, and related research seems to attract funding and journal publications. Accordingly, there are many thousands of research papers on factors of humans. The relative impact of this huge body of research on the design of real systems in real industry (e.g., road transport, healthcare, maritime) is dubious, but that is another discussion for another time.

References

Chapanis, A. (1991). To communicate the human factors message, you have to know what the message is and how to communicate it. Bulletin of the Human Factors Society, 34, 1-4.


Four Kinds of ‘Human Factors’: 1. The Human Factor

Over the last decade or so, the term ‘human factors’ has gained currency with an increasing range of people, professions, organisations and industries. It is a significant development, bringing what might seem like a niche discipline into the open, to a wider set of stakeholders. But as with any such development, there are inevitable differences in the meanings that people attach to the term, the mindsets that they bring or develop, and their communication with others. It is useful to know, then, what kind of ‘human factors’ we are talking about. At least four kinds seem to exist in our minds, each with somewhat different meanings and – perhaps – implications. These will be outlined in this short blog post series, beginning with the first: The Human Factor.

 


Steph Kelly, Air Traffic Controller at Heathrow Airport. NATS UK Air Traffic Control CC BY-NC-ND 2.0 https://flic.kr/p/eM7JHU

What is it?

The first kind of human factors is the most colloquial: ‘the human factor’. Human-factors-as-the-human-factor seems to enter discussions about human and system performance, usually in relation to unwanted events such as accidents and – increasingly – cybersecurity risks and breaches. It is rarely defined explicitly.

Who uses it?

As a colloquial term, ‘the human factor’ seems to be most often used by those with an applied interest in (their own or others’) performance. The term was the title of an early text on human factors in aviation (see David Beaty’s ‘The Human Factor in Aircraft Accidents’, originally published in 1969, now ‘The Naked Pilot: The Human Factor in Aircraft Accidents‘). It can be found in magazine articles concerning human performance by aviators (e.g., this series by Jay Hopkins in Flying magazine) and information security specialists (e.g., Kaspersky, Proofpoint). Journalists tend to use the term in a vague way to refer to any adverse human involvement. Aside from occasional books and reports on human factors (e.g., Kim Vicente’s excellent ‘The Human Factor: Revolutionizing the Way People Live with Technology‘), the term is rarely used by human factors specialists.

The Good

In a sense, ‘the human factor’ is more intuitively appealing than the term ‘human factors’, which implies plurality. It seems to point to something concrete – a person, a human being with intention and agency. And yet it also hints at something vague – mystery, ‘human nature’. Human-factors-as-the-human-factor might therefore be seen in the frame of humanistic psychology, reminding us that:

  1. Human beings, as human, supersede the sum of their parts. They cannot be reduced to components.
  2. Human beings have their existence in a uniquely human context, as well as in a cosmic ecology.
  3. Human beings are aware and aware of being aware – i.e., they are conscious. Human consciousness always includes an awareness of oneself in the context of other people.
  4. Human beings have some choice and, with that, responsibility.
  5. Human beings are intentional, aim at goals, are aware that they cause future events, and seek meaning, value and creativity. (Association for Humanistic Psychology in Britain)

The individual, and her life and experience, is something that cannot be reduced to ‘factors’ in the same way as a machine can be reduced to its parts, nor isolated from her context. The individual cannot be fully generalised, explained or predicted, since every person is quite different, even if we have broadly similar capabilities, limitations, and needs. Importantly, we also have responsibility, born of our goals, intentions and choices. This responsibility is something that professional human factors scientists and practitioners are often nervous about approaching, and may deploy reductionism, externalisation or obfuscation to put responsibility ‘in context’ (this is sometimes at odds with others such as front-line practitioners, patients and their families, management and the judiciary, who perceive these narratives as absolving or sidestepping individual responsibility; see also just culture regulation).

Unfortunately, these possible upsides to human-factors-as-the-human-factor are more imaginary than real, since the term itself is rarely used in this way in practice.

The Bad

In use, ‘the human factor’ is loaded with simplistic and negative connotations about people, almost always people at the sharp end. ‘The human factor’ usually frames the person as a source of trouble – an unreliable and unpredictable element of an otherwise (imagined to be) well-designed and well-managed system. It comes with a suggestion that safety problems – and causes of accidents – can be located in individuals; safety (or rather, unsafety) is an individual behaviour issue. For example, Kaspersky’s blogpost ‘The Human Factor in IT Security: How Employees are Making Businesses Vulnerable from Within’ repeatedly uses adjectives such as ‘irresponsible’ and ‘careless’ to describe users. That is not to say that people are never careless or irresponsible, since we observe countless examples in everyday life, and the courts deal with many in judicial proceedings, but the question is whether this is a useful way to frame human interaction with systems in a work context. In the press, ‘the human factor’ is often used as a catch-all ‘explanation’ for accidents and breaches. It is a throwaway cause.

The human-factors-as-the-human-factor mindset tends to generate a behaviour modification solution to reduce mistakes – psychology, not ergonomics – via fear (threats of punishment or sanctions), monitoring (surveillance and supervision), or awareness raising and training (information campaigns, posters, training). The mindset may lead to sacking perceived ‘bad apples’, or removing people altogether (by automating particular functions). In some cases, each of these is an appropriate response (especially training, for issues requiring knowledge and skill), but they will tend not to be effective (or fair) without considering the system as a whole, including the design of artefacts, equipment, tasks, jobs and environments.

 


Invitation, Participation, Connection

The text in this post is from the Editorial of HindSight magazine, Issue 25, on Work-as-Imagined and Work-as-Done, available for download here.



Image: Nathan CC BY-SA 2.0 https://flic.kr/p/7BWCTs

If a friend asked you what makes your organisation and industry so safe, what would you say? Our industry is often considered ‘ultra-safe’, and yet we rarely ask ourselves what keeps it safe. What are the ingredients of safe operations?

When we ask this question to operational controllers as part of the EUROCONTROL safety culture programme, it is revealing to hear how far outside of the ops room the answers extend. Operational work is of course done by operational people, but it is supported by a diverse range of people outside of the ops room: engineers and technicians, AIS and meteo staff, safety and quality specialists, technology and airspace designers, HR and legal specialists, procedure writers and training specialists, auditors and inspectors, senior and middle managers, regulators and policy makers.

Each of the above has an imagination about operational work – as they think it is, as they think it should be, and as they think it could be. (Operational staff also have some imagination about non-operational work!) We call this work-as-imagined. It is not the same as the reality of work activity: work-as-done. The degree of overlap depends on the effectiveness of interaction between operational and non-operational worlds.

This is important because non-operational imaginations produce regulations, policies, procedures, technology, training courses, airspace, airports, buildings, and so on. These need to be ‘designed for work-as-done’.

Designing for work-as-done requires that we bring together those who do the work and those who design and make decisions about the work. We have talked with over a thousand people, in hundreds of workshops, in over 30 ANSPs, to discuss work and safety. While there are some excellent examples of interaction and cooperation (e.g., new systems, procedures and airspace), there are also many examples of disconnects between work-as-imagined and work-as-done. Where this is the case, people have said to us that operational and non-operational staff rarely get together to talk about operational work.

With this issue of HindSight, we wish to encourage more conversations. But how? In their book The Abundant Community, John McKnight and Peter Block suggest three ingredients of a recipe that can be used to bring people together.

Invitation

Think of the boundaries of your work community and your workplace. Is there a ‘welcome’ mat at the door, or a ‘keep out’ sign? Several barriers keep us apart:

  • Organisational barriers: Goals, structures, systems and processes that define and separate functions, departments and organisations.
  • Social barriers: ‘In-groups’ (us) and ‘out-groups’ (them), defined by shared values, attitudes, beliefs, interests and ways of doing things.
  • Personal barriers: Individual choices and circumstances.
  • Physical barriers: The design of buildings and environments.

We must look honestly at these barriers because by separating us they widen the gap between work-as-imagined and work-as-done. According to McKnight and Block, “The challenge is to keep expanding the limits of our hospitality. Our willingness to welcome strangers. This welcome is the sign of a community confident in itself.” Hospitality is the bedrock of collaboration.

How can we reduce the separating effects of organisational, social, personal and physical barriers, and extend an invitation to others, inside and outside our ‘community’?

Participation

The second ingredient is participation, of those at the ‘sharp end’ in work-as-imagined, and of those at the ‘blunt end’ in work-as-done. This requires:

Capability (useful knowledge, skills, and abilities); Opportunity (the time, place and authorisation to participate); and Motivation (the desire to participate and a constructive attitude) (C-O-M). Together, we try to understand People, Activities, Contexts and Tools (P-A-C-T) – ‘as-found’ now, and ‘as-imagined’ in the future (C-O-M-P-A-C-T).

The capability lies within two forms of expertise. The first is field expertise, held by experts in their own work – controllers, pilots, designers, etc. The second is emergent expertise. It is more than the sum of its parts and only emerges when we get together and interact.

But who are ‘we’? In his book The Difference, Scott Page of the University of Michigan’s Center for the Study of Complex Systems reviews evidence about how groups with diverse perspectives outperform groups of like-minded experts. Diversity not only helps to prevent groups from being blindsided by their own mindsets; diverse and inclusive organisations and teams are also more innovative and generate better ideas. This diversity does not only refer to inherited differences such as gender and nationality, but also to diversity of thought, experience and approach. Multiple perspectives, including outside perspectives, are a source of resilience. If you are a controller, imagine a supervisor from another ANSP’s tower or centre observing your unit’s work for a day or so, and discussing this with you, perhaps questioning some practices. They would likely see things that you cannot.

How can we increase diverse participation in the development of policies, procedures, and technology, and in the understanding of work-as-done?

Connection

Among your colleagues, you can probably pick out a small number who are exceptionally good at connecting people. According to McKnight and Block, these connectors typically are well connected themselves; see the ‘half-full’ in everyone; create trusting relationships; believe in their community; and get joy from connecting, convening and inviting people to come together.

Connectors know about people’s gifts, skills, passions – their capabilities – even those at the edge of the community. They know how to connect them to allow something bigger to emerge. They have an outlook based on opportunities. They have a deep motivation to improve things. They can sometimes be found at the heart of professional associations. People turn to them for support. Connectors are as valuable as the most distinguished experts.

Some people naturally have a capacity for making connections, but each of us can discover our own connecting possibility to help improve work-as-imagined and work-as-done.

Who are the connectors in your community, and how can they and you help to improve and connect work-as-imagined with work-as-done?

In this issue, you will read about work-as-imagined and work-as-done from many perspectives. In reading the articles, we invite you to reflect on how we might work together to bridge the gaps that we find.


Shorrock, S. (2017). Editorial: Invitation, participation, connection. HindSight, Issue 25, Summer 2017, EUROCONTROL: Brussels.


Just Culture in La La Land

Photo: Steven Shorrock CC BY-NC-SA 2.0 https://flic.kr/p/Rpf4za

It was always going to happen.

The wrong Best Picture winner was read out live on air at The Oscars. Someone had to take the blame. Attention first turned to Warren Beatty and Faye Dunaway. They, after all, ‘touched it last’. But they had mitigating circumstances; they were given the wrong envelope. In any case, and perhaps more to the point, they are unsackable.

And so we go back a step, and ask who gave the wrong envelope? Now we find our answer: the PricewaterhouseCoopers auditors Brian Cullinan and Martha Ruiz. Both were sacked from the role of overseer shortly after the mistake.

Three key charges are levelled against Cullinan. First, he gave the wrong envelope, confusing the right envelope and the spare envelope for an award just given. Second, Cullinan posted a photo of Emma Stone to his Twitter account just before the fatal mistake. Third, when the wrong Best Picture winner was read out, he didn’t immediately jump into action. And neither did Ruiz.

They had one job to do. They had one job! And they messed up.

So what should be the response? The relevant concept here is ‘just culture’. In his book ‘Just Culture‘, Sidney Dekker says that “A just culture is a culture of trust, learning and accountability“.  He outlines two kinds of just culture.

Retributive Just Culture

The first kind of just culture is a retributive just culture. According to Dekker, this asks:

  • Which rule is broken?
  • Who did it?
  • How bad was the breach, and what should the consequences be?
  • Who gets to decide this?

This is the typical form of just culture found in societies around the world, for thousands of years. Most of us are familiar with this from being small children.

Dekker explains that with retributive just culture, we have three scenarios:

  • Honest mistake, you can stay.
  • Risk-taking, you get a warning.
  • Negligence, you are let go.

There are even commercialised algorithms to help organisations with this distinction and the appropriate response. David Marx’s Just Culture Algorithm advises consoling those who make true human errors, coaching those who engage in risky behaviours, and ultimately disciplining reckless behaviour.
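Reduced to its barest logic – a caricature for illustration, not the actual Just Culture Algorithm, which is considerably more nuanced – the decision rule looks something like this:

    # Caricature of the console / coach / discipline decision rule.
    # Categories and responses are paraphrased for illustration only.
    def just_culture_response(behaviour):
        responses = {
            "human error": "console the person and examine the system",
            "at-risk behaviour": "coach the person and remove incentives for risk",
            "reckless behaviour": "discipline the person",
        }
        return responses.get(behaviour, "investigate further before deciding")

    print(just_culture_response("human error"))

The hard part, of course, is deciding which category a behaviour falls into – which is exactly the question the Oscars case raises.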

If we look at the Oscars scenario, we can address the three charges made against Cullinan and Ruiz.

On the first charge – giving the wrong envelope – we can conclude that this is an example of the ‘honest mistake’ category. This ‘honest mistake’ was influenced by a confusable envelope. In human factors and psychology, we have researched and catalogued such actions-not-as-planned for decades through diary studies, experiments, report analysis, interviews and naturalistic observation. We have many terms for such an error, common terms including ‘slip’ and ‘skill-based error’. In doctoral research that I began 20 years ago in the context of air traffic control, I developed a technique called ‘technique for the retrospective and predictive analysis of cognitive error’ (‘TRACEr’, download here). With TRACEr, we would probably classify this kind of error as Right action on wrong object associated with Selection error involving Perceptual confusion and Spatial confusion, which would be associated with a variety of performance shaping factors – aspects of the context at the time such as design, procedure, pressure and distraction. We’ve all done it, like when you pick up the wrong set of near-identical keys from the kitchen drawer, or the wrong identical suitcase from the airport luggage carousel. In stressful, loud, distracting environments and with confusable artefacts, the chances of such simple actions-not-as-planned increase dramatically.
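As a loose illustration of what recording such a taxonomic classification might look like – the field names here are invented, not the actual TRACEr schema – the envelope mix-up could be captured along these lines:

    # Loose illustration of recording a taxonomy-based error classification.
    # Field names are invented for illustration; this is not the TRACEr schema.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ErrorClassification:
        error_mode: str             # the observable form the error took
        mechanisms: List[str]       # hypothesised psychological mechanisms
        shaping_factors: List[str]  # contextual performance shaping factors

    envelope_mix_up = ErrorClassification(
        error_mode="Right action on wrong object (selection error)",
        mechanisms=["perceptual confusion", "spatial confusion"],
        shaping_factors=["design (confusable envelopes)", "procedure",
                         "pressure", "distraction"],
    )
    print(envelope_mix_up)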

On the second charge, some might argue that posing for a photograph and sending tweets just prior to handing out the ‘Best Picture’ envelope is risk-taking, or even negligence. The TMZ gossip site wrote, “Brian was tweeting like crazy during the ceremony, posting photos … so he may have been distracted. Brian has since deleted the tweets.” Meanwhile, People reported an anonymous source who claimed that “Brian was asked not to tweet or use social media during the show. He was fine to tweet before he arrived at the red carpet but once he was under the auspices of the Oscar night job, that was to be his only focus.” The source reportedly continued, “Tweeting right before the Best Picture category was announced was not something that should have happened.” We can’t verify whether this is true and if so, who asked him not to use social media during the show. It is certainly sensible advice, bearing in mind what we know about distraction in safety critical industries and its role in accidents such as the 2013 train crash at Santiago de Compostela.

But perhaps the acid test for this assertion is whether people would have said anything about that photograph or tweet had everything gone according to plan. Just culture requires that we isolate the outcome from the behaviour. Applying the definition and principles of retributive just culture, what we are interested in is the behaviour. If the right envelope had been given, then the photo on Twitter would likely have been retweeted hundreds or thousands of times, and reported on various gossip websites and magazines, with no judgement from the press and public about the wisdom of such an activity. Instead, the photo would have been celebrated, and any deviation from alleged instructions ‘not to tweet or use social media during the show’ would have been laughed away.

The third charge, levelled at both accountants, was that they failed to respond in a timely manner on hearing “La La Land”. The prospect of an erroneous announcement was clearly imaginable to Cullinan and Ruiz, who spoke to The Huffington Post about this scenario just a week or so before that fateful night: “We would make sure that the correct person was known very quickly,” Cullinan said. “Whether that entails stopping the show, us walking onstage, us signaling to the stage manager — that’s really a game-time decision, if something like that were to happen. Again, it’s so unlikely.” But could it be that, live on the night of the biggest show on earth, with the eyes of tens of millions upon them, they froze? Again, TRACEr might classify this as Omission – No decision – Decision freeze, with a variety of performance shaping factors such as stress and perhaps a lack of training (e.g., simulation or practice).

The ‘freeze’ response is the neglected sibling of ‘fight’ and ‘flight’, and occurs in traumatic situations. It’s the rabbit-in-the-headlights response. Many people involved in accidents and traumatic events have been known to freeze, including in aircraft accidents. It is a psychophysiological response and few of us can claim immunity. If we take this as an example of freeze, associated with confusion, shock and fear, then can we say this is an ‘honest mistake’? Even this seems not to fit well, but for the sake of the retributive just culture process, let’s classify this omission as such (since it would seem hideously harsh to judge a psychophysiological response as ‘risk taking’ or ‘gross negligence’).

Now we have two counts of ‘honest mistake’ for Cullinan, one for Ruiz, and one count for Cullinan where we are unsure of the classification. But if the tweet would not have been seen as a problem had the error not occurred, then no harsh personal responses are justified.

But they had one job! And such an important job (by Hollywood standards)! And it’s not as if they are losing their actual jobs or their liberty. It’s hard to feel sorry for two well-paid accountants, mingling with Hollywood celebs during one of the biggest shows on earth. And remember that the consequences for PwC are not insignificant. An unnamed source told ‘People’ that “The Academy has launched a full-scale review of its relationship with PwC but it is very complicated.” So surely cancelling their involvement is justified, along with a few stories in the media?

Put aside for one moment that the pair are celeb-mingling accountants, and think of them as Brian and Martha – two human beings with families and feelings and ordinary lives outside of this extraordinary day. Most of us have experienced some kind of humiliation in life. It is deeply unpleasant and the memory can resonate for months, years, or a lifetime. Most of us, though, have not felt this humiliation in front of tens of millions of people on live TV, played back to hundreds of millions afterwards. Most of us have not been the subject of thousands of global news stories – and over a million web pages – with front-page stories labelling us a ‘loser’, a ‘twit’ and a ‘bungling bean counter’, with the press hounding us and our families. Most of us have not been subject to hundreds of thousands of comments and memes on social media, nor have we needed bodyguards due to death threats. This is the reality for Brian Cullinan and Martha Ruiz.

Restorative Just Culture

There is another way: what Dekker calls restorative just culture. A restorative just culture asks:

  • Who is hurt?
  • What do they need?
  • Whose obligation is it to meet that need?
  • How do you involve the community in this conversation?

Here we might say that those hurt include the producers of La La Land and Moonlight, though neither has given that impression since the event. We might also list The Academy and PwC, in terms of reputational damage.

But the individuals most hurt are surely Brian and Martha. What do they need? That we don’t know, but it is certain that their needs are not met by the response so far. Whose obligation is it to meet that need? Here one might say it is the obligation of The Academy and PwC, but we all have an obligation at least not to cause further harm.

The event may live on as an example to individuals and organisations in safety-critical, security-critical and business-critical industries of what can happen when ordinary front-line workers get caught up in accidents that they never wanted to happen. Should we scapegoat pilots and air traffic controllers, or doctors and nurses, for well-intentioned actions and decisions with unintended consequences? Or should we seek to understand and redesign the system to increase the chances of success in the future? The choice will influence whether front-line workers disclose their ‘honest mistakes’, or cover them up. In his book Black Box Thinking, Matthew Syed explains that “Failure is rich in learning opportunities for a simple reason: in many of its guises, it represents a violation of expectation. It is showing us that the world is in some sense different from the way we imagined it to be.”

The event is also a challenge to us, to society. Syed notes that “Society, as a whole, has a deeply contradictory attitude to failure. Even as we find excuses for our own failings, we are quick to blame others who mess up.” He continues, “We have a deep instinct to find scapegoats.” We are deeply hypocritical in our response to failure. He describes examples from healthcare and aviation, where, on reading or hearing about an accident, we feel “a spike of indignation”, “fury”, and a desire to stigmatise.

Paradoxically, the families of victims of accidents often have empathy for the front-line workers involved, and have a far more systemic view of the events than the general public, politicians, or – in many cases – official accident reports. This can be seen in the case of Martin Bromiley, whose wife died during a routine operation. Martin Bromiley went on to set up the Clinical Human Factors Group, and campaigns for just culture (see this video). It can also be seen in the families of those who died in the train crash at Santiago de Compostela in 2013, which was blamed on ‘human error’ both in the press and in the official accident report (Spanish version). Following a review of the official accident report by the European Railways Agency, Jesús Domínguez, chairman of the Alvia victims’ association, told The Spain Report that “it confirms that the sole cause is not human error and that the root causes of the accident still need to be investigated”. On 28 July 2013, Garzón Amo was charged with 79 counts of homicide by professional recklessness and an undetermined number of counts of causing injury by professional recklessness. The charges still stand today. (See Schultz, et al, 2016, for a more detailed treatment of the accident.)

Of course, we cannot compare the outcome of The Oscars with any event involving loss of life. But the point is that our corporate and societal responses are similar, and have recursive effects, as Syed explains:

It is partly because we are so willing to blame others for their mistakes that we are so keen to conceal our own. We anticipate, with remarkable clarity, how people will react, how they will point the finger, how little time they will take to put themselves in the tough, high-pressure situation in which the error occurred. The net effect is simple: it obliterates openness and spawns cover-ups. It destroys the vital information we need in order to learn.

A scapegoat, or safer systems? We can’t have both

So we have two options available to us. According to Dekker, retributive justice asks who was responsible, and makes an example of those deemed to have crossed the line. Restorative justice asks what was responsible, then changes what led up to the incident, and meets the needs of those involved. Both are necessary, and both can work and result in fair outcomes for individuals and society, and in better learning. But – especially outside of the judiciary – perhaps the latter is more effective and more humane. If we want to learn and improve outcomes in organisations and society, we should focus on human needs and on improving the system.

Just culture in La La Land has so far taken the retributive route, and has got it badly wrong. Blaming individuals for their actions-not-as-planned in messy environments has destructive and long-lasting effects on individuals, families, professions, organisations, industries and society as a whole.

In the end, we all have one job. Our job is to learn.

See also

Human Factors at The Oscars

Just culture: Who are we really afraid of?

Safety-II and Just Culture: Where Now?

Human Factors at The Fringe: My Eyes Went Dark

Never/zero thinking

‘Human error’ in the headlines: Press reporting on Virgin Galactic

Life After ‘Human Error’ – Velocity Europe 2014

‘Human error’: The handicap of human factors, safety and justice


Human Factors at The Oscars


Photo: Craig Piersma CC BY-NC-ND 2.0 https://flic.kr/p/8NyHL6

“An extraordinary blunder”

It has variously been described as “an incredible and almost unbelievable gaffe” (Radio Times), “the greatest mistake in Academy Awards history” (Telegraph), “an extraordinary blunder…an unprecedented error” (ITV News), “the most spectacular blunder in the history of the starry ceremony” and “the most awkward, embarrassing Oscar moment of all time: an extraordinary failure” (Guardian).

It was, of course, the Grand Finale of the Oscars 2017.

Faye Dunaway and Warren Beatty are all set to announce the Best Picture winner. Beatty opens the envelope, but he looks visibly puzzled, pausing and looking inside to see if there is anything else that he’s missed. He begins to read out the winner’s card, “And the Academy Award…”. He pauses and looks in the envelope again. “…for Best Picture”. He looks at Dunaway, who laughs “You’re impossible!”, and he hands the card to her. Dunaway, perhaps assuming this is all for effect, simply reads out what she sees, and announces “La La Land!”

Music sounds and a narrator gives a 17-second spiel about the film: “La La Land has fourteen Oscar nominations this year, and is tied for the most nominated movie in Oscar history, winning seven Oscars…”

The La La Land team exchange embraces and walk to the stage. Jordan Horowitz, a producer, delivers the first thank-you speech. Everything looks normal. But as the second and third thank-you speeches are being delivered, there is visible commotion. A member of the Oscars production team takes back the envelope that has been given to the La La Land producers.

The winner’s envelope is, in fact, the envelope for Best Actress, just given to La La Land’s Emma Stone. Behind the producers, the PricewaterhouseCoopers overseers – Brian Cullinan and Martha Ruiz – are on stage, examining the envelopes.

At the end of his speech, Producer Fred Berger says nervously: “We lost, by the way”. Horowitz takes over, “I’m sorry, there’s a mistake. Moonlight, you guys won Best Picture”. Confused claps and cries ensue. “This is not a joke”, Horowitz continues. Beatty now has the right card, but Horowitz takes it out of Beatty’s hand and holds it up to show the names of the winning producers.

Beatty tries to explain, and is interrupted by host Jimmy Kimmel: “Warren, what did you do?!” Beatty continues, “I want to tell you what happened. I opened the envelope and it said, ‘Emma Stone – La La Land’. That’s why I took such a long look at Faye and at you. I wasn’t trying to be funny.” Horowitz hands his Oscar to Barry Jenkins, Moonlight’s director.

It was “the first time in living memory that such a major mistake had been made” (Reuters). The accountancy firm PricewaterhouseCoopers has apologised and promised an investigation. In a statement, they said, “The presenters had mistakenly been given the wrong category envelope and when discovered, was immediately corrected. We are currently investigating how this could have happened, and deeply regret that this occurred. We appreciate the grace with which the nominees, the Academy, ABC, and Jimmy Kimmel handled the situation”.

Such a mistake, in an ordinary setting, is usually quite uneventful. Similar sorts of things happen every day. The only thing that is “incredible“, “spectacular” and “extraordinary” is the context. It is worth, then, looking a little deeper at this extraordinary event, and considering how similar sorts of  events are played out in many ordinary, but critical, contexts.

Design

The design of the envelopes for the various Oscar awards is identical. The only difference between the envelopes is the text that indicates the category. There is no other means of coding (e.g., colour, pattern) to indicate any difference. Several industries have realised the problem with this approach, and in some ways this can be considered the beginnings of the discipline of human factors and ergonomics: “A seminal study that set the agenda for the scientific discipline of human factors was by the experimental psychologists, Fitts and Jones (1947), who adapted their laboratory techniques to study the applied problem of ‘pilot error’ during WWII. The problem they faced was that pilots of one aircraft type frequently retracted the gear instead of the flaps after landing. This incident hardly ever occurred to pilots of other aircraft types. They noticed that the gear and flap controls could easily be confused: the nearly identical levers were located right next to each other in an obscure part of the cockpit” (van Winsen and Dekker, 2016).

This problem still exists today in settings far more important than The Oscars, but far less newsworthy…until disaster strikes. A notable example is medicine packaging, where different drugs or doses have very similar labels. Many packages and labels require users to force attention onto small details of text, perhaps with the addition of a small area of colour which, on its own, is quite inconspicuous. It is asking a lot of people to make critical – sometimes life-and-death-critical – decisions based on small design features. This is in addition to drug names that look alike or sound alike, such as Aminophylline and Amitriptyline, Carbamazepine and Chlorpromazine, or Vinblastine and Vincristine.
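
As a rough illustration of why these name pairs are risky, here is a minimal sketch – an assumption-laden illustration using Python’s standard library, not a validated look-alike/sound-alike screening method – that computes a simple similarity score for each pair:

```python
from difflib import SequenceMatcher

# Illustrative only: a crude 'look-alike' screen for drug name pairs.
# The 0.7 threshold is an arbitrary assumption for this sketch; real
# look-alike/sound-alike (LASA) assessment uses validated methods and
# human factors review of the actual packaging and context of use.

NAME_PAIRS = [
    ("Aminophylline", "Amitriptyline"),
    ("Carbamazepine", "Chlorpromazine"),
    ("Vinblastine", "Vincristine"),
]

THRESHOLD = 0.7

for a, b in NAME_PAIRS:
    similarity = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    flag = "potential look-alike" if similarity >= THRESHOLD else "distinct"
    print(f"{a:>14} vs {b:<14} similarity = {similarity:.2f} ({flag})")
```

Even such a crude measure hints at why, in small print and under time pressure, these names are so easily confused; real assessments of look-alike/sound-alike risk go much further, considering packaging, context of use and the people involved.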

Experience in human factors suggests a number of coding methods (e.g., shape, colour, size) that, used appropriately, can help to make vital distinctions. There are also design guidelines for medicines from the NHS NPSA (2007) and the European Medicines Agency (2015). In human factors/ergonomics, these are used as part of an iterative human-centred design process that understands stakeholders and context, identifies user needs, specifies design requirements, produces prototypes, and tests them.

In the absence of this process, what is amazing is not that such errors occur, but that they do not occur much more often than they do. Because it happens fairly infrequently, when it does happen it is often (and unhelpfully) branded ‘human error’. But this is not simply a problem of ‘human error’. It is a problem of design, where form (such as branding and aesthetics) so often trumps function. As Hollnagel (2016) states, “The bottom line is that the artefacts that we use, and in many cases must use, should be designed to fit the activity they are intended for”. Form-over-function design places the human in a position where they have to bridge the gap between form and function every time they use an artefact.

Safeguards

For the Oscars, two identical sets of winners’ cards are made for ‘safety purposes’. These duplicate envelopes are held in the wings in case anything should go wrong with a presenter or an envelope. In this case, it may be that the duplicate envelope for the Best Actress award, which had just been announced, was handed to Beatty as he walked out to announce the Best Picture winner.

Safeguards feature in most safety critical industries, and are often the result of a risk assessment that specifies a risk control for an identified risk. But the risk assessment process is often a linear cause-effect process, and it often stops at the risk control. And risk controls can have unintended consequences and introduce new risks. Consider this example in the context of aviation and air traffic control:

In early 2014, the UK experienced a prolonged period of low atmospheric pressure. At the same time, there was an unusual cluster of level busts [where aircraft go above or below the flight level or altitude instructed by ATC] at the transition altitude, which were thought to be linked to incorrect altimeter setting on departure into the London TMA [London airspace].

Level busts have been, and remain, a key risk in NATS operation. Longer-term strategic projects, such as the redesign of the London TMA and the raising of the Transition Altitude, are expected to provide some mitigation. However, to respond tactically to the perceived trend in the short-term, it was decided to issue a Temporary Operating Instruction (TOI) to controllers.

The TOI required the inclusion of additional phraseology when an aircraft was cleared from an altitude to a Flight Level during low pressure days. The additional phraseology was “standard pressure setting” e.g. “BigJet123, climb now FL80, standard pressure setting”. The change was designed to remind pilots to set the altimeter to the standard pressure setting (1013 hPa) and so reduce level busts associated with altimeter setting. As this phrase was deemed to be an instruction, it was mandatory for flight crews to read back this phrase.

The TOI was subject to the usual procedural hazard assessment processes and implemented on 20 February 2014 on a trial basis, with a planned end date of 20 May 2014, after which the trial results would be evaluated. The change was detailed in Notices to Airmen (NOTAMs).

During the first day of implementation, several occurrence reports were received from controllers, who noted that flight crews did not understand the meaning of the phraseology, and did not read back as required. This led to additional radio telephony to explain the instruction, and therefore additional workload and other unintended consequences.

Extract from a case study by Foster et al., in EUROCONTROL (2014).

Every industry has many examples of ‘safeguards gone bad’. We often fail to understand how such changes alter the context and introduce secondary problems.

Decision making under uncertainty

Beatty is standing there, with the eyes of tens of millions of viewers upon him. He is being recorded in perpetuity, for viewing by hundreds of millions more. He has to make a decision about an announcement, one that will feel like an Olympic gold medal to a few producers. But he isn’t sure what’s going on. As Beatty explained, “I opened the envelope and it said, ‘Emma Stone – La La Land’. That’s why I took such a long look at Faye and at you. I wasn’t trying to be funny”.

Here we cannot be certain what was going through Beatty’s mind, but could it be that – live on one of the most important TV events in the world – Beatty did not want to voice his confusion and uncertainty? He appeared visibly puzzled and gave the envelope to Dunaway to read out the ‘winner’. Dunaway could not have known about Beatty’s thoughts, since his behaviour could easily have been a time-filler or a fumbling joke, and of course it made sense to her to simply read what she saw: “La La Land”.

When under pressure, any delay can have associated costs. For Beatty, asking for clarification would have meant an awkward period of filler, a clumsy live-on-air check of envelopes, perhaps a loss of advertising time. In a state of confusion and self-doubt, perhaps it made sense to say nothing and pass the confusing artefact to someone else.

In many safety-critical activities, decisions are made under uncertainty. The information and situation may be vague, conflicting or unexpected. In some cases, there is a need to signal confusion or uncertainty, perhaps to get a check, or to ask for more time. It can seem hard for us to give voice to our uncertainty in this way, especially under pressure. When someone has a command position – in an operating theatre, cockpit, or at the Oscars – it can be difficult for that person to indicate that they are not sure what is going on. This has played out in several accidents, and indeed in everyday life. But sometimes, the most powerful phrase may be something along the lines of, “I do not understand what is going on”. This identifies a problematic situation and opens the door for other members of the team to help problem-solve. This kind of intervention is part of many training programmes for ‘team resource management’ (by whatever name), and can help everyone involved – no matter what their formal position – to voice and resolve their doubts, uncertainties and concerns.

It’s just an awards show

The events of Oscars 2017 will be emblazoned forever on the minds of participants and aficionados. But it will also soon be a feature of a trivia game or TV show. As host Jimmy Kimmel said “Let’s remember, it’s just an awards show.” But for those who have to put up with the same sorts of problems every day, it’s much more than that. In many industries, people help to ensure that things go well despite other aspects of the system and environment in which they work. For the most part, the human in the system is less like a golden Oscar, and more like a Mister Fantastic, using abilities of mind and body to connect parts of systems that only work because people make them work. This aspect of human performance in the wild is usually taken for granted. But in the real world, people create safety. And for that, they deserve an Oscar.

References

EUROCONTROL (2014). Systems Thinking for Safety: Ten Principles. A White Paper. Brussels: EUROCONTROL Network Manager, August 2014. Authors: Shorrock, S., Leonhardt, J., Licu, T. and Peters, C.

Hollnagel, E. (2016). The Nitty-Gritty of Human Factors (Chapter 4). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

van Winsen, R. and Dekker, S. (2016). Human Factors and the Ethics of Explaining Failure (Chapter 5). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

See also

Just Culture in La La Land

‘Human error’ in the headlines: Press reporting on Virgin Galactic

Life After ‘Human Error’ – Velocity Europe 2014

‘Human error’: The handicap of human factors, safety and justice

The HAL 9000 explanation: “It can only be attributable to human error”

Occupational Overuse Syndrome – Human Error Variant (OOS-HEV)

‘Human error’: Still undefined after all these years
