Four Kinds Of Thinking: 1. Humanistic Thinking

Several fields of study and spheres of professional activity aim to improve system performance or human wellbeing. Some focus on both objectives (e.g., human factors and ergonomics, organisational psychology), while others focus significantly on one or the other. Disciplines and professions operating in these areas have a focus on both understanding and intervention. For each discipline, the focus of understanding and method of intervention will differ. For instance, for human factors and ergonomics, understanding is focused on system interactions, while intervention is via design. Understanding alone, when intervention is required, may be interesting, but not terribly useful. Intervening without understanding may have unintended consequences (and indeed it often does). With appropriate understanding and intervention, both system performance and human wellbeing have a chance of being improved.

Understanding and intervention for system performance and human wellbeing are rooted – to some extent – in four kinds of thinking. In this short series, I outline these.

  1. Humanistic thinking (this post)
  2. Systems thinking (forthcoming)
  3. Scientific thinking
  4. Design thinking

Unless we engage in the right kinds of thinking, it is likely that our understanding will be too flawed, partial, or skewed. In this case, intervention will be ineffective or even counterproductive. Integrating all four kinds of thinking involves compromises and trade-offs, as the kinds of thinking can conflict, presenting dilemmas that we must resolve.

Steven Shorrock https://flic.kr/p/aBYyUH CC BY-NC-SA 2.0

1. Humanistic thinking

“It is not enough that you should understand about applied science in order that your work may increase man’s blessings. Concern for the man himself and his fate must always form the chief interest of all technical endeavors; concern for the great unsolved problems of the organization of labor and the distribution of goods in order that the creations of our mind shall be a blessing and not a curse to mankind. Never forget this in the midst of your diagrams and equations.”

Albert Einstein, from a speech to students at the California Institute of Technology (in “Einstein Sees Lack in Applying Science”, The New York Times, 16 February 1931)

Why?

There are several reasons why humanistic thinking is important to (sociotechnical) system performance and human wellbeing. One reason relates to human wellbeing as something more than an absence of disease, illness or injury, encompassing the body, mind and spirit (or whatever term one wishes to use), individually and collectively. Having worked in psychopathology, Abraham Maslow, the father of humanistic psychology, found that its concepts, language, tools and methods did not serve the needs of the mass of relatively healthy and well-functioning people. According to Maslow, psychology was being viewed from the wrong end of the lens (pathology). The same might still be said of health, and of safety – akin to psychiatry when it comes to organisational functioning. Humanistic thinking encourages us to think more in terms of what we want rather than what we don’t want; what works rather than what doesn’t; and assets and human potential rather than deficits and constraints. Human wellbeing is ultimately linked to human flourishing, individually and collectively, or the idea of ‘actualisation’ in humanistic psychology.

A second reason for the need for humanistic thinking relates to holism. A reductionist focus (typical of science and engineering, including HF/E) tends to mask the person and their unique context. This is exacerbated by the industrial context, characterised by reductionism in the design, analysis, measurement and evaluation of work. The humanistic perspective provides another way of thinking about human beings and human work.

Another reason for the importance of humanistic thinking concerns the relationships through which our work flows. Any practitioner will be aware of the constraining or facilitating influence of relationships, regardless of the technical nature of the work. HF/E, for instance, is officially seen as a “scientific discipline” by the International Ergonomics Association but is more properly described as a blend of elements of science (to explain and predict), engineering (to design for improved performance), and craft (to implement and evaluate) (Wilson, 2000). Humanistic thinking helps to avoid scientism and hard engineering thinking. It also steers us away from ‘technical rationality’ (Schön, 1983) and its assumptions about ‘research application’, recognising that strong theories and inflexible methods can break down in messy situations, requiring reflection-in-action. Humanistic thinking orients our practice so that craft – including reflective practice – is properly valued.

What?

While typically associated with counsellors and psychotherapists, humanistic practitioners (or practitioners of anything, working humanistically) may work in fields such as medicine, education, and social work. Humanistic theory has also been applied to human work and organisational functioning. But what does it mean, exactly? The Association for Humanistic Psychology in Britain usefully summarise five basic postulates, which focus on a view of human beings rather than a discipline or profession.

  1. Human beings, as human, supersede the sum of their parts. They cannot be reduced to components.
  2. Human beings have their existence in a uniquely human context, as well as in a cosmic ecology.
  3. Human beings are aware and aware of being aware – i.e., they are conscious. Human consciousness always includes an awareness of oneself in the context of other people.
  4. Human beings have some choice and, with that, responsibility.
  5. Human beings are intentional, aim at goals, are aware that they cause future events, and seek meaning, value and creativity.

There has been relatively little direct cross-pollination of humanistic psychology into many disciplines associated with human wellbeing. But humanistic thinking is ultimately concerned with people, relationships and contexts, and it has this in common with several disciplines (especially HF/E).

How?

Integrating humanistic thinking means going beyond the ‘tools and methods’ that can be found in so many textbooks (including those of HF/E). One of the essential approaches of humanistic psychology is what might simply be called ‘listening’: hanging out with people, relating to them, trying to empathise with them and their unique situation and perspective.

Empathy is a rich and complex concept: it may be viewed as a trait or state of a person, a process, and a skill. Empathy is colloquially seen as ‘walking in another’s shoes’. In this sense, empathy can be thought of as ‘perspective taking’: the ability to perceive accurately the frame of reference of another person while maintaining a sense of separateness – the ‘as if’ quality.

Humanistic thinking, in the context of improving system performance and human wellbeing, may involve empathising emotionally, cognitively, physically, and socially. But effective empathy is not as intuitive as we may think. Bohart et al. suggest that we can distinguish between three modes of empathy:

  • Empathic rapport: using empathy to build rapport and support the person.
  • Process empathy: a moment-by-moment empathy for the person’s experience, cognitively, emotionally, and physically.
  • Person empathy: an ‘experience-near’ understanding of the person’s world, also known as ‘background empathy’.

Time spent training in fundamental counselling skills is particularly helpful. Also helpful is time spent understanding and practising ethnographic approaches, which are likely to be of more value to humanistic thinking than reductionist scientific and engineering methods. Combined, these can give an insight into the person that analytical methods, even systems thinking methods, cannot.

This kind of thinking might prompt questions such as:

  • What is this person’s story?
  • What does this (situation, job, etc) mean to this person, within the broader context of her or his life?
  • To what degree does the work context respect the person’s autonomy?
  • What tensions exist between freedom and constraint, and how do the person and other people address these?
  • What does a good job look like to this person?
  • How do people perceive their self and their situation, and how might this differ from their ideals?
  • How can work create space for greater flexibility and creativity?
  • How might work contribute to growth, and also to suffering?

Shadow side

Methods that come more from the biological, psychological, and engineering sciences tend to focus on reliability and validity. Humanistic approaches may seem to lack the same rigour, meaning that there are different perspectives on people and situations that cannot be controlled by method. Reductionism makes measurement and diagnosis easier, but also less meaningful; holism questions the idea of measurement and diagnosis, at least as it is often applied (e.g., as can be seen in person-centred counselling vs psychiatry). Still, humanistic thinking may be accused of lacking ‘validity’ when viewed from a traditional scientific frame of reference.

Empathy is core to humanistic thinking, but the term has – especially recently – been misunderstood and misused in some quarters. In UX, quick and dirty ‘empathy’ has sometimes become a proxy for research (and even a proxy for people, in some persona development). Another problem in practice is the line between empathy and sympathy. Sympathy involves losing a sense of separateness and impartiality – losing the ‘as if’ quality. Sympathy can block the capacity for empathy and so can be counterproductive, blurring the boundaries between ‘me’ and ‘you’, and associated issues (e.g., choice and responsibility).

Another shadow side consideration is that humanistic thinking, when considered at an individual level, may seem to result in unsustainable solutions from a systems thinking perspective. In many cases, however, this conflict arises not from humanistic thinking per se (the five basic postulates above), but from sympathy and charity.


How To Do Safety-II

Safety-II, its cousin Resilience Engineering (and offshoots such as resilient healthcare), as well as predecessor concepts and theories, have attracted great interest among organisations and their staff. People, especially front-line staff, recognise the need to understand all outcomes – wanted and unwanted – and the systems and associated patterns of system behaviour that generate these outcomes. The trouble is, people are not sure where to start with ‘doing Safety-II’. Some methods and seemingly complicated words and ideas might seem off-putting. They don’t need to be. In this post, I will provide some initial ideas and inspiration for getting started. The ideas are in plain language, without reference to any specific techniques.

Steven Shorrock https://flic.kr/p/qaBiNp CC BY-NC-SA 2.0

Idea 1: Collaborate

Safety-II and Resilience Engineering are not solo efforts. You can do little of practical benefit alone. In fact, going it alone will almost guarantee a miserable work life. You will start to see the reality of how patterns, system structures and mental models are connected to produce events, both wanted and unwanted. But you will have to stand back and watch how this complexity is boiled down to mechanistic thinking and methods that don’t describe how safety is created, or even how unsafe events really occur. You will also have to observe flawed interventions in action, which almost guarantee unintended consequences. For the sake of sanity, it is almost better not to know how complex systems fail, let alone how they work on a day-to-day basis. Finding a small number of open-minded people who are willing to expand their thinking and listen to ideas and experiences without prejudgement, and who are not hamstrung by personal barriers, is a good place to start. A diverse group that traverses organisational silos is helpful.

Idea 2: Read

If you want to do Safety-II, you have to read. At least a bit. You might find that you don’t have enough time to read technical books. You don’t have to, though you may well want to, at some point. Start by reading some short articles on Safety-II, and associated concepts, by authors with a pedigree in this area. You might want to expand your search terms to ‘systems thinking’, ‘resilience engineering’, ‘systems ergonomics and human factors’. From here you might start to explore methods from social science (e.g., action research, practice theory, ethnography). See where the search takes you, from blog posts (search this blog for a few, as a start), through to White Papers, articles (email the author if you can’t access them), and books. A couple of short articles a week and you’ll be on your way to understanding the key ideas. Be mindful that some of what is written may be way off the mark (what Safety-II isn’t), as Safety-II, like anything else, is subject to the bandwagon effect.

Idea 3: Think

It might seem strange to suggest thinking as a way to do Safety-II or Resilience Engineering. But in many lines of work, we somehow manage to avoid taking a step back to think more holistically about outcomes, work, systems, and the mental models that give rise to all of this. I teach a systems thinking course which is about…thinking. At the end of the most recent course, one participant said that it was the first course that they had participated in where they actually had to think, and not just learn content or follow a process. The course doesn’t provide a process, but rather a space to think and challenge one’s own assumptions. The thinking required involves going up and out to the system as a whole, switching perspectives (stakeholders and situations), and generally questioning how things go. Thinking through situated examples is especially useful, so long as there are links to theory.

Idea 4: Listen and Talk

From the above, and the below, prepare some topics or questions on concepts, methods and everyday work for discussion. Find a room, get some drinks and snacks, and arrange some chairs in a circle. Try to get rid of tables and anything else that gets between you. The questions may emerge from your reading or from your experience…preferably both. E.g. If you had to explain to a neighbour why your organisation operated safely, what would you say? What do we do well? What dilemmas do we face? What surprises do we experience? How do we handle them? What unintended consequences have we experienced from interventions? What factors are at play when things go right and wrong? What is the role of designed artefacts and processes versus adaptive performance in creating safety? A good discussion will harvest new insights, including multiple perspectives and thick descriptions.

Idea 5: Write and Draw

Write about your experiences of work in the frame of Safety-II or Resilience Engineering. Think deeply about your own work and the situations you encounter, and write as if you were explaining it to a neighbour. Start to think about patterns of interactions inside and outside of your organisation – micro, meso, and macro. But keep it concrete. How do things influence each other at technical, individual, team, organisational, regulatory, governmental, media, and economic levels, to create patterns and associated wanted and unwanted outcomes? Put the concepts that you read about into the context of your practice and experience of the systems that you are a part of, or interact with. The concepts you encounter will make sense not only in terms of what you observe in others’ work, but also in what you experience in your own. Keep it short and snappy. Think short vignettes, not a treatise. Sketch out the images that come to mind (e.g., rich pictures) and start to map out some influences that you come across. Remember, thinking is more important than method, and should always precede it.

Idea 6: Observe

Arrange to observe ordinary work. It is best to observe work that you are not intimately involved with, but that you can understand well enough to know what’s going on. This might be another hospital or ward, or another air traffic control room or sector, for instance. It is essential that you have the right attitude – apprentice, not master. It is also essential that the people you are observing consent, and understand the purpose of the observation. If you have another role that may conflict with learning how things work (e.g., competency assessor), then you have some work to do to deconflict these roles and the mindsets and perceptions that may be associated with them. Don’t go with a checklist. Just hang out. Notice how people resolve the dilemmas created by goal conflicts, what trade-offs and compromises are necessary, how people work around a degraded environment (staffing and competency gaps, equipment problems, procedural complexity, etc), and how – despite the context – things work reasonably well most of the time.

Idea 7: Design

At this point, you may well have ideas about improving the system structure and patterns of system behaviour (including work), to help create the conditions for success to emerge. This effort will always start with understanding the system. You’ll need to understand interactions between people, their activities, their tools, and the contexts of work (micro, meso and macro). It is advisable to avoid major initiatives and ‘campaigns’. Small designed interventions are a good way forward. You may wish, for instance, to: a) make small changes to work-as-done that help balance multiple goals; b) review procedures to remove or reconcile those that are problematic (e.g., conflicting, defunct, over-specified); c) help managers and support staff to become familiar with how the work works; d) adjust buffers or margins for performance; e) review whether onerous analyses of events could be better directed at patterns (e.g., onerous safety analysis of multiple events outside of one’s control); f) create a means of getting regular outside perspectives on your work (perhaps an observer swap arrangement); g) create a means to simulate unusual circumstances and allow experimental performance (not a competency check). The interventions may aim at reducing unhelpful gaps between the varieties of human work (e.g., the ignorance and fantasy, taboo, PR and subterfuge, and defunct archetypes). After designing, iterate back through the previous ideas.


The Reality of Goal Conflicts and Trade-offs

by Steven Shorrock

This article is the Editorial published in HindSight 29, October 2019, by EUROCONTROL (available soon at SKYbrary).

Jesper Sehested https://flic.kr/p/EHKy4a CC BY 2.0

“Safety is our number 1 priority!” It’s a phrase that’s sometimes used by trade and staff associations alike, and occasionally by pilots when we are encouraged to listen to the safety briefing, or when a departure is delayed for technical reasons. But I’ve noticed something. Over the last couple of decades that I’ve worked in aviation, I have been hearing the phrase less and less.

Perhaps this is something to do with the so-called ‘rhetoric-reality gap’. There are two kinds of goals, which relate to individuals and organisations. On the one hand, we have stated, declared goals. On the other, we have the goals that are evident from behaviour. In other words, ‘the purpose of a system is what it does’ (POSIWID) – a phrase coined by business professor Stafford Beer. The purpose of aviation is not to be safe per se, but to transport people and goods. In doing so, there are a number of goals. So how can we focus on what the system does and why it does what it does, in the way that it does? What a system does is subject to demand and pressure, resources, constraints, and expected consequences. 

So let’s look at the situation now. Demand is rising faster than at any time in history. According to Airbus, the number of commercial aircraft in operation will more than double in the next 20 years, to 48,000 planes worldwide. And according to Boeing, 790,000 new pilots will be needed by 2037 to meet growing demand. But capacity is a critical concern. While average delays in Europe are down, capacity and staffing take the lion’s share of delays, according to EUROCONTROL data. Airports are another major part of the capacity problem. IATA chief Alexandre de Juniac said last year, “We are in a capacity crisis. And we don’t see the required airport infrastructure investment to solve it.”

Growing demand and increased capacity conflict with environmental pressures. At a local level, this can be seen in the ongoing third runway saga at Heathrow, the busiest airport in Europe by passenger traffic. Despite receiving approval from Members of Parliament, expansion is opposed by local and climate groups. In Sweden, the word ‘flygskam’, or flight shame, is becoming more than just a buzzword. Fewer passengers are flying to or from Swedavia’s ten airports. At a global level, Greta Thunberg recently headlined the UN Climate Summit. She was photographed arriving not by plane, but by yacht, fitted with solar panels and underwater turbines.

While aviation is particularly newsworthy with regard to climate change, the Intergovernmental Panel on Climate Change has estimated that aviation is responsible for around 3.5 percent of anthropogenic climate change, including both CO2- and non-CO2-induced effects. However, media and public interest in aviation creates significant pressure. In 2008, aviation sector leaders signed a declaration committing to carbon-neutral growth from 2020, and, by 2050, a cut in net emissions to half of 2005 levels.

As well as capacity and environmental demands and pressures, there are increasing concerns about cybersecurity (e.g., GNSS spoofing) and drones. Then there are more familiar financial pressures. At the time of writing, Thomas Cook, the world’s oldest travel company, has collapsed and Adria Airways has suspended flights.

And now we come to safety. Accidents remain few in number, and flying continues to be the safest form of long-distance travel. But 2018 was a bad year for aviation safety, with 523 on-board fatalities, compared to 19 in 2017, according to IATA. Accidents involving B737 MAX aircraft raised new questions about safety at all levels. Unlike most goals, safety is a ‘background goal’ that tends to come into the foreground only when things suddenly go very badly wrong, or ‘miraculously’ right.

This is only one way in which goals differ. Some goals have a short-term focus, while others are longer term. Some goals are externally imposed, while others are internally motivated. Some goals concern production, others concern protection. Some goals relate well to quantitative measures, while others don’t. Some goals are more reactive, while others are more proactive. Sometimes, goals are compatible and can work together, while at other times they conflict and compete for resources and attention. 

Goal conflicts create dilemmas at all levels, from front line to senior management, regulation and government. Dilemmas create a need for trade-offs and compromises. These decisions are influenced by how we perceive capability, opportunities, and motivation. There are many kinds of trade-off decisions. A familiar trade-off to everyone is between thoroughness and efficiency. Too much focus on either can be a problem. Day-to-day pressures tend to push us toward greater efficiency, but when things go wrong, we realise (and are told) that more thoroughness was required. Another familiar trade-off is between the short- and long-term – the acute-chronic trade-off. Combined with pressure on efficiency, short-term goals tend to get the most attention. And we trade off individual and collective needs and wants, or a focus on components and the whole system. All of these trade-offs have implications for goals relating to safety, security, capacity, cost-efficiency, and the environment. To understand them, we need to understand five truths. 

Five Truths about Trade-offs 

1. Trade-offs occur at all levels of systems. Trade-offs occur in every layer of decision-making, from international and national policy-making to front-line staff. They occur over years and seconds. They occur in the development of strategy, targets, measures, policies, procedures, technology, and in operation. They are often invisible from afar. 

2. Trade-offs trickle down. Trade-offs at the top, especially concerning resources, constraints, incentives and disincentives, trickle down. If training is reduced for cost or staffing reasons, then staff will be less able to make effective trade-offs. If user needs are not met in a commercial-off-the-shelf system, staff will have to perform workarounds. 

3. Trade-offs combine in unexpected ways. Trade-offs made strategically, tactically and opportunistically combine to create both wanted and unwanted outcomes that were not foreseen or intended. We often treat this simplistically.

4. Trade-offs are necessary for systems to work. Trade-offs are neither good nor bad. They are necessary for systems – transport, health, education, even families – to work. And most trade-off decisions can only be made and enacted by people. 

5. Trade-offs require expertise. Trade-off decision-making often cannot be prescribed in procedures or programmed into computers. Decision-making therefore requires diverse expertise, which in turn needs time and support for development. In effect, expertise is about our ability to make effective trade-offs. 

An interesting thing about trade-offs is that they are tacitly accepted, but rarely discussed. Might ‘Safety first!’ risk making us complacent about safety? Reality always beats rhetoric in the end. So we have to talk about goal conflicts and trade-offs. Let us bring reality into the open.

Posted in Human Factors/Ergonomics, Safety, systems thinking | Tagged , , , , | 1 Comment

Shorrock’s Law of Limits

Last year, I noticed a tweet from The European Cockpit Association (ECA), on EU flight time limitations (Commission Regulation (EU) 83/2014, applicable from 18 February 2016). The FTLs have been controversial since their inception. The ECA’s ‘Dead Tired‘ campaign website lists a number of stories from 2012-13, often concerning the scientific integrity of the proposals, and goal conflicts between working conditions and passenger safety versus commercial considerations. Consecutive disruptive schedules, night-time operations and inadequate standby rules have been highlighted as problems by the ECA. Didier Moraine, an ECA FTL expert, stated that “basic compliance with EASA FTL rules does not necessarily ensure safe rosters. They may actually build unsafe rosters.”

In May 2018, the ECA twitter account reported that EASA’s Flight Standards Director Jesper Rasmussen reminded a workshop audience that FTLs are to be seen as hard limits, not as targets.

A February 2019 study published by the European Union Aviation Safety Agency (EASA) found that prescriptive limits alone are not sufficient to prevent high fatigue during night flights.

“When you put a limit on a measure, if that measure relates to efficiency, the limit will be used as a target.”

This relates to Goodhart’s Law, expressed succinctly by anthropologist Marilyn Strathern as follows: “When a measure becomes a target, it ceases to be a good measure.” It also relates to The Law of Stretched Systems, expressed as follows by David Woods: “Every system is stretched to operate at its capacity; as soon as there is some improvement, for example in the form of new technology, it will be exploited to achieve a new intensity and tempo of activity.” Woods also notes that this law “captures the co-adaptive dynamic that human leaders under pressure for higher and more efficient levels of performance will exploit new capabilities to demand more complex forms of work.” But this particular aspect of system behaviour concerning limits, simple as it is, is not quite expressed by either.

An everyday example of the Law of Limits can be found in driving. As in most countries, British roads have speed limits, depending on the road type. In 2015, on 30 mph speed limit roads, the average free flow speed at which drivers choose to travel as observed at sampled automatic traffic counter (ATC) locations was 31 mph for cars and light goods vehicles. (The figure was 30 mph for rigid and articulated heavy goods vehicles [HGVs], and 28 mph for buses.) In the same year, on motorways with a 70 mph speed limit for cars and light goods vehicles, the average speed was 68 mph for cars and 69 mph for light goods vehicles. Most drivers will be familiar with the activity of driving as close to the limit as possible. Many things contribute to this, primarily a drive for efficiency coupled with a fear of consequences of exceeding the limit. Many more examples can be found in everyday life, where limits relating to any measure are imposed, and treated as targets when efficiency gains can be made.
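To make the mechanism concrete, here is a toy model (my own sketch, with made-up numbers, not data from the post): if a driver’s payoff rises with speed but any excursion above the limit carries a steep expected penalty, the payoff-maximising choice is the limit itself.

```python
# Toy sketch with illustrative, made-up numbers: each driver picks the speed
# that maximises a simple payoff = time saved minus expected penalty.
# The penalty is zero at or below the limit and rises steeply above it.

SPEED_LIMIT = 30                     # mph, as on UK urban roads
CANDIDATE_SPEEDS = range(20, 41)     # speeds a driver might consider

def payoff(speed_mph: int) -> float:
    time_saved = 1.0 * speed_mph     # efficiency benefit grows with speed
    penalty = 0.0 if speed_mph <= SPEED_LIMIT else 5.0 * (speed_mph - SPEED_LIMIT)
    return time_saved - penalty

chosen_speed = max(CANDIDATE_SPEEDS, key=payoff)
print(chosen_speed)                  # 30 -- the limit is treated as the target
```

Under these assumptions, any arrangement that rewards efficiency and punishes only excursions beyond the limit produces exactly the clustering at the limit that the traffic data describe.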

The following is a post on Medium by David Manheim, a researcher and catastrophist focusing on risk analysis and decision theory, including existential risk mitigation, computational modelling, and epidemiology. It is reproduced here with kind permission.


Shorrock’s Law of Limits

Written by David Manheim, 25 May 2018

I recently saw an interesting new insight into the dynamics of over-optimization failures stated by Steven Shorrock: “When you put a limit on a measure, if that measure relates to efficiency, the limit will be used as a target.” This seems to be a combination of several dynamics that can co-occur in at least a couple of ways, and despite my extensive earlier discussion of related issues, I think it’s worth laying out these dynamics along with a few examples to illustrate them.

When limits become targets

First, there is a general fact about constrained optimization that, in simple terms, says that for certain types of systems the best solution to a problem is going to involve hitting one of the limits. This was formally shown in a lemma by Dantzig about the simplex method, where for any convex function the maximum must lie at an extreme point in the space. (Convexity is important, but we’ll get back to it later.)

When a regulator imposes a limit on a system, it’s usually because they see a problem with exceeding that limit. If the limit is a binding constraint — that is, if you limit something critical to the process, and require a lower level of the metric than is currently being produced, the best response is to hug the limit as closely as possible. If we limit how many hours a pilot can fly (the initial prompt for Shorrock’s law), or that a trucker can drive, the best way to comply with the limit is to get as close to the limit as possible, which minimizes how much it impacts overall efficiency.
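As a minimal numerical sketch of this ‘hug the limit’ dynamic (my own illustration with made-up figures, assuming SciPy is available, and not part of Manheim’s original post), consider rostering as many duty hours as possible for two crew members subject to a per-person limit. The optimum sits at a vertex of the feasible region, with at least one crew member rostered exactly at the limit.

```python
# Minimal sketch with made-up figures: maximise rostered duty hours (a proxy
# for efficiency) for two crew members, subject to a per-person limit and a
# total-demand constraint. linprog minimises, so the objective is negated.
from scipy.optimize import linprog

PERSON_LIMIT = 100.0   # hypothetical monthly duty-hour limit per person
TOTAL_DEMAND = 180.0   # hours of flying the schedule asks of the pair

c = [-1.0, -1.0]                 # maximise x1 + x2
A_ub = [[1.0, 0.0],              # x1 <= PERSON_LIMIT
        [0.0, 1.0],              # x2 <= PERSON_LIMIT
        [1.0, 1.0]]              # x1 + x2 <= TOTAL_DEMAND
b_ub = [PERSON_LIMIT, PERSON_LIMIT, TOTAL_DEMAND]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x)   # e.g. [100.  80.] -- one person is rostered right at the limit
```

The regulatory limit enters the optimisation as just another constraint, and because the objective rewards efficiency, the solution lands on the boundary rather than comfortably inside it.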

There are often good reasons not to track a given metric, when it is unclear how to measure it, or when it is expensive to measure. A large part of the reason that companies don’t optimize for certain factors is because they aren’t tracked. What isn’t measured isn’t managed — but once there is a legal requirement to measure it, it’s much cheaper to start using that data to manage it. The companies now have something they must track, and once they are tracking hours, it would be wasteful not to also optimize for them.

Even when the limit is only sometimes reached in practice before the regulation is put in place, formalizing the metric and the limitation means that it becomes more explicit — leading to reification of the metric. This isn’t only because of the newly required cost of tracking the metric, it’s also because what used to be a difficult to conceptualize factor like “tiredness” now has a newly available albeit imperfect metric.

Lastly, there is the motivation to cheat. Before fuel efficiency standards, there was no incentive for companies to explicitly target the metric. Once the limit was put into place, companies needed to pay attention — and paying attention to a specific feature means that decisions are made with this new factor in mind. The newly reified metric gets gamed, and suddenly there is a ton of money at stake. And sometimes the easiest way to perform better is to cheat.

So there are a lot of reasons that regulators should worry about creating targets, and ignoring second-order effects caused by these rules is naive at best. If we expect the benefits to just exceed the costs, we should adjust those expectations sharply downward, and if we haven’t given fairly concrete and explicit consideration to how the rule will be gamed, we should expect to be unpleasantly surprised. That doesn’t imply that metrics can’t improve things, and it doesn’t even imply that regulations aren’t often justifiable. But it does mean that the burden of proof for justifying new regulation needs to be higher than we might previously have assumed.


What Human Factors isn’t: 4. A Cause of Accidents

‘Human Factors’ (or Ergonomics) is often presented as something that it’s not, or as something that is only a small part of the whole. Rather than just explain what Human Factors is, in this sporadic series of short posts I will explain what it isn’t. The posts outline a number of myths, misunderstandings, and false equivalencies.

In this series:

  1. What Human Factors isn’t: 1. Common Sense
  2. What Human Factors isn’t: 2. Courtesy and Civility at Work
  3. What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training
  4. What Human Factors isn’t: 4. A Cause of Accidents (this post)

Royal Navy Media Archive CC BY-NC 2.0 https://flic.kr/p/NqZrz5

Human Factors Isn’t a Cause of Accidents

An unfortunate use of the term ‘human factors’ in industry, and in the media, is as an explanation for failure. Through this lens, human factors is (or ‘are’, since the phrase tends to be used as a plural in this context) seen as a cause of accidents or other unwanted events. This immediately confuses the discipline and profession of Human Factors with a narrow, unsystemic view of factors of humans – human factors in the vernacular. (Much as I dislike capitalisation, I will use it here to separate the two.) While human limitations are relevant to accident analysis (and the analysis of work more generally), and indeed form part of many analytical methods, neither the vernacular ‘human factors’ nor the discipline of Human Factors is an explanation for failure. Below, I outline a few problems with this all-too-common perspective.

‘Failure’ means not achieving planned objectives. Since people set objectives, make plans and execute actions to achieve objectives, almost all failure is associated with humans, unless there is some chance agency or natural phenomenon involved (e.g., weather). Even then, one could take a counter-factual perspective, as is often done in accident analysis, and say that humans could have or should have predicted and planned for this.

Logically, ‘success’ has the same characteristics. Humans set objectives, make plans, and execute actions at all levels of system functioning, from law-making to front-line performance. So if failure is down to ‘human factors’ then so is success, which arguably accounts for the majority of outcomes in day-to-day work.

By this reasoning, ‘human factors’ as a cause of accidents is a monolithic explanation – even more so than ‘safety culture’. ‘Human factors’ as a cause of accidents explains both everything and nothing. Having said this, ‘human factors’ is often seen more specifically as a set of factors of humans (humans being unreliable and unpredictable elements of an otherwise well-designed and well-managed system) that are proximal to accidents.

This interpretation has been reinforced by the use of the word ‘organisational’ alongside ‘human’ in some quarters. For instance, the UK Health and Safety Executive used the term ‘Human and Organisational Factors’ to broaden out the perceived scope of the ‘HOF’ contribution (to incidents and accidents), and there is a growing ‘Human and Organisational Performance’ movement, which emerged from ‘Human Performance’. This is curious to many Human Factors professionals, because organisations – being created by, comprised of, and run by humans – were always within the scope of Human Factors (sometimes called ‘macro ergonomics’) from the beginning.

The proximalisation and narrowing of ‘human factors’ becomes especially important with the post hoc ergo propter hoc fallacy, that because an event happened after something (an action or omission) then it happened because of that something. This is especially problematic in complex, high-hazard systems that are highly regulated and where systems are required to account for performance variability, in terms of design, management, and operation.

An example of proximalisation can be seen in the aftermath of the train that crashed at Santiago de Compostela in July 2013. Human error was immediately reported as the cause. A safety investigation by CIAF (here in Spanish), published in June 2014, found that “driving staff failed to follow the regulations contained in the train timetable and the route plan”. Subsequently, the European Railway Agency (now the European Union Agency for Railways) found that “the emphasis of the CIAF report is put on the direct cause (one human error) and on the driver’s (non-) compliance with rules, rather [than] on the underlying and root causes. Those causes are not reported as part of the conclusions of the report and typically are the most likely to include the organisational actions of Adif and Renfe.” As reported here, “many survivors, campaigners and rail analysts…questioned why rail officers in charge of the train and rail network had not factored in the possibility of human error – particularly at a bend as potentially dangerous as the Angrois curve – and had failed to put in place technology that could mitigate it”.

The safety investigation seemed to mirror a view of causation that allows for counterfactual reasoning only in the proximate sense – who touched it or failed to touch it last. In this case, and many others, it seemed that omissions are only causal when they occur at the sharp-end, even though sharp-end omissions typically occur over the course of seconds and minutes, not months and years.

In the case of Santiago de Compostela, the driver, Francisco José Garzón Amo, was the only person facing trial for much of the time since July 2013. However, several officials have been named in, and dropped from, judicial proceedings over the years. Their causal contributions seem to be harder to ascertain. At the time of writing, Andrés María Cortabitarte López, Director of Traffic Safety of ADIF, is also facing charges for disconnecting the ERTMS (European Railway Traffic Management System) without having previously assessed the risk of that decision. (Ignacio Jorge Iglesias Díaz, director of the Laboratory of Railway Interoperability of Cedex, said that ERTMS has a failure every billion hours, while part of the security provided by the ASFA system “rests on the human factor”.) As yet, over seven years later, there is no date set for the oral trial to find out if the accused are finally convicted of eighty crimes of involuntary manslaughter and 144 crimes of serious professional imprudence.

All of this is to say that the framing of ‘human factors’ as a cause of accidents has consequences for both safety and justice, as does the scope of ‘human factors’ that is expressed or implied in discourse. By framing people as the unreliable components of an otherwise well-designed and well-managed system, ‘human factors as a cause of accidents’ encourages brittle strategies in response to design problems – reminders, re-training, more procedures. But this is not all. This perspective, focusing on ‘human factors’ as the source of failure, but not the overwhelming source of success, encourages technological solutionism – more automation. This changes the nature of human involvement, rather than ‘reducing the human factor’, and comes with ironies that are even less well understood.

So ‘human factors’ isn’t an explanation, but Human Factors theory and method can help to explain failure, and moreover, everyday work. Human factors isn’t a reason for failure, but Human Factors helps to reason about failure and – moreover – about everyday work.

Unfortunately, some Human Factors methods that have emerged from a Safety-I mindset (curiously different to the progressive mindset that created the discipline) may have encouraged a negative frame of understanding. The Human Factors Analysis and Classification System (HFACS), for instance, classifies accidents according to ‘unsafe acts’ (errors and violations), ‘preconditions for unsafe acts’, ‘unsafe supervision’, and ‘organizational influences’. The word ‘unsafe’ here is driven by outcome and hindsight biases. Arguably, it should not be attached to other words, since safety in complex sociotechnical systems is emergent, not resultant. Such Human Factors analysis tools typically classify ‘error’ (difficult as it is to define) and ‘violation’ only at the sharp end (blunt-end equivalents are seen as ‘performance shaping factors’ or, in the case of HFACS, ‘influences’). So, inadvertently, Safety-I Human Factors may have encouraged proximalisation to some degree, linguistically and analytically, since errors are only errors when they can be conveniently bound, and everything else is a condition or influence – ever weakening with more time and distance from the outcomes. Again, this has implications for explanation and intervention.

Still, in the main, Human Factors is interested primarily in normal work, and sociotechnical system interaction is the primary focus of study, not accidents. Within this frame is the total influence of human involvement on system performance, and the effects of system performance on human wellbeing. Even within safety research and practice, there is an increasing emphasis in Human Factors on human involvement in how things go right, or just how things go – Safety-II.

But the term ‘human factors’ will probably be used in the vernacular for some time yet. My best advice for those who use the term ‘human factors’ in their work is to think very carefully before using the term as a cause of, or explanation for, failure. Doing so is not only meaningless, but has potential consequences for safety and justice, and even the future of work, which may be hard to imagine.


What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training

‘Human Factors’ (or Ergonomics) is often presented as something that it’s not, or as something that is only a small part of the whole. Rather than just explain what Human Factors is, in this sporadic series of short posts I will explain what it isn’t. The posts outline a number of myths, misunderstandings, and false equivalencies.

In this series:

  1. What Human Factors isn’t: 1. Common Sense
  2. What Human Factors isn’t: 2. Courtesy and Civility at Work
  3. What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training (this post)
  4. What Human Factors isn’t: 4. A cause of accidents

Royal Navy Media Archive CC BY-NC 2.0 https://flic.kr/p/CCkksw

Human Factors Isn’t Off-the-shelf Behaviour Modification Training

Human Factors and behaviour modification training have a somewhat complicated relationship. It is not easy to explain, especially in a way that everyone would agree. I will start by saying that one thing is certain: Human Factors and training-based behaviour modification are not equivalent. But, in my view, training-based behaviour modification can be an application of Human Factors. In other words, the two are not equivalent, but one can be an application of the other. I’ll try to explain.

Human Factors has a core focus that can be described in a few words as ‘fitting the work to the people’ or ‘designing for human use’. It does this in the context of the system as a whole. More formally, there are a number of definitions that help to make the point, but they tend to include two foci: understanding system interactions as the method of understanding, and design as the method of intervention. These foci are not contentious: they are core to many definitions and are the foci of Human Factors textbooks and degrees. My preferred definition was offered by my late PhD supervisor, Prof. John Wilson:

“Understanding the interactions between people and all other elements within a system, and design in light of this understanding.” (Wilson, 2014, p.12)

The word that is sometimes subject to discussion is the word ‘design’. In the context of Human Factors, it can be described as a process for solving problems and realising opportunities relating to interactions between people and all other elements within a system. Some definitions flesh this out a little more, including also the goals of Human Factors, e.g.:

“Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.” (International Ergonomics Association)

(Note that the terms ‘Human Factors’ and ‘Ergonomics’, which originate in the US and Europe respectively, are usually treated as synonymous within the discipline, but often one is chosen over the other in the profession, and in practice more generally, depending also on the country.)

Going back to the origins of Human Factors in WWII aviation, it began with observations around the lack of fit or compatibility between designed artefacts on the one hand, and human capabilities, limitations and needs on the other. While the intention of early researchers was not to create a new discipline, that is effectively what happened, as is the case, I suspect, with many disciplines.

In 1977, the Tenerife runway accident occurred. This led to a renewed focus on behaviour, especially communication and teamwork, and ultimately the development of crew resource management (CRM). The term CRM was invented by American aviation psychologist John Lauber, who defined it as “using all the available resources – information, equipment, and people – to achieve safe and efficient flight operations”. The concept was further developed and tested by other applied psychologists, such as Bob Helmreich, drawing especially from social psychology.

It is worth saying here that Human Factors and Applied Psychology are closely related, sometimes indistinguishably so in practice. Applied Psychology is one of several core disciplines of HF, and a large proportion of HF specialists, including those involved in the initial development of CRM and TEM, are psychologists. But the fields are also distinct disciplines and professions. In terms of the ICAO SHELL model, psychology tends to focus on the ‘liveware’ and ‘liveware-liveware’ interactions – people, individually and collectively. Human Factors tends to focus on the patterns of interactions between ‘liveware’, ‘software’ (including policies and procedures), ‘hardware’ and the ‘environment’ – the relationships between elements are more interesting and relevant than the elements themselves. Psychology is a human science focused on mind and behaviour. Human Factors is a design discipline focused on system interactions.

ICAO SHELL Model

Since ‘Human Factors’ was better embedded as a term in aviation, CRM was soon associated with Human Factors in a cockpit and crew context. In a sense, it is an aspect of ‘Human Factors in Operations’, though even then, it is only one aspect – one application. CRM training typically comprises a training course and subsequent monitoring of CRM skills during simulator flights (line-oriented flight training, or LOFT). CRM training is now a regulatory requirement for commercial pilots under most regulatory bodies.

Threat and error management (TEM) also emerged; it is often seen as another application of HF, and is used alongside the normal operations safety survey (NOSS) in aviation. Interestingly, ICAO notes in a circular on TEM that:

It must be made clear from the outset that TEM and NOSS are neither human performance/Human Factors research tools, nor human performance evaluation/assessment tools. TEM and NOSS are operational tools designed to be primarily, but not exclusively, used by safety managers in their endeavours to identify and manage safety issues as they may affect safety and efficiency of aviation operations.

CRM, and to a lesser extent TEM, has since become widespread not only in aviation, but also in rail, shipping, healthcare, and other sectors.

The downside of this success is the perceived (but false) equivalence of ‘Human Factors’ and ‘training-based behaviour modification’. This perception is more prevalent among those who have received such training (e.g., pilots and clinicians), and where there are no or few Human Factors practitioners working more systemically. Unfortunately, the perception has spread to managers, who have come to see Human Factors as ‘done’ once training has been delivered. This creates a moral hazard. If there are now inadequate funds available to address wider systems problems, and if failure is seen as focused on individual and team performance, then failure is both more likely and more punishable.

So it is fair to say that Human Factors researchers and practitioners are uncomfortable with training as an intervention for problems that are not fundamentally associated with competency, at least in the first instance. Since training is about modifying people – fitting people to tasks – it seems to go against the philosophy of Human Factors. If interaction problems are rooted more in the design of activities, tools, and contexts of work, then those are the first ports of call when it comes to modification. “It’s easier to bend metal than twist arms”, wrote Sanders and McCormick (1993), while James Reason wrote “You cannot change the human condition, but you can change the conditions in which humans work” (2000).

From a practical point of view, training to modify behaviour is expensive and often ineffective in the short or long term, unless done in a way that integrates a thorough understanding of Human Factors. More is said on this in Russ et al.’s ‘The science of human factors: separating fact from fiction’, an excellent paper written by Human Factors specialists from psychological, engineering and clinical backgrounds.

But to put it into perspective, consider the National Health Service in England, which employs around 1.5 million people (around 1.1 million full-time equivalent). Around half a million of these are doctors, nurses, midwives and ambulance staff. Training is essential for all staff, in order to do their jobs. But imagine training 500,000 staff to modify their behaviour in order to address problems. You’d still be left with inadequate staffing, poor rosters, confusing medicine packaging, badly designed equipment and facilities, too many policies and guidelines, shallow investigations, and stressful jobs and tasks, to pick just a few remaining problems. (And you’d still have to train the 140,000 or so pharmacists, radiographers, operating theatre practitioners and other scientific, therapeutic and technical staff.) During this training process, many staff would also have left, and new staff would have joined. And after a year or so, training would need to be refreshed. Training staff in behaviour modification can make painting the Forth Bridge look easy.

Ultimately, all training aims to modify behaviour or practice, but it would be nonsensical to call all training ‘Human Factors’. ‘Human Factors’ is often invoked for so-called ‘non-technical’ skills rather than ‘technical skills’ – a false dichotomy on both theoretical and practical grounds, with unfortunate unintended consequences.

Still, I would argue that, if done well, behaviour modification training can be an application of Human Factors. If you’ve read this far, then you might be wondering how. One argument can be seen in the example of CRM, which can be found in Human Factors journals and in some textbooks. However, to reinforce the point about non-equivalence, training-based behaviour modification approaches are indeed a minority of articles. Given the number of pages of journals and textbooks on Human Factors, I would estimate that training-based behaviour modification solutions are mentioned in fewer than 1 in every 100 pages.

So what might make training-based behaviour modification a ‘Human Factors’ intervention, since all training aims to modify behaviour? The conditions might involve the following sorts of activities, laid out below in a process.

  1. A problem or opportunity relating to the interaction between humans and other elements of a system has been identified and investigated.
  2. The interactions between people, activities, contexts and tools/technologies are analysed and understood using Human Factors methods.
  3. Needs arising from 1 and 2 above are analysed and understood, considering both system performance and human wellbeing criteria.
  4. A range of solutions is considered, as ways of meeting these needs.
  5. Training is identified as an appropriate solution (typically, along with others).
  6. Training requirements are defined.
  7. A prototype training solution is developed (typically in conjunction with other prototype solutions).
  8. The prototype training solution is implemented and evaluated, ideally in conditions that are reasonably reflective of real working conditions.
  9. If the needs are not met, then the process returns to any of the steps 1 to 7 (the activities may need to be done more thoroughly, perhaps, or the problem or context may have changed).
  10. If the needs are met, then the training solution is implemented and sustained.

With such a process, we can say that training is a well-designed solution to a well-understood problem or opportunity. Training, in this context, is part of the work context, and must be designed. Where training is simply provided en masse without these steps (accepting that there will be compromises – the above is intended as a fairly robust process), then we would have to question whether training is a well-designed solution to a well-understood problem or opportunity.

What about simply teaching people about ‘factors of humans’ – memory, attention, decision making, fatigue, and the like? Again, if something like the process above is followed, then one can be confident that this is a ‘Human Factors solution’. If the process is heavily compromised, or not followed at all, then there may well be too many assumptions about:

  • the problem or opportunity
  • the people, activities, contexts and tools (PACT) that are exposed to the problem or opportunity
  • the suitability of training as a solution
  • the adequacy of the development, evaluation and implementation of training
  • competing systems and behaviours that affect the behaviour targeted by training, and
  • the sustainability of training as a solution.

So how can you know whether training-based behaviour modification is a Human Factors intervention, or…just training? If a training-based behaviour modification solution is offered off the shelf, without following something like the 10 steps above, then it is probably fair to say that it isn’t a Human Factors intervention. One quick test is to check how soon training is proposed in response to an identified problem or opportunity. If any of steps 1 to 4 have been missed in any significant way (regarding the understanding of the problem/opportunity, context and possible solutions), then it’s probably not a Human Factors intervention, and it would be more appropriate (and helpful) to describe such training as something else (much ‘Human Factors Training’ would be better described as something more contextual and specific). If any of steps 6 to 9 have been missed (regarding the development, evaluation and implementation of training), then the training solution may not be well designed, no matter how it is branded.
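As a rough sketch of that quick test (my own illustration, not a published tool), the ten steps above can be treated as a simple checklist that flags any proposal reaching training (step 5) before the understanding steps (1 to 4) have been addressed.

```python
# Rough, illustrative sketch: flag an off-the-shelf 'training fix' that skips
# the understanding steps (1-4) of the ten-step process described above.

UNDERSTANDING_STEPS = {1, 2, 3, 4}   # problem, interactions, needs, options

def quick_test(completed_steps: set[int]) -> str:
    """Check whether steps 1-4 were done before training was proposed."""
    skipped = sorted(UNDERSTANDING_STEPS - completed_steps)
    if skipped:
        return f"Probably not a Human Factors intervention: steps {skipped} skipped"
    return "Consistent with a Human Factors intervention (so far)"

print(quick_test({5, 6, 7}))         # training offered with no prior understanding
print(quick_test({1, 2, 3, 4, 5}))   # understanding came first
```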


What Human Factors isn’t: 2. Courtesy and Civility at Work

‘Human Factors’ (or Ergonomics) is often presented as something that it’s not, or as something that is only a small part of the whole. Rather than just explain what Human Factors is, in this sporadic series of short posts I will explain what it isn’t. The posts outline a number of myths, misunderstandings, and false equivalencies.

In this series:

  1. What Human Factors Isn’t: 1. Common Sense
  2. What Human Factors isn’t: 2. Courtesy and Civility at Work
  3. What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training
  4. What Human Factors isn’t: 4. A cause of accidents

ResoluteSupportMedia CC BY 2.0 https://flic.kr/p/anoN2L

Human Factors Isn’t Courtesy and Civility at Work

Some myths about Human Factors are just plain wrong, such as the common sense myth. Others are more subtly wrong. One of these is the false equivalence of ‘Human Factors’ with ‘good behaviour at work’. Courtesy and civility are fundamental human values, expressed differently in different cultures, and as such may be seen as ‘factors of humans’ in the vernacular sense. They are themes that are increasingly common in healthcare in particular. These are undoubtedly important aspects of life, including work-life. Research reported in healthcare journals has shown that rudeness has adverse consequences on the diagnostic and procedural performance of clinical team members, and on staff satisfaction and retention, among other outcomes. It is the focus of campaigns such as Civility Saves Lives. Research on social media indicates that incivility is a growing problem: it seems to be perceived as the norm of online interaction, rather than the exception. So courtesy and civility may also be seen as specific ‘factors affecting humans’, and important aspects of professionalism, in a work context.

But to equate these values with Human Factors as a discipline or field of study (and practice) is erroneous. The terms rarely come up in the Human Factors and Ergonomics literature. I was unable to find either in the title, keywords or abstracts of any article published in ‘Human Factors’, ‘Ergonomics’ or ‘Applied Ergonomics’ – the top three journals in the discipline. Nor are the terms listed in any indices of Human Factors textbooks (at least the ones that I have). Human Factors practitioners are unlikely to have specific expertise in the topic, though those working in healthcare may well be aware of some of the related healthcare literature. They would probably see these topics as a better fit with other disciplines.

So this wouldn’t be a surprise to researchers and practitioners of Human Factors, since the terms seem not to fit the scope of Human Factors:

Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.

Practitioners of ergonomics and ergonomists contribute to the design and evaluation of tasks, jobs, products, environments and systems in order to make them compatible with the needs, abilities and limitations of people.

International Ergonomics Association (2019)

Courtesy and civility are critically important, and crop up in disciplines such as psychology, sociology, anthropology, organisational behaviour and human resources management, as well as professional studies, interdisciplinary studies and healthcare in particular. But an association with the term ‘Human Factors’ is unhelpful. First, it dilutes the essential focus of Human Factors on design (of work), though one might argue that courteous and civil interactions can be designed and reinforced, for instance through teamwork training. (That being the case, courtesy and civility are aspects of a specific application of Human Factors, but should not be equated with the term.) Second, the terms distort the focus of Human Factors on ‘fitting the task to the person’. Third, a false equivalence with Human Factors may reinforce the myth that Human Factors is (or that human factors are) ‘common sense’; most people understand their importance, and how to be courteous and civil in everyday life, even if these behaviours lapse from time to time.

Courtesy and civility should be an important topic for social dialogue in all aspects of life. They are also important aspects of training – fitting the person to the job. But we should be careful not to overemphasise courtesy and civility in conversations about ‘Human Factors’. The false equivalence of courtesy and civility with Human Factors risks diluting its scope to ‘everything human’ – all humanities – along with its essential focus on designing for system performance and human wellbeing.
