Shorrock’s Law of Limits

Last year, I noticed a tweet from the European Cockpit Association (ECA) on EU flight time limitations (FTLs; Commission Regulation (EU) 83/2014, applicable from 18 February 2016). The FTLs have been controversial since their inception. The ECA’s ‘Dead Tired’ campaign website lists a number of stories from 2012-13, often concerning the scientific integrity of the proposals, and goal conflicts between working conditions and passenger safety versus commercial considerations. Consecutive disruptive schedules, night-time operations and inadequate standby rules have been highlighted as problems by the ECA. Didier Moraine, an ECA FTL expert, stated that “basic compliance with EASA FTL rules does not necessarily ensure safe rosters. They may actually build unsafe rosters.”

In May 2018, the ECA Twitter account reported that EASA’s Flight Standards Director Jesper Rasmussen reminded a workshop audience that FTLs are to be seen as hard limits, not as targets.

A February 2019 study published by the European Union Aviation Safety Agency (EASA) found that prescriptive limits alone are not sufficient to prevent high fatigue during night flights.

“When you put a limit on a measure, if that measure relates to efficiency, the limit will be used as a target.”

This relates to Goodhart’s Law, expressed succinctly by anthropologist Marilyn Strathern as follows: “When a measure becomes a target, it ceases to be a good measure.” It also relates to The Law of Stretched Systems, expressed as follows by David Woods: “Every system is stretched to operate at its capacity; as soon as there is some improvement, for example in the form of new technology, it will be exploited to achieve a new intensity and tempo of activity.” Woods also notes that this law “captures the co-adaptive dynamic that human leaders under pressure for higher and more efficient levels of performance will exploit new capabilities to demand more complex forms of work.” But this particular aspect of system behaviour concerning limits, simple as it is, is not quite expressed by either.

An everyday example of the Law of Limits can be found in driving. As in most countries, roads in Britain have speed limits that depend on the road type. In 2015, on roads with a 30 mph speed limit, the average free-flow speed at which drivers chose to travel, as observed at sampled automatic traffic counter (ATC) locations, was 31 mph for cars and light goods vehicles. (The figure was 30 mph for rigid and articulated heavy goods vehicles [HGVs], and 28 mph for buses.) In the same year, on motorways with a 70 mph speed limit for cars and light goods vehicles, the average speed was 68 mph for cars and 69 mph for light goods vehicles. Most drivers will be familiar with the activity of driving as close to the limit as possible. Many things contribute to this, primarily a drive for efficiency coupled with a fear of the consequences of exceeding the limit. Many more examples can be found in everyday life, wherever limits relating to any measure are imposed and treated as targets when efficiency gains can be made.
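To make the dynamic concrete, here is a toy model (my own illustration; all numbers are made up) in which a driver picks the speed that maximises the benefit of arriving sooner minus the expected cost of a fine above the limit. The optimum lands exactly on the limit.

```python
# A toy sketch of limit-hugging (illustrative only; all numbers are made up).
# A driver weighs the efficiency benefit of speed against the expected cost
# of enforcement above a 30 mph limit, then picks the best speed.
import numpy as np

speeds = np.arange(20.0, 40.5, 0.5)            # candidate speeds, mph
benefit = 2.0 * speeds                         # time saved grows with speed
p_caught = np.where(speeds > 30.0, 0.5, 0.0)   # enforcement risk above the limit
utility = benefit - p_caught * 100.0           # hypothetical 100-unit fine

print(speeds[np.argmax(utility)])              # -> 30.0: the limit is the target
```

Nothing in the model tells the driver to treat 30 mph as a target; hugging the limit simply falls out of optimising for efficiency under a hard limit with penalties for exceeding it.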

The following is a post on Medium by David Manheim, a researcher and catastrophist focusing on risk analysis and decision theory, including existential risk mitigation, computational modelling, and epidemiology. It is reproduced here with kind permission.


Shorrock’s Law of Limits

Written by David Manheim, 25 May 2018

I recently saw an interesting new insight into the dynamics of over-optimization failures stated by Steven Shorrock: “When you put a limit on a measure, if that measure relates to efficiency, the limit will be used as a target.” This seems to be a combination of several dynamics that can co-occur in at least a couple of ways, and despite my extensive earlier discussion of related issues, I think it’s worth laying out these dynamics along with a few examples to illustrate them.

When limits become targets

First, there is a general fact about constrained optimization which, in simple terms, says that for certain types of system the best solution to a problem involves hitting one of the limits. This was shown formally in a lemma by Dantzig relating to the simplex method: the maximum of a convex function over a bounded feasible region must lie at an extreme point of that region. (Convexity is important, but we’ll get back to it later.)

When a regulator imposes a limit on a system, it’s usually because they see a problem with exceeding that limit. If the limit is a binding constraint (that is, if you limit something critical to the process and require a lower level of the metric than is currently being produced), the best response is to hug the limit as closely as possible. If we limit how many hours a pilot can fly (the initial prompt for Shorrock’s law), or how many hours a trucker can drive, the best way to comply is to get as close to the limit as possible, which minimizes how much the limit impacts overall efficiency.
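As a minimal sketch of this binding-constraint logic (an illustration added here, not from the original post; hypothetical numbers, assuming SciPy is available), consider maximizing the value of hours flown on day and night routes under a total duty-time cap and a night-duty cap. The optimum sits at a vertex of the feasible region, with both limits hit exactly:

```python
# A minimal sketch (hypothetical numbers): maximize the value of hours flown
# on day routes (x0) and night routes (x1) under two regulatory caps.
# linprog minimizes, so the objective coefficients are negated.
from scipy.optimize import linprog

result = linprog(
    c=[-1.0, -1.5],               # maximize 1.0*x0 + 1.5*x1
    A_ub=[[1.0, 1.0],             # total duty hours: x0 + x1 <= 100
          [0.0, 1.0]],            # night duty hours: x1 <= 40
    b_ub=[100.0, 40.0],
    bounds=[(0, None), (0, None)],
)

print(result.x)  # -> [60. 40.]: both limits are hugged exactly (a vertex)
```

The solver does not “aim” for the limits; the extreme point is simply where the constrained optimum lives, which is why a binding limit behaves like a target.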

There are often good reasons not to track a given metric: it may be unclear how to measure, or expensive to measure. A large part of the reason that companies don’t optimize for certain factors is that they aren’t tracked. What isn’t measured isn’t managed, but once there is a legal requirement to measure something, it’s much cheaper to start using that data to manage it. The companies now have something they must track, and once they are tracking hours, it would be wasteful not to also optimize for them.

Even when the limit is only sometimes reached in practice before the regulation is put in place, formalizing the metric and the limitation makes it more explicit, leading to reification of the metric. This isn’t only because of the newly required cost of tracking the metric; it’s also because what used to be a difficult-to-conceptualize factor like “tiredness” now has a readily available, albeit imperfect, metric.

Lastly, there is the motivation to cheat. Before fuel efficiency standards, there was no incentive for companies to explicitly target the metric. Once the limit was put into place, companies needed to pay attention — and paying attention to a specific feature means that decisions are made with this new factor in mind. The newly reified metric gets gamed, and suddenly there is a ton of money at stake. And sometimes the easiest way to perform better is to cheat.

So there are a lot of reasons that regulators should worry about creating targets, and ignoring second-order effects caused by these rules is naive at best. If we expect the benefits to just exceed the costs, we should adjust those expectations sharply downward, and if we haven’t given fairly concrete and explicit consideration to how the rule will be gamed, we should expect to be unpleasantly surprised. That doesn’t imply that metrics can’t improve things, and it doesn’t even imply that regulations aren’t often justifiable. But it does mean that the burden of proof for justifying new regulation needs to be higher than we might previously have assumed.

Posted in systems thinking

What Human Factors isn’t: 4. A Cause of Accidents

‘Human Factors’ (or Ergonomics) is often presented as something that it’s not, or as something that is only a small part of the whole. Rather than just explain what Human Factors is, in this sporadic series of short posts I will explain what it isn’t. The posts outline a number of myths, misunderstandings, and false equivalencies.

In this series:

  1. What Human Factors isn’t: 1. Common Sense
  2. What Human Factors isn’t: 2. Courtesy and Civility at Work
  3. What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training
  4. What Human Factors isn’t: 4. A Cause of Accidents (this post)

Royal Navy Media Archive CC BY-NC 2.0 https://flic.kr/p/NqZrz5

Human Factors Isn’t a Cause of Accidents

An unfortunate use of the term ‘human factors’ in industry, and in the media, is as an explanation for failure. Through this lens, human factors is (or ‘are’, since the phrase tends to be used as a plural in this context) seen as a cause of accidents or other unwanted events. This immediately confuses the discipline and profession of Human Factors with a narrow, unsystemic view of factors of humans – human factors in the vernacular. (Much as I dislike capitalisation, I will use it here to separate the two.) While human limitations are relevant to accident analysis (and the analysis of work more generally), and indeed form part of many analytical methods, neither the vernacular ‘human factors’ nor the discipline of Human Factors is an explanation for failure. Below, I outline a few problems with this all-too-common perspective.

‘Failure’ means not achieving planned objectives. Since people set objectives, make plans and execute actions to achieve objectives, almost all failure is associated with humans, unless there is some chance agency or natural phenomenon involved (e.g., weather). Even then, one could take a counter-factual perspective, as is often done in accident analysis, and say that humans could have or should have predicted and planned for this.

Logically, ‘success’ has the same characteristics. Humans set objectives, make plans, and execute actions at all levels of system functioning, from law-making to front-line performance. So if failure is down to ‘human factors’ then so is success, which arguably accounts for the majority of outcomes in day-to-day work.

By this reasoning, ‘human factors’ as a cause of accidents is a monolithic explanation – even more so than ‘safety culture’. ‘Human factors’ as a cause of accidents explains both everything and nothing. Having said this, ‘human factors’ is often seen more specifically as a set of factors of humans (humans being unreliable and unpredictable elements of an otherwise well-designed and well-managed system) that are proximal to accidents.

This interpretation has been reinforced by the use of the word ‘organisational’ alongside ‘human’ in some quarters. For instance, the UK Health and Safety Executive used the term ‘Human and Organisational Factors’ to broaden out the perceived scope of the ‘HOF’ contribution (to incidents and accidents), and there is a growing ‘Human and Organisational Performance’ movement, which has grown from ‘Human Performance’. This is curious to many Human Factors professionals, because organisations – being created by, composed of, and run by humans – were always within the scope of Human Factors (sometimes called ‘macro ergonomics’) from the beginning.

The proximalisation and narrowing of ‘human factors’ become especially important when combined with the post hoc ergo propter hoc fallacy: the assumption that because an event happened after something (an action or omission), it happened because of that something. This is especially problematic in complex, high-hazard systems that are highly regulated and where systems are required to account for performance variability, in terms of design, management, and operation.

An example of proximalisation can be seen in the aftermath of the train that crashed at Santiago de Compostela in July 2013. Human error was immediately reported as the cause. A safety investigation by CIAF (here in Spanish), published in June 2014, found that driving staff “failed to follow the regulations contained in the train timetable and the route plan”. Subsequently, the European Railway Agency (now the European Union Agency for Railways) found that “the emphasis of the CIAF report is put on the direct cause (one human error) and on the driver’s (non-) compliance with rules, rather [than] on the underlying and root causes. Those causes are not reported as part of the conclusions of the report and typically are the most likely to include the organisational actions of Adif and Renfe.” As reported here, “many survivors, campaigners and rail analysts…questioned why rail officers in charge of the train and rail network had not factored in the possibility of human error – particularly at a bend as potentially dangerous as the Angrois curve – and had failed to put in place technology that could mitigate it”.

The safety investigation seemed to mirror a view of causation that allows for counterfactual reasoning only in the proximate sense – who touched it or failed to touch it last. In this case, and many others, it seemed that omissions are only causal when they occur at the sharp end, even though sharp-end omissions typically occur over the course of seconds and minutes, not months and years.

In the case of Santiago de Compostela, the driver, Francisco José Garzón Amo, was for much of the time since July 2013 the only person facing trial. However, several officials have been named in, and dropped from, judicial proceedings over the years. Their causal contributions seem to be harder to ascertain. At the time of writing, Andrés María Cortabitarte López, Director of Traffic Safety of ADIF, is also facing charges for disconnecting the ERTMS (European Railway Traffic Management System) without having previously assessed the risk of that decision. (Ignacio Jorge Iglesias Díaz, director of the Laboratory of Railway Interoperability of Cedex, said that ERTMS has a failure every billion hours, while part of the security provided by the ASFA system “rests on the human factor”.) As yet, over seven years later, no date has been set for the oral trial that will determine whether the accused are convicted of 80 counts of involuntary manslaughter and 144 counts of serious professional imprudence.

All of this is to say that framing ‘human factors’ as a cause of accidents has consequences for both safety and justice, as does the scope of ‘human factors’ that is expressed or implied in discourse. By framing people as the unreliable components of an otherwise well-designed and well-managed system, ‘human factors as a cause of accidents’ encourages brittle strategies in response to design problems: reminders, re-training, more procedures. But this is not all. This perspective, focusing on ‘human factors’ as the source of failure but not the overwhelming source of success, encourages technological solutionism: more automation. This changes the nature of human involvement, rather than ‘reducing the human factor’, and comes with ironies that are even less well understood.

So ‘human factors’ isn’t an explanation, but Human Factors theory and method can help to explain failure and, moreover, everyday work. ‘Human factors’ isn’t a reason for failure, but Human Factors helps to reason about failure and, moreover, about everyday work.

Unfortunately, some Human Factors methods that have emerged from a Safety-I mindset (curiously different to the progressive mindset that created the discipline) may have encouraged a negative frame of understanding. The Human Factors Analysis and Classification System (HFACS), for instance, classifies accidents according to ‘unsafe acts’ (errors and violations), ‘preconditions for unsafe acts’, ‘unsafe supervision’, and ‘organizational influences’. The word ‘unsafe’ here is driven by outcome and hindsight biases. Arguably, it should not be attached to other words, since safety in complex sociotechnical systems is emergent, not resultant. Such Human Factors analysis tools typically classify ‘error’ (difficult as it is to define) and ‘violation’ only at the sharp end (blunt-end equivalents are seen as ‘performance shaping factors’ or, in the case of HFACS, ‘influences’). So, inadvertently, Safety-I Human Factors may have encouraged proximalisation to some degree, linguistically and analytically, since errors are only errors when they can be conveniently bound, and everything else is a condition or influence, ever weakening with more time and distance from the outcomes. Again, this has implications for explanation and intervention.

Still, in the main, Human Factors is interested primarily in normal work, and sociotechnical system interaction is the primary focus of study, not accidents. Within this frame is the total influence of human involvement on system performance, and the effects of system performance on human wellbeing. Even within safety research and practice, there is an increasing emphasis in Human Factors on human involvement in how things go right, or just how things go – Safety-II.

But the term ‘human factors’ will probably be used in the vernacular for some time yet. My best advice for those who use the term ‘human factors’ in their work is to think very carefully before using the term as a cause of, or explanation for, failure. Doing so is not only meaningless, but has potential consequences for safety and justice, and even the future of work, which may be hard to imagine.

Posted in Human Factors/Ergonomics

What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training

‘Human Factors’ (or Ergonomics) is often presented as something that it’s not, or as something that is only a small part of the whole. Rather than just explain what Human Factors is, in this sporadic series of short posts I will explain what it isn’t. The posts outline a number of myths, misunderstandings, and false equivalencies.

In this series:

  1. What Human Factors isn’t: 1. Common Sense
  2. What Human Factors isn’t: 2. Courtesy and Civility at Work
  3. What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training (this post)
  4. What Human Factors isn’t: 4. A cause of accidents

Royal Navy Media Archive CC BY-NC 2.0 https://flic.kr/p/CCkksw

Human Factors Isn’t Off-the-shelf Behaviour Modification Training

Human Factors and behaviour modification training have a somewhat complicated relationship. It is not easy to explain, especially in a way with which everyone would agree. I will start by saying that one thing is certain: Human Factors and training-based behaviour modification are not equivalent. But, in my view, training-based behaviour modification can be an application of Human Factors. In other words, the two are not equivalent, but one can be an application of the other. I’ll try to explain.

Human Factors has a core focus that can be described in a few words as ‘fitting the work to the people’ or ‘designing for human use’. It does this in the context of the system as a whole. More formally, there are a number of definitions that help to make the point, but they tend to include two foci: understanding system interactions as the method of understanding, and design as the method of intervention. These foci are not contentious: they are core to many definitions and are the foci of Human Factors textbooks and degrees. My preferred definition was offered by my late PhD supervisor, Prof. John Wilson:

“Understanding the interactions between people and all other elements within a system, and design in light of this understanding.” (Wilson, 2014, p.12)

The word that is sometimes subject to discussion is the word ‘design’. In the context of Human Factors, it can be described as a process for solving problems and realising opportunities relating to interactions between people and all other elements within a system. Some definitions flesh this out a little more, including also the goals of Human Factors, e.g.:

“Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.” (International Ergonomics Association)

(Note that the terms ‘Human Factors’ and ‘Ergonomics’, which originate in the US and Europe respectively, are usually treated as synonymous within the discipline, but often one is chosen over the other in the profession, and in practice more generally, depending also on the country.)

Going back to the origins of Human Factors in WWII aviation, it began with observations around the lack of fit or compatibility between designed artefacts on the one hand, and human capabilities, limitations and needs on the other. While the intention of early researchers was not to create a new discipline, that is effectively what happened, as is the case, I suspect, with many disciplines.

In 1977, the Tenerife runway accident occurred. This led to a renewed focus on behaviour, especially communication and teamwork, and ultimately the development of crew resource management (CRM). The term CRM was invented by American aviation psychologist John Lauber, who defined it as “using all the available resources – information, equipment, and people – to achieve safe and efficient flight operations”. The concept was further developed and tested by other applied psychologists, such as Bob Helmreich, drawing especially from social psychology.

It is worth saying here that Human Factors and Applied Psychology are closely related, sometimes indistinguishably so in practice. Applied Psychology is one of several core disciplines of HF, and a large proportion of HF specialists, including those involved in the initial development of CRM and TEM, are psychologists. But the fields are also distinct disciplines and professions. In terms of the ICAO SHELL model, psychology tends to focus on the ‘liveware’ and ‘liveware-liveware’ interactions – people, individually and collectively. Human Factors tends to focus on the patterns of interactions between ‘liveware’, ‘software’ (including policies and procedures), ‘hardware’ and the ‘environment’ – the relationships between elements are more interesting and relevant than the elements themselves. Psychology is a human science focused on mind and behaviour. Human Factors is a design discipline focused on system interactions.

ICAO SHELL Model

Since ‘Human Factors’ was better embedded as a term in aviation, CRM was soon associated with Human Factors in a cockpit and crew context. In a sense, it is an aspect of ‘Human Factors in Operations’, though even then, it is only one aspect – one application. CRM training typically comprises a training course and subsequent monitoring of CRM skills during simulator flights (line-oriented flight training, or LOFT). CRM training is now a regulatory requirement for commercial pilots under most regulatory regimes.

Threat and error management (TEM) also emerged; it is often seen as another application of HF, used alongside the normal operations safety survey (NOSS) in aviation. Interestingly, ICAO notes in a circular on TEM that:

It must be made clear from the outset that TEM and NOSS are neither human performance/Human Factors research tools, nor human performance evaluation/assessment tools. TEM and NOSS are operational tools designed to be primarily, but not exclusively, used by safety managers in their endeavours to identify and manage safety issues as they may affect safety and efficiency of aviation operations.

CRM, and to a lesser extent TEM, has since become widespread not only in aviation, but also in rail, shipping, healthcare, and other sectors.

The downside of this success is the perceived (but false) equivalence of ‘Human Factors’ and ‘training-based behaviour modification’. This perception is more prevalent among those who have received such training (e.g., pilots and clinicians), and where there are no or few Human Factors practitioners working more systemically. Unfortunately, the perception has spread to managers, who have come to see Human Factors as ‘done’ once training has been delivered. This creates a moral hazard: if inadequate funds remain to address wider system problems, and if failure is framed in terms of individual and team performance, then failure becomes both more likely and more punishable.

So it is fair to say that Human Factors researchers and practitioners are uncomfortable with training as an intervention for problems that are not fundamentally associated with competency, at least in the first instance. Since training is about modifying people – fitting people to tasks – it seems to go against the philosophy of Human Factors. If interaction problems are rooted more in the design of activities, tools, and contexts of work, then those are the first ports of call when it comes to modification. “It’s easier to bend metal than twist arms”, wrote Sanders and McCormick (1993), while James Reason wrote “You cannot change the human condition, but you can change the conditions in which humans work” (2000).

From a practical point of view, training to modify behaviour is expensive and often ineffective in the short or long term, unless done in a way that integrates a thorough understanding of Human Factors. More is said on this in Russ et al.’s The science of human factors: separating fact from fiction, an excellent paper written by Human Factors specialists from psychological, engineering and clinical backgrounds.

But to put it into perspective, consider the National Health Service in England, which employs around 1.5 million people (1.1 million full-time equivalents). Around half a million of these are doctors, nurses, midwives and ambulance staff. Training is essential for all staff, in order to do their jobs. But imagine training 500,000 staff to modify their behaviour in order to address problems. You’d still be left with inadequate staffing, poor rosters, confusing medicine packaging, badly designed equipment and facilities, too many policies and guidelines, shallow investigations, and stressful jobs and tasks, to pick just a few remaining problems. (And you’d still have to train the 140,000 or so pharmacists, radiographers, operating theatre practitioners and other scientific, therapeutic and technical staff.) During this training process, many staff would also have left, and new staff would have joined. And after a year or so, training would need to be refreshed. Training staff in behaviour modification can make painting the Forth Bridge look easy.

Ultimately, all training aims to modify behaviour or practice, but it would be nonsensical to call all training ‘Human Factors’. ‘Human Factors’ is often invoked for so-called ‘non-technical’ skills rather than ‘technical skills’ – a false dichotomy on both theoretical and practical grounds, with unfortunate unintended consequences.

Still, I would argue that, if done well, behaviour modification training can be an application of Human Factors. If you’ve read this far, then you might be wondering how. One argument can be seen in the example of CRM, which features in Human Factors journals and in some textbooks. However, to reinforce the point about non-equivalence, training-based behaviour modification approaches account for only a minority of articles. Given the number of pages of journals and textbooks on Human Factors, I would estimate that training-based behaviour modification solutions are mentioned in fewer than 1 in every 100 pages.

So what might make training-based behaviour modification a ‘Human Factors’ intervention, since all training aims to modify behaviour? The conditions might involve the following sorts of activities, laid out below in a process.

  1. A problem or opportunity relating to the interaction between humans and other elements of a system has been identified and investigated.
  2. The interactions between people, activities, contexts and tools/technologies are analysed and understood using Human Factors methods.
  3. Needs arising from 1 and 2 above are analysed and understood, considering both system performance and human wellbeing criteria.
  4. A range of solutions is considered, as ways of meeting these needs.
  5. Training is identified as an appropriate solution (typically, along with others).
  6. Training requirements are defined.
  7. A prototype training solution is developed (typically in conjunction with other prototype solutions).
  8. The prototype training solution is implemented and evaluated, ideally in conditions that are reasonably reflective of real working conditions.
  9. If the needs are not met, then the process returns to any of the steps 1 to 7 (the activities may need to be done more thoroughly, perhaps, or the problem or context may have changed).
  10. If the needs are met, then the training solution is implemented and sustained.

With such a process, we can say that training is a well-designed solution to a well-understood problem or opportunity. Training, here, becomes part of the work context, and must be designed. Where training is simply provided en masse without these steps (accepting that there will be compromises; the above is intended as a fairly robust process), then we would have to question whether training is a well-designed solution to a well-understood problem or opportunity.

What about simply teaching people about ‘factors of humans‘ – memory, attention, decision making, fatigue, and the like? Again, if something like the process above is followed, then one can be confident that this is a ‘Human Factors solution’. If the process is heavily compromised, or not followed at all, then there may well be too many assumptions about:

  • the problem or opportunity
  • the people, activities, contexts and tools (PACT) that are exposed to the problem or opportunity
  • the suitability of training as a solution
  • the adequacy of the development, evaluation and implementation of training
  • competing systems and behaviours that affect the behaviour targeted by training, and
  • the sustainability of training as a solution.

So how can you know training-based behaviour modification is a Human Factors intervention, or…just training? If a training-based behaviour modification solution is offered off the shelf, without following something like the 10 steps above, then it is probably fair to say that it isn’t a Human Factors intervention. One quick test is to check how soon training is proposed in response to an identified problem or opportunity. If any of steps 1 to 4 have been missed in any significant way (regarding the understanding of the problem/opportunity, context and possible solutions), then it’s probably not a Human Factors intervention, and it would be more appropriate (and helpful) to describe such training as something else (much ‘Human Factors Training’ would be better described as something more contextual and specific). If any of steps 6 to 9 have been missed (regarding the development, evaluation and implementation of training), then the training solution may not be well-designed, no matter how it is branded.
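As a rough illustration of this quick test (my own toy encoding, not an established assessment tool), the check can be expressed as: were the understanding steps (1 to 4) and the development steps (6 to 9) actually carried out before the training was rolled out?

```python
# A toy encoding of the quick test above (my own illustration, not an
# established tool). Step numbers refer to the 10-step process earlier.
UNDERSTANDING = {1, 2, 3, 4}   # problem, interactions, needs, candidate solutions
DEVELOPMENT = {6, 7, 8, 9}     # requirements, prototype, evaluation, iteration

def looks_like_hf_intervention(steps_completed: set) -> bool:
    """True only if all understanding and development steps were done."""
    return UNDERSTANDING <= steps_completed and DEVELOPMENT <= steps_completed

print(looks_like_hf_intervention({5, 10}))            # False: off-the-shelf training
print(looks_like_hf_intervention(set(range(1, 11))))  # True: a designed solution
```

The point is not the code but the gate: if a proposal jumps straight to step 5, the ‘training’ label may be fine, but the ‘Human Factors’ label probably isn’t.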

Posted in Human Factors/Ergonomics

What Human Factors isn’t: 2. Courtesy and Civility at Work

‘Human Factors’ (or Ergonomics) is often presented as something that it’s not, or as something that is only a small part of the whole. Rather than just explain what Human Factors is, in this sporadic series of short posts I will explain what it isn’t. The posts outline a number of myths, misunderstandings, and false equivalencies.

In this series:

  1. What Human Factors isn’t: 1. Common Sense
  2. What Human Factors isn’t: 2. Courtesy and Civility at Work
  3. What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training
  4. What Human Factors isn’t: 4. A cause of accidents

ResoluteSupportMedia CC BY 2.0 https://flic.kr/p/anoN2L

Human Factors Isn’t Courtesy and Civility at Work

Some myths about Human Factors are just plain wrong, such as the common sense myth. Others are more subtly wrong. One of these is the false equivalence of ‘Human Factors’ with ‘good behaviour at work’. Courtesy and civility are fundamental human values, expressed differently in different cultures, and as such may be seen as ‘factors of humans’ in the vernacular sense. They are themes that are increasingly common in healthcare in particular. These are undoubtedly important aspects of life, including work-life. Research reported in healthcare journals has shown that rudeness has adverse effects on the diagnostic and procedural performance of clinical team members, and on staff satisfaction and retention, among other outcomes. It is the focus of campaigns such as Civility Saves Lives. Research on social media indicates that incivility is a growing problem: it seems to be perceived as the norm of online interaction, rather than the exception. So courtesy and civility may also be seen as specific ‘factors affecting humans’, and important aspects of professionalism, in a work context.

But to equate these values with Human Factors as a discipline or field of study (and practice) is erroneous. The terms rarely come up in the Human Factors and Ergonomics literature. I was unable to find either in the title, keywords or abstracts of any article published in ‘Human Factors’, ‘Ergonomics’ or ‘Applied Ergonomics’ – the top three journals in the discipline. Nor are the terms listed in the indices of Human Factors textbooks (at least the ones that I have). Human Factors practitioners are unlikely to have specific expertise in the topic, though those working in healthcare may well be aware of some of the related healthcare literature. They would probably see these topics as a better fit with other disciplines.

So this wouldn’t be a surprise to researchers and practitioners of Human Factors, since the terms seem not to fit the scope of Human Factors:

Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.

Practitioners of ergonomics and ergonomists contribute to the design and evaluation of tasks, jobs, products, environments and systems in order to make them compatible with the needs, abilities and limitations of people.

International Ergonomics Association (2019)

Courtesy and civility are critically important, and crop up in disciplines such as psychology, sociology, anthropology, organisational behaviour and human resources management, as well as professional studies, interdisciplinary studies and healthcare in particular. But an association with the term ‘Human Factors’ is unhelpful. First, it reduces the essential focus of Human Factors on design (of work), though one might argue that courteous and civil interactions can be designed and reinforced, for instance through teamwork training. (That being the case, courtesy and civility are aspects of a specific application of Human Factors, but should not be equated with the term.) Second, the terms distort the focus of Human Factors on ‘fitting the task to the person’. Third, a false equivalence with Human Factors may reinforce the myth that Human Factors is (or that human factors are) ‘common sense’; most people would understand their importance, and how to be courteous and civil in everyday life, even if these behaviours lapse from time to time.

Courtesy and civility should be an important topic for social dialogue in all aspects of life. They are also important aspects of training – fitting the person to the job. But we should be careful about overemphasising courtesy and civility in conversations about ‘Human Factors’. The false equivalence of courtesy and civility with Human Factors risks diluting its scope to ‘everything human’ – all humanities – along with its essential focus on designing for system performance and human wellbeing.

Posted in Human Factors/Ergonomics

What Human Factors isn’t: 1. Common Sense

‘Human Factors’ (or Ergonomics) is often presented as something that it’s not, or as something that is only a small part of the whole. Rather than just explain what Human Factors is, in this sporadic series of short posts I will explain what it isn’t. The posts outline a number of myths, misunderstandings, and false equivalencies.

In this series:

  1. What Human Factors isn’t: 1. Common Sense (this post)
  2. What Human Factors isn’t: 2. Courtesy and Civility at Work
  3. What Human Factors isn’t: 3. Off-the-shelf Behaviour Modification Training
  4. What Human Factors isn’t: 4. A cause of accidents

NATS – UK air traffic control, CC BY-NC-ND 2.0, https://flic.kr/p/ESpkCk

Human Factors Isn’t Common Sense

People sometimes assert that ‘Human Factors’ is common sense. The same is less often said of ‘ergonomics’ (which is equivalent within the discipline or knowledge base) and rarely said of ‘human factors engineering’ (also equivalent, but seems different because of the ‘engineering’ bit). ‘Common sense’ is also notoriously uncommon. Common frustrations with everyday door handles, shower controls, and websites are testament to this. So ‘Human Factors is common sense’ betrays a lack of understanding of both Human Factors and common sense.

Anyone who describes Human Factors as common sense implies that the interaction of physical, biological, social and engineering sciences, and the application of this to the design of work (including the artefacts and environments of work), is obvious and straightforward, and can therefore be done by anyone based on knowledge and skills that are commonly available. This couldn’t be further from the truth. Most aspects of Human Factors are difficult and complex, including: 1) the research and experience bases that contribute to the knowledge base of Human Factors; 2) the interaction of the empirical findings from the research in these fields; 3) the extrapolation and application of the knowledge base to work environments, including highly-regulated safety-critical environments that require specific evidence for claims; and 4) the practice skills, relationships and resources that are needed to do this in environments as diverse as healthcare, power generation, defence, manufacturing, transportation, and agriculture, across which Human Factors practitioners, and others who apply Human Factors methods and knowledge, often move.

The ‘common sense’ claim betrays a lack of understanding of the foundation, scope and application of Human Factors. Typically, the claim comes from those who confuse Human Factors with ‘behaving safely’. While human performance is a key aspect of Human Factors, the primary method of intervention is (work) design, not behaviour modification. Behaviour modification is usually best filed under ‘Applied Psychology’ (and, relatedly, under ‘Human Performance’ as a sphere of professional activity).

Even when the aim and scope of Human Factors are better understood, the ‘common sense’ claim confuses hindsight with foresight. When a task, artefact or environment is well designed, it is more likely to be unremarkable, or even unnoticeable. It blends in with, and subtly assists, the purposive flow of experience. It is part of ‘how things ought to be’. So it may intuitively feel like common sense because it doesn’t make the day longer and harder than it needs to be. But the activities to bring about these things, including the competencies, relationships, tools, time, project arrangements, and other resources, are not common. In sectors such as air traffic control, rail, defence, and major hazard industries, including regulators, designing for system effectiveness and human wellbeing requires the support of suitably qualified and experienced practitioners working as part of teams in multiple organisational divisions – operational, design and engineering, safety and R&D.

If Human Factors is common sense, then so are architecture, surgery, and electrical engineering, or (as foundation disciplines of Human Factors) psychology, biological sciences, and industrial design.

The common sense claim wouldn’t matter much if it were not for the false and dangerous conclusion that follows: that because ‘Human Factors’ is common sense, no competent design support is needed. People can carry on and ‘Human Factors’ will just happen as the natural order of things. The ‘natural order’ came to light in the 1940s, when ‘common sense’ cockpits led to many gear-up crash landings. Today, the ‘human factors as common sense’ myth leaves healthcare workers with dangerously confusing devices, medicine packaging, and unforgiving work environments, the consequences of which are inherited by them, by patients, by families, and by society generally.

Posted in Human Factors/Ergonomics

The Organisational Homelessness of ‘Human Factors’

Most fields of professional activity have a settled home within the divisional and departmental structures of organisations. Operational staff work in operational divisions. Engineering staff work in engineering divisions. Everyone else tends to know their place: finance, human resources, legal, safety, environment, quality, security, corporate communications, and so on.

Not so for human factors (or ergonomics; HF/E). Within organisations that are large enough to have a divisional structure, ‘human factors’ can be found in a variety of divisions.

In this post, I outline four common homes for HF/E within organisations (after Kirwan, 2000), drawing on personal experience in each of the four organisational divisions in different organisations over the past 21 years, and some of the little literature on this (Kirwan, 2000; Shorrock and Williams, 2016). I conclude with some of the implications of organisational homelessness.


Photo: Dave Gray, Design by Division, CC BY-ND 2.0, https://flic.kr/p/9gPSJj

Human Factors in Operations Divisions

‘Human performance’ is, naturally, core to HF/E (but not equivalent to it), and in sectors such as transportation, energy production, manufacturing, power generation, and mineral extraction, HF/E is sometimes located in the operational divisions of organisations. When housed here, HF/E practitioners may assist with the design and assessment of work, training, non-technical skills and [team/bridge/rail] resource management, procedure and job aid design, observational safety, assessments and advice on fatigue and shiftwork, staffing and rostering, maintenance, personal resilience and confidence, stress management, safety investigation, quality improvement, and advice and support on human performance more generally. Such issues are reflected in texts such as Flin et al.’s Safety at the sharp end and Davies and Matthews’ Human performance: Cognition, stress and individual differences.

Being close to operational teams and work-as-done can be especially rewarding. It is the only way to really understand The Messy Reality and Taboo issues. Problems and opportunities for work-as-done are hard to see from afar (if you want to understand risk, you need to get out from behind your desk). This divisional location can provide credibility with front-line operational staff, the beneficiaries of most HF interventions, and allow for the development of the relationships required for problem solving and opportunity management.

The other side of this coin is that there is a particular risk in Ops of becoming too close to operational staff, while also under the operational management structure. Independence can be compromised.

Housed in operations, human factors – as a design discipline – may also be in the unhappy position of inheriting upstream design decisions…and any resulting problematic situations. Without proper involvement in the design process, problems may come to light late in design and development. At this stage, there is considerably less opportunity for influence. HF/E practitioners in this context can also risk losing design skills and losing track of research; the research-practice gap can seem especially wide from Ops, where research tends to be valued least of all.

The shorter-term focus of operations also brings an acute-chronic trade-off: when time is limited (i.e., all the time), handling today’s problems and opportunities leaves less time for future problems and opportunities.

Human Factors in Engineering Divisions

Human factors is, fundamentally, a design discipline. This is sometimes a surprise to some who perceive it as a behavioural (or ‘human performance’) discipline, which might be seen to be more naturally aligned with operations. However, human factors – by definition – operates primarily through design, not behaviour modification. This is exemplified by various textbooks, including old classics such as Sanders and McCormick’s Human Factors in Engineering and Design and Wilson and Sharples’ Evaluation of Human Work, and, more generally, by ISO 9241 (Ergonomics of human-system interaction, especially Part 210: Human-centred design for interactive systems).

The International Ergonomics Association – the umbrella organisation for all HF/E societies and associations around the world – defines the profession as that which “applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance”. So HF/E specialists can often be found in engineering divisions of organisations.

In this organisational context, HF/E can help to address the design of equipment, tools, artefacts and infrastructure, such as control rooms, buildings, and signage. In such cases, the costs of not integrating human factors are extremely high. Compared to procedures and work routines in operational contexts, equipment, tools, artefacts and infrastructure are difficult and expensive to modify. Often, operations inherit design problems and have to adjust to them, sometimes with HF/E support in operations…

There are downsides to being aligned with the engineering divisions of organisations. Practitioners will tend to find they have to work within existing design and engineering processes, which may not be ideal for iterative human factors design. Being part of the design and engineering tribe brings some distance from operations, socially and culturally. As a result of organisational silos, the practitioner embedded in this context may well be closer to work-as-imagined and work-as-prescribed than work-as-done. Some who identify as human factors specialists – especially when previously integrated in safety or operations – will need to develop new design and engineering skills to be accepted. Designers and engineers, meanwhile, can naturally find it frustrating to have to pass a ‘human factors test’, or to depend on knowledge that they do not have.

Human Factors in Safety (and Health) Divisions

Many organisations have a division of safety, focusing on operational safety (major hazards) or occupational safety, or both. Human Factors practitioners in this context – especially in high-risk industries – are likely to support activities such as safety investigation, safety assessment (e.g., human reliability assessment), safety surveys, specific activities such as fatigue and stress management, and perhaps safety policy and the development of safety management systems. Safety departments may exist within a broader safety, health, environment, quality and, increasingly, security function, in which case other activities may be supported (e.g., concerning noise, vibration, the thermal environment, or vision).

This context can be a good compromise between operations and engineering, affording close cooperation with both engineering project teams and operations, given sufficient attention to forging relationships across organisational boundaries. High level independent influence on strategic decisions (e.g., via safety management system requirements) can also be a benefit.

Safety divisions (and departments) are, however, often seen as external to both operations and engineering (both culturally and organisationally, requiring, for instance, internal contracting for services). HF/E may be seen as interference, or as supporting only one aspect of system performance (accident prevention), and not activities that support effectiveness more generally. Safety (and health) is only one of the goals of HF/E, which seeks to optimise system performance and human well-being.

Human Factors in R&D Divisions

For some HF/E practitioners outside of academia, R&D divisions offer a chance to do industry-centred research and development from the inside. Within government, intergovernmental or commercial organisations, HF/E practitioners conduct applied research on all aspects of the discipline – physical, cognitive, social, and organisational.

It is intellectually stimulating and offers a chance to generate and apply knowledge, with a longer time horizon (see Chung et al, 2016). It can offer the chance to imagine future work, and to understand work-as-done now. From a professional development perspective, R&D offers the best chance of attempting the impossible task of keeping up with the literature on any particular aspect of HF/E.

But of the four options outlined above, practitioners in R&D may experience the greatest distance both from front-line staff and from senior management. This is reflected in outputs. As Kirwan (2000) notes, “There are three main types of papers, in order of importance to the company: trade journals, conference papers, and journal papers. The order of importance to the company and to the success of the unit is the reverse of the academic ordering of importance” (p. 668). This can be a surprise to practitioners. While Kirwan also noted that “[journal] papers will be of greater perceived importance to the company if the HF group is located within a research division in that company”, there are in practice several barriers to publication, as well as to research application in organisations (Chung and Shorrock, 2011; Salmon and Williams, 2016), helping to explain why only a small minority of HF/E journal articles are authored by industry practitioners: as low as 3% in 2000 and 2010, compared with 76% and 81% of papers authored solely by research-institution authors in the same years (Chung and Williamson, 2018).

This may reflect a decline in in-house HF/E R&D. Some major organisations that were previously heavy hitters in R&D no longer have a large R&D function, or no longer perform HF/E R&D.

Organisational Misfits…or Connectors at the Edge?

To many, the organisational homelessness of human factors brings confusion about the nature of the discipline and profession. Is it about design, or engineering, or operations, or safety, or health…? Human factors has a sort of identity problem.

This identity problem might be seen as fundamentally exogenous, existing in large part because of the functional structures of (especially) large organisations, which divide decision making from work, design and engineering from operations, research from practice, system performance from human well-being. These are all within the scope of HF/E; none can be excluded. But organisations are what they are, and command-and-control structures resist systems thinking.

So HF/E is indeed an organisational misfit, which might seem ironic since HF/E is concerned with the fit between system elements. HF/E is no more at home in operations than in engineering, safety, R&D, or other organisational functions. Individual practitioners may feel more at home in one context in particular, but will often be found at the edge of functions, interfacing with other functions and with the organisational system as a whole. Organisations, meanwhile, may see a better fit for HF/E in one division, or indeed – perhaps ideally – spread over several. But there is no universally appropriate home. Traditional organisational structures are simply at odds with systems disciplines that work across functional divisions, especially those that do not reflect the flow of work or influence in a system.

For any individual practitioner, experience of a variety of organisational functions is helpful for understanding the internal processes and sub-cultures that exist within organisations, and for identifying the formal and informal bridges that exist, or can be built, between them.

So organisational homelessness can be a weakness, but also a source of strength. As a systems discipline, HF/E sees the whole, and focuses on interaction and influence, not just parts. As well as providing technical HF/E support, practitioners might ideally combine systems and humanistic approaches, mediating, bridging and connecting different organisational functions. This quote, from an interview on learning from communities with Cormac Russell, describes this ideal well:

“There are people who are loosely called ‘connectors’ at the edge, who move quite fluidly.  I think about them as multicultural in a sense, in that they can move in between any groupings really but they have that competency and capability.” Cormac Russell

In organisations that divide by design, bridging is just as important as bonding…or more so. Organisational homelessness can help practitioners to navigate different worlds, without getting entrenched in one.

References

Chung, A.Z.Q. and Shorrock, S.T. (2011). The research-practice relationship in ergonomics and human factors – surveying and bridging the gap. Ergonomics, 54(5), 413-429.

Chung, A.Z.Q., Shorrock, S., and Williamson, A. (2016). Chapter 9: Integrating research into practice in human factors and ergonomics. In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. CRC Press.

Chung, A.Z.Q. and Williamson, A. (2018). Theory versus practice in the human factors and ergonomics discipline: Trends in journal publications from 1960 to 2010. Applied Ergonomics, 66, 41-51.

Davies, D.R. and Matthews, G. (2013). Human performance: Cognition, stress and individual differences. Psychology Press.

Flin, R., O’Connor, P., and Crichton, M. (2008). Safety at the sharp end: A guide to non-technical skills. Ashgate.

Kirwan, B. (2000). Soft systems, hard lessons. Applied Ergonomics, 31, 663-678.

Salmon, P. and Williams, C. (2016). Chapter 10: The challenges of practice-oriented research. In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. CRC Press.

Sanders, M.S. and McCormick, E.J. (1993). Human Factors in Engineering and Design. McGraw-Hill.

Shorrock, S. and Williams, C. (2016). Chapter 8: Organisational contexts for human factors and ergonomics in practice. In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. CRC Press.


This is a repost of the original, posted 06/04/2018, then lost to the technical vagaries of WordPress.

Posted in Human Factors/Ergonomics, systems thinking

Reflections from the edge

Image: Steven Shorrock CC BY-NC-SA 2.0 https://flic.kr/p/pdNPXP

I have ‘worked on work’ for my whole professional career. For the majority of that time, I have worked primarily in aviation. Unlike many in the industry, my primary interest is not in aviation, any more than it is in any other activity. My primary interest is not even in safety. My professional interest is, and always has been, in work and people.

I grew up in a family business. My family, on both sides, were very much working class, though my parents were entrepreneurial and opened a market stall, which grew into a small number of shops and a small distribution business. My siblings and I were co-opted into this effort and this took up our Saturdays and holidays for as long as I can really remember. 

I was the more sensitive and reflective of the older siblings, ill-suited to some of the work, though truck driving was enjoyable in later years. So, I was the first in our known family history to decide to – or be able to – enter higher education.

Being raised in a family business, at least of the sort that I was, is not something that I can recommend, and was not a choice. This upbringing did, however, give me an immense interest in work. And so it was clear to me, from teenage years, that I would study work. This was reflected in every subject choice through high school, college and universities.

Growing up in a family business also helped me to develop a particular capacity for observation from the edge. In a sense, my whole late childhood was an exercise in crude ethnography, though I never wrote up my observations. Some of these observations related to myself and our family dynamics, such as the confusing role transitions, blends and conflicts between life as a son, brother, and employee.

Of course, I was never really asked about my observations on work. No one was. Work was just something you got on with, under a particular power structure, with particular unspoken assumptions, and particular pressures. As an insider-outsider, I could see these, and in organisations of all sorts, insider-outsiders have a particular edge in seeing things from a different – less acculturated – perspective.

This made me think about the ‘outsiders within’. There are always people who are more naturally on the edge, of groups, departments, divisions, professions. They may be more interested in the edges, in the connections, and may be naturally drawn to connecting the disconnected. From the edge, they may not be fully accepted as a ‘true’ member of any particular tribe, and so may have relatively little power and may not be heard often. But they may be accepted into many tribes, as a guest, which may well afford them an understanding of the bigger picture, as well as the unseen within. 

As Kurt Vonnegut’s character Finnerty said in Player Piano, “I want to stay as close to the edge as I can without going over. Out on the edge you see all kinds of things you can’t see from the center.” 

So in your organisations, who would these people be? What might they see from the edge that others don’t? 

Posted in Culture, Humanistic Psychology, systems thinking