If you work in an industry such as transportation or healthcare – where human involvement is critical – you have probably heard people talk about ‘the human factor’. This elusive term is rarely defined, but people often refer to reducing it, or perhaps mitigating it.
This is a common frustration in human factors and ergonomics. Perhaps it is time that we discuss it further. The question is, when we talk about reducing ‘the human factor’, what are we actually reducing?
Are we reducing the human to bad outcomes?
The ‘human factor’ in safety nearly always seems to be a negative thing. So perhaps we associate people with bad outcomes. After all, when accidents happen, there are people there. That is a consistent finding! But when accidents don’t happen, there are also people there. People don’t go to work to have accidents. And people are associated with both normal, uneventful work (most of the time), and abnormal situations – accidents and notable or exceptional successes (rarely). The principle of equivalence posits that success and failure come from the same source – ordinary work: “When wanted or unwanted events occur in complex systems, people are often doing the same sorts of things that they usually do – ordinary work. What differs is the particular set of circumstances, interactions and patterns of variability in performance. Variability, however, is normal and necessary, and enables things to work most of the time.” Reducing unwanted outcomes is obviously a good thing to do, as is increasing wanted outcomes (which achieves the same effect, and more besides). But it is not straightforward, and means looking at the socio-technical system as a whole, not just ‘the human factor’.
Are we reducing the human to ‘human error’?
‘Human error’ might be defined as “Someone did (or did not do) something that they were not (or were) supposed to do according to someone.” Reducing ‘the human factor’ by focusing myopically on ‘human errors’ may constrain and reduce the opportunity for necessary performance adjustments and variability to such a degree that there is no room for flexibility when it is needed. Providing the ability to prevent and recover from unintended interactions is a good thing, but only as part of a wider systems view that addresses other important human needs.
Are we reducing the human to a faulty information processing machine?
Information processing models have been popular in cognitive psychology and human factors for decades. They have helped us to make sense of experimental data and get a handle on aspects of human functioning and performance, such as attention, perception, memory, and (arguably to a lesser extent) decision making. They have helped us to understand the limits and quirks of our cognitive abilities. But a person is not a model. These seductively simple models – boxes, lines, arrows and labels – are engineering approximations of human experience. Their geometric orderliness and linearity can never hope to capture our lived experience. A person is a unique individual, in a constantly changing context. What we see and hear, what we pay attention to and remember, what we decide and do, are dynamically and subjectively enmeshed in that context. And so is what we feel.
Are we reducing the human to emotional aberrations?
Human emotion is not given nearly as much attention as cognition in either safety or psychology, though there is plenty of evidence that thoughts and feelings are interdependent. But where emotion is thought to influence human behaviour at work, such as in emergencies or in very boring work situations, emotion-as-aberration comes into focus as something to be removed or reduced. But emotions cannot be cleanly sorted into ‘good’ or ‘bad’ piles, and humans cannot be reduced to their emotions.
Are we reducing human involvement in socio-technical systems?
With such thinking about people at work, we risk squashing them out of the system. There has long been a mindset that people are a source of trouble in industrial environments – that if only it weren’t for the people, the world would be an engineer’s (or manager’s) paradise. Some would wish to automate people out, and this is happening at an ever faster pace. Aside from the societal cost, this often just changes the nature of human involvement – often for the worse. While some jobs are so dangerous that human involvement should be reduced, in most cases the people are what make the imperfect system work as a whole.
We are human.
We need not reduce ourselves to bad outcomes, human errors, cognitive abstractions, or emotional aberrations, else we are on a road to reducing our involvement altogether. In doing that, we reduce our choice and control, our responsibility and accountability, our capacity and potential, and our meaning and value.
I recently read a patient safety article that talked about “reducing human factors”, and, like you, thought that didn’t sound right. But you’ve done a better job than me of explaining why it’s such a problematic phrase, so thanks for that.
Often, to get engineers to understand, I’ve talked of The Human Component (one that breaks easily, has complicated maintenance, etc.).
It’s quite wrong. The technical bits should be thought of as components of the human system.