‘Human Error’: Still Undefined After All These Years

Mystique

Despite the pervasive and controversial nature of the notion of ‘human error’ in academia, industry and society more generally, there is still – after several decades of research – little agreement on what ‘human error’ means. Tom Singleton (1973) stated that, on attempting to consult a dictionary, “one is sent on a semantic circular tour” (p. 727). Ridley (1991) added that ‘human error’ had “joined the ranks of terminology that are common parlance but have a variety of similar, but by no means equivalent, meanings” (p. 61). Whilst ‘human error’ has been subject to philosophical inquiry for centuries, and has featured in the psychological literature since the late 19th century, it has only been investigated as a distinct field of study in its own right since the 1970s, with the rebirth of the cognitive tradition. From this period, and in particular from the mid-1980s, working definitions were more forthcoming, even though the concept was known to be elusive and ambiguous (Rasmussen, 1981; Rasmussen et al., 1987; Wioland and Amalberti, 1996). This post compares and critiques a number of definitions proposed by leading thinkers in the field of human factors, as context to some previous posts.

Singleton (1973) defined error as a “deviation from an optimal path towards an objective”. Such a path may be physical or conceptual, and usually has a certain degree of tolerance. The path may also be self-defined or defined by others, in which case it becomes a rule. This definition (derived from the Latin errare) would seem to assume, however, that an optimum path exists, that any deviations from such an optimum are erroneous, and that the tolerance can be known in advance. These assumptions are obviously problematic.

Stephen Pheasant (1991) proposed a more binary definition of human error as simply “an incorrect belief or an incorrect action” (p. 181). Similarly, this definition assumes that it is possible to define ‘correct action’. Moreover, beliefs in dynamic and complex (or simply messy) situations cannot always be categorised as ‘correct’ or ‘incorrect’, and one could question whether an ‘incorrect’ belief (e.g. that something is ‘safe’), without subsequent related action (or inaction), is an error. On a practical basis, both Singleton’s and Pheasant’s definitions are problematic in their simplistic use of correctness or optimality as the gauge of error.

James Reason, one of the most influential theorists in the field, defined human error as: “a generic term to encompass all those occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcome, and when these failures cannot be attributed to the intervention of some chance agency” (1990, p. 9), and later, more succinctly, as: “the failure of planned actions to achieve their desired ends – without the intervention of some unforeseen event” (1997, p. 71). These intention-focused definitions do not appear to account for the situation whereby no action is taken, perhaps where there is no plan or decision as such. The other possibility is that the outcome is intended by the actor, but is not desired by another party. The outcome may also be more immediate or short-term, or something that develops over the longer-term. It is pertinent to query, then, whose intentions we are to judge by, which outcome(s) we select, and at which point in the blunt-end to sharp-end continuum an action becomes a ‘human error’.

Reason (1997) commented that, in his discussion, the terms ‘correct’ and ‘incorrect’ relate to the accuracy of risk perception: “A correct action is one taken on the basis of an accurate risk appraisal. An incorrect action is one based upon an inaccurate or inappropriate assessment of the associated risks.” (p. 73). But again, successful actions – with respect to personal goals – may not necessarily be correct actions with respect to system goals. Also, what is correct is often unknowable in advance or at the time of performance, or else there may be varying degrees of correctness or adequacy of performance, up to a tolerance limit that may itself vary depending on the context.

Sanders and McCormick (1993) offered an outcome-oriented definition of human error as “an inappropriate or undesirable human decision or behaviour that reduces, or has the potential for reducing, effectiveness, safety, or system performance” (p. 658). Similarly, Wickens et al. (1998) defined human error as “inappropriate human behavior that lowers levels of system effectiveness or safety, which may or may not result in an accident or injury.” What is ‘appropriate’ may well depend on the outcome, which may be appropriate on some criteria but not others; actions may enhance aspects of system effectiveness (such as runway capacity) while simultaneously reducing safety by some convention (such as wake vortex separation between aircraft). The addition of “which may or may not result in an accident or injury” to Wickens’ definition does not add anything, except to say that no actual accident is required, only the potential (maybe). This can be very difficult to determine, and invokes the need for counterfactual reasoning.

Jens Rasmussen (1987b) had earlier argued that, since variation in human behaviour (performance variability) is an important ingredient of learning, and experiments on the environment are necessary for problem solving, the definition of error should be related to a lack of recovery from unacceptable effects of exploratory behaviour. This is an interesting point. The same behaviours in the same context could be described as errors or non-errors, depending on recovery. In practice, what is or is not acceptable cannot always be specified beforehand, and may well depend on contextual or even chance-like factors (e.g. weather, in the case of aviation and other transport modes). Similarly, recovery may or may not be possible depending on context (e.g. displays, rules, performance criteria, organisational expectations), which may be variable, ambiguous or otherwise suboptimal. If the ‘unacceptable effects’ involve physical loss, as in the case of an accident, then few ‘errors’ might be identified in practice. What is unacceptable tends to become a socially constructed post-hoc judgement.

Such considerations lead us to almost absurdist definitions such as that of Woods, Johannesen, Cook and Sarter (1994), who defined human error (when viewed as a potential cause) as “a specific variety of human performance that is so clearly and significantly substandard and flawed when viewed in retrospect that there is no doubt that it should have been viewed by the practitioner as substandard at the time the act was committed or omitted” (p. 2).

Meanwhile, others had offered still more definitions, combining various criteria. Senders and Moray (1991) suggested that a human error occurs when an action was “not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits” (p. 25). Ridley (1991) suggested that “error is the adoption or continuation (with or without conscious awareness) of a course of action or inaction, when more than one option exists, that results in a deviation of a sequence of events from a prescribed or expected or desired norm” (p. 63).

The many differences in the definitions of ‘human error’ stem from several sources. Psychologists seem to prefer to define errors according to deviations from intentions, expectations, cognitive ‘mechanisms’ and states, or personally preferred outcomes. Safety and design specialists typically seem to refer to deviation from norms, such as those prescribed in procedures or design documentation. Others, such as the public or those in a position to make professional judgements (such as managers, human resource specialists, or the judiciary), may rely on hindsight judgements with respect to cause and effect, perhaps adding considerations of intention. Rasmussen (1987a) noted that judgements regarding these norms or criteria can change, perhaps depending on system, safety, or legal requirements, forever changing the definition of error. The same actions may be classified as correct by the performer and erroneous by others (Taylor, 1987), or even vice versa.

Over twenty years ago, Erik Hollnagel (1993) commented that “Most authors wisely refrain from giving a clear definition” (p. 5). Defining ‘human error’ seems to be a fool’s errand, but that would seem to make fools of us when we casually use the term to explain failure, as we so often do. More than twenty years since Rasmussen, Hollnagel and others pointed out the problems with the term, we still routinely discuss ‘human error’ in all contexts without really paying much attention to explaining what we mean and do not mean. Does this matter? Reason (1990) argued that the study of error, being largely an inductive mode of enquiry, does not demand precise axioms and definitions at the outset, as do the deductive sciences. But with such a range of potential meanings, it is hard to defend this position. And outside of academic enquiry, such differences have very real implications.

Since the validity and usefulness of the term ‘human error’ have long been disputed (see Hollnagel, 1993), various synonyms have cropped up. Rasmussen (1982) used the term ‘human malfunction’ to avoid connotations of blame and failure (though I am not sure that this term helps at all). Later, Rasmussen (1987a) used the terms ‘man-machine misfits’ and ‘man-task misfits’. Occasional misfits might be traced to variability on the part of the system or the person, but frequent misfits might be considered design issues. Hollnagel (1993) employed the term ‘erroneous action’ to characterise a type of action without implying anything about the cause. And of course, ‘human error’ has been dissected into hundreds of sub-classes and variants, perhaps the best known being ‘slips’, ‘lapses’ and ‘mistakes’ (Reason, 1990). Since ‘human error’ can refer to any cognitive process, state or action, sub-classes tend to be defined more specifically, though even some of these terms (e.g. lapse, mistake) are used more generally with a range of meanings. Still other replacement terms, such as ‘loss of crew resource management’ or ‘loss of situational awareness’, are just fashionable synonyms, according to Sidney Dekker (2002). None of these departures from the term has been widely accepted or adopted. Much contemporary discussion has retained the term, while acknowledging its problematic status, sometimes adding inverted commas to ‘human error’ to remind readers of the issues (e.g. Dekker, 2014).

If you try to summarise the various definitions, you end up with something like this:

“‘Human error’ is the commission or omission of a human action, or a psychological state or activity, which is inappropriate in light of personal expectations, and/or intended behaviours/states, and/or prescribed written or unwritten rules or norms, and/or potential or actual outcomes and/or others’ evaluations.”

Alternatively, a slightly tongue-in-cheek definition of ‘human error’ that covers all bases in a more efficient way…

“Someone did (or did not do) something that they were not (or were) supposed to do according to someone.” 

…which doesn’t really help, but does at least point to the shapeshifting nature of ‘human error’.



8 thoughts

  1. Very interesting read, thank you.
    As a safety specialist, I’ve never liked the term ‘human error’ as it implies blame. Neville Stanton states that the more modern term to replace ‘human error’ is ‘human performance variability’. This term accepts that there is variety in human performance, some of which might be outside the safety boundaries. I find that ‘human performance variability’ still requires an explanation when using it to explain a finding in an accident, but it feels better to use when assisting senior management in moving away from blaming the people ‘at the sharp end’.

  2. I like this. A lot. I’ve printed out a copy and stuck it in my “important stuff” binder. (Old-fashioned, I know… but paper is somehow more “real” to me.) I would like to mention, however, that there is a very simple and practical definition of human error that you didn’t include, but which is deserving of some serious thought…

    “A human error is whatever I did or did not do that got me into trouble with my spouse.”
