In the first post in this series, I reflected on the popularisation of the term ‘human factors’ and discussion about the topic. This has brought into focus various differences in the meanings ascribed to ‘human factors’, both within and outside the discipline and profession itself. The first post explored human factors as ‘the human factor’. This second post explores another kind of human factors: Factors of Humans.
What is it?
This kind of human factors focuses primarily on human characteristics, understood primarily via reductionism. Factors of humans include, for example:
- cognitive functions (such as attention, detection, perception, memory, judgement and reasoning (including heuristics and biases), decision making – each of these is further divided into sub-categories)
- cognitive systems (such as Kahneman’s dual process theory, or System 1 and System 2)
- types of performance (such as Rasmussen’s skill-based, rule-based, and knowledge-based performance)
- error types (such as Reason’s slips, lapses, and mistakes, and hundreds of other taxonomies, including my own)
- physical functions and qualities (such as strength, speed, accuracy, balance and reach)
- behaviours and skills (such as situation awareness, decision making, teamwork, and other ‘non-technical skills’)
- learning domains (such as Bloom’s learning taxonomy) and
- physical, cognitive and emotional states (such as stress and fatigue).
These factors of humans may be seen as limitations and capabilities. As with human-factors-as-the-human-factor, the main emphasis of human-factors-as-factors-of-humans is on the human, but on general constituent human characteristics, not the person as an individual. The factors of humans approach acts like a prism, splitting human experience into conceptual categories.
This kind of human factors is emphasised in a definition provided by human factors pioneer Alphonse Chapanis (1991):
“Human Factors is a body of knowledge about human abilities, human limitations, and other human characteristics that are relevant to design.”
But Chapanis went on to say that “Human factors engineering is the application of human factors information to the design of tools, machines, systems, tasks, jobs, and environments for safe, comfortable, and effective human use.” He therefore distinguished between ‘human factors’ and ‘human factors engineering’. The two would probably be indivisible to most human factors practitioners today (certainly those who identify as ‘ergonomists’, i.e., designers), and knowledge and application come together as parts of many definitions of human factors (or ergonomics). Human factors is interested in these factors of humans, then, to the extent that they are relevant to design, at least in theory (in practice, the sheer volume of literature on these factors suggests otherwise!).
Who uses it?
Factors of humans have been researched extensively by psychologists (especially cognitive psychologists, and increasingly neuropsychologists), physiologists and anatomists, and ergonomists/human factors specialists. Human abilities, limitations and characteristics are therefore the emphasis of many academic books and scientific articles concerning human performance, applied cognitive psychology, cognitive neuropsychology, and human factors/ergonomics, and are the standard fare of courses in these fields.
This kind of human factors is also of interest to front-line professionals in non-technical skills training, where skilled performance is seen through the lenses of decision making, situation awareness, teamwork, and communication.
Factors of humans – abilities, limitations, and other characteristics – must be understood, at least at a basic level, for effective design and management. Decades of scientific research have produced a plethora of empirical data and theories on factors of humans, along with a sizeable corpus of measures. Arguably, the literature is far more voluminous for this kind of human factors than for any other kind. We therefore have a sophisticated understanding of these factors. Much is now known from psychology and related disciplines (including human factors/ergonomics) about sustained attention (vigilance), divided attention, selective attention, working memory, long-term memory, skilled performance, ‘human error’, fatigue, stress, and so on. Much is also known about physiological and physical characteristics. These are relevant to the ways we think about, design, perform, talk about, record, and describe human work: work-as-imagined, work-as-prescribed, work-as-done and work-as-disclosed. Various design guidelines (such as the FAA Human Factors Design Standard, HF-STD-001), along with hundreds of HF/E methods, have been produced on the basis of this research.
This kind of human factors may also help people, such as front-line professionals, to understand their own performance in terms of inherent human limitations. While humanistic psychology emphasises the whole person, and resists reducing the person into parts, cognitive psychology emphasises functions and processes, and resists seeing the whole person. So while reductionism often comes in for attack among humanistic and systems practitioners, knowledge of limits to sustained attention, memory, judgement, and so on, may be helpful to better understand failure, alleviating the embarrassment or shame that often comes with so-called ‘human error’. Knowledge of social and cultural resistance to speaking up can help to bring barriers out into the open for discussion and resolution. So perhaps reductionism can help to demystify experience, help to manage problems by going down and in to our cognitive and physical make-up, and help to reduce the stigma of failure.
Focusing on human abilities, human limitations, and other human characteristics, at the expense of the whole person, the context, and system interactions, comes with several problems, but only a few will be outlined here.
One problem relates to the descriptions and understandings that emerge from the reductive ‘factors of humans’ approach. Conceptually, human experience (e.g., of performance) is understood through one or more conceptual lenses (e.g., situation awareness, mental workload), which offer only partial and fragmented reflections of experience. Furthermore, measurement relating to these concepts often favours quantification. So one’s experience may be reduced to workload, which is reduced further to a number on a 10-point scale. The result is a fragmented, partial and quantified account of experience, and these numbers have special power in decision making. However, as humanistic psychology and systems thinking remind us, the whole is greater than the sum of its parts; measures of parts (such as cognitive functions, which are not objectively identifiable) may be misleading, and will not add up to form a good understanding of the whole. Understanding the person’s experience is likely to require qualitative approaches, which may be more difficult to gain, more difficult to publish, and more difficult for decision-makers to digest.
Related to this, analytical and conceptual accounts of performance with respect to factors of humans can seem alien to those who actually do the work. This was pointed out to me by an air traffic controller friend, who said that the concepts and language of such human factors descriptions do not match her way of thinking about her work. Human factors has inherited and integrated some of the language of cognitive psychology (which, for instance, talks about ‘encoding, storing and retrieving’ instead of ‘remembering’; cognitive neuropsychology obfuscates further still). So while reductionism may help to demystify performance issues, this starts to backfire, and the language in use can mystify, leaving the person feeling that their experience has been described in an unnatural and decontextualised way. Going further, the factors of humans approach is often used to feed databases of incident data. ‘Human errors’ are analysed, decomposed, and entered into databases to be displayed as graphs. In the end, there is little trace of the person’s lived experience, as their understandings are reduced to an analytical melting pot.
By fragmenting performance problems down to cognitive functions (e.g., attention, decision making), systems (e.g., System 1), error types (e.g., slips, mistakes), etc., this kind of human factors struggles with questions of responsibility. At what point does performance become unacceptable (e.g., negligent)? On the one hand, many human factors specialists would avoid this question, arguing that it is a matter for management, professional associations, and the judicial system. On the other hand, many human factors specialists use terms such as ‘violation’ (often further divided into sub-types: situational violations, routine violations, etc.) to categorise decisions post hoc. (Various algorithms are available to assist with this process.) To those caught up in situations involving harm (e.g., practitioners, patients, families), this kind of analysis, reductionism and labelling may be seen as sidestepping or paying lip service to issues of responsibility.
While fundamental knowledge on factors of humans is critical to understanding, influencing and designing for performance, reductionist (including cognitivist) approaches fail to shed much light on context. By going down and in to physical and cognitive architecture, but not up and out to context and the complex human-in-system interactions, this kind of human factors fails to understand performance in context, including the physical, ambient, informational, temporal, social, organisational, legal and cultural influences on performance. This problem stems partly from the experimental paradigm that is the foundation for most of the fundamental ‘factors of humans’ knowledge. This deliberately strips away most of the richness and messiness of real context, and also tends to isolate factors from one another.
Because this kind of human factors does not understand performance in context, it may fail to deal with performance problems effectively or sustainably. For instance, simple design patterns (general reusable solutions to commonly occurring problems) are often used to counter specific cognitive limitations. These can backfire when designed artefacts are used in natural environments and the design pattern is seen as a hindrance to be overcome or bypassed (problems with the design and implementation of checklists in hospitals are an example). Another example may be found in so-called ‘human factors training’ (which, often, should be called ‘human performance training’). This aims to improve human performance by improving knowledge and skills concerning human cognitive, social and physical limitations and capabilities. While in some areas this has had success (e.g., teamwork), in others we remain severely constrained by our limited abilities to stretch and mitigate our native capacities and overcome system conditions (e.g., staffing constraints). Of course, in the absence of design change, training may also be the only feasible option.
A final issue worth mentioning here is that, more than any other kind of human factors, the ‘factors of humans’ kind has arguably been over-researched. Factors of humans are relatively straightforward to measure in laboratory settings, and related research seems to attract funding and journal publications. Accordingly, there are many thousands of research papers on factors of humans. The relative impact of this huge body of research on the design of real systems in real industry (e.g., road transport, healthcare, maritime) is dubious, but that is another discussion for another time.
Chapanis, A. (1991). To communicate the human factors message, you have to know what the message is and how to communicate it. Bulletin of the Human Factors Society, 34, 1-4.