This is the fourth in a series of posts on different ‘kinds’ of human factors, as understood both within and outside the discipline and profession of human factors and ergonomics itself. The first post explored human factors as ‘the human factor’. The second post explored human factors as ‘factors of humans’. The third post explored human factors as ‘factors affecting humans’. This post explores a fourth kind of human factors: Socio-technical system interaction.
What is it?
This kind of ‘human factors’ aims to understand and design or influence purposive interaction between people and all other elements of socio-technical systems, concrete and abstract. For industrial applications, a good shorthand for this is ‘work’. The following definition, from the International Ergonomics Association, and adopted by the Human Factors and Ergonomics Society, the Chartered Institute of Ergonomics and Human Factors, and other societies and associations, characterises this view of human factors.
“Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.”
Note from this definition that ‘human factors’ is formally indistinguishable from ‘ergonomics’. While some people attempt to make a distinction between the terms, the relevant professional societies and associations do not, and typically instead recognise that the two terms have different origins (in the US and Europe, respectively). The terms are often used interchangeably by HF/E specialists, akin to ‘counselling’ and ‘psychotherapy’, with scientific journals (e.g., Ergonomics, Human Factors, Applied Ergonomics) using one term or the other but with the same scope. The equivalence of the terms is sometimes a surprise to those who are not formally trained in human factors and ergonomics, especially those from anglophone backgrounds, since many languages use translations of ‘ergonomics’ (ergonomia, ergonomie, ergonomija, eirgeanamaíocht, ergonoomika, ergonomika…).
It is relevant that ‘ergonomics’ derives from the Greek ergon (‘work’) and nomos (‘laws’). There are, in fact, very few accepted laws in human factors/ergonomics (aside from familiar ones such as Fitts’ Law and Hick’s Law), but many would acknowledge and agree on certain ‘principles’. It is also relevant that human factors and ergonomics originated in the study of interaction between people and equipment, and of how the design of that equipment influenced performance. Notably, Fitts and Jones (1947) analysed ‘pilot error’ accidents and found that these were really symptoms of interaction with aircraft cockpit design features. For instance, flap and gear controls looked and felt alike and were co-located (a problem that has largely been solved in cockpits but remains in pharmacy, where different medicines can look and feel alike).
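As an aside, Fitts’ Law is one of those few quantitative laws: it predicts the time to move to a target from the target’s distance and width. The sketch below is illustrative only — the function name is my own, and the constants a and b are made-up placeholders (in practice they are fitted empirically for a given person, device and task):

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) for a pointing task, per Fitts' Law:
    MT = a + b * log2(2D / W). The constants a and b here are illustrative
    assumptions, not measured values."""
    index_of_difficulty = math.log2(2 * distance / width)  # in bits
    return a + b * index_of_difficulty

# A small, distant target is predicted to take longer to acquire
# than a large, nearby one.
print(fitts_movement_time(distance=400, width=20))
print(fitts_movement_time(distance=100, width=50))
```

The point for design is the trade-off the law makes visible: making a control larger, or moving it closer to where the hand already is, measurably reduces acquisition time.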
The beginnings of human factors and ergonomics, then, focused not on the human or the factors that affect the human per se, but on interaction, and on how context shapes that interaction. If we ignore context, ‘factors of humans’ and ‘factors that affect humans’ seem less problematic. If I turn on the wrong burner on my stove (which I do, perhaps 30-40% of the time), it is not a problem. I simply turn it off, and now I know the correct dial to turn. If I want to be sure, I can bend down to look at the little diagram, but often I can’t be bothered. If an anaesthetist presses the wrong button, however, she might inadvertently turn off the power to a continuous-flow anaesthetic machine because of a badly positioned power switch. If the consequence of my turning the wrong dial were more severe, I would bother to check the little diagram more often, but I would still make mistakes, mostly because the layout of the burners is incompatible with the layout of the dials, which look identical and are co-located.
This fourth kind of human factors is a scientific discipline, especially from an academic point of view, and a design discipline, especially from an applied point of view. But what we are designing is not so much an artefact or procedure, as the interactions between people, tools, and environments, in particular contexts. This design involves science, engineering and craft.
Human-factors-as-sociotechnical-interaction has a dual purpose to improve system performance and human wellbeing. System performance includes all system goals (e.g., production, efficiency, safety, capacity, security, environment). Human wellbeing, meanwhile, includes human needs and values (e.g., health, safety, meaning, satisfaction, comfort, pleasure, joy).
Who uses it?
This perspective – more nuanced than the other three – is most prevalent among professional human factors specialists/ergonomists, who are accredited, certified, registered or chartered by relevant societies and associations. However, it is also a natural fit with the work of systems engineers, interaction designers, and even anthropologists.
This kind of human factors takes account of human limitations and capabilities, influences on human performance, and human influences on system performance. It is rooted in:
- systems thinking, including an understanding of system goals, system structure, system boundaries, system dynamics and system outcomes;
- design thinking, and the principles and processes of designing for human use; and,
- scientific understanding of people and the nature of human performance, and empirical study of activity.
This kind of human factors also makes system interaction and influence visible. It uses systems methods to understand and map this interaction, and how interaction propagates across scale, over time, as non-linear interactions within and between systems: legal, regulatory, organisational, social, individual, informational, technical, etc. While the ‘factors affecting humans’ perspective tends to be restricted to linear ‘resultant’ causation, the systems interaction perspective is alert to emergence.
As an example, what can seem like a simple, common-sense intervention from one perspective (e.g., a performance target, such as the four-hour accident and emergency target in UK hospitals) can create complex non-linear interactions and emergent phenomena across almost all aspects of the wider context noted above. (See the example from general practitioner Dr Margaret McCartney in this post, concerning targets for dementia screening [examples are at the bottom of the post].)
Human factors as system interaction considers all stakeholders’ needs and system/design requirements, in the context of all relevant systems, including an intervention (or designed solution) as a system (e.g., a sat nav), the context as a system (e.g., vehicles, drivers, pedestrians, roads, buildings), competing systems (e.g., smartphone apps, signs), and systems that collaborate with the intervention system to deliver a function (e.g., satellites, power sources). Most failed interventions can be traced to a failure to understand one or more of these systems, especially the context as a system. (See the example from surgeon Craig McIlhenny in this post on the installation of a fully computerised system for ordering tests [radiology requests, lab requests, etc.].)
This kind of human factors is the only kind that really recognises the world as it is: complex interaction and interdependency across micro, meso, and macro scales. Also unlike the other three kinds of human factors, at least in terms of their connotations, human-factors-as-sociotechnical-interaction has a clear dual purpose: improved system performance and human well-being. It is one of the few disciplines to have this dual focus.
This kind of human factors is also the least intuitive of the four. It is much easier to restrict ourselves to discussion of ‘the human factor’, ‘factors of humans’ and ‘factors affecting humans’, since these tend to restrict us to isolated factors and linear cause-effect thinking, usually within a restricted system boundary. This kind of human factors is therefore the perspective that tends to be neglected in favour of simplistic approaches to ‘human factors’.
It is also the most difficult of the four kinds of human factors to address in practice. In safety management, for instance, the tools that are routinely in use tend not to address system interactions. Taxonomies focus on ‘factors of humans’ and ‘factors affecting humans’, but do not model system interactions. Fault and event trees map interactions, but only in the context of failure, and the interactions are typically fixed (unchanging) and linear (lacking feedback loops), assuming direct cause-effect relationships with no consideration of emergence. There is an important distinction here between thinking systematically (thinking in an ordered or structured way) and systems thinking (thinking about the nature and functioning of systems).
When human factors is approached as the study and design or influence of system interaction, it is rare that simple, straightforward answers can be given to questions. The reason that “it depends” is so often the answer (usually an unwanted one) is that the answer to a question, the solution to a problem, or the realisation of an opportunity in a sociotechnical system does depend on many factors: the stakeholders (and their skills, knowledge, experience, etc), their activities, the artefacts that they interact with, the demand and pressure, resources and constraints, incentives and punishments, and other aspects of the wider context – informational, temporal, technical, operational, natural, social, financial, organisational, political, cultural, and judicial. Not all of these will always be relevant, but they need to be considered in the context of interactions across scale and over time.
It is fair to say that this kind of human factors is depersonalising. As we study, map and design system interaction, the person (‘the human factor’) can seem to be an anonymous system component, certainly less interesting than system interaction. Even tools that we use to try to capture this in design – such as personas – tend to depict imaginary people. So this kind of human factors can feel more like an engineering discipline than a human discipline. It is important that this be addressed in the way that human factors is practised, both in general interpersonal approach and via qualitative methods that aim at understanding personal needs, assets and experience. Systems thinking and design thinking must be combined with humanistic thinking.
Finally, as with the second and third kinds of human factors, this kind struggles with issues of responsibility and accountability (the concepts, subtly different in English, are not distinguished in many languages). Responsibility for system outcomes now appears to be distributed among complex system interactions, which change over time and space. Outcomes in complex sociotechnical systems are increasingly seen as emergent, arising from the nature of complex non-linear interactions across scale. But when something goes wrong, we as people, and our laws, demand that accountability be located. The nature of accountability often means that this must be held by one person or body. People at all levels – minister, regulator, CEO, manager, supervisor, front line operator – have choice. With that choice comes responsibility and accountability. A police officer chooses to drag a woman by the hair for trying to vote. A senior nurse chooses whether to bully junior nurses. A professional cyclist chooses to take prohibited drugs. A driver chooses whether to drink before driving, to drive without insurance, to drive at 60mph in a 30mph zone, or to send text messages while driving. There may well be contextual influences on all of these behaviours, but we make choices in our behaviour. In these kinds of cases, it is important that ‘systems thinking’ is not used to scatter such choices into the ether of ‘the system’, stripping people of responsibility and accountability. That would be the ruin of both systems thinking and justice.