Four Kinds of Thinking: 2. Systems Thinking

Several fields of study and spheres of professional activity aim to improve system performance or human wellbeing. Some focus on both objectives (e.g., human factors and ergonomics, organisational psychology), while others focus primarily on one or the other. Disciplines and professions operating in these areas have a focus on both understanding and intervention. For each discipline, the focus of understanding and method of intervention will differ. For instance, for human factors and ergonomics, understanding is focused on system interactions, while intervention is via design. Understanding alone, when intervention is required, may be interesting, but not terribly useful. Intervening without understanding may have unintended consequences (and indeed it often does). With appropriate understanding and intervention, both system performance and human wellbeing have a chance of being improved.

Understanding and intervention for system performance and human wellbeing is rooted – to some extent – in four kinds of thinking. In this short series, I outline these.

  1. Humanistic thinking
  2. Systems thinking (this post)
  3. Scientific thinking (forthcoming)
  4. Design thinking

Unless we engage in the right kinds of thinking, it is likely that our understanding will be too flawed, partial, or skewed. In this case, intervention will be ineffective or even counterproductive. Integrating all four kinds of thinking involves compromises and trade-offs, as the kinds of thinking can conflict, presenting dilemmas that we must resolve.

Image: NASA Goddard Space Flight Center, https://flic.kr/p/vj7kj2, CC BY 2.0

1. Systems Thinking

“In systems thinking, increases in understanding are believed to be obtainable by expanding the systems to be understood, not by reducing them to their elements.”

Russell L. Ackoff, in “Creating the Corporate Future”, in Understanding Business: Environments, edited by Michael Lucas and Vivek Suneja

Why?

It should be self-evident that thinking about systems, or thinking in systems, is important to improving system performance and human wellbeing. Systems cannot be understood, let alone improved, without thinking. These ‘systems’, however, are social constructs, defined for a particular purpose with respect to a boundary of interest. This explains Ackoff’s quote about ‘expanding’ the systems to be understood. He, and others (e.g., Checkland, Meadows), are referring to expanding our perspective on the system boundary that we define (zooming out). And when it comes to wellbeing, the broader system in which people work and live has the lion’s share of influence on wellbeing: social conditions, working hours, pressure, opportunities to rest, exposure to hazardous substances and conditions, nutrition, housing, and so on.

Systems thinking is an alternative to analytical thinking – taking things apart, conceptually or physically, and trying to infer the behaviour of the whole from the behaviour of the parts (reductionism). In business, this approach often takes us down a never-ending path that leads to the individual, even to their brain (or other organs), and on down to microscopic units of analysis. This strips out the context that is so vital to understanding and intervention. Ignorance of systems leads to interventions based on poor understanding, which are therefore ineffective (e.g., unsustainable), have unintended consequences, and are often counterproductive.

The best (e.g., most efficient, safest) parts, designed and managed separately, will not result in the best system. They may well result in a system that works very badly, or not at all. Similarly, an organisation broken down into parts (departments, measures, etc), without significant attention to how it functions as a whole, will not function effectively. Frequently, the result is separate silos and activities that run at cross purposes and compete with one another.

Systems thinking is also an alternative to linear cause-effect thinking – considering one thing to be the cause of another if it is necessary and sufficient to produce the behaviour (determinism). Neither reductionism nor determinism has a practical stopping point; it is always possible to go further (though the stopping point is, in practice, often defined by disciplinary boundaries). Nor does either allow for the humanistic principles of choice and intentionality.

What?

When asking ‘what is systems thinking?’, it is helpful to ask ‘what is a system?’ In her book Thinking in Systems, Donella Meadows described a system as “a set of elements or parts that is coherently organized and interconnected in a pattern or structure that produces a characteristic set of behaviours, often classified as its ‘function’ or ‘purpose’”. Russell Ackoff, meanwhile, defined a system as two or more elements that satisfy the following conditions:

  1. The behaviour of each element has an effect on the behaviour of the whole.
  2. The behaviour of the elements and their effects on the whole are interdependent.
  3. However subgroups of elements are formed, each has an effect on the behaviour of the whole and none has an independent effect on it.

Therefore, a system cannot be understood via reductionism and determinism – our dominant modes of thinking. Systems thinking offers a complementary approach that works with the following axioms, among others:

  1. A ‘system’ is a social construct. Systems are not out there waiting to be found, but in us waiting to be identified.
  2. System boundaries are not fixed and are often permeable. Systems exist alongside and within other systems. Boundaries are social constructions.
  3. Systems have a purpose, which can be seen in what the system does. Some systems have purposes of their own, and their parts have purposes of their own. Other systems have purposes that we ascribe to them, or purposes that belong to a bigger system. Purposes at different system levels interact, and often conflict.
  4. A system does something that none of its parts can do, so the essential properties of any system cannot be inferred from its parts (holism). The performance of a system depends more on how its parts interact than how they function independently.
  5. Influence is more important to systems thinking than cause-effect (determinism). Patterns of system behaviour allow us to observe influence. Where cause-effect relations can be ascertained in complex systems, they are often non-linear; small changes can produce disproportionately large (and unpredictable) effects (see the sketch after this list). Effects usually have multiple causes, and these causes may not be traceable and are themselves socially constructed.
  6. Complex systems have a history and have evolved irreversibly over time with the environment. Apparent order and tractability is often an artefact of hindsight. 
  7. There will be different assumptions about the ‘system’ under consideration. Systems-as-imagined rarely correspond fully to entities in the world.
  8. Synthesis and analysis are both critical to systems thinking, but it is synthesis that distinguishes systems thinking from reductionist thinking.
  9. Understanding can only ever be partial, and can only be approached via interdisciplinary efforts. No single discipline is sufficient and understanding is multi-layered.
  10. There are multiple perspectives on a system from different stakeholders. These multiple perspectives are not a weakness; they are necessary for understanding.
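
To make axiom 5 a little less abstract, here is a toy illustration in Python, intended only as a minimal sketch. The logistic map is a standard textbook example of non-linear behaviour (it is not drawn from the sources cited here); two starting conditions that differ by a tiny amount soon follow very different trajectories.

    # Toy illustration of non-linearity: a small difference in starting
    # conditions produces disproportionately large, unpredictable effects.
    # The logistic map x_{t+1} = r * x_t * (1 - x_t) is a textbook example,
    # used here purely as a hypothetical sketch, not a model of any real system.

    def logistic_trajectory(x0, r=3.9, steps=30):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    a = logistic_trajectory(0.2000)  # one starting condition...
    b = logistic_trajectory(0.2001)  # ...and one differing by 0.0001

    for t in (0, 5, 10, 20):
        print(f"step {t:2d}: a = {a[t]:.4f}, b = {b[t]:.4f}, gap = {abs(a[t] - b[t]):.4f}")

The point is not the mathematics but the moral: where feedback and non-linearity are present, extrapolating the behaviour of the whole from the parts in isolation, or from a single linear cause, is unreliable.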

How?

Ackoff explains that systems thinking reverses the three steps of the ‘machine age’. We move from reductionist thinking to systems thinking. The three steps are therefore:

  1. Identify a containing whole (system) of which the thing to be explained is a part;
  2. Explain the behaviour or properties of the containing whole;
  3. Then explain the behaviour or properties of the thing to be explained in terms of its role(s) or function(s) within its containing whole.

The process of synthesis therefore involves zooming out, while analysis involves zooming in. For instance, I work predominantly in aviation, and in particular in air traffic management. I work with air traffic controllers and pilots, and with most other professions involved in air traffic management. But in understanding the behaviour of air traffic controllers, I do not go first down to the cognitive and the neuropsychological and the biological. I go up – in the case of air traffic controllers – to include the working position, the sector, the control unit, the airspace, the organisation, the regulatory environment, the transport system, the government, the judiciary, and the press, for example, depending on the boundary judgements that are made. Where I choose to draw the system boundary will depend on my purpose, my understanding, and perhaps my scope for intervention.

So let’s say we seek to understand and intervene with respect to something as local as occurrence reporting. Analysis may allow us to describe certain phenomena, such as low rates of reporting and perhaps self-protective reporting behaviour (e.g., a basic description of an outcome). That does not give us understanding. For that, we need to go up and out, to the organisation and perhaps to the judicial system, for both will influence reporting behaviour. Effectively, we are zooming out before zooming in, in order to understand why things work in the way that they work. Now we can approach an understanding, but never attain one. We therefore remain humble and anchored by uncertainty – a friend of intervention. We may find that our ability to influence the judicial system is severely limited by hard constraints (e.g., penal codes), but with some room for manoeuvre (e.g., interpretation). So we have some possibilities (e.g., education of the judiciary), but otherwise must now go down and in. With more understanding of the whole, we may be able to intervene where this will do some good (and more good than harm).

There are a number of methods and tools that can help us along the way, such as stakeholder maps, system maps, influence diagrams, multiple cause diagrams, rich pictures, stock and flow diagrams, system archetypes, and so on. But the insights, understandings and perspectives that emerge along the way (e.g., via conversation) are always more important than the outputs.
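
As a concrete illustration of one such tool, here is a minimal stock-and-flow sketch in Python. It is entirely hypothetical: the stock is a backlog of occurrence reports awaiting review, the inflow is reports submitted, the outflow is reports processed, and a balancing feedback is assumed in which a growing backlog discourages reporting. The names and numbers are invented for illustration only.

    # A minimal, hypothetical stock-and-flow sketch. The stock is a backlog of
    # occurrence reports awaiting review; the inflow is reports submitted and
    # the outflow is reports processed. A balancing feedback is assumed: as the
    # backlog grows, feedback to reporters slows and reporting declines.

    def simulate_backlog(months=24,
                         base_reporting_rate=100.0,   # reports/month (assumed)
                         processing_capacity=80.0,    # reports/month (assumed)
                         discouragement=0.002):       # feedback strength (assumed)
        backlog = 0.0
        history = []
        for _ in range(months):
            inflow = base_reporting_rate / (1 + discouragement * backlog)
            outflow = min(processing_capacity, backlog + inflow)
            backlog += inflow - outflow
            history.append(backlog)
        return history

    for month, level in enumerate(simulate_backlog(), start=1):
        if month % 6 == 0:
            print(f"month {month:2d}: backlog of about {level:5.1f} reports")

Even a toy model like this makes the assumed feedback structure explicit and open to challenge in conversation, which is where the insight comes from.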

This kind of thinking might prompt questions such as:

  • What is the purpose of the system, evident from system behaviour?
  • What goal conflicts does the system produce?
  • What is the system boundary?
  • What are the elements of the system?
  • How do these elements interact, and what kinds of patterns emerge?
  • Who are the people that contribute to, influence, and are affected by the system?
  • What mental models affect system structure and patterns of system behaviour?
  • What kinds of outcomes emerge from system behaviour?
  • What patterns or archetypes may help explain the system behaviour that we are seeing?
  • How might a proposed intervention influence system behaviour?
  • What perspectives might we take?

Shadow side

The shadow sides of systems thinking are less about systems thinking itself than about the way in which we typically think, especially in a business context. But this kind of thinking is arguably the most difficult of the four. We are trained to think analytically from schooling (where we study separate subjects, which are further reduced to topics) and throughout working life. We also, as Ackoff remarked, think analytically quite intuitively – taking things apart in order to understand how they work. We therefore tend to think in terms of parts and linear cause-effect relationships, within a restricted system boundary. This kind of thinking is better suited to simple or obvious systems than to the complex world in which we now live. Systems thinking is difficult and tiring, and we are not trained to do it.

Systems thinking also often frustrates our objectives. The problem here is not systems thinking, though, but our objectives. For instance, we may wish to introduce a performance target, league table, or new incentive. Systems thinking may well suggest unintended consequences, as these interventions sub-optimise the system, perhaps introducing internal competition and unwanted adaptive behaviours. Even the addition of three words to radio-telephone phraseology for clearances to pilots, or seemingly small changes to communication with drivers crossing a runway, may have significant unwanted consequences that analytical methods do not predict. Systems thinking exposes problems with our understanding and intervention. Sometimes, scientific evidence is demanded for these objections, even though the same was not provided for the intervention itself (and the underlying understanding), or else the evidence was so constrained by reductionism and determinism as to be meaningless in messy, real-world settings.

Additionally, the tools that are routinely in use tend to be reductive, rather than synthetic, aimed at analysing components, not understanding interactions. Where interactions are modelled, they are typically assumed to be fixed (unchanging) and linear (lacking feedback loops), assuming direct cause-effect relationships, with no consideration of emergence. Such tools may also focus only on certain outcomes (e.g., failures), thus giving a partial view of system performance. There is an important distinction between thinking systematically (thinking in an ordered or structured way) and systems thinking (thinking about systems). The former may reinforce doing the wrong thing right (e.g., consistently), making our efforts ever more problematic.

Systems thinking, like scientific thinking, can be depersonalising. The person can seem to be an anonymous system component, less interesting than system interaction. To counter this, systems thinking must be combined with humanistic thinking.

Finally, this kind of thinking can make issues of responsibility and accountability difficult to ascertain. Responsibility for system outcomes now appears to be distributed among complex system interactions, which change over time and space. Outcomes in complex sociotechnical systems are increasingly seen as emergent, arising from the nature of complex non-linear interactions across scales. But when something goes wrong, we as people, and our laws, demand that accountability be located. The nature of accountability often means that it must be held by one person or body. People at all levels – minister, regulator, CEO, manager, supervisor, front-line operator – have choice. With that choice comes responsibility and accountability, but choices are also made in a context, including conflicts between goals, production pressure, and degraded resources, such as restricted or contradictory information. While it is simplest to ignore the aspects of systems that influence decisions and behaviour, doing so is unfair, as well as counterproductive in our efforts to improve system performance and human wellbeing.

