Maslow’s Hammer: How Tools Bias Attention and Straitjacket Thinking

In May 2013, the American Psychiatric Association published the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). According to the Association, the DSM establishes consistent and reliable diagnoses that can be used in the research of mental disorders. It is used, directly or indirectly, by a multitude of parties in and around healthcare, including clinicians (especially psychiatrists), researchers, regulators, health insurance companies and pharmaceutical companies, as well as policy makers and the legal system. The manual was adapted from a U.S. military technical bulletin and first published in 1952 as DSM-I, at which point it comprised 106 mental disorders. It went on to become the authoritative guide to the diagnosis of mental disorders in the U.S. and many other countries.

Image: JLM Photography CC BY-NC-ND 2.0 http://flic.kr/p/8tnD6H

Disorder or normal individual variation?

The latest edition, like every edition before it, has been controversial. The original DSM-I included homosexuality as a “sociopathic personality disturbance”, and homosexuality remained in various guises until 1974. This is one of many controversial ‘disorders’ that have appeared and disappeared over the years, while the total number of listed disorders has nearly tripled. While each edition brings new criticisms, one fundamental criticism has remained since the inception of the DSM more than 60 years ago. As a tool, the DSM directs the way that many professions think about people, and especially about what is considered normal – with enormous health, financial and legal consequences.

Concerning the latest edition, the British Psychological Society expressed concern that “clients and the general public are negatively affected by the continued and continuous medicalisation of their natural and normal responses to their experiences… which demand helping responses, but which do not reflect illnesses so much as normal individual variation”. The criticisms came to public attention after an open letter and accompanying petition were published by the Society for Humanistic Psychology. It is probably fair to say that the DSM is among the most controversial professional tools ever devised.

Maslow’s Hammer

The father of humanistic psychology, Abraham Maslow, also had something to say about this developing trend – nearly half a century ago. Maslow was a revolutionary who thought differently about people. While most in psychology were concerned with the abnormal and the unhealthy, Maslow was interested in the normal and the healthy. Despite a difficult childhood (and having himself been diagnosed as ‘mentally unstable’ by a psychologist), he thought of people as basically OK. But with this perspective, and having worked in psychopathology, he had a problem: the tools and methods of the discipline did not serve the needs of the mass of relatively healthy and well-functioning people. According to Maslow, psychology was being viewed from the wrong end of the lens. He was interested in non-pathology and healthy psychological functioning. In his book ‘The Psychology of Science’ (1966), he wrote, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”

Unsafe hammers and unsafe nails

In safety management – as in health, education, engineering, design, management, manufacturing, logistics, and so on – we have thousands of tools. They are designed to help standardise practice and deliver more reliable and valid outputs; surely good things, to the extent that these concepts have meaning in the messy, complex and diverse world of industry. But tools come at a cost, one that is less often considered. In the safety profession, when we use tools that look only at the negative, the negative is all we can see. And one safety management tool is our very own DSM – the safety taxonomy.

I have been involved in the development of several safety taxonomies, for predictive and retrospective use. This stemmed from an interest in error taxonomy in the early 1990s, which developed into a PhD. I developed and co-developed tools to dissect errors horizontally and vertically, resulting in catalogues of cognitive errors and influencing factors. All were rooted in theoretical literature and applied empirical work with air traffic controllers, in control centres and simulators.

One particular tool seemed to be a success. It was adapted and adopted for international use. It was encoded in databases. Training programmes were developed. It was incorporated as a core part of another tool, which itself was the basis for an award from the British Psychological Society. People became attached to it, beguiled by its dazzling array of error modes and mechanisms. But as probably the most experienced user – having analysed many hundreds of incident reports, interviews, and observations – I grew increasingly uncomfortable. My problem was not with the specifics of any particular category, but with the very concept of human error, the reductionist nature of human error-related research and practice, and how “human error” was used in the media and justice system to infer culpability.

Ludwig’s limits

Another concern emerged. In using this and other similar safety tools, I had determined thousands of ways that something could or did go wrong, and had hundreds of terms to describe this. But these safety tools were actually focused on unsafety. Prodded out of error-fixation by the writings of Jens Rasmussen, Erik Hollnagel, Sidney Dekker, and others, I wondered how this affected the way I thought about safety. Years earlier, I had studied language and thought, and remembered a quote from Ludwig Wittgenstein in ‘Tractatus Logico-Philosophicus’ (1921), translated as “The limits of my language are the limits of my world. All I know is what I have words for.”

Without adopting a strong interpretation of this, it seems possible that the limits of our language about safety limit our thought about work. Sure enough, according to Lera Boroditsky, professor of psychology at Stanford University and editor in chief of Frontiers in Cultural Psychology, “It turns out that if you change how people talk, that changes how they think.” Many institutions and societies have published safety glossaries (such as that of the IET), and the profession has a rich vocabulary of failure. By contrast, the vocabulary of success – in safety management at least – could fit on half a page. Our myopic focus on failure is now itself coming into focus. This is something that Erik Hollnagel has pointed out in several presentations and publications on Safety-I and Safety-II: “On the whole, data are difficult to find, there are few models, even fewer methods, and the vocabulary is scant in comparison to that for what goes wrong” (Eurocontrol, 2013).

Like the DSM, safety taxonomies provide a catalogue of disorder, delineating error from non-error, failure from non-failure. But more recent thinking in safety suggests that success and failure come from the same place: performance variability. Just as the DSM can medicalise normal individual variation, safety tools can pathologise normal performance variability – the same variability that allows humans to remain the most flexible component of any system. Failure-focused safety tools cannot see this variability because their language and structure do not allow them to. While we cannot and should not get rid of tools, they should at least allow us to see success as clearly as failure.
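To make that structural point concrete, here is a deliberately toy sketch (in Python; the category names and record format are invented for illustration, and not drawn from any real safety taxonomy). Because the controlled vocabulary contains only failure modes, an observation of successful performance variability has no valid classification – the structure of the tool screens it out before analysis even begins.

```python
from dataclasses import dataclass
from enum import Enum


class ErrorMode(Enum):
    """A failure-only controlled vocabulary (invented categories)."""
    OMISSION = "action omitted"
    MISTIMING = "action too early or too late"
    WRONG_OBJECT = "action on the wrong object"
    MISJUDGEMENT = "situation misjudged"
    # No member exists for work that went well.


@dataclass
class IncidentRecord:
    """One analysed event, as a failure-only taxonomy stores it."""
    narrative: str
    classification: ErrorMode  # every record must be filed under a failure


# A routine, successful adaptation cannot be encoded: the analyst must
# either force-fit a failure category or discard the observation, so
# 'what goes right' never enters the database.
observation = "Controller re-sequenced arrivals early; disruption absorbed."
# IncidentRecord(narrative=observation, classification=???)  # no valid value
```

A taxonomy whose vocabulary also included categories for successful adaptation and everyday variability would, by the same mechanism, let analysts record – and therefore see – what goes right as well as what goes wrong.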

Don’t peg down a paradigm!

Image: Pig Monkey CC BY-NC-SA 2.0 http://flic.kr/p/8anyqZ

Tools have several appealing qualities. They can help to systematise disorder, simplify complexity, reduce uncertainty, standardise practice and constrain variability. They give confidence and bestow expertise, and in doing so they reduce anxiety. Collectively, they help to define a profession. But if we are not mindful, and if we do not remain sceptical, tools become ingrained in our systems and mindsets, and ultimately supplant purpose. In doing so, they peg down old paradigms. This means that we lose the power to change and transcend paradigms – and therefore we lose the most powerful leverage points to intervene in a system, according to systems thinking hero Donella Meadows (2008).

Without wanting to discard the toolbox, it is important to remember that purpose comes before method, and thinking comes before purpose. If tools straitjacket thinking, how will we know if we are doing the right thing?

References

EUROCONTROL (2013). From Safety-I to Safety-II: A White Paper. Brussels: EUROCONTROL.

Hollnagel, E. (2012). A Tale of Two Safeties. Accessed at http://www.resilienthealthcare.net/A_tale_of_two_safeties.pdf

Hollnagel, E. (in press). Is safety a subject for science? Safety Science.

Meadows, D. H. (2008). Thinking in Systems: A Primer. Vermont: Chelsea Green Publishing.


This blog is written by Dr Steven Shorrock. I am an interdisciplinary humanistic, systems and design practitioner interested in work and life from multiple perspectives. My main interest is human functioning and system behaviour, in work and life generally. I am a Chartered Ergonomist and Human Factors Specialist with the CIEHF and a Chartered Psychologist with the British Psychological Society. I work as a human factors practitioner and psychologist in safety critical industries. I am also an Adjunct Associate Professor at University of the Sunshine Coast, Centre for Human Factors & Sociotechnical Systems. I blog in a personal capacity. Views expressed here are mine and not those of any affiliated organisation, unless stated otherwise. LinkedIn: www.linkedin.com/in/steveshorrock/ Email: contact[at]humanisticsystems[dot]com