Work and how to survive it: Lesson 2. Understand variation inside your organisation

Much of my practice is informed by counselling and psychotherapy as well as humanistic psychology more generally. One of my problems with these fields, however, is that insights and discussions are largely kept within the world of psychotherapy. What a waste! The vast majority of people are not engaged in psychotherapy and, for the most part, psychotherapy pays little attention to applying itself to the mundane issues of everyday life, outside of counselling rooms. This is the second in a series reflecting [for now] on excerpts from Life and How To Survive It, by the psychotherapist Robin Skynner and the comedian John Cleese, with some reflections on work and organisations.


Other posts in the series:


In Chapter 1, John Cleese and Robin Skynner are talking about people and families at different levels of mental health. Cleese asks about families that are unusually mentally healthy.

Robin …in trying to describe excellent mental health, and compare it with ill-health, and with the ‘average’ health in between that most of us enjoy most of the time … it’s difficult not to talk as if they are quite different from one another, and inhabited by different people. But, in fact, our level of health is changing all the time. We all feel more healthy in our better moments, when we are ‘in a good mood’, when things are going well, when we feel loved and valued, when we have done our best. And we can all feel less healthy under stress, when our usual sources of support are removed, when we have ‘let ourselves down’, when we ‘get out of bed on the wrong side’. Also, our level of health is not the same in all areas of our functioning. A person who is ‘average’ overall may be outstandingly healthy in some respects, even though functioning poorly in others.
John And obviously the overall level can change over time, too. Otherwise you’d be out of a job. I mean people can get more mentally healthy, can’t they?

In my last post, I wrote about the everyday experience of work, which is often ignored in safety, for several reasons, sometimes beyond the control of safety practitioners. Within this great area of day-to-day activity, many things are happening that we can easily miss unless we pay attention to them. One is that performance changes over time. One aspect of this is what is sometimes called ‘practical drift’. In Friendly Fire, Scott Snook defines practical drift as “the slow uncoupling of practice from procedure” (p. 24). It is one way in which we end up in the work archetype of The Messy Reality.

This is very hard to see from the inside, as it tends to happen slowly and tends to help achieve a range of goals that are more positively reinforced within the organisation (e.g., cost efficiency and production). But without paying attention to normal, everyday work, we don’t see what is going on. Importantly, we don’t see changes in the normal operating point, and associated behaviours, especially when these changes happen slowly and are only visible to those closely associated with the work, whether front-line staff, middle managers or the Board.

Figure 1: Drift toward failure. Adapted from EUROCONTROL (2013).

It often takes an outsider to see this practical drift. As Edward Hall (1959) wrote in his book The Silent Language, “culture hides much more than it reveals, and strangely enough, what it hides, it hides most effectively from its own participants” (p.39). We are victims of our cultures – professional, organisational, and national – and insights often require an outside perspective. By ‘outsider’, I simply mean someone who is seen as an outsider by those in a particular in-group, or at least someone who is on the edge of the group.

Outsiders not only see this drift more clearly, but have ‘permission’ to ask about it. This can be associated with their relative innocence. Outsiders may be able to ask the sorts of questions that a child asks: Why do you do that? Why do you do it like that? What is that for? The outsider will often, however, need a basic knowledge of the work, especially for less observable forms of work and work that is very complex.

‘Permission to question’ can also come because the questioner has been accepted into a particular role. One of these roles has been termed the ‘barbarian’ by Steele (1975) in Consulting for Organizational Change. Steele characterises this role as “violating comfortable but limiting norms and taboos that are preventing the system from being as effective as it might be. (A counter measure against tunnel vision.)”. This relates to the archetype Taboo. In-group members will find it difficult to raise taboo issues and will often need exceptional interpersonal skill to do so in a way that helps others gain insight.

An outsider may be a cultural insider, e.g., an air traffic control supervisor or anaesthetist from elsewhere. In this case, the person is an outsider in terms of workgroup and location, but an insider in terms of profession. Supervisors observing the work of other workgroups is one way to help people ‘see’ (and improve) their performance. They may be able to see things and ask questions that true insiders can’t.

Another kind of shift or change is where performance moves towards exceptionally good performance, where work is sustainably productive, innovative, healthy, joyful, etc. Again, if normal, routine, day-to-day performance is unknown and generally ignored (not subject to anything like the same kind of attention as incidents), then we may just gratefully accept the marginally reduced number of incidents (on the left-hand side of Figure 2), but not see the way that work is changing for the better, including the ‘good practice’ that contributes to it. In our Ignorance and Fantasy of this day-to-day work, we may well implement changes (rules, limits, targets, league tables, incentives, punishments, etc.) that pull the operating point back, halting progress.

Figure 2: Shift toward exceptional performance. Adapted from EUROCONTROL (2013).

As well as changes over time, a second thing is happening that we can easily miss unless we pay attention: there are differences between different parts of an organisation. As Skynner reminds us, “our level of health is not the same in all areas of our functioning”. In travelling to over 50 air traffic control units and centres of various kinds, I have seen and heard about large variations in many aspects of practice and performance. In most cases, where units and facilities are isolated geographically or culturally (e.g., by profession), these differences are unknown or not appreciated beyond the facility, and often beyond the department, work group, or room. Therefore, good practice in one area of an organisation is not known in another that is similar in context and could benefit. For example, one particular air traffic control tower had developed its own refresher training arrangements. These innovative practices could have been of great help to other towers, but without day-to-day contact with the tower in question, other towers simply did not know about them. (See Issues 25 and 26 of HindSight Magazine, on ‘Work-as-Imagined and Work-as-Done’ and ‘Safety at the Interfaces: Collaboration at Work’.)

These differences may also be papered over by the way that we measure performance. For instance, if we average measures across the whole organisation, or if we measure things that do not reflect differences between different areas of an organisation, then again we will be less likely to see and pay attention to them. This means we must pay careful attention to the way that differences may express themselves in terms of department, location, profession, gender, age, experience, and so on. In many cases, the differences within organisations are greater than the differences between them, but if we don’t pay attention to what’s going on, we’ll never really know.
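
As a toy illustration of how averaging can paper over these differences, here is a minimal Python sketch with invented units and scores (not real data):

```python
from statistics import mean

# Hypothetical performance scores (0-100) per unit; names and numbers
# are invented for illustration only.
units = {
    "Tower A": [78, 82, 80],
    "Tower B": [75, 79, 77],
    "Centre C": [45, 40, 43],  # struggling, but hidden by the average
}

org_average = mean(s for scores in units.values() for s in scores)
print(f"Organisation-wide average: {org_average:.1f}")  # looks acceptable

for unit, scores in units.items():
    print(f"{unit}: {mean(scores):.1f}")  # reveals the variation between units
```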

References

EUROCONTROL (2013). From Safety-I to Safety-II: A White Paper. Brussels: EUROCONTROL Network Manager, September 2013. Authors: Hollnagel, E., Leonhardt, J., Shorrock, S., Licu, T. [pdf]

EUROCONTROL (2017). HindSight Magazine. Safety at the Interfaces: Collaboration at Work. Issue 26, Winter. Brussels: EUROCONTROL. [webpage] [pdf]

EUROCONTROL (2017). HindSight Magazine. Work-as-Imagined and Work-as-Done. Issue 25, Summer. Brussels: EUROCONTROL. [pdf]

Skynner, R. and Cleese, J. (1994). Life and How To Survive It. Mandarin.

Snook, S.A. (2000). Friendly fire. Princeton, NJ: Princeton University Press.


Work and how to survive it: Lesson 1. Understand ‘how work goes’

I have recently been reading Life and How To Survive It, nearly 20 years after first reading it. It is a book on relationships and psychology, written in conversational question and answer style, by the psychotherapist Robin Skynner and the comedian John Cleese. 

Robin Skynner was a child psychiatrist and family therapist who practised psychotherapy with individuals, couples, families, groups, and institutions, where he employed group-analytic principles. Skynner, a former WWII bomber pilot, was a pioneering thinker and practitioner whose insights on families and groups are of value to those seeking to understand behaviour in organisations. In this sporadic series of posts, I will share a few of these, as they might apply to work and organisations. 


Posts in the series:



At the start of Life and How To Survive It (p. 2) John Cleese asks about families that are unusually mentally healthy.

John Well, I’d like to know more about them. Especially as I’ve never heard anyone talk about them.

Robin No, that’s right, the research is hardly mentioned.

John I wonder why. You’d think everyone would want to learn about exceptionally well adjusted people and find out what they know that we don’t. Yet even the other shrinks I know don’t seem familiar with this research. But then, the odd thing about psychiatry is that it’s based upon the study of people who aren’t doing very well – people who have more ‘problems’ than normal.

Robin Yes, that’s basically true.

John And the more you think about that, the stranger it seems. I mean, if you wanted to write a book about how to paint, or play chess, or be a good manager you’d start by studying people who are good at those things. And you wouldn’t expect heavy sales of a book called Play Championship Golf by Learning the Secrets of the Worst 20 Players in the World.

Robin True. Doctors do at least study normal physical functions – anatomy, physiology – before going on the wards to study disease. Psychiatrists seem interested almost entirely in people who are abnormal.

Using this analogy, safety scientists and practitioners might be considered a branch of organisational psychiatry, almost entirely focused on the ‘abnormal’ of work and organisations (albeit in terms of outcomes, not necessarily behaviour or processes). The trouble with this is that we fail to understand ordinary day-to-day work and organisational behaviour, let alone that which is especially effective. (The exceptions to this are branches of study and practice on High Reliability Organisations and Appreciative Inquiry, which are more interested in the latter. But these are rarely part of normal safety management and represent niche areas of safety research.)

For a few reasons, possibly chief among them regulatory requirements in highly regulated industries, the vast majority of the effort of safety scientists and practitioners goes into abnormal and unwanted outcomes, and the work and processes that precede them. My estimation, based on significant contact with safety practitioners and researchers in many countries, is that this tends to take up over 90% of work hours, and many safety practitioners I know place the estimate closer to 100%. Rarely among those working as safety scientists or practitioners is there, or has there ever been, any significant systematic study of normal work (e.g., via ethnography, systems thinking, systems ergonomics, work psychology, organisational behaviour).

The disconnect between our focus of attention (unwanted events) and what we desire (safe or, more generally, effective work and systems) is what I have previously characterised as déformation professionnelle, a play on words referring to job conditioning or acclimatisation, which affects most or all professions, in some way. As noted by literary theorist Kenneth Burke, “A way of seeing is also a way of not seeing — a focus upon object A involves a neglect of object B” (1935, 1984, p. 49). In the case of safety, object A is relatively tiny in number, and object B is huge in number. Because it is so ordinary, we tend not to ‘see’ it (see Figure 1 below, from the EUROCONTROL (2013) White Paper on Safety-I and Safety-II).

Figure 1: The focus of safety.

What this means in practice for safety is that analyses and conclusions about unwanted situations can be based on flawed assumptions about normal work, from the perspective of work-as-imagined (see the archetype, Ignorance and Fantasy). Following safety incidents, even or especially ‘first of a kind’ incidents, an investigation might recommend a new rule. In such cases, where normal work has not been studied and understood, the rule can bring unintended consequences. [Readers with front-line experience will now bring several examples to mind.] The reason is that the rule acts as an unreasonable constraint on normal work, perhaps requiring significantly more time or other resources, which are unavailable, or reduced demand, which is not possible. In the absence of additional resources or reduced demand, the rule may be bypassed or, if enforced, may cause secondary problems and leave the system in a more pressured or fragile state. Meanwhile, those recommending the rule remain unaware of its failure, and assume – through lack of feedback and no further related incidents – that the rule is successful. Examples of unintended consequences in interventions can be found under the Congruence archetype.

Seeing only how things go wrong means that we neglect how things go right (e.g., a desired situation), and – most importantly – how things go, in a more ordinary or general sense. Things can go wrong in countless ways, but in many forms of work, desired outcomes tend to come about in a relatively small number of ways, at a fundamental level. A golfer can hit the ball in any direction and at a wide range of angles. The number of ways to miss a hole is effectively infinite. In comparison, the number of ways to hole the ball is relatively small. The same goes for reverse parking a car, landing an aircraft, or piloting a ship to port. There are many variations in how this is done, and some ways are especially effective, but there are countless ways in which to get it wrong. Hence training is focused, in the main, on how to get it right, and not on how not to get it wrong (though many trip hazards will be important to know about).

By studying ordinary, everyday functioning in organisations, and ‘exceptionally well adjusted’ functioning, we can better understand when a sociotechnical system really is healthy or unhealthy, in what ways, how and when this is expressing itself, who is affected, and why (considering sociotechnical system interaction).

This does mean that safety scientists and practitioners, and anyone else interested in the quality and improvement of human work and sociotechnical systems, must spend more time understanding (and in) the world of work-as-done, and the messy reality of work (remembering that it is for the most part the work context that is messy, not the work itself). This is no mean feat when one’s work is driven by regulatory requirements, but if we wish to understand work and systems, and not just sporadic symptoms of unwanted interactions, then we must somehow prioritise time and other resources. As I reflected in this post, if you want to understand work, you have to get out from behind your desk.


HindSight 27 on Competency and Expertise is out now!

HindSight Issue 27 is now available in print and online at SKYbrary and on the EUROCONTROL website. You can download the full issue, including an online supplement, and individual articles. HindSight magazine is free and published twice a year, reaching tens of thousands of readers in aviation and other sectors worldwide. You will find an introduction to this Issue below, along with links to the magazine and the individual articles.



Welcome

“Welcome to Issue 27 of HindSight magazine. The theme of this issue is ‘Competency and Expertise’. It is a topic that links to all previous Issues of HindSight.

Our ability to work effectively depends on the competency and expertise of front-line practitioners and of all involved in the operational, technical, support, and management functions. Safety isn’t something that is just ‘there’ in the aviation system. People actively create safety. But how do we create safety? And what do we need to do to help ensure that we can continue to do so? Competency and expertise is an important part of the answer.

In this issue, we have articles from operational, safety, human factors and psychology specialists. This is part of what makes HindSight unique – it brings together those who do the operational work, those who support operational work in a variety of ways, and those who study operational work to help better understand it. We are proud to give a voice to some of the world’s leading academic thinkers, and to operational and support specialists who have stories, experience and practical insights to convey. The key is that the articles are interesting and useful to the primary readers of HindSight: air traffic controllers and professional pilots, and hopefully to others who support operational work. Do we succeed? Let us know!

In this Issue we explore the nature of competency and fundamental applications and implications for operational training, selection, and procedures, including non-technical skills and contingency. We then zoom out to regulatory and future issues. The regular feature on ‘Views from Elsewhere’ continues with articles from surgery and rail. These articles raise questions for us in aviation, and provide some practical ideas. And in this issue we have articles drawing from the world of sport. HindSight continues online over at SKYbrary with further articles in the online supplement, from aviation and other industries, on the theme of competency and expertise.

We also have ‘What we do’ good practice snippets. We’d particularly like to hear from more readers for this section. And this brings me to the next Issue, which will feature articles on ‘Change’. All readers have been affected by changes, in procedures, regulations, technology, people, incentives, organisation, etc. The pace of change will only increase. How do we change to adapt to the dynamic world of air traffic management? And how do we as individuals, teams, and organisations adapt to these changes? Let us know, in a few words or more, for your magazine on the safety of air traffic management – HindSight.”

HindSight 27 Articles

Foreword

Editorial

Op-ed

Fundamental Issues

Non-technical Skills

Contingency

View from the Air

Regulatory Issues

Future Issues

Views from elsewhere

What we do

Interview

HindSight 27 On-line Supplement

See all editions of HindSight magazine


Twelve Properties of Effective Classification Schemes

Most organisations seem to use a classification system (or taxonomy) of some sort, for instance for safety classification, and much time is spent developing and using such taxonomies. Importantly, decisions may be made on the basis of the taxonomy and associated database outputs (or it may be that much time is spent on development and use, but little happens as a result). There is therefore a risk of time and money spent unnecessarily, with associated opportunity costs. Still, taxonomies are a requirement in all sorts of areas, and several things should be kept in mind when designing and evaluating a taxonomy. This post introduces twelve properties of effective classification systems.


Effective classification schemes are difficult to develop. The following properties need to be considered to develop a valid classification scheme that is accepted and produces the desired results.

1. Reliability

A classification scheme must be used reliably by different users (inter-coder reliability or consensus) and by the same users over time (intra-coder reliability or consistency). Reliability will depend on many factors, including the degree of true category differentiation, the adequacy of definitions, the level of hierarchical taxonomic description being evaluated, the adequacy of the material being classified, the usability of the method, the adequacy of understanding of the scheme and method, and the suitability of reliability measurement. Adequate reliability can be very difficult to achieve (see Olsen and Shorrock, 2010 $$), and the heterogeneity of methodologies employed by researchers measuring the reliability of incident coding techniques makes it more difficult to critically compare and evaluate different schemes (see Olsen, 2013 $$). However, if a classification scheme cannot be used reliably, then it is usually fair to say that it is not fit for purpose, especially for analysing large data sets (though it may be that reliability is achieved for certain users in certain contexts).
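
As an illustration of one common agreement statistic, here is a minimal Python sketch computing Cohen’s kappa for two hypothetical coders classifying the same ten incident reports. The categories and codings are invented, and a real reliability study would use far more cases (and possibly other measures, such as Krippendorff’s alpha).

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    freq1, freq2 = Counter(coder1), Counter(coder2)
    # Chance agreement: probability that both coders independently
    # assign the same category.
    expected = sum(freq1[c] * freq2[c] for c in set(freq1) | set(freq2)) / n**2
    return (observed - expected) / (1 - expected)

# Invented codings of ten incident reports by two coders.
coder1 = ["procedure", "equipment", "procedure", "comms", "comms",
          "equipment", "procedure", "comms", "equipment", "procedure"]
coder2 = ["procedure", "equipment", "comms", "comms", "comms",
          "equipment", "procedure", "procedure", "equipment", "procedure"]

print(f"Cohen's kappa: {cohens_kappa(coder1, coder2):.2f}")  # ~0.70 here
```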

2. Mutual exclusivity

Categories should be mutually exclusive on the same horizontal level, so that it is only possible to place subject matter into one category. This relates to reliability. There are varying degrees of mutual exclusivity, since categories often have things in common, or overlap to some degree, depending on the criteria. Mutual exclusivity tends to be lower for abstract or unobservable concepts. This is especially true for psychological labels, and even more so for those that are all-consuming (such as ‘situation awareness’, ‘mental model’, or ‘information processing’). For properly differentiated categories with clear definitions, appropriate guidance can reduce sources of confusion (see Olsen and Williamson, 2017 $$).

3. Comprehensiveness (or ‘content validity’)

It should be possible to place every sample or unit of subject matter somewhere. However, choices must be made about the granularity of categories. Highly detailed classification schemes and classification schemes that offer little granularity suffer from different problems concerning mutual exclusivity, usability, face validity, usefulness, etc.

4. Stability 

The codes within a classification system should be stable. If the codes change, prior classification may be unusable, making comparison difficult. On the other hand, it should be possible to update a classification scheme as developments occur that truly affect the scope and content (e.g., new technology). Ideally, changes should have minimal impact.
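
One way to soften the impact of such changes is to keep an explicit mapping from retired codes to their successors, so that older classifications remain comparable with newer ones. A minimal sketch, with invented codes:

```python
# Invented example: two communication codes were renamed in a scheme update.
CODE_MIGRATION = {
    "COMM-VHF": "COMM-VOICE",
    "COMM-CPDLC": "COMM-DATALINK",
}

def modernise(code):
    """Map a historical code to its current equivalent, if it changed."""
    return CODE_MIGRATION.get(code, code)

legacy_records = ["COMM-VHF", "NAV-RNAV", "COMM-CPDLC"]
print([modernise(c) for c in legacy_records])
# ['COMM-VOICE', 'NAV-RNAV', 'COMM-DATALINK']
```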

5. Face validity 

A classification system should ‘look valid’ to people who will use it or the results emanating from it. An industry classification scheme should incorporate contextual and domain-specific information (‘contextual validity’), but should also sit comfortably with pertinent theory and empirical data (‘theoretical validity’). The best approach here is to stick with what is well-understood and accepted.

6. Diagnosticity (or ‘construct validity’)

A classification scheme should help to identify the interrelations between categories and penetrate previously unforeseen trends. This may relate more to the database and method than the taxonomy itself.

7. Flexibility

A classification scheme should enable different levels of analysis according to the needs of a particular query and known information. This is often achieved by a modular and hierarchical approach. Shallow but wide taxonomies tend to suffer from low flexibility.
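
To illustrate, a modular, hierarchical scheme can be represented simply as parent links, so that the same coded data can be analysed at whatever level a query requires. A minimal sketch, again with invented codes:

```python
from collections import Counter

# Invented hierarchy: each code points to its parent; top-level codes have none.
PARENT = {
    "readback-error": "pilot-controller-comms",
    "callsign-confusion": "pilot-controller-comms",
    "pilot-controller-comms": "communication",
    "handover-omission": "controller-controller-comms",
    "controller-controller-comms": "communication",
}

def roll_up(code, level):
    """Return the ancestor of a code at the requested level (0 = top)."""
    chain = [code]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    chain.reverse()  # now ordered top-down
    return chain[min(level, len(chain) - 1)]

coded = ["readback-error", "callsign-confusion", "handover-omission"]
print(Counter(roll_up(c, 0) for c in coded))  # all roll up to 'communication'
print(Counter(roll_up(c, 1) for c in coded))  # two communication sub-groups
```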

8. Usefulness

A classification scheme should provide useful insights into the nature of the system under consideration, and provide information for the consideration of practical measures (e.g., for improvement).

9. Resource efficiency

The time taken to become proficient in the use of a classification scheme, collect supporting information, etc., should be reasonable. Continued difficulties in using a classification scheme, after initial training and supervised practice, usually indicate a design problem and signal the need for (re-)testing.

10. Usability

A classification scheme should be easy to use in the applied setting. This means that the developers should be able to demonstrate a human-centred design process akin to ISO 9241-210. The most relevant aspects of usability should be determined. For instance, some users may have formal training in the use of the classification scheme, little time to make inputs, limited understanding of terms and acronyms, etc.

11. Trainability

It should be possible to train others how to use the classification scheme and achieve stated training objectives, including any required levels of reliability. In some cases, there may be valid reasons to go only to the original developers for training (e.g., the taxonomy is sensitive or commercialised). In such cases, there is a need to consider why this is the case, and the possible related implications (e.g., lack of peer-reviewed, public-domain accounts of development; lack of independent testing).

12. Evaluation

Classification schemes should normally be amenable to independent evaluation. This means that they must be available and testable on the requirements above using an appropriate evaluation methodology. This will of course be more difficult for taxonomies that are restricted for various reasons (commercial, security, misuse prevention, etc).

Summing up…

In practice, it will not be possible to achieve anywhere near perfection on these criteria. Even where evaluation results are very positive (assuming there is any evaluation), experience in use will usually be different (and usually worse from the users’ points of view) and undocumented. Trade-offs must be made and some of the properties above will be more important than others, depending on the application. For instance, in some cases, the priority may be to help investigators to ensure that relevant issues have been considered, perhaps also to model the interactions between them (see Four Kinds of Human Factors: 4. Socio-Technical System Interaction). In other cases, the priority may be to help analysts understand prevalence and trends in very large data sets.  In still other cases, the priority may be to help users with little time or knowledge (‘casual users’) make basic inputs. These user groups have different needs and expectations.

It may also be necessary to use a taxonomy that is not adequate on some of the criteria above. In all cases, there is a need to understand the possible risks (e.g., time spent using the taxonomy; decisions made on the basis of the data) and to manage these risks (e.g., ignore data for categories that are known to be unreliable; merge categories; analyse data based on a hierarchically higher category/level up). However, three basic activities should be undertaken to help achieve adequate validity:

  1. Involve appropriate stakeholders in taxonomic development and evaluation, with a focus on understanding their needs, the associated taxonomic requirements, and the trade-offs between requirements. This should include people who understand human-centred design, taxonomy and all relevant aspects of the scope of the classification scheme.
  2. Review relevant literature, analyse the work and system, and review other classification schemes (including ones previously used by any stakeholders).
  3. Test the classification scheme throughout its development and implementation.

Afterword

This post is based on a short briefing note that I produced for an Australian government agency meeting in 2004, not long after being awarded a PhD related to taxonomy (461 pages; reading not recommended, but available on request). Since I sometimes find it hard to find this note, I thought it might be useful to put it online, also in the hope that it might help someone else. The post focusses on the properties of effective taxonomies that relate to development, and not so much on the use, mis-use and abuse of taxonomies. Another post, maybe.


Vive la Compétence !

The text in this post is from the Editorial of HindSight magazine, Issue 27, on Competency and Expertise, available for download in late August at SKYbrary here.



Image: Kremlin.ru [CC BY 4.0 (https://creativecommons.org/licenses/by/4.0)], via Wikimedia Commons

This summer, we have been entertained by the world’s best footballers – experts in the game. And it just so happens that Competency and Expertise is the theme of this Issue of HindSight. What might we learn from World Cup 2018? Here are five observations.

1. Past performance does not determine future performance

Some world-leading teams, which were favourites to win, were knocked out early, or didn’t qualify. It just goes to show that we can’t rely on our record. Past success does not guarantee future success. The same tactics that worked in the past will not necessarily work in the future.

But we humans are creatures of habit. In his famous book Human Error, James Reason (1990) described two ways that we rely – or over-rely – on our past experience. The first is similarity matching. When a situation is similar to one experienced previously, we use pattern matching and tend to respond in a similar way to how we did before. The second is frequency gambling. More frequent solutions in roughly similar conditions will tend to prevail. Most of the time, these are efficient ways of working, and efficiency is critical when seconds count. But sometimes, we need to be more thorough, especially when preparing, practising and planning. In any case, we must always adapt to the situation.

Just as past success does not guarantee future success, past failure does not guarantee future failure. Penalties were a case in point. Far from being a lottery that is impossible to rehearse for, or an event for which some teams are ‘jinxed’, this year showed that extensive physical and psychological preparation for such high pressure scenarios pays off.

This is something that I am particularly interested in within ANSPs. Front-line safety-critical staff need and deserve world-class training, especially refresher training. This isn’t a luxury. It’s a necessity, but the sort of necessity that sometimes becomes obvious only in hindsight. The same applies to team resource management training, and other training that integrates lessons from the past. The lessons that stick often come from past failures, but we need to learn those lessons in the right way, in the right context.

2. Teams are more than the sum of their parts…and success runs deep

It became clear in this World Cup that individual expertise does not equal team competence. Teams can suffer through overreliance on star players, but can benefit greatly from teamwork bonded with trust, respect, and an understanding of how each player will respond in a given situation. The same applies in air traffic management. Here, we have procedures to help us predict how others will respond. But procedures do not determine how someone will respond. They do not even apply to all situations, nor prescribe all responses. In this case, trust built from working together helps us to succeed.

In the World Cup, the team is not just the players on the pitch. The best managers set up their teams to win, using all necessary resources, and adapting their style to whatever will bring out the best from each player. Everything is designed and managed for human performance. Hundreds more, including psychologists, dietitians, physiotherapists, etc, help players to perform at their peak. It is similar with ANSPs. While all have similar basic kinds of front-line support staff, some ANSPs have teams of qualified human factors/ergonomics specialists, psychologists, TRM facilitators, CISM peers, educational specialists, etc. Human performance is what we do, but to be sustainably successful, it needs a strong support network.

3. Technology changes the nature of work

The introduction of the video assistant referee showed how technology changes the nature of work. Referees now have to use their expertise to decide when to use the technology. Over-reliance ruins the spontaneity of play. Under-use may bring criticism that not only did a referee not spot a foul or offside, but that they didn’t use a tool that could have shown this: two mistakes, where previously there would have been only one.

In The ETTO Principle, Erik Hollnagel discusses a fundamental trade-off that underlies human performance: the efficiency-thoroughness trade-off. Referees must balance efficiency against thoroughness to harmonise fluidity and fairness. Footballers do the same. If there is time to be thorough to set up a shot, then they will. If not, then they need to strike roughly on target. The right balance is clear in hindsight. For controllers, a very thorough approach to flight data recording with an electronic solution may result in too much head-down time. A very efficient approach may result in over-reliance on memory. The efficiency-thoroughness trade-off is a constant balancing act that is fundamental to the development of expertise.

4. Positivity helps (a lot)

Some teams, such as Belgium and Croatia, played with incredible self-belief and confidence. Positivity permeates effective teams, on and off the pitch, even when things are difficult. Having spent hundreds of hours with different fixed ATC teams, and in different units, it is clear that different teams and units develop particular cultures or personalities. For some, fun, friendliness and positivity are hallmarks. This is something one can see and feel, as an outsider. We all know intuitively that working in a positive, joyful environment brings out the best in us. We all need to work on creating joy in work.

5. Respect is an attitude…and a non-technical skill

For me, two of the highlights of the World Cup were about respect. When England won against Colombia on penalties, manager Gareth Southgate consoled Colombia’s Mateus Uribe, who missed his shot. Southgate was perhaps mindful of the penalty that he missed as an England player. Southgate’s overall demeanour was not only respectful, but empathic, supportive, and measured: a great role model for managers.

Respectful people carry their respect with them wherever they go. The Japanese team – consistent with their culture – cleaned their own dressing room, and left a handwritten note of thanks – in Russian. This courtesy is also a sign of pride in work. Even the Japanese fans helped to clean the stadium after their side was knocked out. Perhaps there should be a separate trophy for the most respectful team and supporters. This year, Japan would have won that trophy.

But France won the World Cup after a superb run of matches. Writing this Editorial from France, it was a pleasure to see the French people celebrate their victory, against a strong and dynamic Croatian team.

Perhaps we can learn from the preparation, planning and practice that went into the World Cup, supporting such expert performances. Vive la compétence !



Human Factors at the Fringe: BaseCamp

A legendary rivalry: one mountain and two climbers seeking to be the best. We join them at basecamp as they prepare for the challenges of the ascent. Invited into separate tents to join just one of the two climbers, audiences experience the subjective and different sides of this rivalry, sharing only one side of the story. As time passes, the voices travel through the camp and the line between truth and lies, fact and fiction, begin to blur. Award-winning Fever Dream Theatre return after their 2016 sell-out hit Wrecked. ‘Stays with you long after you’ve left’ (NME).

Basecamp, Fever Dream Theatre, C South  (Venue 58), Edinburgh, 4-13 & 15-27 August 2018

(See Human Factors at The Fringe for an introduction to this series of posts.)

As you meet the two climbers at the venue – ‘BaseCamp’ – you are taken into one of two tents. The climbers are raising money for their next climb, and you will hear about one of their climbing lives.

You are taken into a canvas tent and the climber starts to talk about climbing – her passion. You noticed on being introduced to the two climbers initially that there was tension between the two, and as your host continues her story, the knotty relationship between her and her friend in the other tent surfaces. Your host seems honest and credible. In the other tent, people are hearing from the other climber. You don’t know what she’s saying, and perhaps you never will. You will only hear one side of the story. Do you get the feeling that you’re not hearing the whole story, that you are missing part of the picture? Are you curious to find out? Or are you content with the version of events that you have heard?

In many work situations, we rely on the accounts that people provide. This is what I call Work-as-Disclosed.

“This is what we say or write about work, and how we talk or write about it. It may be simply how we explain the nitty-gritty or the detail of work, or espouse or promote a particular view or impression of work (as it is or should be) in official statements, etc. Work-as-disclosed is typically based on a partial  version of one or more of the other varieties of human work: Work-as-imagined, work-as-prescribed, and work-as-done. But the message (i.e., what is said/written, how it is said/written, when it is said/written, where it is said/written, and who says/writes it) is tailored to the purpose or objective of the message (why it is said/written), and, more or less deliberately, to what is thought to be palatable, expected and understandable to the audience. It is often based on what we want and are prepared to say in light of what is expected and imagined consequences.” From The Varieties of Human Work

BaseCamp provides two versions of Work-as-Disclosed. To some extent, each may contain P.R. and Subterfuge.

“This is what people say happens or has happened, when this does not reflect the reality of what happens or happened. What is disclosed will often relate to what ‘should’ happen according to policies, procedures, standards, guidelines, or expected norms, or else will shift blame for problems elsewhere. What is disclosed may be based on deliberate deceit (by commission or omission), or on Ignorance and Fantasy, or something in between… The focus of P.R. and Subterfuge is therefore on disclosure, to influence what others think.” From The Archetypes of Human Work: 6. P.R. and Subterfuge

Each version of events seems credible, and as you listen to the story, for nearly an hour, you develop a felt rapport with the reporter. How much do you want to hear a second account? And if you do hear another account, how will you respond to conflicts with the account that you have heard, and trusted?

In these sorts of situations, at home, in organisations, in courtrooms, we often hear and accept the stories that we want to hear. Sometimes we choose not to hear the stories that we don’t want to hear. We may also choose the sequence of the stories that we hear, or else this might be forced upon us by others or by circumstance. In safety investigations, formal inquiries, court cases and disputes of all kinds, who you choose to (or are able to) listen to, and the order in which you listen, will affect the story that you create about what happened. By hearing only from clinician(s), but not the patient and family, for example, your story will lack the perspectives and details that are required for a more thorough understanding. And the order in which you listen to people, even when you listen to many, will affect what you hear in subsequent accounts because it will affect your questions, your mental set and perceptual filter. This is an ‘anchoring’ heuristic that has been researched extensively in the context of judgement. Mostly, people think about anchoring in the context of quantitative judgement:

“In many situations, people make estimates by starting from an initial value that is adjusted to yield the final answer. The initial value, or starting point, may be suggested by the formulation of the problem, or it may be the result of a partial computation. In either case, adjustments are typically insufficient (Slovic & Lichtenstein, 1971). That is, different starting points yield different estimates, which are biased toward the initial values. We call this phenomenon anchoring.” Tversky & Kahneman (1974)

Anchoring can also affect our understanding of stories, by anchoring our expectations, questions, and desire for certainty.

There may indeed be misunderstandings between different parties to an event, because each has partial knowledge and information, because each has different goals and expectations, and because each sees things from different perspectives and resolutions. This is the case with BaseCamp. Not only are there inconsistencies between the accounts, there is a crucial unspoken aspect to each of their thinking about the relationship and the factual and counterfactual aspects of a critical event. They don’t know because it is a Taboo, and you will only know if you hear both stories, or if you can, as two listeners, piece together the aspects of the stories.

In the EUROCONTROL ‘Systems Thinking for Safety: Ten Principles‘ White Paper, the term field experts was used to describe people who possess expertise relative to their own work-as-done.

“The perspectives of field experts need to be synthesised via the closer integration of relevant system actors, system designers, system influencers and system decision makers, depending on the purpose. The demands of work and various barriers (organisational, physical, social, personal) can seem to prevent such integration. But to understand work-as-done and to improve the system, it is necessary to break traditional boundaries.” From: Systems Thinking for Safety/Principle 1. Field Expert Involvement

There are many influences on who we speak to, how, for how long, and when, for example:

  • Desire for certainty – by introducing new accounts, we may well introduce uncertainty, which may bring us anxiety.
  • Prejudice and confirmation bias – we may have a predetermined goal to achieve, or a preconceived idea about what happened and who is responsible for an outcome, and choose (more or less consciously) who we speak to and how, in order to confirm our hypothesis.
  • Time – listening to different accounts takes time, which is always limited. Even when there is time, we may perceive it as better spent on something else (e.g., analysis, reporting, action). Sometimes, system constraints such as regulations can force the issue (see the example here).
  • Theory of causation – we may perceive that those closest to an event (e.g., an air traffic controller) are ‘causal’ to it, and therefore important to hear, while those less close to an event (e.g., a procedure writer) are merely ‘contributory’ to it (and therefore less important to hear). The second group are rarely interviewed, and so we tend to hear the first story, and not the second story (see talk here).
  • Expertise – we may simply lack the competency to investigate an issue appropriately.

Broadly these and other influences relate to barriers to new thinking about systems and safety, outlined here.

Multiple perspectives are not a source of weakness. Diversity is a source of resilience, even – or especially – when accounts do not agree. This is counterintuitive for those who wish to have a straightforward, perhaps mechanistic, account.

This advice might help (adapted from Systems Thinking for Safety Ten Principles White Paper and Learning Cards):

  • Listen to people’s stories. Consider how people can best tell their stories from the point of view of how they experienced events at the time. Try to understand the person’s situation and world from their point of view, both in terms of the context and their moment-to-moment experience.
  • Understand their local rationalities. Be curious about how things make sense to people at the time. Listen to people’s individual goals, plans and expectations, in the context of the flow of work and the system as a whole. Focus on their ‘knowledge at the time’, not your knowledge now. Understand the various activities and focus of attention, at a particular moment and in the general time-frame.
  • Seek multiple perspectives. Don’t settle for the first explanation; seek alternative perspectives. Discuss different perceptions of events, situations, problems and opportunities, from different people and perspectives, including those who you might think are not directly involved. Consider the implications of these differential views. One way to do this is to adopt a group approach to debriefing, as explained in this Etsy Debriefing Facilitation Guide on leading groups to learn from accidents, by John Allspaw @allspaw, Morgan Evans @NeonMorgan, and Daniel Schauenburg @mrtazz.

I will leave you with this – an advertisement of my childhood, which remains my favourite of all time. I talk about it here.

“An event seen from one point of view gives one impression. Seen from another point of view, it gives quite a different impression. It’s only when you get the whole picture that you fully understand what’s going on.”

You may well have to accept that you can never fully understand what went on. But you can get past the basecamp of understanding.


See also:

Human Factors at The Fringe

Human Factors at The Fringe: The Girl in the Machine

Human Factors at The Fringe: My Eyes Went Dark

Human Factors at The Fringe: Nuclear Family

Human Factors at the Fringe: Lemons Lemons Lemons Lemons Lemons


The Safety-II Dance: A Podcast by Greater Than Code

A few weeks ago, I had a chat with Jamey Hampton, Jessica Kerr, and John K. Sawers of Greater Than Code. Here is the podcast that resulted, expertly produced by Mandy Moore.

In the podcast, we roamed around topics of human factors/ergonomics, system performance and human wellbeing, empathy, appreciative inquiry, asset-based community development (ABCD), and Safety-II.

All Greater Than Code podcasts are on their website and on iTunes.

