The Archetypes of Human Work: 2. Congruence

This is the second in a series of posts on The Archetypes of Human Work, which are based on the interactions or relationships between The Varieties of Human Work. For an introduction, see here.

The seven archetypes are:

  1. The Messy Reality
  2. Congruence (this Archetype)
  3. Taboo
  4. Ignorance and Fantasy
  5. Projection
  6. P.R. and Subterfuge
  7. Defunct

Each archetype includes a number of examples (currently healthcare-related). If you have further examples – from any industry – please provide an example as a comment or get in touch. More examples will be added over time.

Archetype 2: Congruence

[Figure: Archetype 2: Congruence]

Composition: work-as-done and as-prescribed and usually as-imagined (and often as-disclosed).

Short description: Much human work is done ‘by the book’ – at least in general terms, if not in fine detail – and much in line with how people more removed from the actual work imagine it. Such work is often freely disclosed, since there is no reason not to disclose it. However, prescribed work can have unintended consequences, which, of course, were not imagined – at least not by those who designed the work.

What is it?

Congruence comprises activity that largely conforms with prescribed work, and is known to other relevant stakeholders. Congruence might apply to specific activities, or to situations where prescription is limited to general goals or principles, essentially giving discretionary space to practitioners. In such cases, work can be said to align with these principles, even though there may be variation in how they are achieved or adhered to. Since work-as-done accords fairly well with procedures and is known to others, it may well be discussed both inside and outside the practitioner group; there is no reason not to, and no reason for P.R. and Subterfuge. Work-as-done in these cases is therefore more or less known and understood further from the sharp end, though this understanding is unlikely to extend far. Congruence will normally reflect quite specific activities, but may characterise much of the work in some environments, e.g., call centres. Much work is likely to shift frequently between Congruence and The Messy Reality.

Why does it exist?

Some work, usually specific activities, can be prescribed such that work-as-done is an accurate reflection of work-as-prescribed. This applies especially where work has a defined process (e.g., simple sequences, loops, or conditional structures [if &lt;condition&gt; then &lt;action(s)&gt;]), where the pre-conditions and conditions of work are more or less known, exceptions are well understood, and variation in system conditions and human performance is restricted and known to relevant stakeholders. In such cases, prescription might be simple or complicated, in the form of procedures, checklists, or forcing functions built into interface dialogues, requiring varying degrees of competence.
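To make the idea of a fully prescribable process concrete, here is a minimal sketch (not from the original post – the steps, conditions and values are entirely hypothetical) of a simple sequence with one conditional branch and one known exception:

```python
# Illustrative sketch of a fully prescribable process: a sequence with
# a known pre-condition, a conditional branch, and a known exception.
# All step and condition names are hypothetical.

def run_prescribed_process(sample_labelled: bool, sample_volume_ml: float) -> str:
    """Follow the prescribed sequence; raise on the known exception."""
    if not sample_labelled:               # known pre-condition: reject if unmet
        raise ValueError("reject: unlabelled sample")
    if sample_volume_ml >= 5.0:           # if <condition> then <action>
        return "full analysis"
    return "micro analysis"               # prescribed alternative action

assert run_prescribed_process(True, 6.0) == "full analysis"
assert run_prescribed_process(True, 2.0) == "micro analysis"
```

Where work really does have this shape – known pre-conditions, well-understood exceptions, restricted variation – work-as-done can track work-as-prescribed closely.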

In some cases, Congruence may reflect well-designed work, inasmuch as the imagination of how it will and should be done matches how it is and should be done in order to optimise system performance and human well-being. There is typically a high level of field expert involvement in the design of this kind of work (including resources and constraints), for instance via a human-centred design process. In other cases, how work is done informs how work is prescribed and imagined, i.e., procedures are written to reflect the real work. Here it may be the case that there is a low authority gradient or power-distance, and management is well connected to the front-line work.

A technological forcing function (also known as a poka-yoke), such as a hard interlock on a chemical processing plant or a required field on a web form, may ensure that work is done in the way that it is prescribed, with no or few opportunities for variation. This is likely to reflect specific activities rather than the general work. In other cases, organisational monitoring and control systems (e.g., audits, competency checks, behaviour-based safety), and associated sanctions, may ensure that work is done in the way that it is prescribed. In such cases, the way that people talk about the work is also likely to conform with how it is done at that time, leaving those further removed from the sharp end with a perception of Congruence, which may persist only for a time…
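As a loose illustration of the ‘required field’ kind of forcing function mentioned above – a sketch only, with invented field names, not taken from any real system:

```python
# Minimal sketch of a forcing function: the form cannot be submitted
# until every required field is completed, so work-as-done is forced
# to match work-as-prescribed at this step. Field names are hypothetical.

REQUIRED_FIELDS = ("patient_id", "drug", "dose")

def submit(form: dict) -> str:
    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        # Hard constraint: no opportunity for variation at this step.
        raise ValueError(f"cannot submit, missing: {missing}")
    return "submitted"

assert submit({"patient_id": "A1", "drug": "clindamycin", "dose": "600mg"}) == "submitted"
```

Note that the constraint applies only to this single step; the surrounding activity remains open to the usual variability.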

Shadow side

Work-as-prescribed may be badly designed, such that it is inefficient or even hazardous, e.g., conflicting air traffic control arrival and departure routes designed by someone with no experience of working the airspace. In such cases, work may be done in the way that it is supposed to be done, and this may be known, but the hazards may not be known, especially beyond the front-line. In some such cases, prescribed work may not account for exceptional ‘black swan’ events that are beyond the imagination of those who prescribed the work. An example of this is the checklists in QF32 (2010). In this case, multiple aircraft system failures resulted in dozens of electronic centralised aircraft monitor (ECAM) alerts, which could not be processed by the crew in the prescribed way, forcing them deep into The Messy Reality, which was never imagined (i.e., never projected). In this case, and in the better-known Hudson River landing (2009), the crew had very high levels of competence. In other cases, lack of competence or experience may leave practitioners unaware of how to anticipate, detect, and handle such trip hazards. On a social level, group processes may be at play. Prescribed work may be hazardous, inefficient or otherwise ineffective, but people may become desensitised to this through the need for group cohesion and harmony, or a fear of speaking up or rocking the boat, or they may have tried to change things before, but given up.

Where work-as-done is monitored and controlled, especially where work is not well designed, and in a climate of low trust, Congruence may emerge only temporarily. Typically, this cannot be sustained for long due to the variable and degraded nature of real (as opposed to imagined) system conditions (goals, demands, pressure, resources, constraints, incentives, punishments, climate, etc.), which force a return to The Messy Reality when monitoring and control allows. Those observing work-as-done, however, may leave with the impression that Congruence is the norm.

Examples (Healthcare)


Congruence can happen in the medical workplace, but is usually not the norm. An example would be the use of debriefing after a day’s operating list. A debrief should take place at the end of every team’s operating list, and has been mandated in Scotland for a number of years. In my operating theatre we do have a debrief at the end of every list. Work is therefore ‘congruent’ – our work-as-done is identical to work-as-prescribed (mandated by the Scottish patient safety programme), and we perform a robust, checklist-prompted debrief looking at both task and team performance, so our work-as-done is also congruent with work-as-imagined. (In some teams a very superficial debrief occurs – so work-as-done is technically congruent with work-as-prescribed, since a debrief does take place – but it is certainly far removed from work-as-imagined.) As a result we also have a positive attitude to work-as-disclosed, as we are very happy to talk about implementing a process that we feel quite proud of. We are, however, a positive outlier in this respect – and finding Congruence in this domain is the exception rather than the norm.

Craig McIlhenny, Consultant Urological Surgeon, @CMcIlhenny.


One of the priority areas for the Scottish Patient Safety Programme in Primary Care (SPSP-PC) was the accurate reconciliation of changes to patients’ medication regime following discharge from hospital. The use of a care bundle audit was promoted to measure compliance with a number of process measures, including completing the reconciliation within a set time frame and discussing significant changes with patients or carers. This has many potential benefits, especially as patients are vulnerable to medication-related harm (due to inappropriately prescribed or omitted medication) after discharge. In my practice, systems were altered to ensure compliance with the bundle audit and 100% compliance was quickly achieved. This may seem like success, but a few problems arose. Firstly, staff prioritised contacting patients to discuss medication changes, whereas previously they knew which patients were confused about their medication and would contact the pharmacy (rather than the patient) to make sure changes were implemented. This resulted in delay in implementing changes, increased confusion for patients and more work for staff. Secondly, the information in the immediate discharge letter (IDL) is often inaccurate. With the focus on accurate reconciliation, any discrepancy between the IDL and the patient’s pre-hospital medication list had to be resolved. This meant changing the patient’s usual medication list if the information on the IDL seemed reasonable. However, if there was no obvious justification for discrepancies, clarification would be sought from secondary care. This would delay the process of completing medicines reconciliation and increase work for staff in both primary and secondary care. Previously, GPs would often make changes to the patient’s usual medication list based on their knowledge of the patient, their condition and the information from secondary care.

Duncan McNab, GP, @Duncansmcnab.


A Do Not Attempt Resuscitation (DNAR) form is put into place when caregivers feel that resuscitation from cardiac arrest would not be in the patient’s best interests. These forms have received a significant amount of bad press, primarily because caregivers were not informing the patient and/or their families that these were being placed. Another problem with DNAR forms is that some clinicians feel that they are being treated as “Do Not Treat” orders, leading (they feel) to patients with DNAR forms in place receiving sub-standard care. This means that some patients who would not benefit from resuscitation are not receiving DNAR forms. As a result when these patients have a cardiac arrest they are subjected to aggressive, yet ultimately futile, resuscitation measures which may include multiple broken ribs, needle punctures in the arms, wrists and groin, and electric shocks. It is not unusual to hope that these patients are not receiving enough oxygen to their brains to be aware during these last moments of their lives.

Anonymous, Anaesthetist.


Most hospital pharmacy departments in the UK now use dispensing robots for a large chunk of their medication dispensing. Robotic dispensing reduces the risk of picking errors (manually picking the wrong item from the shelf), as well as theoretically speeding up the dispensing process. Though the overall process may vary depending on whether other systems are electronic or manual (e.g. paper prescription charts or an electronic prescribing system), the work-as-imagined/as-prescribed is along the lines of: operator enters prescription details into pharmacy computer system, indicating the required item; computer system communicates with robot; robot picks correct item from shelf and outputs it to the operator for labelling and checks. For much of the time this process accurately reflects the work-as-done (by human and robot), and the process works very well. However, there are occasions when things can go wrong. An example: a request is received for an antibiotic injection (clindamycin); it is showing up as out of stock on the pharmacy stock control system, but the robot inventory indicates the item is there. In order to supply the item, the operator uses the stock control system to create a label for the clindamycin injection (which is still as-prescribed), and then walks round to the back of the robot to perform a manual output of the item. However, they may manually output a different antibiotic injection (clarithromycin) inadvertently, and dispense this in error, with the clindamycin label. The automated robotic picking system has been bypassed, removing that safety net.

Anonymous, Pharmacist.
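The dispensing workflow in the pharmacist’s example above, and the manual-output bypass, can be caricatured in a few lines of code. This is purely an illustrative sketch with invented names, not a model of any real pharmacy system:

```python
# Caricature of the dispensing flow: the robot path verifies that the
# picked item matches the request before labelling; the manual-output
# path skips that identity check. All names invented for illustration.

def robot_dispense(requested: str, shelf_item: str) -> str:
    if shelf_item != requested:          # robot picks by identity: the safety net
        raise RuntimeError("pick mismatch blocked")
    return f"{shelf_item} labelled as {requested}"

def manual_output(requested: str, shelf_item: str) -> str:
    # Bypass: operator picks by hand; no identity check before labelling.
    return f"{shelf_item} labelled as {requested}"

assert robot_dispense("clindamycin", "clindamycin") == "clindamycin labelled as clindamycin"
# The bypass lets a clarithromycin vial go out under a clindamycin label:
assert manual_output("clindamycin", "clarithromycin") == "clarithromycin labelled as clindamycin"
```

The sketch shows why the bypass matters: the check lives in one path of the process, so work that routes around that path silently loses the safety net.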

Pharmacists and technicians provide support to care homes to carry out medication reviews and support repeat ordering. One piece of documentation we use to support this is the Medicines Administration Record (MAR). These are generated by the community pharmacy, which supplies the medication. This in itself is not deemed a care home confidential document until such a time as the care home starts to use them to note administration of medications. At this point, access to it by anyone but care home staff requires consent. We do obtain patient/welfare/proxy consent to undertake our work, but technically, as the MAR sheets are issued monthly and can be different on a monthly basis, we should be getting monthly consent. This would make it unworkable and unmanageable.

Anonymous, Pharmacist.

As I walked into the six-bedded bay, the patient in the first bed on the right was in distress and breathing heavily. He looked very frail. I was on my ward round with a long list of patients to see with our team of doctors. We went to the middle bed on the right, drew the curtains around and attended to our patient. During the consultation there was a commotion from the next bed and a Cardiac Arrest call was put out. The patient was put on the floor, and chest compressions and ventilation were started (cardio-pulmonary resuscitation, CPR) whilst we waited for the Resuscitation Trolley to arrive along with the full Cardiac Arrest team. The nurse read out the patient’s notes and we immediately stopped the CPR attempt, and the patient was pronounced dead. The patient had been in frail health with advanced chronic obstructive pulmonary disease (COPD), heart failure and kidney impairment, and had been chair-bound at home. Overnight the admitting Doctor had written “Discuss DNACPR (Do Not Attempt Cardiopulmonary Resuscitation) with the Consultant in the morning”. Instead of dying with symptoms of breathlessness controlled by morphine and oxygen, and having a Health Care Assistant sit with him as he died, he was left with distressing breathlessness. Then we did chest compressions and ventilation as he died. We did not get as far as using the defibrillator. I am sure that the chest compressions would have been forceful enough to break some ribs. In this scenario it is possible that the patient had some consciousness, and that his last memories would have been fear and pain. The other patients in the bay were, of course, terrified by these events, and no one in the healthcare team felt good about the turn of events. Looking back, I feel guilty that I did not turn to that patient and take steps to ensure he had a calmer end of life.

What is sad is that this is not an unusual story. Unless a person dying in Hospital or a Nursing Home has a DNACPR, then CPR will usually be done. CPR may even be done when a person in frail health dies at home without a DNACPR, because the paramedics may be instructed to do CPR “just in case it was a cardio-pulmonary arrest”. Nurses and paramedics work in such fear of not doing CPR when there is no DNACPR that they may override their own professional judgement and do CPR when it is clearly inappropriate. Recently a nurse was reprimanded by the Nursing and Midwifery Council for not trying CPR on a nursing home resident who, in my opinion, was clearly already dead. I know of a case in our Hospital in which CPR was started on a person whose body was already in rigor mortis.

How did we get to this point in the United Kingdom, that to ensure a person experiences a calm end of life, a DNACPR form must have been completed and be available in a prominent place? Unless the DNACPR is readily available, instead of comforting the dying person with her presence, her touch, words of kindness and symptom-relieving medicine, the compassionate nurse must start basic life support and call for a Cardiac Arrest Team.

Dr Gordon Caldwell, Consultant Physician, @doctorcaldwell


The Archetypes of Human Work: 1. The Messy Reality

In my last post, I outlined some thoughts on four varieties of human work: work-as-imagined, work-as-prescribed, work-as-disclosed and work-as-done. As with most things, what is most interesting about these varieties concerns their relationships and interactions. Considering the various zones of the figure below – where the varieties overlap or don’t overlap – it is possible to recognise a number of archetypes, patterns or forms concerning the relationship between the varieties of human work, which will be familiar to many once seen.

In this post, I outline seven such ‘archetypes of human work’. This is not to say these are the only archetypes, and the archetypes do not necessarily characterise the zones that they inhabit. But they have shown themselves repeatedly in my experience of research and practice in organisations, and may well be recognisable to you. To sense-check and exemplify the archetypes, a number of healthcare clinicians have kindly provided examples. These clinicians have helped to refine the archetypes themselves. If you have further examples – from any industry – please provide an example as a comment or get in touch. More examples will be added over time.

The seven archetypes that will be outlined are:

  1. The Messy Reality (this Archetype)
  2. Congruence
  3. Taboo
  4. Ignorance and Fantasy
  5. Projection
  6. P.R. and Subterfuge
  7. Defunct

These archetypes will be outlined in this and subsequent posts.

Archetype 1: The Messy Reality

[Figure: Archetype 1: The Messy Reality]

Composition: work-as-done but not as-prescribed and usually not as-imagined (may or may not be as-disclosed).

Short description: Much work-as-done is not as prescribed (either different to procedures, guidelines, etc, or where there are no procedures), and is usually not known to others who are not at the sharp end of the work. The focus of The Messy Reality is the actual work and the messy details.

What is it?

The Messy Reality is characterised by the kinds of adjustments, adaptations, variations, trade-offs, compromises, and workarounds that are hard to prescribe and hard to see from afar, but can become accepted and unremarkable from the inside. Mostly, such variability is deliberate, but sometimes it is unintended. As such, this archetype will be familiar to almost everyone.

The work-as-done that is characteristic of this archetype may or may not be disclosed by those who do the work. It is not necessarily secret (as is more characteristic of Taboo). The key point is more that work-as-done is not as prescribed, and probably not as-imagined or known by others. This archetype is common and applies to much specialist activity in most sectors, e.g., healthcare, banking, WebOps, shipping, and agriculture.

Why does it exist?

The archetype exists for a few reasons, associated with the nature of work-as-prescribed, work-as-done and work-as-imagined. Much work-as-done is not prescribed to any significant degree, especially where the work is so simple that it does not need to be prescribed, or so complex that it cannot be, even if attempts are or were once made. It is neither necessary, possible, nor desirable to prescribe all human work. Even if some major steps in a process are prescribed, much of the underlying or related activity is not. It can’t be.

This archetype provides important discretionary space in which practitioners can operate, since the various adjustments, adaptations, trade-offs, compromises, and workarounds that characterise work-as-done are necessary to meet demand under (often normal) system conditions involving degraded resources, inappropriate constraints, perverse incentives, goal conflicts, and production pressure. More generally work evolves over time, and prescribed work proves too inflexible or too fragile to cope with real conditions. Over the longer term, these adaptations may result in a drift from prescribed policy, procedure, standards or guidelines, assuming any such prescription is in place.

This archetype is reinforced by a lack of contact between those who do the work, and those who design, make decisions about, or influence the work (e.g., senior managers, purchasers, HR, regulators and policy makers), who may operate in the Ignorance and Fantasy archetype. Much work-as-done is taken for granted, neither observed nor discussed meaningfully outside of the immediate working environment, and so the messy details remain known only to those who do the actual work. The Messy Reality is therefore ignored or denied (perhaps glossed over with P.R. and Subterfuge). Even where there is an imagination of work-as-done (and associated system conditions), decision makers may turn a blind eye or encourage The Messy Reality, either because stuff gets done or because the costs of fixing sources of mess are seen as too great.

Shadow side

Work-as-done but not as-imagined by others is mostly unproblematic. The Messy Reality does provide important discretionary space for practitioners to meet demand. But The Messy Reality masks a multitude of degraded system conditions involving demand, pressure, capacity, staffing, competence, equipment, procedures, supplies, time, etc. To create flow, more adjustments, adaptations, trade-offs, compromises, and workarounds are required. Performance variability (short term fluctuations or longer term biases and trends), while necessary, may become problematic.

An underlying problem with The Messy Reality is its coexistence with the archetype Ignorance and Fantasy. When work-as-done is not understood by decision makers, problems and drift toward danger may well be invisible, both to the worker in-group (e.g., due to habituation) and to various out-groups (due to ignorance). For some time, a drift into failure may be invisible – masked by inadequate measurement, safety margins, or deliberate P.R. and Subterfuge, but practitioners will tend to feel – as a minimum – uncomfortable. Their concerns may be initially disclosed (formally via reporting systems or letters, or informally), but this disclosure is often discontinued if no timely and appropriate action is taken in response. This decline in disclosure convinces those at the blunt end that the problem no longer exists or was never really a problem, thus feeding the Ignorance and Fantasy archetype and associated blunt-end decisions that create systems of problems – messes.

When things go wrong in The Messy Reality, outcomes are often attributed to the choices that practitioners make, especially when these are different to work-as-prescribed. Often, such attributions take insufficient account of context, and instead take the form of simplistic labels (‘violation’, ‘non-compliance’, ‘rule breaking’, etc), while choices made at the blunt end of design, management, and regulation, which may influence, shape or encourage such sharp-end decisions, are rarely labelled in the same way. This dynamic exists even in the absence of detailed prescription of work. For instance, if an organisation does not have a policy on a potentially problematic issue, such as the use of mobile devices in environments such as air traffic control, then practitioners could be blamed for their choices in the case of an incident or accident, even when practices are known or imagined. Ultimately, The Messy Reality can become a liability in terms of regulation and law, both to the organisation and to individual practitioners.

Examples (Healthcare)


Certain clinical situations are volatile, uncertain, complex, ambiguous (VUCA) and time critical, and they can highlight different aspects of ‘The Messy Reality’. For example, a patient with a ruptured abdominal aortic aneurysm, if they reach hospital alive, will require immediate transfer to theatre for the life-threatening bleeding to be stopped and a new vessel to be grafted into place. The complex and dynamic nature of the case means that it cannot be prescribed, and so the practitioner has to operate within the discretionary space. This allows the practitioner the necessary freedom to treat changes as they arise and potentially to deviate from ‘standard operating procedures’ (SOPs). These SOPs are ordinarily designed for non-emergency work and have a number of ‘safety steps’ inherent within them. These include important steps, such as identifying the patient, procedure and allergies, which form part of the wider WHO ‘five steps to safety’, but also other points that are less critical yet still important, especially in the non-emergency setting. It is commonplace for the practitioner to deviate from the SOPs and to perform an ad-hoc, yet necessary, streamlining of this process in order to proceed at the appropriate pace and to treat physiological changes as they present themselves. This can give rise to a number of issues. Firstly, I have known this deviation to create friction amongst the team at this critical time, which is generally not helpful either in proceeding with the work or in maintaining team harmony. Secondly, if the outcome for the patient is poor and the case is investigated, I have known practitioners to be admonished for their deviation from the SOPs, even though the SOPs nominally relate to the non-emergency setting. This is in stark contrast to cases with a good patient outcome, where the deviation is often not even noted, or is highlighted as potentially being intrinsic to the positive outcome.
Lastly, there is often a corporate response that seeks to prescribe work that is by definition VUCA and cannot be prescribed. Ultimately, I believe that on balance practitioners benefit from The Messy Reality, as it is when the work is at its most complicated and cannot be prescribed that autonomy and professional judgment can be exercised most readily for the benefit of the patient.

Dr Alistair Hellewell, Anaesthetist, @AlHellewell


The ‘normalised’ unsafe practice of hyperventilation during cardiac arrest management provides a comprehensive example of The Messy Reality archetype. It has become evident, from analysing retrospective observational data, that during the procedure of cardiopulmonary resuscitation (CPR), medical practitioners (usually anaesthetists) almost always deliver too much pressurised oxygen/air to the lungs of patients (both adults and children). Traditional Safety-I concepts may regard this as a ‘violation’, in that this practice continues to occur despite a succession of recommendations in international guidelines to the contrary, supported by the established and widespread provision of systematic, organised education and training. However, when directly questioned, anaesthetists demonstrate a clear, functional knowledge that such practice is detrimental to patient outcome. When contemplating this behaviour we must consider the following. Firstly, there is no intention for airway management practitioners to deliberately hyperventilate a patient. Secondly, these clinicians do not know that they are hyperventilating patients during the period that it is actually happening. Thirdly, there is not ordinarily any recognition or acknowledgement that they may have hyperventilated the patient after the clinical intervention has been discontinued. Despite the fact that this issue is widely known to anaesthetists, others (particularly at the blunt end) would generally be ignorant of the issue.

Ken Spearpoint, Emeritus Consultant Nurse, @k_g_spearpoint


Radiology request forms are meant to be completed and signed by the person requesting the procedure. In the operating theatre, the surgeon is usually scrubbed and sterile, therefore the anaesthetist often fills out and signs the form despite this being “against the rules”. Managers in radiology refused to believe that the radiographers carrying out the procedures in theatre were “allowing” this deviation from the rules.

Anonymous.


The use of clinical early warning scores is well established in secondary care. More recently, the use of the National Early Warning Score (NEWS) in primary care has been promoted as a way to aid the identification and appropriate management of sepsis. A score is calculated based on the value of each of the following physiological parameters: temperature, pulse rate, blood pressure, oxygen saturations, respiratory rate and level of consciousness. When interviewed, most general practitioners (GPs) stated that, although they do not calculate the overall NEWS score, they always record the relevant observations when they are concerned about sepsis. On analysis of referral letters of adults admitted with an infective cause from out-of-hours primary care to secondary care, all physiological parameters necessary for the NEWS score were recorded in 50% of patients, but when the admission diagnosis was given as sepsis or possible sepsis, the values were complete in only 30% of cases. When this is explored with GPs using specific cases, it becomes clear that often the decision for rapid admission is made on temperature, pulse and often the ‘look’ of the patient. Rather than measuring other parameters (such as respiratory rate), the GP decides to start arranging admission as further evidence is not needed to guide their next action. This is a more efficient, if less thorough, approach. (Early findings of work – unpublished at present.)

Duncan McNab, General Practitioner, @Duncansmcnab.


Hospital policy is that free samples of drugs may not be accepted from pharmaceutical companies, and that all supplies of drugs should be ordered and received via the pharmacy department. However, drug company representatives have been known to bypass this by offering or sending free drug samples directly to consultants.

Anonymous, Pharmacist.





The Varieties of Human Work

Understanding and improving human work is relevant to most people in the world, and a number of professions are dedicated to improving human work (e.g., human factors/ergonomics, quality management, industrial/work/organisational psychology, management science). The trouble with many of these professions is that the language and methods mystify rather than demystify. Work becomes something incomprehensible, and hard to think about and improve, for those who actually design and do the work. Recently, some notions that help to demystify work have gained popular acceptance. One of these is the simple observation that how people think that work is done and how work is actually done are two different things. This observation is very old – decades old in human factors and ergonomics, where it dates back to the 1950s in French ergonomics (le travail prescrit et le travail réalisé; Ombredanne & Faverge, 1955), and arguably to the 1940s in the analysis of aircraft accidents in terms of cockpit design (imagination vs operation). Early ergonomists realised that the analysis of work could not be limited to work as prescribed in procedures etc (le travail prescrit), nor to the observation of work actually done (le travail réalisé). Both have to be considered. But these are not the only varieties of work. Four basic varieties can be considered: work-as-imagined; work-as-prescribed; work-as-disclosed; and work-as-done. These are illustrated in the figure below, which shows that the varieties of human work usually overlap, but not completely, leaving areas of commonality and areas of difference. Here we will consider each variety, one by one.

varieties-of-work

The varieties of human work.

Work-as-Imagined

When we think about human work, we typically think about the things that we actually do. But in thinking about what we or others do, we have already uncovered another important type of work – the work that we imagine. Work-as-imagined is both the work that we imagine others do and the work that we imagine we or others might do, currently or in the future.

The imagination of human work takes place within organisations, between organisations, and from outside of organisations. If we take the example of operational staff (e.g., clinicians, pilots, train drivers, control room operators, road maintenance workers), then policy makers in national and local government, along with regulators and inspectors, the general public, and various groups (e.g., patient groups, transport user groups, neighbourhood associations) will have an imagination about the work of the operational staff (as well as the work of middle and senior managers within facilities or companies). Within a facility or company, senior management and middle management, along with non-operational specialists and support staff, will also have an imagination of the work of operational staff. Operational staff will have an imagination of the work of all of the above, and of other operational and non-operational staff in different specialities, inside or outside of the organisation.

To a greater or lesser extent, all of these imaginations – or mental models – will be wrong; our imagination of others’ work is a gross simplification, is incomplete, and is also fundamentally incorrect in various ways, depending partly on the differences in work and context between the imaginer and the imagined.

Work-as-imagined is fed by three basic sources: past experience of work-as-done; knowledge and understanding of work-as-prescribed; and exposure to work-as-disclosed. All three of these are problematic. Past experience of work-as-done can be a useful foundation for work-as-imagined, but it can quickly become outdated with changes in demand, resources, constraints, and changes to how the work works. This outdatedness can be a blind spot. Work-as-prescribed is difficult to understand even for those doing the work, let alone for those who do not; it is under- or over-specified, and is often far from reality. Work-as-disclosed, meanwhile, is partial and often biased.

Work-as-imagined also applies to our imagination of future work, to change – adaptation or transformation. Most human work begins with work-as-imagined, since someone has an imagination about how work could be done, unless new ways of working are discovered accidentally. In practical terms, this future work-as-imagined includes changes to tasks, workflows, jobs, team and organisational structures and processes, and technology. These changes are the subject matter of human factors and ergonomics (i.e., interaction design), but of course changes mostly happen with no HF/E input.

Similarly, work-as-imagined applies to what we think we would do in a scenario, which may well be different to what we would really do. Martin Bromiley (2016) reflected on a tragic incident where his wife Elaine Bromiley died in a routine operation. He said that “As clinicians the world over have reviewed my late wife’s case, many have stated that ‘I wouldn’t have done what they did.’ Yet place those same people in a simulated scenario with the same real-world disorder, which deteriorates into the same challenging moment, and most actually do.” Similarly, Hollnagel (2016) stated that, especially when something goes wrong, “work-as-done differs from what we imagine we would do in the same situation, but thinking about it from afar and assuming more or less complete knowledge.”

Especially in cases where decisions about the work of others (or that affect the work of others) are made by those whose imagined view of work is incorrect, work-as-imagined can become very problematic. In such cases, inadequate involvement of those who do the work (work-as-done), and inadequate analysis and synthesis of the evolving context of work, often leads to badly designed work and work environments, and unintended consequences, including adaptations to work-as-done to overcome constraints and to work around other unintended consequences.

Much analysis work is done on human work, but this is often done on this imaginary variety. In the context of oil and gas engineering, Miles & Randle (2016) stated that “The design and engineering of new assets is usually contracted out … These contractors may not receive direct feedback on the success of, or problems with, their previous designs in the field, and most engineers designing the asset will not have worked on or even visited an operating installation.” The analysis of imagined work can be seen in the use of many ornate methods, used by those who are distant from the work or have inadequate access to those who do the work – the field experts. Hence, what we are deconstructing and analysing, formally or informally, is often an imagined, abstract system, not a real, concrete one.

Work-as-Prescribed

Our imagination of human work is not necessarily the same as the way that work is prescribed. Work-as-prescribed is the formalisation or specification of work-as-imagined, or work-as-done, or work-as-disclosed, or some combination of the three. It takes on a number of forms in organisations, including: laws, regulations, rules, procedures, checklists, standards, job descriptions, management systems, and so on. Some of these are more task-oriented (e.g., procedures, checklists) while others are more job-oriented (e.g., job descriptions). While there are infinite varieties of work-as-imagined, there is a limited variety of work-as-prescribed, with each task having one or a small number of prescribed methods.

Work is often prescribed by more senior members of an organisation (supervisors and middle managers), and sometimes by those at a greater distance from the work itself, for instance specialists with little contact with the front line, or external organisations, such as regulators (e.g., work time limits) and policy-makers (e.g., the four-hour target for accident and emergency admissions). It is not unusual to see work prescribed far away from the actual work by those who have never actually performed the task. Catchpole and Jeffcott (2016) wrote that, in healthcare, “You will quickly find that there is a difference between policy and practice…and that administrators may not be aware of the latter.” Work is often also prescribed by those who do the work (e.g., working groups), and by those who have previously done the work but no longer do so.

Work-as-prescribed is unique among the four key varieties in that it is assumed to be the safe and the right way to work. As such, it is subject to risk assessment, and risk controls are incorporated, as well as other measures to control and standardise work-as-done. These controls may involve soft constraints, which are possible to overcome, albeit at a cost (e.g., rules), or hard constraints, which are difficult or impossible to overcome (e.g., forcing functions). Work-as-prescribed is also often the ultimate arbiter of whether performance is satisfactory.

The problem is that it is usually impossible to prescribe all aspects of human work, even work that is well understood, except for extremely simple tasks (Hollnagel et al., 2013). First, there are many ways in which work can be done. Even if it is prescribed in one way, it could and probably will be done in other ways, even if we just consider small differences in implementation. Second, work often or usually incorporates task switching, between different aspects of different procedures. This meta-level of work is hard, if not impossible, to capture in prescribed work. Third, the pre-conditions for, and conditions of, work cannot all be foreseen, let alone guaranteed. Assumed system conditions – staffing levels, competency, equipment, procedures, time – are often rather more favourable than those found in practice. Fourth, it is simply not possible to articulate the precise way that work is done, especially in a linear written form, in a way that is usable or that can reasonably be followed. Pariès and Hayward (2016) noted that “in most current industrial processes, strict adherence to preestablished action guidelines is unattainable, incompatible with the real efficiency targets, and insufficient to control abnormal situations.” They say that many requests from their clients derive from “difficulty in reconciling this ‘old truth’ with the inflation of applicable aviation safety standards and the compliance expectations of safety management system frameworks.” Work-as-prescribed is therefore usually under-specified even in its most complicated forms. Procedures, standards, regulations, etc., lack the detail and richness of actual work. And the more specified the prescribed work, the more incorrect it is likely to become in messy work situations (a good example being the checklists in QF32). In other cases, there may be relatively little prescription.
As an example from an industry that is almost the opposite to aviation in terms of prescription – web operations and engineering – Allspaw (2016) remarked that there is “no singular overarching regulatory, standards, or policy-making body for these services”.

Often, we tend to think that work-as-prescribed is basically the correct way to work. In some cases this is justified, but in other cases it is not. The famous ‘work to rule’ strategy (used by the National Union of Railwaymen against British Rail) involves a tactical realignment of work-as-done with work-as-prescribed. This has become a standard form of industrial action, also known as a ‘white strike’. The result is that the system cannot function effectively, thus demonstrating the limits of work-as-prescribed. Work to rule has been described as a decision to “Give the rules a meaning which no reasonable man could give them and work to that” (Sir John Donaldson at 959, Secretary of State v. ASLEF (No. 2) [1972]). Similarly, Lord Denning stated that “Those rules are to be construed reasonably…They must be construed according to the usual course of dealing and to the way that they have been applied, in practice.” The final arbiter for what is reasonable is, according to these statements, work-as-done. Work-as-done, therefore, may be assumed to be reasonable, for all intents and purposes. Of course, when accidents happen, this perspective reverses, such that work-as-prescribed is reasonable and work-as-done is not.

Work-as-prescribed is very often a basis for various sorts of analysis, often combined with work-as-imagined or knowledge of work-as-done. Like work-as-imagined (but unlike much work-as-done), work-as-prescribed can be deconstructed. It can also be examined and discussed. Examples include risk assessment, task and job analysis, human error analysis, behavioural safety, and so on.

Field expert involvement in the development and use of work-as-prescribed is critical to limit the gap between work-as-prescribed and work-as-done. Two problems are common here. The first is a lack of field expert involvement, e.g., job descriptions written by HR, or procedures written by a procedure department with little contact with operations. The second is that those who are involved are constrained in their ability to communicate with others who are affected. Those who do the work may well not know who was involved in prescribing it, and thus lack a channel to feed back flaws in prescribed work.

Work-as-Disclosed

In addition to the way that we imagine work, and the way that work is prescribed, we can add a third variety of human work: work-as-disclosed (or -explained, -expounded, -exemplified, or -espoused). This is what we say or write about work, and how we talk or write about it. It may simply be how we explain the nitty-gritty or the detail of work, or how we espouse or promote a particular view or impression of work (as it is or should be) in official statements, etc. Work-as-disclosed is typically based on a partial version of one or more of the other varieties of human work: work-as-imagined, work-as-prescribed, and work-as-done. But the message (i.e., what is said/written, how it is said/written, when it is said/written, where it is said/written, and who says/writes it) is tailored to the purpose or objective of the message (why it is said/written), and, more or less deliberately, to what is thought to be palatable, expected and understandable to the audience. It is often based on what we want and are prepared to say, in light of what is expected and of imagined consequences.

How we talk or write about work is not necessarily the same as work-as-prescribed, since prescribed work may not be a good basis for how we disclose or explain how things work. As mentioned earlier, work-as-prescribed may be under-specified or over-specified, or just not how things are really done. So a supervisor might explain to a newcomer ‘how things work around here’ (work-as-done), in a summary form, or in a way that is very different to how work is officially prescribed.

But work-as-disclosed is also not necessarily the same as work-as-imagined, because what we think or believe may be different to what we are prepared to say, especially to outsiders. John Wilkinson (2016), a former regulator in the UK Health and Safety Executive, noted that “People choose what they want to say to regulators … The regulator can start to believe that ‘work-as-imagined’ (what the ideal organization does to work safely) should always match ‘work-as-done’ (the ‘real world’ of business). The right position lies somewhere in-between”. Similarly, Cook and Cooper (2016) stated that “many well-intended shortcuts and deficient workplace practices are routinely not detected during audits. The outcomes of this can be an increasing gap between work-as-imagined and work-as- actually-done, and major system failures may be associated with this gap”.

Another example is what a staff member says to a senior manager about work, which may be different to what really happens. There are many reasons not to express how work is really done. For instance, staff may fear that resources will be withdrawn, constraints may be put in place, sanctions may be enacted, or safety margins or buffers will be dispensed with. Hence, secrecy around work-as-done can be a self-protective measure against the drive to improve efficiency at the expense of other goals (such as safety and well-being).

Work is disclosed or explained (and expounded, exemplified, espoused) by many people, both those who do the work and those who do not. Some work-as-disclosed is therefore based on (or disclosed with) intimate knowledge of work-as-done. A surgeon and an anaesthetist/anaesthesiologist may, for instance, advise a patient about a surgical procedure. What is said will reflect what is done, but only at a high level. Other work-as-disclosed is based only on an imagination of the work (work-as-imagined), or else on what others say about the work (work-as-disclosed by third parties). A corporate communications specialist in an airline, air traffic control organisation or professional association may, for example, explain to the news media or via social media the work of pilots or air traffic controllers. Both of these direct and indirect forms of work-as-disclosed will involve simplifications.

In other instances, work-as-disclosed may not deliberately simplify. An example is a train driver explaining his or her work to a human factors specialist/ergonomist who is undertaking some form of task analysis. Here, the driver will be thinking about his or her work in detail, and explaining it perhaps in more detail than ever before, except perhaps to a new train driver who is undergoing on-the-job training.

People will sometimes modify or limit what they say about work-as-done based on consequences. In an environment where people are punished for trade-offs, workarounds, and compromises that the staff believe to be necessary to meet demand, the overlap between work-as-disclosed and work-as-done may be deliberately reduced. Some work-as-disclosed is explicitly designed to reassure, perhaps to provide a basis for work-as-imagined in others that aligns with work-as-prescribed (e.g., “We fully comply with all relevant rules and procedures”). In a healthcare context, Catchpole and Jeffcott (2016) wrote that “Direct observation usually illustrates a further difference between what is said and what is done.” The celebrated US anthropologist Margaret Mead was credited with saying “What people say, what people do, and what they say they do are entirely different things” (there is no written evidence that she ever did say this, but it is reflective of aspects of her work).

If there is a culture that is mutually experienced as fair and trusting, then there is a good chance that the overlap between work-as-disclosed and work-as-done will be large. In such cases, the areas of non-overlap may be limited to inconsequential minutiae, or to aspects of work that are not easily available to conscious inspection from the inside, bearing in mind that much human work is based on unspoken assumptions and norms, and unconscious patterns of activity.

Formal methods for understanding work via work-as-disclosed include individual and group interviews, using a wide variety of more or less structured methods from psychology, human factors and ergonomics, sociology, ethnography, etc. Some of these methods are used in situ along with work-as-done (e.g., think aloud) but most are used remote from work-as-done (e.g., critical incident technique; focus groups). In some cases, assurances of confidentiality may be required to increase the overlap between work-as-disclosed and work-as-done. In all cases, one needs to be mindful that what is said may well differ from what is done.

Work-as-Done

Work-as-done is actual activity – what people do. It is characterised by patterns of activity to achieve a particular purpose in a particular context. It takes place in an environment that is often not as imagined, with multiple, shifting goals, variable and often unpredictable demand, degraded resources (staffing, competency, equipment, procedures and time), and a system of constraints, punishments and incentives, which can all have unintended consequences.

Work-as-done is mostly impossible to prescribe precisely and is achieved via the adjustments, variations, trade-offs and compromises that are necessary to meet demand. These adaptations are based on operational know-how, but often have not been subject to formal analyses such as risk assessment; such analysis struggles to handle them. While the adaptations are often necessary to meet demand, they can sometimes put the system and practitioners at risk. This raises ethical problems, according to van Winsen and Dekker (2016), who stated that “We need to ask ourselves, if it is ethically right that operators routinely need to work around or loosely interpret many official procedures…to get their work done?”. One should not get the impression that work-as-done is necessarily the right way. In the context of the rail industry, O’Flanagan and Seeley (2016) noted that “sometimes the motivations for the way that the work is actually done are not laudable.” These motivations may arise from various sources, of course, at different levels within and outside the company.

Still, gaps between work-as-prescribed and work-as-done may be known, and accepted and even encouraged – at least implicitly – at supervisory and local management levels, while demand is met. However, these gaps are usually not disclosed liberally and may not be imagined widely. When things go wrong, the adaptations, and the gaps between the varieties of human work, are subject to scrutiny. Hollnagel (2016) stated that we account for the differences “by inferring that what people actually did was wrong – an error, a failure, a mistake – hence that what we thought they should have done was right. We rarely consider that it is our imagination, or idea about work-as-imagined, that is wrong and that work-as-done in some basic sense is right.”

In light of the risks of disclosing all aspects of work-as-done, workers may keep aspects of it secret, or protect their working environment so as not to expose it. Aside from the risk of sanctions, the reason for this is a suspicion that if a decision maker sees a snapshot of work-as-done, then they may generalise from that snapshot, or make assumptions, and change the design of the work system – perhaps changing work patterns (e.g., shift systems, work times, or activities), altering team structures, reducing resources, or tightening safety buffers or margins – usually in order to increase efficiency.

Work-as-done can be examined via observation, but this is challenging. It is particularly prone to change on inspection. This is a known flaw of, for instance, behavioural safety schemes and safety audits, especially those that focus on negatively perceived behaviour, or are perceived as checks of compliance or non-compliance with work-as-prescribed. It can also be very difficult to understand (e.g., the work of a radar controller or radiologist), or unsafe to observe (e.g., military personnel).

Work-as-done is the most important and yet most neglected variety of human work. It is the variety that outsiders (those who do not do the work) pay least attention to. Much attention is paid to the other varieties of work, and this would not be a problem were it not for the fact that these other varieties are so often mistaken for, or used as a proxy for, the real thing: work-as-done. Hollnagel (2016) noted that “We lack models based on what people actually do, on the recurrent patterns of behavior.” This is probably more true of some industries than others. By involving field experts in any activity to understand work, and by getting close to where the work is done, we can help to close the gaps. But there will likely always be differences, and knowing this keeps us humble, aware that our understanding is limited, never complete. Human factors and ergonomics practitioners often find themselves in a privileged but very difficult position, as go-betweens and translators, consciously trying to understand and explain the gaps between the varieties of work to help improve system performance and human wellbeing, without unintentionally bringing about harm along the way.

Conclusion

The early ergonomists were right. The analysis of work cannot be limited to work as prescribed in procedures etc (le travail prescrit), nor to the observation of work actually done (le travail réalisé). Similarly, it cannot be limited to work as we imagine it, nor work as people talk about it. Only by considering all four of these varieties of human work can we hope to understand what’s going on.

References

Allspaw, J. (2016). Human Factors and Ergonomics Practice in Web Engineering and Operations: Navigating a Critical yet Opaque Sea of Automation (Chapter 25). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

Catchpole, K. and Jeffcott, S. (2016). Human Factors and Ergonomics In Healthcare: Challenges and Opportunities (Chapter 13). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

Cook, B. and Cooper, R. (2016). Human Factors Practice in Military Aviation: On Time and On Target (Chapter 16). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

Hollnagel, E., Leonhardt, J., Shorrock, S. and Licu, T. (2013). From Safety-I to Safety-II. A White Paper. Brussels: EUROCONTROL Network Manager. [pdf]

Hollnagel, E. (2016). The Nitty-Gritty of Human Factors (Chapter 4). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

Miles, R. and Randle, I. (2016). Human Factors and Ergonomics Practice in the Oil and Gas Industry: Contributions to Design and Operations, (Chapter 17). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

O’Flanagan, B. and Seeley, G. (2016). Human Factors/Ergonomics Practice in the Rail Industry: The Right Way, the Wrong Way and the Railway (Chapter 14). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

Ombredane, A. and Faverge, J.-M. (1955). L’analyse du travail. Paris: PUF.

Pariès, J. and Hayward, B. (2016). Human Factors and Ergonomics Practice in Aviation: Assisting Human Performance in Aviation Operations (Chapter 15). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

Secretary of State v. ASLEF (No. 2) [1972] 2 All E.R. 949 at 959 (N.I.R.C.) per Sir John Donaldson. Cited in William Twining and David Miers (2010). How to Do Things with Rules. Cambridge University Press. p. 41.

Wilkinson, J. (2016). Human and Organisational Factors in Regulation: Views from a Former Regulator (Chapter 20). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.

van Winsen, R. and Dekker, S. (2016). Human Factors and the Ethics of Explaining Failure (Chapter 5). In S. Shorrock and C. Williams (Eds.), Human factors and ergonomics in practice: Improving system performance and human well-being in the real world. Boca Raton, FL: CRC Press.


Just culture: Who are we really afraid of?

Douglas Sprott CC BY-NC 2.0 https://flic.kr/p/5orYgw


When we think about just culture, we usually think about accidents and incidents, associated ‘honest mistakes’ and ‘negligence’ (by whatever name), as well as official responses to these, at company and judicial level. The notion of just culture is driven partly by fear: fear of being judged and blamed, and especially fear of being blamed unfairly. The fear is felt most strongly by operational staff, who are at the sharp end of organisations and have sometimes faced disciplinary or legal action for their parts in accidents. This issue was discussed recently at a conference on just culture and the judiciary. The keynote speaker was Martin Bromiley, who talked about just culture in healthcare in the UK (Bromiley, 2016a). He and others raised the issue, both formally and informally, that judgements do not just come from the judiciary. After many hundreds of hours spent talking to thousands of people in interviews and focus groups – from operational staff to Board members and judiciary – about aspects of safety and fairness, the question that came to my mind was “Who are people really afraid of?”…and why?

Most of my time has been spent talking to operational staff (e.g., air traffic controllers and technicians, and others in different industries). Since they do the sharp-end operational work, it is important to understand what and how they think, and how things really work at the sharp end. But I have also spent much time talking to other stakeholders: specialist and support staff, middle managers, senior managers/directors, national judiciary, and policy makers. Whenever talk turns to issues of fairness or justice, there is usually a ‘them and us’ tone to discussions. To front-line staff, it’s the actions of ‘us’, and the responses of ‘them’.

But who is ‘them’ really? The ultimate judge of whether an action constitutes gross negligence is the judiciary. Of course, people have been prosecuted for gross negligence, such as flying while drunk. People have sometimes been prosecuted and even convicted for what many would now term ‘honest mistakes’ – perhaps actions or decisions that many others in the same situation, with similar training or experience, could have made at some time or other. But in aviation, these cases are relatively rare. In the past 40 years, the number of pilots, engineers and air traffic controllers convicted is low. Prominent cases involving commercial flights include: Zagreb BA476 & JP550 (1976), Athens SWR213 (1979), Mt Crezzo ATI460 (1987), Habsheim AF296 (1988), London Heathrow BA012 (1989), Schiphol DAL039 (1998), Yaizu JAL907 (2001), Linate MD87 & C525 (2001), Überlingen (2002), Palermo TUI1533 (2005), Athens HCY522 (2005), and Mato Grosso GLO1907 & N600XL (2006) (Smoker and Baumgartner, 2016). Following other accidents, individuals have faced charges, which were later dropped, and others have been acquitted. In aviation, relatively few judgments are made by the judiciary against front-line staff, especially for accidents where there is no sign of reckless or grossly negligent behaviour (what constitutes ‘gross negligence’ is a matter of legal judgment and also varies between states; some do not differentiate between ‘negligence’ and ‘gross negligence’, while other countries have special exceptions for some professionals for ‘minor’ cases of ‘negligence’).

Indeed, people in air traffic management rarely mention the judiciary as a source of significant concern in discussions about just culture in a normal work setting. Even when asked directly, many people have not given prosecutors and judges much thought. People are often unsure what would happen if they got caught up in a prosecution. This is not to say that the role of the judiciary is unimportant, or that legal and other support in the case of a prosecution is unimportant; it is very important. The judgement of the judiciary is just not something that seems to weigh heavily on people’s minds when the topic of ‘just culture’ arises in discussions about safety – at least in air traffic management. This would be different for other professionals in transportation (e.g., pilots and some train drivers), who travel through different jurisdictions, and for healthcare professionals, who face complaints from patients and are arguably far more exposed.

People in air traffic management also do not often talk about the role of senior management with respect to just culture. The Board is responsible for policy (including just culture policy) but does not frequently make judgements about the performance of individuals. It is not the judgement of the CEO that people seem to fear, nor usually that of the Safety Director (where one exists). In some cases, the HR Director may be a person of concern, but only if judgements about performance are passed to them from someone else, for instance a Director of Operations. Directors of Operations usually come from an operational background themselves (though they rarely remain operational, partly due to lack of time). But the Director of Operations is usually only of concern if judgements about performance are passed to him or her from someone else, for instance via an investigation.

Indeed, the main focus of discussion with operational staff about just culture usually concerns investigations and investigators. Being blamed in the context of a safety investigation is contrary to the purpose of a safety investigation, partly because it is fatal for an occurrence reporting system, and for any subsequent investigations and learning. Trust is built up slowly between people, especially in organisations made up of silos, but it is destroyed in an instant. People immediately lose trust in safety processes and practitioners when they perceive that they are being blamed for events in the context of a safety investigation. Again, investigators are typically from an operational background. Some remain operational, while others do not (for instance due to lack of time, or for reasons of competency, age, or health), and tend to become more distanced from the operational work.

Quite often, what is really interesting about discussions concerning safety and justice is what is not said openly, when it is clear that something is being omitted. These are the taboo subjects. Sometimes, people indicate that there is a problem and that they will not discuss it in a group, but will mention it in private (interviews), in breaks, and as they are about to leave the room (‘door handle’ moments). Just culture among colleagues is one of these issues. What people fear most of all is not the judgement of those who are most distant from the work, whose judgements are relatively rare. What people fear is the judgement of those closest to the work – their co-workers. Except in the most open of cultures (rare exceptions, such as the Scandinavian countries), people will usually avoid discussing this openly in a group setting. People fear raising the issue of judgement and blame by colleagues because they fear being judged and blamed for raising the issue. A doctor friend of mine who is the head of a department in a French hospital once told me about his attempt to discuss just culture with his colleagues (the doctor is not French). He decided to recount a story of an ‘honest mistake’ in a messy situation, of the sort that is typical of healthcare. After telling the story, his colleagues pounced on him, pointing out what he did wrong and what he should have done. It was the last time he tried such an exercise. This experience is far from unique. Indeed, in healthcare, clinicians seem to fear most the judgement of other clinicians (Bromiley, 2016a). Human beings tend to have a strong need to belong and a strong need for group identity. Discussing internal threats to that group identity can itself seem threatening.

The judgements of those closest to us are of most concern to us for two key reasons. First, we have to continue working with or alongside these people from one day to the next. Strained relations make for an unpleasant working life. Second, people in the same sort of position have an advantage that is not present in those who are far removed from the work (e.g., senior management or the judiciary). The advantage is this: our colleagues and co-workers know how the work is done and have an imagination of how they think they would have done the work (i.e., better) (Shorrock, 2016). They have confidence that this imagination is what would actually have happened, but this is far from the case (Bromiley, 2016b). While a co-worker’s Work-as-Imagined is not another worker’s Work-as-Done, it is closer than the Work-as-Imagined in the minds of anyone else. Co-worker judgements therefore hit closer to home. Co-workers can point out our errors in the same way that we can point out theirs. They know the work and may do it themselves, so their judgements carry most weight.

It is not just operational staff, of course. It is all of us. Think about how you drive. If you are like most people, you probably spend much more time judging others’ driving (including, or especially, your partner’s) than you spend thinking about how you are driving. In any case, we think our driving is better than average (Roy and Liersch, 2014). Our self-serving bias is strong.

We are, then, as groups, our own worst enemies. We demand fairness from others (especially other professionals – out-groups), but continue to blame others unfairly. At this point, you might complain that “judgement by colleagues is less important than judgement by a judge!”. It is probably true that, when it comes to justice, an individual judgment by a judge (especially a conviction) is more important than a judgement by a colleague. But to assume greater importance for judicial judgments overall would be to be captured by the déformation professionnelle of traditional ways of thinking about safety (Safety-I): that rare adverse events are much more important than everyday work, and that we should therefore focus only on accidents. Front-line staff naturally seem to accept that focusing only on accidents in order to understand a lack of accidents doesn’t make sense, and that to improve safety you have to focus on everyday work, not only on accidents (past or future). It follows, then, that to improve fairness or justice, we have to focus on everyday judgements, not just the rare judgments that arise from judicial investigations (or even safety investigations).

Even accepting that everyday judgements are of high frequency, it is a serious mistake to think they are of low consequence. When we think back to the real impact of co-worker judgements about us and our performance, we find that it can be enormous. Being judged or blamed for our individual part in routine work in a messy situation and complex system, when outcomes are not as planned, leads to a number of negative thoughts and emotions, including resentment, anger, worry, and preoccupation. Being judged can lead to lost sleep, damaged self-image, mental and physical health problems, interpersonal problems, and strained or ruined relationships. Being judged can lead to company disciplinary proceedings and even legal action (defamation, whether slander or libel). On an operational level, blame by colleagues can lead to non-cooperation, such as the withholding of operationally relevant information within or between teams. This, in turn, becomes a safety issue.

Demanding justice from an out-group while denying it to others in our in-group is understandable. Constructing a common external threat (out-group derogation) seems to help internal solidarity. But when the real threat is internal, this is a kind of hypocrisy that we should address. And while front-line staff are the most vocal supporters of just culture, for good reason, perhaps the judiciary are the unsung champions of just culture. The judiciary spends weeks and months collecting factual and other information, reviewing and discussing it, and deliberating upon it, before forming judgments – all for an event that may have lasted minutes. This difference between the time frame for Work-as-Done and Work-as-Judged is perhaps one reason why we focus on the judgments formed in criminal and safety investigations, and it is a fair point. We think that such judgments should never be unjust, because there is sufficient time to make a just judgment (while ignoring the constraints of national legal systems and penal codes). Our everyday judgements, on the other hand, are formed and expressed in haste, in seconds or minutes – a similar timeframe to that of the work being judged.

So what to do? Perhaps the most important actions we can and should take concern us, not them. Addressing our frequent, everyday blaming and shaming judgements in response to outcomes-not-as-planned will likely have the most impact on human wellbeing and safety.

Be mindful of your personal reaction to failure

  1. Reflect on your initial internal reactions. How did you react emotionally to what you observed or heard? What feelings did you experience? Your immediate internal reaction may have been anger or fear, for example. People who are involved and uninvolved will tend to have different internal reactions. Those directly involved, and who could be judged, may be more likely to experience fear. Those uninvolved, or involved but unlikely to be judged, may be more likely to experience anger, or perhaps sympathy (via identification).
  2. Reflect on your judgements and evaluations. Following these reactions and feelings, what did you think about all of this? How did you interpret and evaluate what happened at the time? When considering your involvement in an adverse event or unwanted situation, you may have judged yourself harshly. Perhaps you felt disappointed in yourself, or even doubted your competency. When evaluating another person’s involvement, consider whether your focus is on the individual or the situation and system, and to what extent you are judging and blaming an individual (whether or not this is expressed). The focus should not be on part of the picture, but on the whole picture. At this stage, be mindful of the outcome bias. Knowing the outcome of an event changes the way that you think about the actions and decisions that took place in the run-up to that outcome. Experimental studies have shown repeatedly that the exact same performance will be judged differently depending on the outcome. This is confirmed in our everyday experience. Often, what makes performance ‘bad’ is not the performance itself, but the outcome (e.g., an accident). Had there been no accident, the performance would often be judged as normal (‘uneventful’), perhaps even rather efficient or effective. Be mindful also of the fundamental attribution error. We have a tendency to form dispositional rather than situational explanations for others’ behaviour. We are prone to blame, but this can be overcome with education, at least at the stage of judgement and evaluation (if not initial internal reaction). If you are involved in investigation, then your responsibility to reflect on your judgements and evaluations is greater still.

Be mindful of your interpersonal reaction to failure

  1. Empathise. Empathise with others to understand their local rationality. If we really want a just culture, then we have to empathise with others and understand why what they did made sense to them at the time. Try to understand the background situation and the person’s world via ‘person empathy’ or ‘background empathy’. Also try to develop a moment-by-moment empathy for the person’s experience, cognitively, emotionally, and physically, using ‘process empathy’. Seek not to judge, but at least to understand. This is an interpersonal activity because it will tend to involve talking to people. Empathy is not a solo activity. It has to be experienced by the other person. Carl Rogers (1957) noted that, “Unless some communication of these attitudes has been achieved, then such attitudes do not exist in the relationship as far as the client is concerned.”
  2. Consider needs. Based on this empathic understanding, think about what others need, and what would get in the way of their needs being met. How would they like to be treated, helped or supported? It might be helpful to ask these questions of yourself, thinking back to a situation where you were in a similar position, and when your needs were met or not met. But remember that your needs are not theirs.
  3. Apologise. We all get it wrong and judge or blame others unfairly from time to time in everyday life, including at work. We cannot stop others from doing this, and we will sometimes relapse into blame ourselves. But we can keep our side of the street clean when we do slip up, by apologising. Some people find this easier than others, but it requires little effort other than swallowing one’s pride. Express how you jumped to judgement without thinking it through or thinking about what they need. Consider how the above might be applied. There is little more restorative in a relationship than an honest and unreserved apology, and perhaps an offer to make amends.

So to answer the question, “Just culture: Who do we fear?”, it is the judgement of those close to us – in or from the same world – that we fear the most. It is also those close to us whom we can help the most.

References

Bromiley, M. (2016a). Healthcare’s just culture journey: A long and winding road. Just Culture and the Judiciary. EUROCONTROL Experience Sharing Enhanced SMS ES2-WS04-16 seminar, “Just culture across industries: Learning from each other”, Lisbon, 22-23 November 2016.

Bromiley, M. (2016b). Foreword. In S. Shorrock and C. Williams (Eds.), Human Factors and Ergonomics in Practice: Improving System Performance and Human Well-being in the Real World. CRC Press.

Rogers, C. (1957). The necessary and sufficient conditions of therapeutic personality change. Journal of Consulting Psychology, 21, 95-103.

Roy, M. M. and Liersch, M. J. (2014). I am a better driver than you think: Examining self-enhancement for driving ability. Journal of Applied Social Psychology, 43(8), 1648–1659. DOI: 10.1111/jasp.12117

Shorrock, S. (2016). Work-as-Imagined, Work-as-Done, and Just culture. EUROCONTROL Experience Sharing Enhanced SMS ES2-WS04-16 seminar, “Just culture across industries: Learning from each other”, Lisbon, 22-23 November 2016.

Smoker, A. and Baumgartner, M. (2016). IFATCA – Experience with accused individuals. Just Culture and the Judiciary. EUROCONTROL Experience Sharing Enhanced SMS ES2-WS04-16 seminar, “Just culture across industries: Learning from each other”, Lisbon, 22-23 November 2016.

Related posts

Safety-II and Just Culture: Where Now?

Six Thinking Hats for Safety

Exploring experiences using Schein’s cycle

The whole picture

Systems Thinking for Safety: From A&E to ATC

Systems Thinking for Safety: Ten Principles

Occupational Overuse Syndrome – Human Error Variant (OOS-HEV)

Human Factors at the Fringe: My Eyes Went Dark

Acknowledgement

This post was inspired by several conversations and presentations at the conference mentioned in the post.

Note: I have tried to use the British spelling of ‘judgement’ for the everyday use of the term, and the British legal spelling (and routine American English spelling) ‘judgment’ for legal judgments. I have probably not achieved this aim.


Human Factors at The Fringe: Every Brilliant Thing

You’re six years old. Mum’s in hospital. Dad says she’s done something stupid. She finds it hard to be happy. You make a list of everything that’s brilliant about the world. Everything worth living for. 1. Ice Cream 2. Kung Fu Movies 3. Burning Things 4. Laughing so hard you shoot milk out your nose 5. Construction cranes 6. Me A play about depression and the lengths we go to for those we love. “Heart-wrenching, hilarious… possibly one of the funniest plays you will ever see” **** The Guardian

 Every Brilliant Thing, by Duncan MacMillan and Jonny Donahoe/Paines Plough, 28 August, Roundabout @ Summerhall, Edinburgh


There are a few words you wouldn’t associate with depression, such as ‘funny’, ‘heartwarming’ and ‘inspiring’. But these are words that would apply to this one-man play about a seven-year-old boy’s reaction to his mother’s depression and suicide attempt. Johnny decides to create for his mother a list of every brilliant thing in his life, such as ice cream, the colour yellow, chocolate, rollercoasters and being allowed to stay up late. He hopes that his list will cheer up his mother, maybe even help her realise that life is worth living.

But as he grows older, he continues the list, and the brilliant things expand enormously. It reminds him of why life is worth living, despite his own struggles in life, including depression. The brilliant things, more than the experience that prompted him to write them down, seem to define his life. It’s not that everything is brilliant: as Johnny says, “if you got all the way through life without ever being heart crushingly depressed, you probably haven’t been paying attention”. It’s just that there are usually many more brilliant things than bleak things, if we really do pay attention.

This play is about a boy and a family, but the premise clearly applies more widely, to communities and organisations. Even when bad things happen or are happening, it is usually the case that there are many more good things. But so often we don’t pay attention to them. What is good about this community or organisation? What gives life? What brings joy? Very often, we don’t really know because we have never turned our attention to the question. Instead we tend to focus on deficits – things that are wrong or missing – and associated needs. Anyone who does groupwork with organisations will know that deficit-based discussions can be rather downbeat and dispiriting. The opposite is true in asset-based discussions. And this is reflected in Every Brilliant Thing. As the play progresses, audience members read out items from Johnny’s list of brilliant things as the actor calls out their numbers. Audience members also play out various characters in Johnny’s life. Everyone seemed to do so joyfully.

Perhaps we should be more like Johnny, understanding deficits and attending to associated needs, but first understanding the assets that we value. If an organisation is relatively safe, why is this? What is going on that makes it a safe organisation? For sure there will be problems and risks and threats to safety, but unless we understand first what we have – what makes it safe (or healthy, or fun, or meaningful) – we won’t know what to protect, nourish, and grow. How many organisations and communities make an inventory of every brilliant thing? I have recently paid much more attention to this question, inspired by asset-based approaches and Safety-II. When asked, people list all sorts of things, but they most often concern people and their skills, knowledge, values, and relationships. What they also say is this: “No-one has asked that before.”

Every Brilliant Thing shows how joy can exist despite bleak situations. By attending to the brilliant things that keep us going – as individuals, families, communities and organisations – we find that the things that we had taken for granted, or not even noticed, really do need to be cherished.

See also:

Human Factors at The Fringe

Human Factors at The Fringe: The Girl in the Machine

Human Factors at The Fringe: My Eyes Went Dark

Human Factors at The Fringe: Nuclear Family

Human Factors at The Fringe: Lemons Lemons Lemons Lemons Lemons


Human Factors at The Fringe: Lemons Lemons Lemons Lemons Lemons

Walrus’ award-winning show returns to Edinburgh in Paines Plough’s Roundabout. ‘Let’s just talk until it goes.’ The average person will speak 123,205,750 words in a lifetime. But what if there were a limit? Oliver and Bernadette are about to find out. This two-person show imagines a world where we’re forced to say less. It’s about what we say and how we say it; about the things we can only hear in the silence; about dead cats, activism, eye contact and lemons, lemons, lemons, lemons, lemons. ‘About as promising as debuts get.’ (Time Out).

Lemons Lemons Lemons Lemons Lemons, by Sam Steiner, 28 August, Roundabout @ Summerhall, Edinburgh


What if we had a daily limit on the number of words we could speak? This is the premise of this experimental political fantasy focusing on the relationship of Bernadette (a lawyer) and Oliver (a musician) in the context of a new ‘hush law’. The law is introduced by the government to ration citizens to 140 words each per day. Oliver campaigns against the law while Bernadette seems not to believe it will actually be voted into effect. Ultimately, for unexplained reasons, it is.

The play is essentially about the dynamics of Bernadette and Oliver’s relationship and how the prospect and reality of the hush law affects their communication. It skips between the couple’s conversations in the past, when they could speak freely, and the present, when they are restricted.

The couple struggle to manage their lexical allowances. On the first day of the hush law, Bernadette wastes nearly half of her allowance ordering a smoothie. Inconsistent use of the quota between the pair causes tension and raises questions about the importance of the other person and of the relationship. When Oliver uses his daily limit before returning home, Bernadette is frustrated and comes up with a string of random words to spend the rest of her allowance, using up her last five with “lemons, lemons, lemons, lemons, lemons”. With varying degrees of success, they learn to monitor how they use their word quota over the course of each day, greeting one another with a number reflecting their available words. We are left to consider a number of questions. What words would we use and leave out, when every word counts? Who would we save our words for? How might we learn to communicate without words?

But the backstory is a restriction of freedom of speech and the social and political implications. The law has some strange effects in society. Songs gradually lose their words because it takes more than a day for the artists or listeners to sing a song. Perhaps most intriguing to me was when Oliver exclaimed that the law is inherently discriminatory because, even if everyone has the same limit, those with less power need more words, while those in positions of power already have the influence they need. They have less need for words. This was a thought that lingered after the play.

In organisations, and society, words are already funnelled and filtered. The ‘140’ limit is obviously borrowed from Twitter, which has today excluded quoted tweets, photos, GIFs, videos and polls from its famous 140-character limit. And between the various strata of organisations and society, the possibility to communicate upwards diminishes with altitude. A front-line worker usually has little or no direct access to the Board, for instance. If they want to express anything, they may have a small quota of words to do so, if they are lucky. On matters of safety, individuals may indeed have a limit of around 140 words to pass a concern to senior management, perhaps through a reporting scheme.

When we cannot speak out adequately in organisations and society, the concerns and messages do not go away. They take on new forms: learned helplessness, revolt, or anything in between. In Lemons, the characters learn new workarounds: more efficient words, blends and portmanteaus, rudimentary Morse code (as with Twitter, where people use images containing many more words, or a series of tweets). Some of this is probably not what the lawmakers imagined. In organisations and societies, competing means of communication emerge in response to limits on communication, including behaviours (e.g., facial expressions, postures, whistleblowing, demonstrations, strikes, riots) or other outcomes (e.g., accidents). As I often say to people in positions of power in organisations, people’s concerns and needs remain whether or not we listen to them. But by spending more time listening – allowing time for more words – everyone’s needs can be met, to some extent at least, before it is too late.


See also:

Human Factors at The Fringe

Human Factors at The Fringe: The Girl in the Machine

Human Factors at The Fringe: My Eyes Went Dark

Human Factors at The Fringe: Nuclear Family


Human Factors at The Fringe: Nuclear Family

Nuclear Family is a gripping piece of interactive theatre which follows Joe and Ellen, nuclear plant workers and siblings, faced with an imminent disaster. Audience members will be privy to what could possibly be their last hours as they struggle with the biggest decisions of their lives. In a heated round table discussion, the audience will experience the pressure of making life and death decisions.

Nuclear Family, 3-29 Aug, Assembly, Edinburgh


To have any chance at understanding why people do the things they do, you have to put yourself in their shoes. Nuclear Family is immersive, interactive theatre that requires you to do just that. The play begins with an introduction by a suited convenor, who explains that you are part of a board of inquiry into an explosion at the Ashtown nuclear power plant in 1996. For the next hour, your decisions are linked to those of two security guards and siblings Joe and Ellen Lynum, who work in the plant. Audience members are seated around the cast and the set – a grim, bunker-esque security office with a desk, some 1990s PCs, telephones, and other peripherals. Joe and Ellen are at the sharp end of the unfolding disaster and the focus is on their decisions, which happen to be yours.

The audience members were taken ‘inside the tunnel’ as events unfolded, watching the ‘video footage’ – the acted scenes. After each superbly acted scene leading up to a critical decision point, we were given short audio recordings of interviews and some documentation, such as police and employment records. We had two minutes to make a binary decision: what would be a reasonable or appropriate thing to do next, given the information available and the desired outcome? As an audience, we had to vote on a collective decision. These decisions – four or five in all – were moral dilemmas. Questions of rule-breaking, relationships and competence arose, and each decision had implications, for liberty and loss of life, for instance. Each decision contributed to an unfolding disaster, but the decisions were set against poor management – under-resourcing and reported problems that had never been acted on.

As the audience made each decision, we could not know the consequences until they arose. It was clear that there were various routes through the mess and, because of this, we probably forgot that the ending was actually certain: an explosion. It became a choose-your-own-disaster, but one where we were fooled into counterfactually thinking we could mitigate the outcome, and maybe prevent it. We felt the regret and anger for each decision in real time as the next scene unfolded.

This is innovative theatre that teaches the audience about local rationality. The audience, like Joe and Ellen, do what seems reasonable at the time. In hindsight, each decision seems like a bad decision, but at the time each decision is just that: a decision. The decisions seemed reasonable to most people, though there was minority dissent for some decisions, which was not explored. Interestingly, the minority could feel some anger that their preferred option was not taken: even though the consequences of neither option were known at the time, the unknown consequences of the unmade decision seemed better.

The division of the storyline into decision points was reminiscent of the method in Sidney Dekker’s Field Guide to Understanding ‘Human Error’, which suggests breaking down a detailed timeline into critical junctures. But there are crucial differences between an accident investigation, Nuclear Family, and real-time operations. In an accident investigation, you have much of the information and you have knowledge of the final outcome and the outcomes of each decision. You construct the critical junctures (based on the knowledge you now have) and you have many hours or days available to analyse them. In Nuclear Family, you have some background information and you have knowledge of the final outcome but not the outcome of each decision. You are told the critical junctures and you can pause for a couple of minutes while you make a decision. In real-time operations – in control rooms, cockpits, operating theatres – you don’t have all the information and you don’t know the final outcome, nor for sure the outcome of the decision you are about to take. You may not know in advance that a juncture or decision point is critical and you can’t necessarily pause for long to make a decision.

Understanding local rationality demands a level of empathy, and Nuclear Family cultivated both background empathy (or person empathy) into the characters, and process empathy for their moment-to-moment experience – cognitive, emotional, and social. It is hard to think of a better medium through which to experience this so efficiently than interactive theatre.

See also:

Human Factors at The Fringe

Human Factors at The Fringe: The Girl in the Machine

Human Factors at The Fringe: My Eyes Went Dark
