HindSight 27 on Competency and Expertise is out now!

HindSight Issue 27 is now available in print and online at SKYbrary and on the EUROCONTROL website. You can download the full issue, including an online supplement, and individual articles. HindSight magazine is free and published twice a year, reaching tens of thousands of readers in aviation and other sectors worldwide. You will find an introduction to this Issue below, along with links to the magazine and the individual articles.



Welcome

“Welcome to Issue 27 of HindSight magazine. The theme of this issue is ‘Competency and Expertise’. It is a topic that links to all previous Issues of HindSight.

Our ability to work effectively depends on the competency and expertise of front-line practitioners and all involved in the operational, technical, support, and management functions. Safety isn’t something that is just ‘there’ in the aviation system. People actively create safety. But how do we create safety? And what do we need to do to help ensure that we can continue to do so? Competency and expertise is an important part of the answer.

In this issue, we have articles from operational, safety, human factors and psychology specialists. This is part of what makes HindSight unique – it brings together those who do the operational work, those who support operational work in a variety of ways, and those who study operational work to help better understand it. We are proud to give a voice to some of the world’s leading academic thinkers, and to operational and support specialists who have stories, experience and practical insights to convey. The key is that the articles are interesting and useful to the primary readers of HindSight: air traffic controllers and professional pilots, and hopefully to others who support operational work. Do we succeed? Let us know!

In this Issue we explore the nature of competency and fundamental applications and implications for operational training, selection, and procedures, including non-technical skills and contingency. We then zoom out to regulatory and future issues. The regular feature on ‘Views from Elsewhere’ continues with articles from surgery and rail. These articles raise questions for us in aviation, and provide some practical ideas. And in this issue we have articles drawing from the world of sport. HindSight continues online over at SKYbrary with further articles in the online supplement, from aviation and other industries, on the theme of competency and expertise.

We also have ‘What we do’ good practice snippets. We’d particularly like to hear from more readers for this section. And this brings me to the next Issue, which will feature articles on ‘Change’. All readers have been affected by changes, in procedures, regulations, technology, people, incentives, organisation, etc. The pace of change will only increase. How do we change to adapt to the dynamic world of air traffic management? And how do we as individuals, teams, and organisations adapt to these changes? Let us know, in a few words or more, for your magazine on the safety of air traffic management – HindSight.”

HindSight 27 Articles

Foreword

Editorial

Op-ed

Fundamental Issues

Non-technical Skills

Contingency

View from the Air

Regulatory Issues

Future Issues

Views from elsewhere

What we do

Interview

HindSight 27 On-line Supplement

See all editions of HindSight magazine


Twelve Properties of Effective Classification Schemes

Most organisations seem to use a classification system (or taxonomy) of some sort, for instance for safety classification, and much time is spent developing and using such taxonomies. Importantly, decisions may be made on the basis of the taxonomy and associated database outputs (or it may be that much time is spent on development and use, but little happens as a result). There is therefore a risk of time and money spent unnecessarily, with associated opportunity costs. Still, taxonomies are a requirement in all sorts of areas, and several things should be kept in mind when designing and evaluating a taxonomy. This post introduces twelve properties of effective classification systems.


Effective classification schemes are difficult to develop. The following properties need to be considered to develop a valid classification scheme that is accepted and produces the desired results.

1. Reliability

A classification scheme must be used reliably by different users (inter-coder reliability or consensus) and by the same users over time (intra-coder reliability or consistency). Reliability will depend on many factors, including the degree of true category differentiation, the adequacy of definitions, the level of hierarchical taxonomic description being evaluated, the adequacy of the material being classified, the usability of the method, the adequacy of understanding of the scheme and method, and the suitability of reliability measurement. Adequate reliability can be very difficult to achieve (see Olsen and Shorrock, 2010 $$), and the heterogeneity of methodologies employed by researchers measuring reliability of incident coding techniques makes it more difficult to critically compare and evaluate different schemes (see Olsen, 2013 $$). However, if a classification scheme cannot be used reliably, then it is usually fair to say that it is not fit for purpose, especially for analysing large data sets (though it may be that reliability is achieved for certain users in certain contexts).
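
As an aside, one common way to quantify inter-coder reliability is Cohen’s kappa, which corrects raw agreement for agreement expected by chance. The sketch below is illustrative only – the category labels and codings are hypothetical, and real reliability studies involve many more design decisions (sampling, coder training, the level of the hierarchy being tested).

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two coders to ten incident reports
coder_a = ["perception", "memory", "action", "memory", "perception",
           "action", "memory", "perception", "action", "memory"]
coder_b = ["perception", "memory", "action", "perception", "perception",
           "action", "memory", "memory", "action", "memory"]

print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # kappa = 0.70
```

Kappa runs from below zero (worse than chance) to 1 (perfect agreement); by one common convention, values below about 0.6 would prompt a closer look at category differentiation, definitions and guidance material.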

2. Mutual exclusivity

Categories should be mutually exclusive on the same horizontal level, so that it is only possible to place subject matter into one category. This relates to reliability. There are varying degrees of mutual exclusivity, since categories often have things in common, or overlap to some degree, depending on the criteria. Mutual exclusivity tends to be lower for abstract or unobservable concepts. This is especially true for psychological labels, and even more so for those that are all-consuming (such as ‘situation awareness’, ‘mental model’, or ‘information processing’). For properly differentiated categories with clear definitions, appropriate guidance can reduce sources of confusion (see Olsen and Williamson, 2017 $$).

3. Comprehensiveness (or ‘content validity’)

It should be possible to place every sample or unit of subject matter somewhere. However, choices must be made about the granularity of categories. Highly detailed classification schemes and classification schemes that offer little granularity suffer from different problems concerning mutual exclusivity, usability, face validity, usefulness, etc.

4. Stability 

The codes within a classification system should be stable. If the codes change, prior classification may be unusable, making comparison difficult. On the other hand, it should be possible to update a classification scheme as developments occur that truly affect the scope and content (e.g., new technology). Ideally, changes should have minimal impact.

5. Face validity 

A classification system should ‘look valid’ to people who will use it or the results emanating from it. An industry classification scheme should incorporate contextual and domain-specific information (‘contextual validity’), but should also sit comfortably with pertinent theory and empirical data (‘theoretical validity’). The best approach here is to stick with what is well-understood and accepted.

6. Diagnosticity (or ‘construct validity’)

A classification scheme should help to identify the interrelations between categories and penetrate previously unforeseen trends. This may relate more to the database and method than the taxonomy itself.

7. Flexibility

A classification scheme should enable different levels of analysis according to the needs of a particular query and known information. This is often achieved by a modular and hierarchical approach. Shallow but wide taxonomies tend to suffer from low flexibility.
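
To make the modular, hierarchical idea concrete, here is a minimal sketch (with hypothetical category names and data) in which each code carries its full path through the hierarchy, so the same classified events can be analysed at whatever level a query needs – including rolling up to a parent category where a detailed category proves unreliable, as discussed later in this post.

```python
from collections import Counter

# Hypothetical hierarchical taxonomy: each leaf code stores its full path,
# so classified data can be queried at any level of the hierarchy.
TAXONOMY = {
    "P1": ("perception", "visual", "late detection"),
    "P2": ("perception", "visual", "misidentification"),
    "P3": ("perception", "auditory", "mishearing"),
    "M1": ("memory", "prospective", "omitted action"),
    "M2": ("memory", "working", "forgotten instruction"),
}

def counts_at_level(coded_events, level):
    """Count classified events at a chosen depth of the hierarchy."""
    return Counter(TAXONOMY[code][level] for code in coded_events)

# Hypothetical coded incident data
events = ["P1", "P2", "P1", "M1", "P3", "M2", "M1", "P1"]

print(counts_at_level(events, 0))  # broad: Counter({'perception': 5, 'memory': 3})
print(counts_at_level(events, 1))  # intermediate: e.g. 'visual' vs 'auditory'
print(counts_at_level(events, 2))  # most detailed level
```

A shallow-but-wide scheme is the degenerate case of this structure – every code sits at a single level – which is one reason it offers so little flexibility for analysis.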

8. Usefulness

A classification scheme should provide useful insights into the nature of the system under consideration, and provide information for the consideration of practical measures (e.g., for improvement).

9. Resource efficiency

The time taken to become proficient in the use of a classification scheme, collect supporting information, etc., should be reasonable. Continued difficulties in using a classification scheme, after initial training and supervised practice, usually indicate a design problem and signal the need for (re-)testing.

10. Usability

A classification scheme should be easy to use in the applied setting. This means that the developers should be able to demonstrate a human-centred design process akin to ISO 9241-210. The most relevant aspects of usability should be determined. For instance, some users may have formal training in the use of the classification scheme, little time to make inputs, limited understanding of terms and acronyms, etc.

11. Trainability

It should be possible to train others how to use the classification scheme and achieve stated training objectives, including any required levels of reliability. In some cases, there may be valid reasons to go only to the original developers for training (e.g., the taxonomy is sensitive or commercialised). In such cases, there is a need to consider why this is the case, and the possible related implications (e.g., lack of peer-reviewed, public-domain accounts of development; lack of independent testing).

12. Evaluation

Classification schemes should normally be amenable to independent evaluation. This means that they must be available and testable on the requirements above using an appropriate evaluation methodology. This will of course be more difficult for taxonomies that are restricted for various reasons (commercial, security, misuse prevention, etc).

Summing up…

In practice, it will not be possible to achieve anywhere near perfection on these criteria. Even where evaluation results are very positive (assuming there is any evaluation), experience in use will usually be different (and usually worse from the users’ points of view) and undocumented. Trade-offs must be made and some of the properties above will be more important than others, depending on the application. For instance, in some cases, the priority may be to help investigators to ensure that relevant issues have been considered, perhaps also to model the interactions between them (see Four Kinds of Human Factors: 4. Socio-Technical System Interaction). In other cases, the priority may be to help analysts understand prevalence and trends in very large data sets. In still other cases, the priority may be to help users with little time or knowledge (‘casual users’) make basic inputs. These user groups have different needs and expectations.

It may also be necessary to use a taxonomy that is not adequate on some of the criteria above. In all cases, there is a need to understand the possible risks (e.g., time spent using the taxonomy; decisions made on the basis of the data) and to manage these risks (e.g., ignore data for categories that are known to be unreliable; merge categories; analyse data based on a hierarchically higher category/level up). However, three basic activities should be undertaken to help achieve adequate validity:

  1. Involve appropriate stakeholders in taxonomic development and evaluation, with a focus on understanding their needs, the associated taxonomic requirements, and the trade-offs between requirements. This should include people who understand human-centred design, taxonomy and all relevant aspects of the scope of the classification scheme.
  2. Review relevant literature, analyse the work and system, and review other classification schemes (including ones previously used by any stakeholders).
  3. Test the classification scheme throughout its development and implementation.

Afterword

This post is based on a short briefing note that I produced for an Australian government agency meeting in 2004, not long after being awarded a PhD related to taxonomy (461 pages; reading not recommended, but available on request). Since I sometimes find it hard to find this note, I thought it might be useful to put online, also in the hope that it might help someone else. The post focusses on the properties of effective taxonomies that relate to development, and not so much on the use, mis-use and abuse of taxonomies. Another post, maybe.


Vive la Compétence !

The text in this post is from the Editorial of HindSight magazine, Issue 27, on Competency and Expertise, available for download in late August at SKYbrary here.



Image: Kremlin.ru [CC BY 4.0 (https://creativecommons.org/licenses/by/4.0)], via Wikimedia Commons

This summer, we have been entertained by the world’s best footballers – experts in the game. And it just so happens that Competency and Expertise is the theme of this Issue of HindSight. What might we learn from World Cup 2018? Here are five observations.

1. Past performance does not determine future performance

Some world-leading teams, which were favourites to win, were knocked out early, or didn’t qualify. It just goes to show that we can’t rely on our record. Past success does not guarantee future success. The same tactics that worked in the past will not necessarily work in the future.

But we humans are creatures of habit. In his famous book Human Error, James Reason (1990) described two ways that we rely – or over-rely – on our past experience. The first is similarity matching. When a situation is similar to one experienced previously, we use pattern matching and tend to respond in a similar way to how we did before. The second is frequency gambling. More frequent solutions in roughly similar conditions will tend to prevail. Most of the time, these are efficient ways of working, and efficiency is critical when seconds count. But sometimes, we need to be more thorough, especially when preparing, practising and planning. In any case, we must always adapt to the situation.

Just as past success does not guarantee future success, past failure does not guarantee future failure. Penalties were a case in point. Far from being a lottery that is impossible to rehearse for, or an event for which some teams are ‘jinxed’, this year showed that extensive physical and psychological preparation for such high pressure scenarios pays off.

This is something that I am particularly interested in within ANSPs. Front-line safety-critical staff need and deserve world-class training, especially refresher training. This isn’t a luxury. It’s a necessity, but the sort of necessity that sometimes becomes obvious only in hindsight. The same applies to team resource management training, and other training that integrates lessons from the past. The lessons that stick often come from past failures, but we need to learn those lessons in the right way, in the right context.

2. Teams are more than the sum of their parts…and success runs deep

It became clear in this World Cup that individual expertise does not equal team competence. Teams can suffer through overreliance on star players, but can benefit greatly from teamwork bonded with trust, respect, and an understanding of how each player will respond in a given situation. The same applies in air traffic management. Here, we have procedures to help us predict how others will respond. But procedures do not determine how someone will respond. They do not even apply to all situations, nor prescribe all responses. In this case, trust built from working together helps us to succeed.

In the World Cup, the team is not just the players on the pitch. The best managers set up their teams to win, using all necessary resources, and adapting their style to whatever will bring out the best from each player. Everything is designed and managed for human performance. Hundreds more, including psychologists, dietitians, physiotherapists, etc, help players to perform at their peak. It is similar with ANSPs. While all have similar basic kinds of front-line support staff, some ANSPs have teams of qualified human factors/ergonomics specialists, psychologists, TRM facilitators, CISM peers, educational specialists, etc. Human performance is what we do, but to be sustainably successful, it needs a strong support network.

3. Technology changes the nature of work

The introduction of the video assistant referee showed how technology changes the nature of work. Referees now have to use their expertise to decide when to use the technology. Over-reliance ruins the spontaneity of play. Under-use may bring criticism that not only did a referee not spot a foul or offside, but that they didn’t use a tool that could have shown this: two mistakes, where previously there would have been only one.

In The ETTO Principle, Erik Hollnagel discusses a fundamental trade-off that underlies human performance: the efficiency-thoroughness trade-off. Referees must balance efficiency against thoroughness to harmonise fluidity and fairness. Footballers do the same. If there is time to be thorough to set up a shot, then they will. If not, then they need to strike roughly on target. The right balance is clear in hindsight. For controllers, a very thorough approach to flight data recording with an electronic solution may result in too much head-down time. A very efficient approach may result in over-reliance on memory. The efficiency-thoroughness trade-off is a constant balancing act that is fundamental to the development of expertise.

4. Positivity helps (a lot)

Some teams, such as Belgium and Croatia, played with incredible self-belief and confidence. Positivity permeates effective teams, on and off the pitch, even when things are difficult. Having spent hundreds of hours with different fixed ATC teams, and in different units, it is clear that different teams and units develop particular cultures or personalities. For some, fun, friendliness and positivity are hallmarks. This is something one can see and feel, as an outsider. We all know intuitively that working in a positive, joyful environment brings out the best in us. We all need to work on creating joy in work.

5. Respect is an attitude…and a non-technical skill

For me, two of the highlights of the World Cup were about respect. When England won against Colombia on penalties, manager Gareth Southgate consoled Colombia’s Mateus Uribe, who missed his shot. Southgate was perhaps mindful of the penalty that he missed as an England player. Southgate’s overall demeanour was not only respectful, but empathic, supportive, and measured: a great role model for managers.

Respectful people carry their respect with them wherever they go. The Japanese team – consistent with their culture – cleaned their own dressing room, and left a handwritten note of thanks – in Russian. This courtesy is also a sign of pride in work. Even the Japanese fans helped to clean the stadium after their side was knocked out. Perhaps there should be a separate trophy for the most respectful team and supporters. This year, Japan would have won that trophy.

But France won the World Cup after a superb run of matches. Writing this Editorial from France, it was a pleasure to see the French people celebrate their victory, against a strong and dynamic Croatian team.

Perhaps we can learn from the preparation, planning and practice that went into the World Cup, supporting such expert performances. Vive la compétence !


Human Factors at the Fringe: BaseCamp

A legendary rivalry: one mountain and two climbers seeking to be the best. We join them at basecamp as they prepare for the challenges of the ascent. Invited into separate tents to join just one of the two climbers, audiences experience the subjective and different sides of this rivalry, sharing only one side of the story. As time passes, the voices travel through the camp and the line between truth and lies, fact and fiction, begin to blur. Award-winning Fever Dream Theatre return after their 2016 sell-out hit Wrecked. ‘Stays with you long after you’ve left’ (NME).

Basecamp, Fever Dream Theatre, C South  (Venue 58), Edinburgh, 4-13 & 15-27 August 2018

(See Human Factors at The Fringe for an introduction to this series of posts.)

As you meet the two climbers at the venue – ‘BaseCamp’ – you are taken into one of two tents. The climbers are raising money for their next climb, and you will hear about one of their climbing lives.

You are taken into a canvas tent and the climber starts to talk about climbing – her passion. On being introduced to the two climbers, you noticed tension between them, and as your host continues her story, the knotty relationship between her and her friend in the other tent surfaces. Your host seems honest and credible. In the other tent, people are hearing from the other climber. You don’t know what she’s saying, and perhaps you never will. You will only hear one side of the story. Do you get the feeling that you’re not hearing the whole story, that you are missing part of the picture? Are you curious to find out? Or are you content with the version of events that you have heard?

In many work situations, we rely on the accounts that people provide. This is what I call Work-as-Disclosed.

“This is what we say or write about work, and how we talk or write about it. It may be simply how we explain the nitty-gritty or the detail of work, or espouse or promote a particular view or impression of work (as it is or should be) in official statements, etc. Work-as-disclosed is typically based on a partial version of one or more of the other varieties of human work: Work-as-imagined, work-as-prescribed, and work-as-done. But the message (i.e., what is said/written, how it is said/written, when it is said/written, where it is said/written, and who says/writes it) is tailored to the purpose or objective of the message (why it is said/written), and, more or less deliberately, to what is thought to be palatable, expected and understandable to the audience. It is often based on what we want and are prepared to say in light of what is expected and imagined consequences.” From The Varieties of Human Work

BaseCamp provides two versions of Work-as-Disclosed. To some extent, each may contain P.R. and Subterfuge:

“This is what people say happens or has happened, when this does not reflect the reality of what happens or happened. What is disclosed will often relate to what ‘should’ happen according to policies, procedures, standards, guidelines, or expected norms, or else will shift blame for problems elsewhere. What is disclosed may be based on deliberate deceit (by commission or omission), or on Ignorance and Fantasy, or something in between… The focus of P.R. and Subterfuge is therefore on disclosure, to influence what others think.” From The Archetypes of Human Work: 6. P.R. and Subterfuge

Each version of events seems credible, and as you listen to the story, for nearly an hour, you develop a felt rapport with the reporter. How much do you want to hear a second account? And if you do hear another account, how will you respond to conflicts with the account that you have heard, and trusted?

In these sorts of situations, at home, in organisations, in courtrooms, we often hear and accept the stories that we want to hear. Sometimes we choose not to hear the stories that we don’t want to hear. We may also choose the sequence of the stories that we hear, or else this might be forced upon us by others or by circumstance. In safety investigations, formal inquiries, court cases and disputes of all kinds, who you choose to (or are able to) listen to, and the order in which you listen, will affect the story that you create about what happened. By hearing only from clinician(s), but not the patient and family, for example, your story will lack the perspectives and details that are required for a more thorough understanding. And the order in which you listen to people, even when you listen to many, will affect what you hear in subsequent accounts because it will affect your questions, your mental set and perceptual filter. This is an ‘anchoring’ heuristic that has been researched extensively in the context of judgement. Mostly, people think about anchoring in the context of quantitative judgement:

“In many situations, people make estimates by starting from an initial value that is adjusted to yield the final answer. The initial value, or starting point, may be suggested by the formulation of the problem, or it may be the result of a partial computation. In either case, adjustments are typically insufficient (Slovic & Lichtenstein, 1971). That is, different starting points yield different estimates, which are biased toward the initial values. We call this phenomenon anchoring.” Tversky & Kahneman (1974)

Anchoring can also affect our understanding of stories, by anchoring our expectations, questions, and desire for certainty.

There may indeed be misunderstandings between different parties to an event, because each has partial knowledge and information, because each has different goals and expectations, and because each sees things from different perspectives and resolutions. This is the case with BaseCamp. Not only are there inconsistencies between the accounts, there is a crucial unspoken aspect to each of their thinking about the relationship and the factual and counterfactual aspects of a critical event. They don’t know because it is a Taboo, and you will only know if you hear both stories, or if you can, as two listeners, piece together the aspects of the stories.

In the EUROCONTROL ‘Systems Thinking for Safety: Ten Principles‘ White Paper, the term field experts was used to describe people who possess expertise relative to their own work-as-done.

“The perspectives of field experts need to be synthesised via the closer integration of relevant system actors, system designers, system influencers and system decision makers, depending on the purpose. The demands of work and various barriers (organisational, physical, social, personal) can seem to prevent such integration. But to understand work-as-done and to improve the system, it is necessary to break traditional boundaries.” From: Systems Thinking for Safety/Principle 1. Field Expert Involvement

There are many influences on who we speak to, how, for how long, and when, for example:

  • Desire for certainty – by introducing new accounts, we may well introduce uncertainty, which may bring us anxiety.
  • Prejudice and confirmation bias – we may have a predetermined goal to achieve, or a preconceived idea about what happened and who is responsible for an outcome, and choose (more or less consciously) who and how we speak to people in order to confirm our hypothesis.
  • Time – listening to different accounts takes time, which is always limited. Even when there is time, we may perceive it as better spent on something else (e.g., analysis, reporting, action). Sometimes, system constraints such as regulations can force the issue (see the example here).
  • Theory of causation – we may perceive that those closest to an event (e.g., an air traffic controller) are ‘causal’ to it, and therefore important to hear, while those less close to an event (e.g., a procedure writer) are merely ‘contributory’ to it (and therefore less important to hear). The second group are rarely interviewed, and so we tend to hear the first story, and not the second story (see talk here).
  • Expertise – we may simply lack the competency to investigate an issue appropriately.

Broadly these and other influences relate to barriers to new thinking about systems and safety, outlined here.

Multiple perspectives are not a source of weakness. Diversity is a source of resilience, even – or especially – when accounts do not agree. This is counterintuitive for those who wish to have a straightforward, perhaps mechanistic, account.

This advice might help (adapted from Systems Thinking for Safety Ten Principles White Paper and Learning Cards):

  • Listen to people’s stories. Consider how people can best tell their stories from the point of view of how they experienced events at the time. Try to understand the person’s situation and world from their point of view, both in terms of the context and their moment-to-moment experience.
  • Understand their local rationalities. Be curious about how things make sense to people at the time. Listen to people’s individual goals, plans and expectations, in the context of the flow of work and the system as a whole. Focus on their ‘knowledge at the time’, not your knowledge now. Understand the various activities and focus of attention, at a particular moment and in the general time-frame.
  • Seek multiple perspectives. Don’t settle for the first explanation; seek alternative perspectives. Discuss different perceptions of events, situations, problems and opportunities, from different people and perspectives, including those who you might think are not directly involved. Consider the implications of these differential views. One way to do this is to adopt a group approach to debriefing, as explained in this Etsy Debriefing Facilitation Guide on leading groups to learn from accidents, by John Allspaw @allspaw, Morgan Evans @NeonMorgan, and Daniel Schauenburg @mrtazz.

I will leave you with this – an advertisement from my childhood, which remains my favourite of all time. I talk about it here.

“An event seen from one point of view gives one impression. Seen from another point of view, it gives quite a different impression. It’s only when you get the whole picture that you fully understand what’s going on.”

You may well have to accept that you can never fully understand what went on. But you can get past the basecamp of understanding.


See also:

Human Factors at The Fringe

Human Factors at The Fringe: The Girl in the Machine

Human Factors at The Fringe: My Eyes Went Dark

Human Factors at The Fringe: Nuclear Family

Human Factors at the Fringe: Lemons Lemons Lemons Lemons Lemons


The Safety-II Dance: A Podcast by Greater Than Code

A few weeks ago, I had a chat with Jamey Hampton, Jessica Kerr, and John K. Sawers of Greater Than Code. Here is the podcast that resulted, expertly produced by Mandy Moore.

In the podcast, we roamed around topics of human factors/ergonomics, system performance and human wellbeing, empathy, appreciative inquiry, asset-based community development (ABCD), and Safety-II.

All Greater Than Code podcasts are on their website and on iTunes.



Suitably Qualified and Experienced? Five Questions to ask before buying Human Factors training or consultancy

Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance. (IEA, 2018)

This definition – accepted by human factors and ergonomics (HF/E) societies worldwide – emphasises that HF/E is a discipline and profession. A discipline is “a branch of knowledge, typically one studied in higher education”. A profession is “a paid occupation, especially one that involves prolonged training and a formal qualification” (Oxford dictionaries).

Practitioners of ergonomics and ergonomists contribute to the design and evaluation of tasks, jobs, products, environments and systems in order to make them compatible with the needs, abilities and limitations of people. (IEA, 2018)

This contribution tends to be made by HF/E practitioners in two ways:

  1. as an external human factors consultant/trainer
  2. as an in-house human factors specialist (a typical job description is here).

But how do we assess whether a practitioner is a ‘suitably qualified and experienced person’ (SQEP)?

This is an important question because there is so much at stake for system performance and human well-being, but it is not straightforward to answer. In this post, I provide five questions that will help. The questions are for reflection and discussion. They are not definitive. In considering these questions, the point is not necessarily to answer “yes” to every question. Some will be more relevant than others, and there will be exceptions. But especially where the answer to two or more questions is “no”, there should be careful consideration as to why this is the case.

The emphasis of this post is not on those who fulfill specific HF/E roles in-house (e.g., HF/E in medical simulation). In such cases, internal practitioners with HF/E-related roles may well have education and experience in a specific area of HF/E, and use this in their role. But they would probably not describe themselves as ‘HF/E specialists’ (just as I have education in counselling but would not call myself a counsellor). This post does not cover these in-house practitioners, though they may wish to consider the questions and what support they might need.

Rather, the post concerns paid-for HF/E consultancy and training, and also employment as an HF/E specialist, where one has to abide by ethical professional standards in the practice of HF/E.


Marco Bellucci CC BY 2.0 https://flic.kr/p/6okjAW

1. Qualification

Do they have a recognised qualification in HF/E?

There are several academic programmes in HF/E in the UK, USA, and other countries, which you can find via the relevant Society or Association in your country. Some of these programmes will be accredited by your national HF/E Society (the Centre for Registration of European Ergonomists offers a guide to such courses in Europe).

An HF/E qualification gives reassurance that the person has undertaken an approved programme of study in HF/E, which addresses the relevant competencies (e.g., the CIEHF Professional Competency Guidance, or the Requirements for Registration of European Ergonomists in Europe). (But note that some qualifying courses are no longer offered and so may not be listed.)

Other academic programmes will not be accredited, but will offer a substantial component of HF/E as part of a mixed programme, or as a substantial part of (e.g., a major in) a programme in experimental psychology, industrial engineering, systems engineering, patient safety, occupational health and safety, etc. This is especially true in the USA, where only a small minority of the programmes listed on the Human Factors and Ergonomics Society website are accredited by the Human Factors and Ergonomics Society. Most Human Factors practitioners (HF being the dominant term used in the US) tend to have academic qualifications in psychology.

For specialist external HF/E consultancy and commercial HF/E training support, a university degree in HF/E (or closely related discipline, as listed by the HFES in the USA) will usually be necessary, and perhaps a higher postgraduate (e.g., Doctorate) degree in very specific circumstances (e.g., expert witness work).

2. Accreditation and Membership

Do they have an appropriate level of accreditation or membership of an HF/E related professional organisation?

Unlike some professions, the terms ‘human factors specialist’ and ‘ergonomist’ are not legally protected or regulated (e.g., by the Health and Care Professions Council in the UK), as titles such as Registered Occupational Psychologist, Registered Dietitian, and Registered Physiotherapist are.

However, HF/E is subject to accreditation (e.g., registration, certification, and chartership) in many countries (e.g., UK, USA, Canada, Australia, NZ, and Europe as a whole). So perhaps the easiest way to have confidence in the competency of an HF/E consultancy, training provider, or individual practitioner is to check for accreditation. This varies throughout different countries. In the UK, the Chartered Institute of Ergonomics and Human Factors provides various accreditations via Chartership, which is conferred to those members who fulfil certain criteria. This includes “having a high level of qualification and experience and being able to demonstrate continuing professional development”. Additionally, different grades of membership of the CIEHF – Fellow, Registered Member, Graduate Member, Technical Member – reflect competency, proficiency and experience.

Member and consultancy directories of HF/E Societies and Associations are available to help. For instance, Members of the HFES can be seen here. Chartered Members of the CIEHF can be seen here. Registered Consultancies that are accredited by the CIEHF can be seen here. You can find other directories of individuals and organisations via the relevant Society or Association in your country. (Note that ‘Associate’ or ‘Affiliate’ Membership is, in most cases, available to anyone and indicates interest and commitment – since all members have to abide by the Code of Conduct – but does not provide assurance of qualifications or experience. Therefore a minimum membership grade for paid support should typically be Graduate or Technical Member.)

In some cases, those who identify as ‘human factors specialists’ will have accreditation via other professional organisations. Typically, these relate to psychology and engineering. Some human factors specialists will be Chartered Psychologists in the UK. (There are other organisations relating to psychology and human factors, often in specific sectors, but these are not recognised by the International Ergonomics Association, which is the umbrella organisation for Human Factors and Ergonomics worldwide. These other organisations also sometimes require members to purchase the organisations’ own training for accreditation, which raises questions that are beyond the scope of this post.) The point is that many who are accredited via another route (e.g., Chartered Psychologist or Chartered Engineer) may well be competent HF/E practitioners, but perhaps for specific aspects of HF/E and not the whole scope of HF/E, and may have a different perspective (e.g., more aligned with psychology) and different approach (e.g., more cognitive-behavioural, social-organisational).

Accreditation will require that the person undertakes appropriate continued professional development, and submits evidence of this. This is important, but difficult for buyers of consultancy and training services to assess. Accreditation and membership removes some of that burden, because the Society does this as a requirement of the person’s membership.

3. Code of Ethics

Do they abide by a code of ethical conduct from an HF/E-related society or association?

This issue is covered by Accreditation above, but it is worth considering specifically because it is so important. A person offering HF/E consultancy or training services who is a member of an IEA Federated Society will have to abide by the Code of Conduct of that Society. The person should be aware of the Code. In any case, the Code (e.g. the CIEHF Code of Conduct) will cover ethical standards such as:

  • working within limits of competence
  • representation and claims of effectiveness
  • supervision
  • respect for evidence
  • confidentiality
  • impartiality
  • probity
  • considerations of religion, gender, race, age, nationality, class, politics or extraneous factors.

Professional societies of other disciplines and professions (e.g., psychology, engineering, health and safety) will also have codes of ethical conduct, and while these will not reference ergonomics, they will refer to similar sorts of issues mentioned above, and so working within competence would normally be formally recognised as an ethical issue.

This is an important question to ask anyone offering HF/E services and training, or seeking a job as an HF/E specialist.

If the person is not operating under the Code of Conduct of a professional organisation, then the protections available are limited to those under the law.

4. Experience

Do they have experience in the HF/E work and in the domain of interest?

The question here is whether the person has relevant experience in:

  • the kind of HF/E work (e.g., interface design, fatigue assessment, human error identification, cognitive work analysis, manual handling assessment), and
  • the sector of application (e.g., manufacturing, oil and gas, aviation, healthcare).

The first is the more important of the two, since HF/E – more than many other disciplines and professions – applies across sectors. HF/E practitioners tend to spend time in several sectors in their career. However, sector knowledge is important and HF/E specialists with a deep knowledge of one sector will have a greater understanding of the stakeholders, activities, procedures, technologies, regulations, cultures, etc. So at a micro level of application (e.g., the design of display elements or manual handling), much in HF/E crosses sectors. But at a macro level (e.g., the integration of HF/E throughout an organisation), this is not the case. When it comes to training others in aspects of HF/E (e.g., short courses), experience in the sector is a huge advantage, if not essential.

If the HF/E specialist offering consultancy or training services is accredited, then these issues will be covered by the Code of Conduct or Ethics of their HF/E Society or Association, and the person will have to abide by the relevant requirements (it is the focus of several items of the CIEHF Code of Conduct).

5. Social recognition

Is the person recognised as an HF/E specialist by other qualified HF/E specialists?

It can be hard to know if a person is suitably qualified and experienced, though answering ‘yes’ to the above will suggest that the person is. But there will be occasions when people fall outside of one or more of the criteria above, but where HF/E colleagues and associates would say that the person is an HF/E specialist. This will tend to involve those who specialise in a specific aspect of HF/E, but perhaps do not call themselves human factors specialists or ergonomists (and perhaps use other terms, such as UX designer, interaction designer, etc), and who are not a member of an HF/E Society or Association (e.g., a Technical Member of the CIEHF). Such people may well use HF/E theory and methods appropriately, and may even be a recognised expert in the specialism. In this case, social recognition by experienced HF/E specialists will give a good indication.

Summing up

To sum up, here are the five criteria and questions that apply to paid-for human factors and ergonomics (HF/E) consultancy and training support and employment, which may help with reflection and discussion.

1. Qualification – Do they have a recognised qualification in HF/E?

2. Accreditation – Do they have an appropriate level of membership of an HF/E related professional organisation?

3. Code of Ethics – Do they abide by a code of ethical conduct from an HF/E related society or association?

4. Experience – Do they have experience in the HF/E work and the domain of interest?

5. Social recognition – Is the person recognised as an HF/E specialist by other qualified HF/E specialists?

The aim of these criteria and questions is to ensure that professional standards – including ethical standards – are met. The criteria and questions are framed above in the context of HF/E, but in fact they apply to any profession, such as psychology, dietetics, or physiotherapy. Proper consideration of the criteria and questions should help to protect organisations, individuals, and the integrity of the profession.

Further Reading

Education and application are discussed practically (in the context of aviation, but applicable more generally), in:

Hawkins, F. H. (1987). Human factors in flight. Gower Technical Press, pp. 326-341.


Work-as-Imagined Solutioneering: A 10-Step Guide

Have you ever come across a ‘problematic solution’ that was implemented in your workplace, and wondered, “How did this come to be?” Wherever you sit in an organisation, the chances are that you have. Many problematic solutions emerge from a top-down process that I will call work-as-imagined solutioneering.

In this post, I outline a typical process of 10 Steps by which problematic solutions come into being. Some of the steps may be skipped, but with the same outcome: a problematic solution.

At the end of the post, you will find 10 ‘Solutions’ from healthcare, provided by healthcare practitioners in a series of posts on this blog on the archetypes of human work. These solutions do not typify the process below (since the process that these solutions were subject to is not known to me). And the solutions will all probably have various advantages and disadvantages. The solutions simply provide rich and messy examples of unimagined and unintended side-effects. But you will be able to think of many others in your own context (please provide an example as a comment or get in touch).

Throughout the 10 Steps, I will use terms to describe seven kinds of systems that must be reckoned with when making changes in socio-technical systems (from Martin’s [2004] Seven Samurai framework).

Step 1. Complex problem situation

The process of work-as-imagined solutioneering starts with a complex problem situation. Complex problem situations occur in systems with:

  • a number of stakeholders with conflicting goals
  • complex interactions between stakeholders and other elements of the socio-technical system (visible and invisible, designed and evolved, static and dynamic, known and unknown),
  • multiple constraints (social, cultural, technical, economic, regulatory, legal, etc), and
  • multiple perspectives on the nature of the problem.

Problems may well be interconnected to form a ‘mess’.

Step 2. Complexity is reduced to something simple

Complex problem situations are hard to understand and have no obvious solutions. This is unappealing to most people. Understanding complex problem situations requires that we seek to understand:

  • the various expressions of, and influences on, the problem,
  • the context system, including the stakeholders, their activities, the tools and artefacts that they use, the context or environment (physical, ambient, social, cultural, technical, economic, organisational, regulatory), and
  • the history of the context system.

One of the hallmarks of work-as-imagined solutioneering is a neglect of one or more of these facets of the problem situation or context system. This is partly because understanding requires:

  • high levels of field expertise – expertise in the work that is influenced by and influences the problem, whatever the work is,
  • an understanding of people (which can be approached via various disciplines: psychology, sociology, anthropology, community development, human factors/ergonomics, etc),
  • an understanding of socio-technical systems and the nature of change in such systems, and
  • sufficient expertise in a human-centred and system-oriented design process.

Once you have approached the problem situation in a sensible way, an analysis of stakeholder assets and needs should follow.

Unfortunately, once a problem is identified, the perceived urgency to do something creates pressure to be efficient, when thoroughness is required – a blunt-end efficiency-thoroughness trade-off. The required thoroughness is time-consuming and difficult. It requires specialist expertise and – crucially – bridging social capital to engage with field experts in order to get the understanding necessary to help, rather than hinder. [There is almost always a lack of expertise, and we should try to understand why solutions make sense to managers and not simply berate them.]

So these critical activities (understanding the context system and problem situation, and understanding stakeholder assets and needs) are often neglected. And complexity is reduced to something simple. For example, a mismatch between demand, resources and capacity may be reduced to a problem of ‘poor performance’. A mismatch between work-as-prescribed and work-as-done is reduced to ‘non-compliance’ or ‘violation’. A mismatch between design and performance is reduced to ‘human error’.

Step 3. Someone has a solution waiting for a problem

While there may be little understanding of the complex problem situation, solutions are at hand. Past experience, ideas from other industries or contexts, and committee-based idea-generation or diktats from authority figures make a number of ‘solutions’ available. Examples include:

  • measures
  • monitoring arrangements
  • quantified performance targets and limits
  • commercial off-the-shelf products (equipment, artefacts)
  • checklists
  • procedures
  • standard training
  • processes
  • incentives
  • punishments
  • reorganisation of activities, processes and reporting lines
  • redistribution of power.

Most of these (aside from targets, in most circumstances) are not inherently Bad Things. The Bad Thing is introducing them – any of them – without a proper understanding of the context system and the problem situation within that context system. But it is too late. The focus is now on the solution – the intervention system.

Step 4. Compromises to reach consensus

As the solution (intervention system) is revealed, people at the blunt end are now at the sharp end of a difficult process of design and implementation. There are disagreements and they start to see a number of complications. But the stability of the group is critical. The intervention system is put out for comment, usually to a limited audience and with the aim of proving its viability. There are further insights about the problem situation and context system, but these arrive in a haphazard way, instead of through a process of understanding involving design and systems thinking. Eventually, compromises are made to achieve consensus and the intervention system is specified further. Plans are made for its realisation. The potential to resolve the problem situation is hard to judge because neither the problem situation nor the context system is properly understood.

Step 5. The project becomes a thing unto itself

The focus now turns to realisation. The problem situation and context system, which were always out of focus, are now out of view. The assets and needs of all stakeholders were never in view, but the needs of the stakeholders who are invested in the roll-out of the solution (intervention system) have been met: they can now feel reassured that something is being done. The need for corporate anxiety-reduction is now being addressed. Something is being done.

So the focus now switches from the intervention system to the realisation system – the system for bringing the solution into effect (management teams, resources, project management processes, materials, etc).

Step 6. Authorities require and regulate it

As the intervention system (the ‘solution’) gets more attention, authorities believe that this is a Good Thing. Sometimes, solutions will be mandated and regulated, and monitored by those with regulatory power. Now there is no going back.

Step 7. The solution does not resolve the problem situation

As the solution is deployed, it becomes the deployed system. This is not necessarily the same as the original idea (the intervention system). Compromises have been made along the way, both by those responsible for the intervention system (compromising on aspects of the concept), and by those responsible for the realisation system (compromises on aspects of implementation).

The design or implementation (or both) of the solution meets a need (corporate anxiety reduction) but does not resolve the original problem. The original problem remains, perhaps in a different form. Never events still happen (Solution 4), and a ‘paperless’ discharge summary process (Solution 6) still requires paper. The feedback loops, however, contain delays and distortion, which we will come back to.

Step 8. Unintended consequences

Not only does the solution not resolve the original problem, but it brings new problems that were never imagined. These include problems concerning system conditions (e.g., higher unwanted demand, more pressure, more use of resources), and problems concerning system behaviour (e.g., increased workload, unwanted workarounds).

Here are some healthcare examples:

A Duty of Candour (Solution 1) process results in a “highly bureaucratic process which has reinforced the blame culture.”

A Do Not Attempt Resuscitation (DNAR) form (Solution 2) results in patients being “subjected to aggressive, yet ultimately futile, resuscitation measures which may include multiple broken ribs, needle punctures in the arms, wrists and groin, and electric shocks” and nurses and paramedics working “in such fear of not doing CPR when there is no DNACPR that they may override their own professional judgement and do CPR when it is clearly inappropriate.”

Dementia diagnosis targets (Solution 3) result in “naming and shaming supposedly poorly diagnosing practices – published online. Setting doctors harmful tasks, leading them almost to “process” patients.”

The Never Events list (Solution 4) – similar to various popular zero harm initiatives – “ignored the potential for using never events as a stick to beat people up with, … ignored the potential for gaming the data, … ignored the potential for people to become fearful of reporting and the loss of learning as a result.”

A ‘paperless’ discharge summary process actually results in more paper, along with “discrepancies between the notes of doctors, nurses, physiotherapists, occupational therapists, and social workers” (Solution 5). Similarly, following the implementation of a computerised medical system, “work-as-done reverted back to the system that was in place before where secretaries still had to print results on bits of paper and hand them to consultants to action” (Solution 6).

Amidst these unintended consequences, the context system has now changed and there may well be competing systems that address the problem, masking the effects of the deployed system. For instance, along with a Central Line Associated Bacteraemia (CLAB) checklist (Solution 9), another deployed system was CLAB packs: “These put all required items for central line insertion into a single pack thereby making it easier for staff to perform the procedure correctly.” Which has the effect imagined?

Furthermore, there may be inadequate collaboration or support from collaborating systems and sustainment systems (which collaborate with the deployed system to achieve some goal or help it continue to function). Examples include blunt-end roles for monitoring, analysis, feedback, and the supply of tools, materials, and technical support. These stakeholders are typically far removed from operational work-as-done and do not understand the assets and needs of those who work on the front line. It may be that the deployed system cannot even function as intended, as designed or as originally implemented.

Step 9. People game the system

Many work-as-imagined solutions can be gamed, and it may well be locally rational to the people who do – rather than imagine – the work. This is typical of measures (especially when combined with targets or limits) and processes. Following are some healthcare examples.

Radiology request forms are meant to be completed and signed by the person requesting the procedure. However, “In the operating theatre, the surgeon is usually scrubbed and sterile, therefore the anaesthetist often fills out and signs the form despite this being ‘against the rules’” (Solution 7).

On the introduction of the Commissioning for Quality and Innovation payments framework (CQUINs) to drive innovation and quality improvement in the NHS, clinicians are “demotivated by the process of collecting meaningless data and are tempted to use gaming solutions to report best performance” (Solution 8), having informed the commissioners of problems with the deployed system and offered suggested improvements to the metrics (which do not fit the intervention system concept).

Checklists for the prevention of Central Line Associated Bacteraemia (CLAB) (Solution 9) are completed “retrospectively without watching the procedure, as they were busy with other tasks”.

Step 10. It looks like it works

The gaming, combined with feedback lags and poor measures, may well give the illusion that the deployed solution is working, at least to those not well connected to work-as-done.

After introducing the CLAB bundle (Solution 9), "very high levels of reported checklist compliance" were observed, "followed by the expected drop in our rates of infection, confirming the previously reported benefits". But the drop instead "appears to be due to the use of CLAB packs", which "put all required items for central line insertion into a single pack thereby making it easier for staff to perform the procedure correctly".

With the WHO Surgical Safety Checklist (Solution 10), “The assumption within an organisation at ‘the blunt end’ is that it is done on every patient” despite “clear evidence that there is variability in how the checklist is used both within an organisation and between organisations”.

Of course, there may well be knowledge that work-as-imagined does not align with work-as-done, but this is an inconvenient truth. Too often, what we are left with is a separation (or even inappropriate congruence) of the four varieties of human work: work-as-imagined, work-as-prescribed, work-as-done, and work-as-disclosed. This is enacted in a number of archetypes of human work.

This is not the end of the process, but by this stage the project team that worked on the originally intended solution (the intervention system) have moved on. The deployed system remains, and now we must imagine a solution for both the original problem and the new problems.


Solution 1: Duty of Candour

Over the last few years there has been a call to enshrine ‘saying sorry’ in law. This became the ‘duty of candour’. When this was conceived, it was imagined that people would find the guidance helpful and that it would make it easier for frontline staff to say sorry to patients when things have gone wrong. Patient advocates thought it would mean that patients would be more informed and more involved, and that it would change the relationship from an adversarial one to a partnership. In practice, this policy has created a highly bureaucratic process which has reinforced the blame culture that exists in the health service. Clinical staff are more fearful of what to say when something goes wrong and will often leave it to the official process, or for someone from management to come and deliver the bad news in a clinical, dispassionate way. The simple art of talking to a patient, explaining what has happened and saying sorry has become a formalised, often written, compliance duty. The relationships remain adversarial, and patients do not feel any more informed or involved than before the duty came into play. Suzette Woodward, National Clinical Director, Sign up to Safety Team, NHS England @SuzetteWoodward


Solution 2: Do Not Attempt Cardiopulmonary Resuscitation (DNACPR) form

A Do Not Attempt Resuscitation (DNAR) form is put into place when caregivers feel that resuscitation from cardiac arrest would not be in the patient’s best interests. These forms have received a significant amount of bad press, primarily because caregivers were not informing the patient and/or their families that these were being placed. Another problem with DNAR forms is that some clinicians feel that they are being treated as “Do Not Treat” orders, leading (they feel) to patients with DNAR forms in place receiving sub-standard care. This means that some patients who would not benefit from resuscitation are not receiving DNAR forms. As a result, when these patients have a cardiac arrest, they are subjected to aggressive, yet ultimately futile, resuscitation measures which may include multiple broken ribs, needle punctures in the arms, wrists and groin, and electric shocks. It is not unusual to hope that these patients are not receiving enough oxygen to their brains to be aware during these last moments of their lives. Anonymous, Anaesthetist

What is sad is that this is not an unusual story. Unless a person dying in hospital or a nursing home has a DNACPR, CPR will usually be done. CPR may even be done when a person in frail health dies at home without a DNACPR, because the paramedics may be instructed to do CPR “just in case it was a cardio-pulmonary arrest”. Nurses and paramedics work in such fear of not doing CPR when there is no DNACPR that they may override their own professional judgement and do CPR when it is clearly inappropriate. Recently a nurse was reprimanded by the Nursing and Midwifery Council for not trying CPR on a nursing home resident who, in my opinion, was clearly already dead. I know of a case in our hospital in which CPR was started on a person whose body was already in rigor mortis. Dr Gordon Caldwell, Consultant Physician, @doctorcaldwell

Solution 3: Dementia Diagnosis Targets

There are high levels of burnout. A target-driven culture is exacerbating this problem. A typical example was when the government seemingly became convinced by poor-quality data which suggested that dementia was under-diagnosed. So it decided to offer GPs £55 per new diagnosis of dementia. Targets were set for screening to take place – despite the UK National Screening Committee having said for years that screening for dementia was ineffective, causing misdiagnosis. And when better data on how many people had dementia was published – which revised the figures down – it was clear that the targets GPs were told to meet were highly error-prone. The cash carrot was accompanied with a beating stick, with the results – naming and shaming supposedly poorly diagnosing practices – published online. Setting doctors harmful tasks, leading them almost to “process” patients, fails to respect patient or professional dignity, let alone the principle of “do no harm”. [Extract from the article ‘The answer to the NHS crisis is treating its staff better’, New Statesman.] Margaret McCartney, General Practitioner, @mgtmccartney


Solution 4: Never Events List

When we created the list of ‘never events’ at the National Patient Safety Agency, we genuinely thought that it would lead to organisations focusing on a few things and doing those well. We thought it was a really neat driver for implementation of evidence-based practice (e.g., the surgical safety checklist). We ignored the potential for using never events as a stick to beat people up with, we ignored the potential for gaming the data, we ignored the potential for people to become fearful of reporting and the loss of learning as a result. Importantly, we ignored the fact that in the vast majority of cases things can never be never – it is a fact of life that things can and do go wrong no matter how much you try to prevent them. There is no such thing as zero harm, and the never events initiative unfortunately gave the impression that it could exist. Suzette Woodward, National Clinical Director, Sign up to Safety Team, NHS England @SuzetteWoodward


Solution 5: ‘Paperless’ Discharge Summary Process

Our ‘paperless’ Discharge Summary process generated about five times as many sheets of A4 as the old paper system, as the ‘paperless’ prescription got corrected and refined prior to discharge. Then we were still told we had to print a copy to go into the paper notes, and of course the patient had to have a paper copy because there was no way to email it to the patient. The software could not message pharmacy, so we had to print out the discharge meds to be sent to pharmacy, who then checked them, found the errors, and got doctors to correct them; then another printout, and round again. Then there were discrepancies between the notes of doctors, nurses, physiotherapists, occupational therapists, and social workers, and soon we were all working on different problems in different directions, and the patient became a ‘delayed discharge’. There are so many paper copies that sometimes an earlier incorrect paper copy gets filed into the notes. Then, unless someone hits ‘Finalise’, the pdf copy never gets emailed to the GP at all. Dr Gordon Caldwell, Consultant Physician, @doctorcaldwell


Solution 6: Computerised Medical systems

With the installation of a fully computerised system for ordering all sorts of tests (radiology requests, lab requests, etc.), work-as-imagined (and -as-prescribed) was that this would make work more efficient and safer, with less chance of results going missing or being delayed. Prior to the installation there was widespread talk of how effective and efficient this would be. After installation it became apparent that the system did not fulfil the design brief: while it could order tests, it could not collate and distribute the results. So work-as-done then reverted back to the system that was in place before where secretaries still had to print results on bits of paper and hand them to consultants to action. Craig McIlhenny, Consultant Urological Surgeon, @CMcIlhenny


Solution 7: Radiology Request Forms

Radiology request forms are meant to be completed and signed by the person requesting the procedure. In the operating theatre, the surgeon is usually scrubbed and sterile, therefore the anaesthetist often fills out and signs the form despite this being “against the rules”. Managers in radiology refused to believe that the radiographers carrying out the procedures in theatre were “allowing” this deviation from the rules. Anonymous.


Solution 8: CQUINs (Commissioning for Quality and Innovation payments framework)

Commissioners often use CQUINs (Commissioning for Quality and Innovation payments framework) to drive innovation and quality improvement in the NHS. In theory, the metrics relating to individual CQUINs are agreed between commissioners and clinicians. In practice, some CQUINs focus on meaningless metrics. A hypothetical example: a CQUIN target for treating all patients with a certain diagnosis within an hour of diagnosis is flawed due to a failure of existing coding systems to identify relevant patients. Clinicians inform the commissioners of this major limitation and offer suggested improvements to the metrics. These suggested improvements are not deemed appropriate by the commissioning team because they deviate significantly from previously agreed definitions for the CQUIN. The clinicians are demotivated by the process of collecting meaningless data and are tempted to use gaming solutions to report best performance. This situation is exacerbated by pressure from the management team within the NHS Trust, who recognise that failure to demonstrate adherence to the CQUIN key performance indicators is associated with a financial penalty. The management team listen to the clinicians and understand that the data collection is clinically meaningless, but insist that the clinical team collect the data anyway. The motivational driver to improve performance has moved from a desire to improve clinical outcomes to a desire to reduce financial penalties. The additional burden is carried by the clinical team, who are expected to collect meaningless data without any additional administrative or job plan support. Anonymous, NHS paediatrician


Solution 9: Central Line Associated Bacteraemia (CLAB) checklists

The use of checklists for the prevention of Central Line Associated Bacteraemia (CLAB) is well described and has been taken up widely in the healthcare system. The purported benefits of the checklist include ensuring all steps are followed as well as opening up communication between team members. After introducing the CLAB bundle into our Intensive Care Unit, we saw very high levels of reported checklist compliance followed by the expected drop in our rates of infection, confirming the previously reported benefits. However, when we observed our staff it became apparent that they were actually filling in the checklist retrospectively without watching the procedure, as they were busy with other tasks. The fall in the CLAB rate could therefore not have been due to the use of a checklist and instead appears to be due to the use of “CLAB packs”. These put all required items for central line insertion into a single pack thereby making it easier for staff to perform the procedure correctly. Carl Horsley, Intensivist, @horsleycarl


Solution 10: WHO Surgical Safety Checklist

The WHO Surgical Safety Checklist was introduced into the National Health Service following the release of Patient Safety Alert Release 0861 from the National Patient Safety Agency on 29 January 2009. Organisations were expected to implement the recommendations by February 2010, including that ‘the checklist is completed for every patient undergoing a surgical procedure (including local anaesthesia)’. All organisations have implemented this Patient Safety Alert, and the WHO Surgical Safety Checklist is an integral part of the process for every patient undergoing a surgical procedure. Whilst the checklist appears to be used for every patient, there is clear evidence that there is variability in how the checklist is used both within an organisation and between organisations. Within an organisation, this variability can occur between teams, with differences in the assumed value of using the checklist, and within a team, between individuals or professional groups. Its value can degrade to a token compliance process to ‘tick the box’. The assumption within an organisation at ‘the blunt end’ is that it is done on every patient. Alastair Williamson, Consultant Anaesthetist, @TIVA_doc


Reference

Martin, J. N. (2004). The Seven Samurai of Systems Engineering: Dealing with the Complexity of 7 Interrelated Systems. Presented at the 2004 Symposium of the International Council on Systems Engineering (INCOSE).

Note: This is a post from June that curiously disappeared from the blog. I probably pressed a wrong button somewhere – like ‘Move to Trash’, or something similarly unclear.
