By Dr. Ben Appleby and Dr. Maddie Swannack
Abstract
Over the last 30 years, the NHS has placed a great deal of emphasis on improving patient safety and offering a quality experience to all patients. This also saves money in the long run, because it reduces the frequency of compensation payouts. There are numerous ways this is achieved.
An adverse event is an injury caused by medical management (rather than the underlying pathology) that causes prolonged hospitalisation, disability, or both. Some of these are not preventable, but some are, and these mistakes arise through either flaws in systems or human error.
There are four types of human error: slips, lapses, mistakes and intentional violations. Any systems in place therefore need to account for these human mistakes, but this can be difficult, and often leaves staff having to work extra hard to compensate.
The James Reason Framework of Error explains how pre-existing factors (latent failures) can lead to actual adverse events (active failures). It is most easily explained by the Swiss Cheese Model.
There are two orders of problem solving: first order (basic, quick, immediate fix) and second order (takes longer, but is a more permanent fix).
Root cause analysis and systems analysis are interchangeable terms for a toolbox of techniques that can be used after a failure to identify its underlying cause.
Speaking up can be difficult in the NHS because there is often a strict hierarchy. It is important, however, to overcome this and to speak up when something is wrong, so that patients receive the best possible care. This is supported by Freedom to Speak Up Guardians.
Core
Patient safety and quality in the NHS
Until recently, by some standards, quality and safety were poorly managed in the NHS. A series of scandals and new research have brought the importance of patient safety and quality healthcare to the fore.
Issues have been identified through:
- Evidence of substandard care in some areas.
- Huge variations in healthcare outcomes and decisions between separate CCGs.
- Direct cost of litigation on already limited NHS resources.
Safety issues usually derive from the quality of healthcare. Hospitals which prioritise quality perform better than those which simply try to achieve a minimum of safe treatment.
High quality healthcare is characterised by:
- Safety – no needless deaths.
- Effectiveness – no needless pain or suffering.
- Patient-centredness – focussed on the patient’s needs and priorities.
- Timeliness – no unwanted waiting.
- Efficiency – no waste.
- Equitability – no one left out.
This is obviously difficult to achieve, especially with limited resources. Variations in healthcare across the country suggest that not everyone receives the same quality of care. Variation is not always a bad thing as the different populations that make up the UK have different needs to be met. However, this alone does not account for the presence of waste or inequity.
A good example of this variation is seen in the requirements to receive IVF funded by the NHS. This has become known as the ‘postcode lottery’, with local trusts being able to decide how many cycles they will fund. There are many mechanisms in place to try to improve care quality in the NHS:
- Use of standard setting with NICE quality standards of what high quality evidence based care should look like.
- Use of commissioning to drive improvement (this means putting things like contracts out for tender to encourage competition).
- Use of financial incentives through things like Quality Outcomes Framework (give GP practices that perform ‘better’ more funding) and Commissioning for Quality and Innovation (CQUIN).
- Increasing emphasis on disclosing information by making it easier to blow the whistle on safety issues and introducing duty of candour where clinicians are obliged to speak up.
- Use of regulation and inspection from the Care Quality Commission or Public Health England (through analysis types discussed later).
- Use of clinical audits.
- Feedback from patients such as the ‘friends and family test’.
- Revalidation of doctors every 5 years as a GMC requirement, to ensure that clinicians are fit to practise and are keeping up to date with medical advances.
An adverse event is an injury caused by medical management (rather than the underlying pathology) that causes prolonged hospitalisation, disability, or both. A preventable adverse event is an adverse event that is predictable and avoidable.
Some adverse events are unpredictable, e.g. a first-time adverse drug reaction (an allergic reaction to penicillin on its first use in a patient). Although this harm derives from medical management, and so is an adverse event, there is at present no way to predict it. Should a method of predicting it come about, it would be reclassified as either a preventable adverse event or a never event.
However, some adverse events are preventable. The most severe adverse events, which have multiple protections in place to prevent their occurrence, are known as never events, as they simply should never happen. Examples of preventable adverse events include:
- Operations being performed on the wrong patient or on the wrong site. This would be classed as a never event.
- Retained objects like surgical swabs. This is a never event.
- Wrong dose or type of medication given. This is a never event.
- Some infections, like line infections.
All humans are fallible, meaning mistakes can sometimes just happen. Many errors are highly predictable, so good systems design should anticipate human error and support the individuals within the system.
There are four main types of human error:
- Slips – failure of attention (misread the operating proforma and operate on the wrong site).
- Lapses – failure of memory (forgetting to fill in a referral form for a patient so they don’t receive the care they need).
- Mistakes – rule based or knowledge based (not knowing that all patients with a dairy allergy have to wear a red wrist band).
- Intentional violations – doing something harmful on purpose. These are rare but have led to a number of high-profile cases, such as that of Harold Shipman.
The state of a healthcare system can sometimes make human errors more likely: inadequate training, understaffing, long hours, drugs that look the same, poorly designed equipment, a lack of checks before procedures are done, and different methods of doing the same task in different places.
Sometimes failures can be the fault of an individual (incompetence, negligence, malign behaviour), but more often the system has failed and no one person is to blame. Systems and processes not designed with safety in mind are more likely to fail and to require workarounds which don’t solve the root issue. This common form of systems failure leads to:
- Short term workarounds instead of long term changes to the system (called first order problem solving – see below).
- The tendency to rely on people to put more work in to overcome the problem.
- The need for people to be vigilant all the time.
- A tolerance for poor quality care.
The James Reason Framework for Error – ‘The Swiss Cheese Model’
This model attempts to explain why errors occur despite normally effective systems being in place. Reason’s framework is most famously illustrated by the Swiss Cheese model.
The Swiss Cheese model imagines a system as a number of slices of Swiss cheese, each of which has holes (see below). Each slice represents a safeguard in the system that prevents an error (an active failure) from occurring. The holes in each slice represent existing flaws (latent failures) within each safeguard. Normally this is not an issue, as the flaws are unlikely to align simultaneously, so active failures are stopped. Occasionally, however, the holes line up and allow an active failure to occur. Examples of ‘holes’ include flawed protocols, time pressures, understaffing, uncommunicative patients and illegible notes, to name just a few factors.
Diagram showing the Swiss Cheese model, showing how different flaws in the systems and human error can stack up to cause an active failure or loss.
Creative Commons Image Source Davidmack [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)]
In a nutshell, the James Reason Framework for Error is therefore composed of two parts:
- Latent failures (holes) are the predisposing conditions of the system that encourage active failures e.g. short staffing and similar looking drugs.
- Active failures are the acts that directly lead to patient harm, e.g. administration of the incorrect drug. Active failures occur after latent failures align, e.g. the wrong drug is selected by a tired, overworked staff member because it is easily confused with another, similar-looking drug. Had the drug been more distinctive, or the clinician more focused, the active failure would have been prevented.
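The alignment of independent latent failures can also be sketched numerically: if each safeguard fails independently with some small probability, harm reaches the patient only when every safeguard fails at once, so the overall risk is the product of the individual probabilities. Below is a minimal Python sketch of this idea; the probabilities and function names are illustrative assumptions, not part of any NHS model.

```python
import random

# Illustrative sketch of the Swiss Cheese model: each safeguard is a
# "slice" with some probability that its latent failure ("hole") is
# exposed on a given occasion. An active failure (harm) occurs only
# when every slice fails simultaneously.

def harm_occurs(hole_probabilities, rng=random.random):
    """Return True if every safeguard fails at once on this occasion."""
    return all(rng() < p for p in hole_probabilities)

def simulate(hole_probabilities, trials=100_000, seed=42):
    """Estimate the harm rate over many simulated occasions."""
    rng = random.Random(seed)
    harms = sum(
        harm_occurs(hole_probabilities, rng.random) for _ in range(trials)
    )
    return harms / trials

# Three independent safeguards, each with a 10% chance of failing:
# the expected harm rate is roughly 0.1 ** 3 = 0.001, showing why
# layered defences dramatically reduce (but never eliminate) risk.
rate = simulate([0.1, 0.1, 0.1])
print(f"Observed harm rate: {rate:.4f}")
```

Adding a fourth safeguard with the same reliability would cut the expected rate by another factor of ten, which is the intuition behind the multiple protections around never events.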
There are two orders of problem solving:
First Order Problem Solving - a basic form of problem solving consisting of doing exactly what it takes to complete the task (no more and no less), e.g. if a drip stand is broken, using some surgical tape to fix it. The stand will of course break again in future, meaning that the problem has only been temporarily and poorly solved, leaving it to the next user of the faulty stand to fix it again.
Second Order Problem Solving - a more advanced form, where the problem is stripped back to its root (using the forms of analysis discussed below) and a solution is then worked up from there. This can take a lot longer (which is why it is often not completed) but reduces the chances of the error happening again. In the example above, it would involve asking why the drip stand is broken and why it hasn’t been replaced: ‘the department won’t supply any more drip stands as they keep being damaged by door frames’. The second order solution would be to transport the stands collapsed so that they fit through the doors, and to ask the department to supply new stands with this intervention in place. New stands are then acquired and remain intact, as they are no longer damaged by doors. Barriers to second order problem solving include:
- Busy clinicians – ‘I don’t have the time to collapse the stand.’
- Too new, uncomfortable or alien an idea – ‘I’m not sure I’m comfortable with collapsing the stand.’
- Traditional standards and beliefs – ‘I was told that a doctor who collapses their drip stand is no true doctor at all!’ or ‘Adjusting drip stands? We don’t do that around here.’
Root Cause Analysis/Systems Analysis
These interchangeable terms refer to analysis that exists to identify the underlying causes of a failure while looking at the whole picture. Some helpful tools when conducting a root cause analysis:
- The 5 Whys – asking ‘why?’ repeatedly until it can’t be answered anymore, e.g. the patient died (why?) because they were given a drug that they were allergic to (why?) because the allergy wasn’t written in their file (why?) because it was written in the wrong patient’s file (why?) because the file was kept in the wrong place (why?) because the current filing system doesn’t work with this many patients. This finds the root issue, which can then be amended to prevent further harm.
- Timelines – two timelines are drawn, one of what happened and one of what should have happened as per guidelines or best practice. They are then compared to see where the timeline which resulted in harm deviated from best practice.
- Fishbone diagrams – (see diagram below).
- Plan-Do-Study-Act Cycles – a cycle that takes place around an intervention. Plan what the intervention should be, enact (do) the intervention, study the intervention to see where it did well and where it went wrong, and act on those findings to plan new improvements to the intervention. The cycle then repeats, continually improving the intervention with each iteration.
These are all examples of techniques that could be used in the analysis of failure.
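As an illustration, the 5 Whys chain from the drug-allergy example above can be represented as a simple ordered list of question-answer pairs, with the final answer being the candidate root cause. This is only a sketch; the structure and helper function are hypothetical, not part of any formal root cause analysis toolkit.

```python
# Illustrative sketch: a 5 Whys chain as an ordered list of
# (question, answer) pairs, using the drug-allergy example.

five_whys = [
    ("Why did the patient come to harm?",
     "They were given a drug they were allergic to."),
    ("Why were they given it?",
     "The allergy was not written in their file."),
    ("Why was it not in their file?",
     "It was written in the wrong patient's file."),
    ("Why was it in the wrong file?",
     "The file was kept in the wrong place."),
    ("Why was the file misplaced?",
     "The current filing system does not cope with this many patients."),
]

def root_cause(chain):
    """Return the answer to the final 'why' - the candidate root cause."""
    return chain[-1][1]

print(root_cause(five_whys))
```

Note that the chain stops at the filing system, a system-level cause, rather than at the individual who mis-filed the notes: this reflects the second order, systems-focused approach described above.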
An example of a Fishbone Diagram – it helps to explain the different things that can contribute to the problem and prompts discussion on which areas might have contributed.
SimpleMed Original by Maddie Swannack
Negligence is a key concept when considering mistakes made in the NHS. To be truly negligent, four criteria must be met:
- The defendant (the healthcare worker) owed a duty of care – by definition, all healthcare workers have a duty of care to their patients.
- The defendant was in breach of that duty – by definition, for there to be an inquiry into negligent behaviour, an error must have been made.
- The breach of duty caused damage – it must have caused harm.
- The damage was foreseeable and therefore preventable – if the clinician was simply ‘unlucky’ and the harm was unforeseeable, this would not count as negligence.
N.B - please refer to our disclaimer, the authors of this article are not lawyers and this article does not constitute legal advice.
Speaking up when witnessing a failure in healthcare is important because it highlights where harm to patients could occur (or has already occurred). Speaking up prevents further harm to patients. It can be difficult to speak up for a number of reasons:
- Loss of situational awareness – you might not realise that a situation has become dangerous even if it originally wasn’t.
- Authority gradients – more senior person says that it is safe.
- Too much deference to others – thinking that senior individuals always know better.
- Fear of future hostility – co-workers might be unpleasant towards you after their practice is reviewed, and there may be tacit retribution in the workplace.
- Not certain it’ll make a difference – colleagues might not listen or change their behaviour. They might only change their opinion about you.
The NHS encourages speaking up through a number of methods including:
A statutory duty of candour, meaning each person who makes a mistake must have:
- A face to face discussion with anyone affected about what happened and why.
- A discussion about further actions needed.
- A written report on what happened.
- An apology issued to anyone affected.
An implementation of Freedom to Speak Up Guardians – a person within each trust who encourages everyone to speak up and acts as a first point of contact.
Stop the line - (in some trusts) if any member of the team says the phrase ‘stop the line’ this highlights that there is a possible patient safety issue and that the procedure or intervention should be stopped immediately.
SBAR is a useful tool used to pass information from one clinician to another in a way that is clear. This helps to avoid situations where information is missed in handover.
- Situation – Who am I? Who is the patient? Where are they? What is going on with the patient (key points of the history and key concerns)?
- Background – What is the patient’s clinical background or context?
- Assessment – What do I think the problem is?
- Recommendation – What do I need to do next? What do I need from you?
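To illustrate how SBAR structures a handover, the four elements can be sketched as a record type so that none can be silently omitted. The field names and the worked example below are illustrative assumptions, not an NHS standard or template.

```python
from dataclasses import dataclass

# Illustrative sketch of an SBAR handover as a structured record:
# constructing the record forces all four elements to be supplied,
# mirroring how the tool stops information being missed at handover.

@dataclass
class SBARHandover:
    situation: str       # who am I, who is the patient, what is going on
    background: str      # the patient's clinical background or context
    assessment: str      # what I think the problem is
    recommendation: str  # what I need to do next / need from you

    def summary(self) -> str:
        """Render the handover as a four-line S/B/A/R message."""
        return (
            f"S: {self.situation}\n"
            f"B: {self.background}\n"
            f"A: {self.assessment}\n"
            f"R: {self.recommendation}"
        )

# Hypothetical worked example (names and details are invented):
handover = SBARHandover(
    situation="FY1 on Ward 3; Mr X in bed 4 is hypotensive and tachycardic.",
    background="Day 2 post-op, known ischaemic heart disease.",
    assessment="I think he may be bleeding internally.",
    recommendation="Please review him urgently; I will repeat observations.",
)
print(handover.summary())
```

Omitting any field raises an error at construction time, which is the structural analogue of a handover checklist: the format itself refuses to let an element be skipped.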
Reviewed by: Dr. Marcus Judge