USE OF INFORMATION

Making the most of safety data: do not throw the baby out with the bathwater!

Katherine Cheema and Samantha Riley

Abstract

In the National Health Service in England there are many sources of information pertaining to patient safety. This paper sets out to describe the challenge of measuring patient safety and describes the key data sources that underpin the national understanding of the area. The paper will describe how utilizing all of the available patient safety data, irrespective of the variability inherent within them, can ensure that practising clinicians have a better understanding of the current picture of patient safety and can fully evidence the efficacy of their improvement actions. Examples of effective triangulation of these data sources are given, with acknowledgement of the challenges this can present in terms of engagement and understanding, particularly in the clinical context. Recommendations for the effective use of information in the assessment of patient safety are also provided.

Katherine Cheema MSc Research Methods, NHS South of England Quality Observatory, UK; Samantha Riley BSc (Hons) Computer Science, NHS South of England Quality Observatory, UK. Email: samantha.riley@southeastcoast.nhs.uk

Clinical Risk 2012; 18(4): 124–130

Introduction

Patient safety is a foundation principle of the National Health Service (NHS) and indeed of the caring professions, being a core part of delivering quality services. In a world of dwindling resources that requires us to do more for less, an unrelenting focus on ensuring that our patients remain safe while in our care is crucial to guarantee that we deliver the best care possible. In order to achieve this, rigorous measurement of harm events and other patient safety information is crucial, not only in identifying where we may have gone wrong, but also in helping us understand where improvement work has been successful and in pointing to possible next steps in our journey towards harm-free care.

So, what data are there to interrogate to shed light on the state of patient safety in the NHS? We are certainly not short of data. In fact, we are positively swimming in it, with a range of datasets and tools that are regularly used to collect patient safety data and fully accessible to clinical teams across the country. The Institute for Healthcare Improvement's (IHI) global trigger tool is widely used as a deep but rapid approach to assess the cause and avoidability of harm, primarily in hospital-based care. We have extensive risk and root cause reporting to the National Reporting and Learning System (NRLS) hosted by the National Patient Safety Agency (NPSA), a centralized database of serious incidents via the Strategic Executive Information System, healthcare-associated infections via the Health Protection Agency (HPA) data collection system and a host of standard datasets that record administrative data that can be interrogated to assess patient safety events. This year, incentivized through a national clinical quality incentive payment (CQUIN), another tool has been added to the mix: the NHS Safety Thermometer, a point prevalence tool that identifies patients with specific harms, covering pressure ulcers, harm from falls, urinary tract infections in catheterized patients and venous thromboembolism, and from which a composite 'harm free care' indicator can be derived. This is not to mention information collected under the auspices of local programmes, audits and projects, as well as less frequent national surveys, such as the HPA's catheter-associated infection audit, which collect additional data often outside the regular datasets.

Given the wealth of data that are available to clinicians working within the NHS, we might expect our measurement of harm to be integral to all our regular reporting and readily available at all levels and in all areas where care is delivered. Our progress in reducing avoidable harm should be second to none; after all, we have all these data to help us identify what is working and what is not. In reality, each individual data source is slightly different: each uses a differing measurement methodology or alternative definitions, or is designed to operate in specific care settings. Accordingly, many users of patient safety data are 'wedded' to preferred measurement methods, based on the premise that a different definition or methodology could not provide them with the specific answer they require. There is some truth in the suggestion that different data sources will yield differing 'answers'.
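Of the tools listed above, the NHS Safety Thermometer's composite 'harm free care' indicator is the simplest to illustrate in code. The sketch below is not the official implementation: the record layout and field names are invented, and the indicator is assumed here to be simply the proportion of surveyed patients with none of the four specified harms.

```python
# Illustrative sketch of a composite 'harm free care' proportion derived
# from four per-patient harm flags, as in a Safety Thermometer-style
# point prevalence survey. Field names are invented for illustration.

def harm_free_rate(patients):
    """Proportion of surveyed patients with none of the four harms."""
    harms = ("pressure_ulcer", "fall_harm", "catheter_uti", "vte")
    harm_free = sum(1 for p in patients if not any(p[h] for h in harms))
    return harm_free / len(patients)

survey = [
    {"pressure_ulcer": False, "fall_harm": False, "catheter_uti": False, "vte": False},
    {"pressure_ulcer": True,  "fall_harm": False, "catheter_uti": False, "vte": False},
    {"pressure_ulcer": False, "fall_harm": False, "catheter_uti": False, "vte": False},
    {"pressure_ulcer": False, "fall_harm": True,  "catheter_uti": True,  "vte": False},
]

print(f"{harm_free_rate(survey):.0%} harm free")  # 2 of 4 patients -> 50%
```

Note that a patient with two harms counts once against the composite; this is what makes it a patient-level rather than an incident-level measure.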
This paper aims to show how utilizing all of the patient safety data, irrespective of the variability inherent within it, can ensure that you have a better understanding of the current picture of patient safety and can fully evidence the efficacy of your improvement actions. We provide examples of where effective triangulation of data in this area has provided valuable information which otherwise may have been missed.

DOI: 10.1258/cr.2012.012021

The measurement challenge in patient safety

The use of data generally within the English NHS has a somewhat patchy history, with a historically strong record of data collection, but little evidence, until recently, of systematic use of such data in the improvement of clinical services (for examples of how the use of data in improvement has become more systematic, see the NHS Improvement publication, Improving adult asthma care: Emerging learning from the national improvement projects). There are several reasons for this, which underpin the challenge that is presented in this paper. The first is that the current infrastructure supporting information flow in the NHS is centred on hospital-based care and does not span all health and social care sectors. Given the research base and our understanding of harm incidence and management, having as full a picture of a whole health economy as possible is particularly crucial. Secondly, the effective recording of harm incidence and management through centralized data management systems is not always as robust as would be wished, particularly in secondary care, where the treatment and prevention of specific harms is unlikely to be the primary focus of care. This, in tandem with the perceived lack of a systematic method of recording, results in a significant underestimate of the burden of care associated with harm in current data sources. While this largely affects established national data flows, local flows through risk management systems and clinical audit will also be affected by these issues. In addition, the continuing variability in the interpretation of key definitions surrounding patient harm (particularly in relation to specific clinical details such as those relating to pressure ulcer grading or urinary tract infections due to catheterization) creates a real challenge.
Thirdly, central data collections have often been utilized for contract management and sometimes in a punitive manner, using any information collected for judgement as opposed to service improvement. These factors have combined to create a general lack of trust in routine data, in terms of accuracy, validity and timeliness, as well as purpose. This has led to a negative feedback loop whereby lack of usage of data within the clinical context has meant a lack of action to address the issues with it, leading to further mistrust, and so on. These practical and cultural issues can only be overcome through the continued use of data at all levels of the healthcare system, with a supporting message from the highest level that the use of data for improvement is the most important aspect of measurement, and with acknowledgement of the shortfalls in each data source where appropriate.

Triangulating data: proof of added value

The examples provided here are based on the analysis of real data undertaken by the South of England Quality Observatory, which has been used by local NHS teams to inform discussion and subsequent decision-making. The analyses were derived from data sources that are readily available to NHS organizations (and in some cases publicly available). Naturally, individual teams of clinicians will have far more local intelligence to add, and data are always best reviewed by a multidisciplinary team with both information/measurement experts and clinical expertise, ensuring that all aspects, including data quality caveats and any points of clinical relevance, are understood thoroughly by all stakeholders.

Later in this paper we provide an overview of the four key sources of patient safety data. Many readers of this paper will no doubt be familiar with these datasets. What readers may not be aware of is the significant added value which can be achieved by the triangulation of safety information from a variety of sources. It is important to state from the beginning that data (even when triangulated) rarely provide answers to a question, but rather inform the basis of focused discussions within clinical teams which can result in positive action being taken. Clinicians should be aware of this and feel empowered to ask questions of data rather than feel dictated to by it.

In this first example, data are triangulated from three sources: Secondary Uses Service (SUS) data, data collected via the NHS Safety Thermometer and data from the performance monitoring data collection system, UNIFY 2. All the data pertain to venous thromboembolism (VTE). The SUS data are presented as a rate per 10 000 episodes; this serves to standardize the measure and allow comparisons with regional benchmarks and other organizations. Figure 1 shows the results for Trust A from April 2007 to November 2011 and indicates a rising trend in VTE cases, with a marked spike in the early part of the financial year 2011/2012, well above the rate shown for the region ('SEC rate').
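The standardization behind Figure 1 — a count of cases expressed per 10 000 episodes, with 95% confidence limits — can be sketched as follows. The numbers are invented, and the interval uses a simple normal approximation to the Poisson count, which may differ from the exact method used by the Quality Observatory.

```python
# Illustrative sketch: standardizing a monthly count of VTE cases to a
# rate per 10 000 episodes, with an approximate 95% confidence interval
# (normal approximation to the Poisson). Figures are invented.
import math

def rate_per_10k(cases, episodes):
    return cases / episodes * 10_000

def poisson_ci_per_10k(cases, episodes, z=1.96):
    # 95% interval on the count, then scaled to a rate per 10 000 episodes
    half_width = z * math.sqrt(cases)
    lower = max(0.0, cases - half_width) / episodes * 10_000
    upper = (cases + half_width) / episodes * 10_000
    return lower, upper

cases, episodes = 120, 15_000         # one month for a hypothetical trust
rate = rate_per_10k(cases, episodes)  # 80.0 per 10 000 episodes
lo, hi = poisson_ci_per_10k(cases, episodes)
print(f"rate {rate:.1f} per 10 000 (95% CI {lo:.1f}-{hi:.1f})")
```

Expressing each trust's count against its own episode volume is what makes a comparison with a regional benchmark meaningful: raw counts would simply reflect trust size.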
There is little else we can glean from this information; there is no indication as to why there has been a rise, and it may perhaps be dismissed as simply indicative of improved coding of VTE rather than a 'real' increasing trend in cases. Reviewing data for Trust A with regard to risk assessment (from the UNIFY 2 data collection system; Figure 2) indicates that there was an enormous increase in the proportion of patients risk assessed for VTE from September 2010 to April 2012, which appears to correlate with the period of greatest increase in cases per 10 000 at this organization. This seems counterintuitive; more patients are being risk assessed and yet the rate of cases is increasing. Is risk assessment therefore not accompanied by effective preventive treatment? Information from the NHS Safety Thermometer (Figure 3) appears to indicate that provision of VTE prophylaxis (the red line) has by no means followed the same trajectory as VTE risk assessment (the blue line). A number of conclusions could be drawn from this very brief analysis; perhaps the increased focus on VTE risk assessment has improved the identification of VTE cases, maybe the drop in VTE prophylaxis in September 2011 is indicative of a poor preventive regimen for VTE at Trust A, and these factors combined have served to drive the apparent increase in VTE cases. In reality, there are many more questions and pieces of information that need to be reviewed together with the relevant clinical expertise in order to draw firm conclusions, but this example serves to illustrate how, by combining measures from different sources, we can at least identify areas to probe further. As in any improvement programme, data rarely provide answers but are invaluable in pointing the way towards further investigation and next steps.

Figure 1 Rate of venous thromboembolism cases per 10 000 episodes against regional rates with 95% confidence limits for Trust A, April 2007–November 2011. Source: SUS, data prepared by South of England Quality Observatory

Figure 2 Venous thromboembolism risk assessment within 24 hours of admission as a proportion of admissions for Trust A, September 2010–April 2012. Source: UNIFY2, data prepared by South of England Quality Observatory

Figure 3 Venous thromboembolism (VTE) risk assessment and VTE prophylaxis as a proportion of patients surveyed in a monthly point prevalence survey for Trust A, September 2010–September 2011. Source: NHS Safety Thermometer, data prepared by South of England Quality Observatory. (A colour version of this figure is available online at http://cr.rsmjournals.com/)

In the first example provided, we were able to start to tell a story about the state of VTE in a particular organization from three data sources that were generally in agreement, at least as far as overall trends were concerned. There was no specific disparity between sources to contend with, and as such it is easy to forget that the underlying definitions and data collection mechanisms for each source were very different. In our second example, looking at falls, we are faced with a very different picture. The first source of data, from Trust B's clinical risk system (Figure 4), shows an improving picture: while the overall rate of falls per 10 000 admissions is well above the regional benchmark (red line), the rate of cases is falling over time, with a relatively static picture from September 2010 onwards. The second source, from the NHS Safety Thermometer (Figure 5), suggests a different scenario, where the pattern is far less certain, suggesting an increasing trend from September 2010 onwards, also above the benchmark (dotted line). Viewing just one of these would thus potentially give a misleading picture of falls care in Trust B.
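The mechanics of triangulation in the first example amount to aligning monthly series from the three sources on a common month key so they can be reviewed side by side. A minimal sketch, with invented values:

```python
# Illustrative sketch: aligning monthly series from three sources on a
# common month key for side-by-side review. All values are invented;
# real SUS/UNIFY 2/Safety Thermometer extracts differ in definitions and
# coverage, which is exactly why they are best viewed together.

sus_rate = {"2011-04": 75.2, "2011-05": 81.0, "2011-06": 79.4}       # VTE cases per 10 000 episodes (SUS)
risk_assessed = {"2011-04": 0.62, "2011-05": 0.78, "2011-06": 0.91}  # proportion risk assessed (UNIFY 2)
prophylaxis = {"2011-04": 0.70, "2011-05": 0.66, "2011-06": 0.58}    # proportion on prophylaxis (Safety Thermometer)

# Only months present in all three sources can be triangulated
months = sorted(set(sus_rate) & set(risk_assessed) & set(prophylaxis))
for m in months:
    print(f"{m}: {sus_rate[m]:5.1f} per 10k | "
          f"risk assessed {risk_assessed[m]:.0%} | "
          f"prophylaxis {prophylaxis[m]:.0%}")
```

The output is a discussion aid, not an answer: in keeping with the argument above, a diverging pair of lines is a prompt for clinically led hypothesis testing rather than a conclusion in itself.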


Figure 4 Total falls recorded per 10 000 admissions for Trust B, April 2009–March 2012. Source: local clinical risk system, data prepared by South of England Quality Observatory. (A colour version of this figure is available online at http://cr.rsmjournals.com/)

Figure 5 Total falls recorded as a proportion of patients surveyed for Trust B, September 2010–April 2012. Source: NHS Safety Thermometer, data prepared by South of England Quality Observatory

It may be tempting in this scenario to dismiss the NHS Safety Thermometer data as they are based (in this case at least) on a relatively small sample of patients, between 50 and 120 patients, surveyed on a specific day each month. The clinical risk system, by contrast, records all falls as they occur, at which point the data are entered onto the relevant information system. However, the differing methodologies here can actually be instructive in identifying the underlying issues that have resulted in a high incidence of falls (remember the decreasing and subsequently static trend remained high against the regional benchmark). The major difference between the data sources lies in the population each examines; in the case of Figure 4, all cases from all patients are shown, while in Figure 5 only a small subset of patients will be included. What characteristics does this subset have that may differentiate it from the general population? In this instance, almost 50% of the patients surveyed each month are on trauma and orthopaedic and rehabilitation wards, where the incidence of falls may be expected to be higher than, for example, on general medical wards, as patients are mobilized post surgery. This not only explains why the trends between the two sources differ but also suggests areas of focus for further improvement.

Specifics as to where improvements in falls prevention can be made can be gleaned from more process-driven audits such as the National Clinical Audit of Falls and Bone Health in Older People.1 In Trust B the results indicate that the provision of written falls prevention advice is poor. A selection of results for Trust B can be seen in Table 1. This may be viewed in tandem with information collected via, for example, the National Hip Fracture Database;2 again, this differs from the other data sources in that it focuses on a very specific condition, but it will nevertheless contain useful data to put alongside that which is already presented here as a means of indicating areas for focus.

Table 1 Selected results for Trust B from the National Clinical Audit of Falls and Bone Health in Older People

Audit area | National average (%) | Trust B (%)
1.2.12 Analgesia within 60 minutes of contact | 66 | 86
2.2.5 Assessment of cognitive function, including where indicated a delirium screen, performed within 72 hours of surgery | 29 | 71
2.2.6 Was an attempt made within 24 hours of surgery to mobilize the patient | 69 | 48
3.3.4 Documented lying and standing BP readings | Hips 38, Non-hips 16 | Hips 43, Non-hips 18
3.6.5 Did the patient attend an exercise programme within 12 weeks of the fall | Hips 44, Non-hips 20 | Hips 0, Non-hips 10
3.7.1 and 3.7.2 Was a home hazard assessment performed in the patient's own environment | Hips 37, Non-hips 11 | Hips 7, Non-hips 16
5.2 Is it documented within the nursing/medical/therapist notes that written falls prevention information has been given to the patient or carer | Hips 12, Non-hips 7 | Hips 0, Non-hips 0

In both the examples given here, the understanding of the harm in question has only been enhanced by the use of multiple data sources. These examples do not of course represent an exhaustive collection of the range of information available, but they do provide some indication of the benefits that can be realized through the triangulation of a variety of readily available data sources. In particular, the use of both outcome and process measures together has enabled the identification of areas for future improvement,
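The caution above about the Safety Thermometer's small monthly sample can be made concrete: with 50–120 patients surveyed, the margin of error on an observed harm proportion is wide enough that month-to-month movement may reflect sampling noise rather than real change. A rough sketch using the normal approximation to the binomial, with illustrative figures only:

```python
# Illustrative sketch: approximate 95% margin of error on a harm
# proportion estimated from a small monthly point prevalence sample,
# using the normal approximation to the binomial. Figures are invented.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% half-width for an observed proportion p from n patients."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.08  # e.g. 8% of surveyed patients recorded as having fallen
for n in (50, 120, 1000):
    print(f"n={n:4d}: 8% +/- {margin_of_error(p, n):.1%}")
```

At a sample of 50 the interval spans roughly zero to 15%, so an apparent rise of a few percentage points between surveys is well within noise; the full-population clinical risk system has no such sampling error, though it carries its own reporting biases.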


for example in the examination of VTE prophylaxis or of the indicators in the National Clinical Audit of Falls and Bone Health in Older People. If we limit our use of data in the patient safety arena to outcome measures alone, we are in danger of losing both breadth and depth of understanding, as well as failing to appreciate where local improvement efforts have been successful but, for whatever reasons, may not have influenced the higher-level outcomes we might have wished. It must also be acknowledged that there may be some resistance to using particular datasets, even in conjunction with others, whether because of simple personal preference, long-term experience or concerns with regard to data quality, specific methodologies or definitions employed. While such concerns should not be dismissed, the benefits that can be gained (as well as the incidental benefits in terms of improving data quality) generally outweigh the desires and preferences of individuals.

A developmental approach to this kind of analysis, in conjunction with a multidisciplinary review of the data, is often best for ensuring that stakeholders, especially the clinicians who are expected to deliver continuous improvement, understand the benefits of triangulation of data irrespective of its source or level of data quality. A vital component of this multidisciplinary team is an experienced information analyst who is familiar with the datasets and can provide vital expertise in the interpretation of the data. Generally speaking, the more sources that are included in the analysis, the greater the understanding of the issue at hand.

Figure 6 Flow chart of data triangulation method applied to the examples in the current paper

Figure 6 illustrates the process through which the examples above were analysed. The most important points to note from this process are its cyclical nature, in which, once actions for improvement have been formulated, the relevant measures should be continuously monitored and triangulated to ensure no learning is lost, and the point at which the team decides whether or not a clear message is evident from the data. Through a process of hypothesis testing, reasons for differences in the data will become apparent; it is crucial that clinicians lead this part of the analysis, as it is likely that the underlying reason for differences will already be known. The analyst's role at this stage is to be aware of the methodological issues that underpin the data.

Key sources of harm data

In this section we aim to provide a brief overview of the four key sources that are routinely used for assessment of safety from a national perspective. The earlier examples demonstrate how these datasets can be used in combination to inform discussion and subsequent action plans. Variances between the differing data sources and their underlying definitions are important to note, but need not be an insurmountable obstacle to utilizing the data effectively.

Advantages and disadvantages of the key sources of safety data

Source | Advantages | Disadvantages
Administrative data | Full coverage, clear standards | Variable clinical coding quality; not clinically defined or led
Case note review tools | In depth, clinically led, identified as 'gold standard' | Time consuming, impractical at scale
Point prevalence surveys | Rapid, clinically led | Surveys a specific point in time; requires operationalized definitions
Incident reporting | Full coverage, clear standards, in-depth analysis | Known to under-report; variable thresholds for reporting

Administrative data within the NHS, in this context, take the form of a national data flow whereby information from patient notes with regard to admission, provider and clinical details forms standard datasets pertaining to inpatient, outpatient and accident and emergency services, flowing from each provider to a central system known as the SUS. Thus, the dataset has true national coverage and a level of standardization that allows aggregation and disaggregation up and down the NHS hierarchy from both a provider and a commissioner perspective. Note that the mandatory submissions do not include information on community services (e.g. care delivered in a patient's home). As the name suggests, SUS is not designed specifically to hold data to be used in the delivery of care and is primarily used for contract setting and payment purposes. However, the quality of the clinical information held within SUS is a crucial factor in determining payment and thus can be utilized for other purposes such as those discussed here. SUS data flow into the Hospital Episode Statistics (HES) database. Many official statistics are derived from HES. Harm would be identified from these datasets using the clinical coding to identify specific harms (e.g. the use of the ICD-10 code L89 to identify a pressure ulcer), and thus the utility of administrative data in the NHS as a data source for harm is primarily a function of the quality and depth of the clinical coding associated with each admission/attendance record.

Case note review involves clinical teams looking in depth at a selection of patient notes in order to assess whether any harm occurred. This is a simple and effective way of identifying harm to patients, as medical notes constitute the primary record of a patient's contact with health services, and review by clinical teams ensures that their expertise is considered in the identification of patient harm incidents. Case note review methods can be enhanced by the use of tools that enable users to record their findings and use the data gathered to help inform improvement projects and act as a patient safety indicator. The IHI's Global Trigger Tool is one example of such a tool and is utilized in many acute care organizations. There is some evidence to suggest that case note review is considered a 'gold standard'3 in terms of identifying harm, as well as the avoidability of harm, which is far less likely to be identified through the other sources discussed here. Case note review also has the benefit of good clinical ownership. However, it is a very resource-intensive process that requires significant commitment of clinical time.

Point prevalence surveys are broader than case note review but also have the advantage of being clinician led. Patients on a ward, or seen by an organization/team, are surveyed on or during a specified time period for a range of patient harms. In the NHS Safety Thermometer example, clinical teams identify whether or not patients in the selected cohort have any of four specified harms and record details of these in the Safety Thermometer survey instrument. The approach is more scalable than case note review and can be applied in any healthcare setting, but can be subject to the same sample size and timing issues, dependent on the purpose of the data collection. In addition, a survey that can be applied speedily in a clinical setting naturally requires clear and simple operationalized measures that will not always be able to support the collection of 'in depth' data.

Finally, adverse incident reporting is a method of data collection through which provider organizations across all sectors of the health service report identified untoward incidents. Generally, this reporting is carried out via provider risk systems and is fed into the NRLS, which enables collation of data at the regional and national level. There has long been a feeling that these systems under-report harm, for a range of reasons, some of which have been discussed above. Empirical evidence for this comes from Vincent et al. (2001),4 who estimate the rate of adverse events in hospital to be approximately 10% of admissions; this stands against the NRLS rate of around 5%.5 However, the focus on patient safety and the move away from a 'blame culture' have led to reporting through this mechanism becoming more reliable in recent years, and NRLS data are used routinely in assessment of provider performance and as the basis for national reports on patient safety. However, thresholds for reporting incidents can be variable between, and occasionally within, organizations.
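Identifying harm from coded administrative data, as described above, amounts to scanning each record's diagnosis codes for harm-indicating ICD-10 codes. A minimal sketch follows; the record layout is invented, L89 (pressure ulcer) is the code cited in the text, and the other code-to-harm mappings are illustrative rather than a validated code set.

```python
# Illustrative sketch: flagging harm in administrative (SUS/HES-style)
# records by scanning coded diagnoses for harm-indicating ICD-10 codes.
# L89 (pressure ulcer) is cited in the text; the record layout and the
# other code mappings are invented for illustration.

HARM_CODES = {
    "L89": "pressure ulcer",                    # cited in the text
    "I80": "thrombosis/phlebitis (illustrative)",
    "W19": "unspecified fall (illustrative)",
}

episodes = [
    {"id": 1, "diagnoses": ["I21.0", "L89.1"]},  # secondary diagnosis of pressure ulcer
    {"id": 2, "diagnoses": ["J18.9"]},           # no harm codes recorded
    {"id": 3, "diagnoses": ["S72.0", "W19"]},    # fall recorded alongside fracture
]

def harms_in_episode(episode):
    """Return harm labels for any diagnosis whose 3-character root is a harm code."""
    return sorted({HARM_CODES[d[:3]] for d in episode["diagnoses"] if d[:3] in HARM_CODES})

for e in episodes:
    print(e["id"], harms_in_episode(e))
```

The sketch makes the paper's point about coding quality directly visible: episode 2 yields no harm flags, which could equally mean no harm occurred or that a harm went uncoded, so the measure is only as good as the coding depth behind it.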

Conclusions and recommendations

This paper has set out to describe the advantages of using multiple data sources to fully inform and assess improvement programmes within the area of patient safety. Based on this analysis there are several recommendations that can be made:

† Always use multiple data sources; they provide a more holistic and robust picture when viewed together, even if their underlying definitions and methodologies are different.
† Where possible, use a range of indicators that provide outcome, process and balancing measures; this will not only provide a richer picture but ensure that focus on one area does not have an adverse effect on another part of the system.
† Data are best used as a starting point for discussion and review of an issue or in planning actions for improvement.
† Continuously promote and improve data quality, as this is generally the major argument against the use of many common data sources; this is best done through regularly reviewing and using the data available and ensuring that any caveats are clearly identified.
† Promote the use of data throughout all patient safety projects and other clinical areas; build a community of measurement where using information effectively is part of day-to-day clinical business, with ownership at all levels of the clinical team.
† In-depth analysis 'in the round' with a multidisciplinary team of clinical and informatics expertise is the best approach to reaching conclusions based on data, agreeing improvement actions and ensuring that all stakeholders are fully engaged with all data sources, irrespective of their personal preferences.
† Include service improvement or analytical expertise from the beginning to help identify data sources and carry out the analysis best suited to the current context. In the UK we are very fortunate to have multiple datasets available at a national level; there is rarely a need to 'reinvent the wheel'.

References and notes

1 Royal College of Physicians. National Audit of the Organisation of Services for Falls and Bone Health for Older People. London: RCP, 2011
2 The National Hip Fracture Database is a joint venture of the British Geriatrics Society and the British Orthopaedic Association, designed to facilitate improvements in the quality and cost effectiveness of hip fracture care. It allows care to be audited against the six evidence-based standards set out in the BOA/BGS Blue Book on the care of patients with fragility fracture, and enables local health economies to benchmark their performance in hip fracture care against national data. See www.nhfd.co.uk, last checked 31 May 2012
3 Michel P, Quenon JL, de Sarasqueta, Scemama O. Comparison of three methods for estimating rates of adverse events and rates of preventable adverse events in acute care hospitals. BMJ 2004;328:199
4 Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. BMJ 2001;322:517–9
5 National Reporting and Learning Service. Building a memory: preventing harm, reducing risks and improving patient safety. London: National Patient Safety Agency, 2005
