
Neonatal quality measures: time to show developmental progress?
  1. Lisa Barker,
  2. David Field
  1. Department of Health Sciences, University of Leicester, Leicester, UK
  1. Correspondence to Dr Lisa Barker, Department of Health Sciences, 22-28 Princess Road West, University of Leicester, Leicester LE1 6TP, UK; Lb278@le.ac.uk

Abstract

Healthcare improvement is synonymous with quality measurement: assessing how well care is delivered, and the results achieved, in comparison with a desired standard or standards. Quality measurement has become part of routine healthcare in the developed world as a means of detecting inadequate performance which, if not dealt with promptly, can have far-reaching consequences, as seen in recent well publicised UK examples. The growth in the use of quality measurement has led to increasing attention on the processes and measures employed, in particular how measures are chosen, reported and used. This has included consideration of the attributes that make a good quality measure, the use of testing protocols to ensure that any potential measure is fit for purpose, and the use of summative reporting frameworks. All of these tools are already used in some specialties outside neonatal care. This article explores this wider experience and considers how the lessons learnt might helpfully be applied to neonatal care.

  • Neonatal
  • Monitoring


Introduction

The increasing focus on care quality in all specialties across the developed world has resulted both from a desire to achieve the best outcomes for our patients and from the need to ensure that ‘value for money’ healthcare does not lead to care that is in any way suboptimal. In the UK and USA alone, there are at least eight bodies focused heavily on this task.1–8 Such processes remain at a relatively early stage in neonatal care.

Healthcare improvement is often pursued through what have become known as ‘quality improvement collaboratives’.9 It is perhaps easiest to understand these at a local level, but organisations can participate in such collaboratives at a national and even international level. Their focus is on improving a particular aspect of care, and this typically involves a number of steps: selection of a topic or problem, setting a target or benchmark, collecting and managing data, dissemination of results, and then improvement of the relevant processes of care, that is, the plan–do–study–act cycle. At a local level, this process may take the form of a quality improvement initiative such as a neonatal unit aiming to reduce its infection rate. When multiple organisations are involved, for example at a national level, the initial steps can be performed jointly and methods of good practice shared, but the responsibility for interventions leading to improvement still falls on the local unit.

The approach to quality measurement in neonatal care around the world varies widely, with some organisations, such as the Vermont Oxford Network,10 doing much to engage individual hospitals in thinking about care quality. However, for those responsible for healthcare provision, such as the National Health Service in the UK, the challenges and responsibilities are somewhat wider and include monitoring healthcare quality and standards to ensure that performance is adequate.

Current UK neonatal quality measurement

The most established neonatal quality measures in the UK are those of the National Neonatal Audit Programme,11 which began in 2007. The quality measures have remained largely the same since their introduction and were initially selected by a small group of neonatal experts with advice from a lay organisation. There was no pilot phase before their wider introduction; some of the items chosen have proved difficult to collect and, as a result, ‘missing data’ have, to date, made the overall results difficult to interpret.

More recently, the Care Quality Commission (CQC)2 has introduced a set of around 150 hospital indicators as part of ‘hospital intelligent monitoring’. Two indicators refer to neonatal care. The first is a composite indicator consisting of in-hospital standardised mortality for patients admitted as an emergency with a primary diagnosis from an extensive list, combined with perinatal mortality attributed to the Trust at which the birth took place. The second measures neonatal non-elective readmissions within 28 days of delivery. The readmission can be to any acute Trust, but is attributed to the Trust where the birth took place; readmissions with a length of stay of less than 1 day are excluded. Both of these indicators have a number of problems that limit their reliability. Although the CQC quality measure for in-hospital mortality is standardised for gender and level of neonatal unit (a simple sketch of this kind of indirect standardisation follows the list below), this adjustment is inadequate for addressing a number of biases specific to neonatal care:

  • referral bias: where an individual Trust has a particularly large proportion of high risk cases transferred in utero

  • inclusion bias: where individual Trusts have an ethos of being reluctant to offer active care to the most immature babies (<24 weeks of gestation)

  • case mix: where individual Trusts take on pregnancies with an extremely high risk of adverse outcome (eg, major congenital anomalies).
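To make the standardisation concrete, the sketch below illustrates, under simplified and entirely hypothetical assumptions, how an indirectly standardised mortality ratio (observed deaths divided by the deaths expected for a unit's case mix) might be calculated. The stratum definitions, rates and counts are invented for illustration and are not the CQC's actual methodology.

```python
# Illustrative sketch of indirect standardisation (hypothetical data;
# not the CQC's actual method).

# National baseline mortality rates by stratum. The strata chosen determine
# which of the biases listed above the adjustment can actually remove.
national_rates = {
    ("<28 weeks", "inborn"): 0.20,
    ("<28 weeks", "transferred in"): 0.25,
    ("28-36 weeks", "inborn"): 0.02,
    ("28-36 weeks", "transferred in"): 0.04,
    ("term", "inborn"): 0.003,
    ("term", "transferred in"): 0.01,
}

# One tertiary unit's case mix: admissions per stratum, plus observed deaths.
unit_casemix = {
    ("<28 weeks", "inborn"): 60,
    ("<28 weeks", "transferred in"): 40,
    ("28-36 weeks", "inborn"): 300,
    ("28-36 weeks", "transferred in"): 100,
    ("term", "inborn"): 500,
    ("term", "transferred in"): 50,
}
observed_deaths = 31

# Deaths expected if this unit experienced the national rate in every stratum.
expected_deaths = sum(
    n * national_rates[stratum] for stratum, n in unit_casemix.items()
)

smr = observed_deaths / expected_deaths
print(f"Expected deaths: {expected_deaths:.1f}, standardised ratio: {smr:.2f}")
```

If the strata capture only gender and level of neonatal unit, as described above, the expected count for a unit receiving a large proportion of high-risk in-utero transfers is understated and its standardised ratio correspondingly inflated, which is precisely the referral and case-mix bias listed.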

Developing quality indicators for neonatal care that are fit for purpose

Initial considerations

The choice of measure must be appropriate for its intended purpose which might include: audit, governance, benchmarking and productivity/cost effectiveness.12 Healthcare quality measures are commonly based on assessing a structure, process or outcome.13

  • Structure measures assess attributes of an organisation or its staff, for example, nursing establishment. The attraction of structure measures is that they are collected for routine organisational purposes and are thus readily available. Structure measures ‘represent necessary conditions for the delivery of quality healthcare’.14 However, they cannot be used in isolation to make inferences about quality of care.

  • Process measures assess activities that are delivered as part of care. They are often surrogates for a desired endpoint: they can be measured in a timely manner, creating an opportunity to intervene and improve health outcomes, but careful consideration must be given to the strength of association between the process and the health outcome. When choosing process measures, setting a clinically relevant benchmark can be difficult unless it is clear that the standard against which the process is to be measured should be either 100% or 0%.

  • Outcomes are generally considered the most clinically relevant as they focus on how a ‘patient feels, functions or survives’.15 However, attributing an outcome to a particular event or approach to care, especially when it is a long-term outcome, can be difficult. Outcome measures may require case-mix or risk adjustment before comparisons can be made fairly with other apparently similar services.16

In neonatal care, where the gold standard outcome is generally seen as ‘survival without significant disability’, measurement is by definition much delayed, and hence it can be difficult to establish a clear link between clinical care and the outcome of interest. Other ‘intermediate’ measures, typically process measures, are therefore important in filling the potential gap between care delivery and outcome. Table 1 shows examples of existing or potential quality indicators relevant to neonatal care in terms of structure, process and outcome. Quality measures based on a process or outcome are generally accepted to be the most useful.

Table 1

Examples of quality indicators in terms of a structure, process or outcome

Selecting an aspect of care for a quality measure

A number of methods have been suggested to guide the choice of topic or aspect of care that should be monitored for quality. These include: the presence of a national guideline, the existence of a care gap, public health relevance, economic impact and the impact on quality of life. Evidence as to which method is best is currently lacking. Having identified an aspect of care, there is then a danger that quality indicators are selected on the basis of what is available and practical (‘measurable’) rather than what is meaningful. Quality measures should be evidence based; however, for many potential measures the evidence may be open to interpretation or may simply not exist. A number of medical specialties have published reports describing how they chose their quality measures, most commonly by selecting aspects of care viewed as best practice in guidelines. One systematic review of the methods used to choose quality measures17 covered 48 studies published in English, French and German up to 2010. No studies compared different methodological approaches to generating quality measures; however, 35 of the 48 studies used expert clinical opinion, through a consensus methodology and/or an expert panel. This approach can be useful when a quality indicator is felt to be clinically important but an evidence base is not available.

The choice and selection of quality measures for neonatal care have received little attention. The California Perinatal Quality Care Collaborative (CPQCC) used clinical opinion to choose eight neonatal quality measures from a larger set of 28 routinely reported process and outcome measures. The eight preferred quality measures were: antenatal corticosteroid use, hypothermia (<36°C) during the first hour of life, non-surgically induced pneumothorax, nosocomial infection, need for oxygen therapy or mechanical ventilation at 36 weeks gestational age, discharge on any human breast milk, mortality in the neonatal intensive care unit (NICU) during the birth hospitalisation, and growth velocity. The measures were intended for use in very low birthweight babies only, with the aim of combining them into a composite indicator.18, 19

The number of potential quality measures for neonatal care, as in many specialties, is vast, and clinical input is required to select the most clinically important, both to ensure the measures chosen are fair and legitimate and to secure ‘clinician buy-in’. Clinician buy-in is an essential element in driving data collection of adequate quality and completeness, as well as in facilitating acceptance of any results and recommendations that arise from the process. However, clinical input alone can still lead to the selection of ‘convenient’ rather than useful quality measures. A possible solution is the use of indicator testing protocols to improve objectivity and to help minimise the unintended consequences attached to a particular measure (see below).

Although the systematic review of methods used to choose quality measures showed limited patient involvement in the quality measure process,20 there is an increasing and welcome tendency to include measures that directly reflect the aspirations of patients and families when assessing care quality. While such measures are not always aligned with clinical priorities, they should be seen as of no less importance, and within neonatal care in the UK such involvement already exists.11

Creating robust standardised quality measures

Characteristics of a robust quality measure

A number of organisations12, 14, 21 have reported on the attributes that a quality measure should have to be considered fit for purpose. These are shown in figure 1. The Organisation for Economic Co-operation and Development Health Care Quality Indicators Project14 examined recommendations regarding the optimal characteristics of a quality indicator from the national documents of four countries (UK, Canada, Australia and the USA) as well as from The Commonwealth Fund, World Health Organisation (WHO) and the European Core Health Indicators (figure 2). These two sets of recommendations are not identical but show significant overlap.

Figure 1

Attributes of a quality measure.12, 14, 21

Figure 2

Technical quality of healthcare indicators: concepts found in seven national and international documents on performance/quality indicators in selected member countries, from the Health Care Quality Indicators Project.14

Creating robust quality measures that will help drive improvement in care and outcomes requires consideration not only of the measures themselves but of the measurement system as a whole, including data collection and interpretation as well as communication to professionals and the public. Particular attention should be paid to the system by which the data will be collected, so that data quality and completeness do not undermine the credibility of the process.

The use of testing protocols

Building on the concept of quality measures that are fit for purpose, testing protocols set out the attributes a quality measure should have (figure 1), against which a proposed quality indicator can be assessed for suitability.22–24 Overall, the frameworks consider three key areas:

  • Relevance: level of evidence, evidence of a care gap

  • Scientific soundness: reliability, variability and risk adjustment

  • Implementation: barriers to implementation including data entry, data collection and unintended consequences (see below).

The protocols combine an assessment of basic feasibility with clinical opinion to evaluate potential quality indicators. The use of quality measure testing protocols to generate useful and robust quality measures has been tried, but as yet only in a limited number of areas. The Quality and Outcomes Framework (QOF)25 in UK general practice adopted a quality measure testing protocol after the initial introduction of quality measures that had many unforeseen problems; the testing protocol resulted in some measures being amended and others being abandoned altogether.22, 26 As yet, such a framework appears not to have been applied to neonatal or paediatric quality measures.
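As a purely illustrative sketch of how such a protocol might be operationalised, a candidate indicator could be screened against the three key areas above before going to an expert panel. The specific criteria, threshold logic and example indicator are invented, not those of any published protocol.

```python
# Illustrative screening of a candidate indicator against the three key areas
# of a testing protocol (criteria and example are invented).
from dataclasses import dataclass, field

@dataclass
class CandidateIndicator:
    name: str
    # Relevance
    evidence_level: str            # eg "systematic review", "RCT", "expert opinion"
    care_gap_demonstrated: bool
    # Scientific soundness
    reliable_definition: bool      # unambiguous numerator and denominator
    risk_adjustment_feasible: bool
    # Implementation
    data_routinely_collected: bool
    unintended_consequences: list = field(default_factory=list)

def screen(ind: CandidateIndicator) -> dict:
    """Return a simple per-domain verdict for discussion by an expert panel."""
    return {
        "relevance": ind.evidence_level != "expert opinion" or ind.care_gap_demonstrated,
        "scientific soundness": ind.reliable_definition and ind.risk_adjustment_feasible,
        "implementation": ind.data_routinely_collected and not ind.unintended_consequences,
    }

candidate = CandidateIndicator(
    name="admission temperature >= 36C within 1 hour",
    evidence_level="systematic review",
    care_gap_demonstrated=True,
    reliable_definition=True,
    risk_adjustment_feasible=True,
    data_routinely_collected=True,
    unintended_consequences=["possible fixation on a single time point"],
)
print(screen(candidate))  # flags the implementation domain for panel review
```

The point is not the particular criteria, which real protocols22–24 define in much more detail, but that each candidate measure is examined in a structured, documented way rather than adopted simply because its data happen to be available.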

Using a testing protocol can help to evaluate individual quality indicators and adds transparency to the process. However, any chosen set of measures must be dynamic, with regular review and change, including revisiting previous measures, in order to prevent staleness and preserve the responsiveness of the quality measurement system.

Managing multiple measures

Healthcare providers face an increasing number of measures, dashboards and so on. Multiple quality measures generate a large amount of data to process and interpret, making meaningful judgments about clinical care difficult. The principal aims of quality measurement (to improve healthcare, allow comparisons with other healthcare providers and analyse change over time) can be clouded by the volume of data and how it is presented. Most quality indicators are treated as linear and independent, for example, in the UK National Neonatal Audit Programme. Profit et al27 drew attention to the ‘inadequacy of drawing conclusions regarding institutional performance based on a single or limited set of measures of quality of care’. In a study that compared neonatal unit performance across eight quality measures selected for use with very low birthweight infants, there was considerable variation within and across each measure. Performance on one measure seemed to have little predictive value for another, despite correlations that would be expected from the literature: for example, units that succeeded in following processes to avoid healthcare-associated infection might be expected also to demonstrate better growth velocity and lower mortality, but this was not seen.27

An alternative to the linear approach is to combine multiple quality measures to form a composite indicator score or a summary index.28 A summary index divides the number of times a target (or targets) is met by the number of times it could have been met, an approach used in primary care.29 A composite score combines anything from a few to many indicators, which can be weighted differently and then aggregated.2 There are a number of advantages and disadvantages to the use of summary indexes (table 2), not least the choice of contributing measures and the weight each contributes to the score. There have been some attempts to standardise the development of composite indicators,20, 28 but any focus on neonatal or perinatal care has been limited. The use of frameworks in the form of summary indexes or composite scores may improve clinical interpretation and communication.

Table 2

Advantages and disadvantages of a composite or summary index score20, 28
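As a rough illustration of the arithmetic involved, the sketch below contrasts the two approaches described above: a summary index (targets met divided by targets applicable) and a weighted composite of individual indicator scores. The indicator names, weights and values are invented for illustration; any real composite would require the kind of standardised development work cited above.

```python
# Illustrative sketch: summary index versus weighted composite (invented data).

# Summary index: achieved targets divided by applicable targets,
# as used in primary care.
targets = {
    "antenatal steroids given": (92, 100),       # (times met, times applicable)
    "admission temperature >= 36 C": (85, 100),
    "any breast milk at discharge": (60, 100),
}
summary_index = (
    sum(met for met, _ in targets.values())
    / sum(applicable for _, applicable in targets.values())
)

# Weighted composite: each indicator is scored 0-1, then aggregated using
# weights chosen (here arbitrarily) to reflect perceived clinical importance.
indicator_scores = {"mortality": 0.9, "nosocomial infection": 0.8, "growth velocity": 0.6}
weights = {"mortality": 0.5, "nosocomial infection": 0.3, "growth velocity": 0.2}
composite_score = sum(indicator_scores[k] * weights[k] for k in indicator_scores)

print(f"Summary index: {summary_index:.2f}")         # about 0.79
print(f"Weighted composite: {composite_score:.2f}")  # about 0.81
```

The example also illustrates a commonly cited drawback of aggregation: the weights are a judgment, and good performance on one indicator can mask poor performance on another within a single number.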

Difficulties with quality measures in healthcare

Unintended consequences

Since the introduction of quality measures, there have been international concerns about potential unintended consequences.

  • Fixation: fixation on the issues being measured can lead to less holistic care as other areas become neglected. The pressure to focus on specific aspects of care is even greater when measures are linked to financial incentives or penalties.

  • Gaming: individuals or organisations ‘consciously find ways to achieve a required level of performance against a given measure, or appear to achieve it, without actually achieving the changes that the measure was designed to achieve’.12 This has commonly been cited in relation to managing the UK 4-hour target in emergency departments.

  • Target fatigue: the increased use of quality measures runs the risk of creating ‘target fatigue’, weakening their potential effectiveness.12 A dynamic set of measures that is responsive to improvement, with quality indicators that have been achieved being replaced by new ones, may help to reduce target fatigue.

Data and interpretation of measures

There are a number of ways in which data can be collected for quality measures. The format, accuracy and validity of a dataset taken from a database must be considered when using databases in research and when using the data as part of quality assessment. Using datasets that already exist is often the cheapest approach, but it may not yield the most reliable results, as quality measurement was not their intended use. Specially collected datasets have the advantage of being focused and may include methods for checking the reliability of the data, but in comparison they are often more time consuming, more expensive and less sustainable.

The introduction of a quality measure should occur in parallel with work to identify how the data will be used. Important questions in this process should include:

  • How will outliers be identified, that is, what level of deviation from the accepted level of performance will be considered significant?

  • When monitoring for relatively rare outcomes (creating high levels of uncertainty in relation to point estimates), at what stage can an estimated rate of outcome be considered abnormal?30

  • When comparing hospitals, how large must a difference be before it can be considered significant, that is, when does the true difference extend beyond that which could occur by chance?31

  • If appropriate, how will case mix variation be dealt with?

  • How will units be supported if concerns are identified?

There are no easy or perfect solutions to these questions, but the choices made have a significant impact, especially where results are reported publicly; one commonly used approach to the first three questions, the funnel plot with control limits, is sketched below. Transparency about the chosen approach should accompany any published results.
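As a minimal illustration (not a description of any programme's actual methodology), the sketch below flags units whose rates fall outside approximate 3-sigma binomial control limits around a benchmark rate. The limits widen as the number of cases falls, so small units with rare outcomes are not flagged on the basis of chance variation alone. The benchmark rate, unit names and counts are invented.

```python
# Illustrative funnel-plot-style outlier check (approximate; invented data).
import math

benchmark_rate = 0.05   # overall rate across all units, eg neonatal readmission
units = {"Unit A": (12, 150), "Unit B": (4, 40), "Unit C": (30, 320)}  # (events, cases)

def control_limits(n, p=benchmark_rate, z=3.0):
    """Approximate 3-sigma binomial limits for a unit treating n cases."""
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

for name, (events, n) in units.items():
    rate = events / n
    lower, upper = control_limits(n)
    status = "within limits" if lower <= rate <= upper else "outside limits"
    print(f"{name}: rate {rate:.3f}, limits ({lower:.3f}-{upper:.3f}), {status}")
```

In this example Unit B's raw rate is higher than Unit C's, yet only Unit C falls outside its limits, because its larger caseload makes chance a less plausible explanation. The normal approximation used here is crude for rare events, where exact binomial or Poisson limits would be preferable.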

Risk adjustment

Risk adjustment is a major issue. The organisation of neonatal care in much of the developed world, with centralisation of the sickest babies in a relatively small number of centres, adds a further complication to the interpretation of data about many aspects of care. Simply attributing care back to the unit of birth is controversial and potentially misleading, and special measures are needed to determine whether apparent differences in outcome are real or the result of referral practices.32 Other factors also complicate the situation, as demographic and socioeconomic influences within a particular population (affecting prematurity rates) can have major effects on the risk profile of individual centres.

Improvement in care versus judgment

The UK's CQC describes the measures it has introduced not as creating a ranking of hospitals but as a system on which to decide ‘when, where and what to inspect’. It is hard to imagine, however, that in the public arena such banding will be seen as anything other than a league table, with hospitals at the top or the bottom. Quality measures placed in the public arena for patient choice, and linked to funding and commissioning, blur the line between judgment and quality improvement.12 Using any quality measure for multiple purposes must therefore be carefully considered.

Considerations for the future

The use of healthcare quality measures is here to stay and is likely to continue to grow. Quality measures can help improve the quality of care patients receive and improve outcomes. Before these potential benefits can be fully realised for neonatal care in the UK, however, there appears to be scope for both greater rigour and greater engagement in the process.

References

Footnotes

  • Contributors The idea for this review article came from DF. LB wrote the first draft. Both LB and DF have contributed to further drafts and editing of the review article. Each author, as listed in the manuscript, has seen and approved the submission of this version of the manuscript and takes full responsibility for the manuscript.

  • Competing interests LB is a trainee representative on the National Neonatal Audit Programme11 project board.

  • Provenance and peer review Commissioned; externally peer reviewed.