For those involved in neonatal care, the concept of risk adjustment, in the informal sense, is part of everyday life. We regularly talk to parents about the risk of death for their baby if he or she is born at a particular gestation. Similarly, we are aware that the risk of death, as we perceive it, can be weighted by other events, such as being born with particularly low Apgar scores. The disease severity scoring systems that exist in neonatal care have developed through a process that formalises the assessment of the risks attached to a particular baby. Archives of Disease in Childhood has previously published a review of how such scores are derived, with a commentary on some of the most widely used systems.1
The use of disease severity scores arose first in other specialties, primarily as a means of allowing comparison between heterogeneous groups of patients. For example, how can you compare the efficiency of two adult orthopaedic units if the length of stay in hospital A is significantly longer than in hospital B, but the average age of the patients is also significantly greater in hospital A? A disease severity score allows such variation in case mix to be taken into account, so that the two units can be compared fairly with baseline differences between their patients removed. In neonatal care, survival rate was chosen as the most important outcome for comparison, and hence most scores were designed to adjust for risk of death, particularly in preterm babies.
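The arithmetic behind this kind of adjustment can be made concrete. The sketch below is purely illustrative and does not represent any published neonatal score: the risk model, its coefficients, and the patient data are all invented. It shows one common approach, indirect standardisation, in which each baby is assigned a predicted risk of death from baseline factors such as gestation and Apgar score, the predicted risks are summed to give the expected number of deaths for each unit, and the observed deaths are then compared against that expectation.

```python
# Illustrative sketch only: the risk model and data are hypothetical,
# not a published neonatal score. It shows the arithmetic of indirect
# standardisation: each baby gets a predicted risk of death from
# baseline factors, expected deaths are summed per unit, and observed
# deaths are compared against that expectation.

import math

def predicted_risk(gestation_weeks: float, apgar_5min: int) -> float:
    """Hypothetical logistic risk model: predicted risk of death falls
    with increasing gestation and rises with a low 5-minute Apgar score.
    Coefficients are invented for illustration, not fitted to data."""
    logit = 12.0 - 0.45 * gestation_weeks - 0.30 * apgar_5min
    return 1.0 / (1.0 + math.exp(-logit))

def standardised_mortality_ratio(babies: list[dict]) -> float:
    """Observed deaths divided by the deaths expected from case mix.
    A ratio near 1 suggests outcomes in line with the risk model."""
    observed = sum(b["died"] for b in babies)
    expected = sum(predicted_risk(b["gestation"], b["apgar"]) for b in babies)
    return observed / expected

# Unit A admits more extremely preterm babies than unit B, so its raw
# mortality is higher even if the quality of its care is comparable.
unit_a = [
    {"gestation": 24, "apgar": 3, "died": 1},
    {"gestation": 25, "apgar": 5, "died": 0},
    {"gestation": 27, "apgar": 7, "died": 0},
    {"gestation": 30, "apgar": 8, "died": 0},
]
unit_b = [
    {"gestation": 32, "apgar": 8, "died": 0},
    {"gestation": 34, "apgar": 9, "died": 0},
    {"gestation": 35, "apgar": 9, "died": 0},
    {"gestation": 36, "apgar": 9, "died": 0},
]

print(f"Unit A SMR: {standardised_mortality_ratio(unit_a):.2f}")
print(f"Unit B SMR: {standardised_mortality_ratio(unit_b):.2f}")
```

In this invented example, unit A has a far higher raw mortality than unit B because it admits more extremely preterm babies, yet its ratio of observed to expected deaths is close to 1: with case mix removed, its outcomes are in line with the model. In practice such ratios are unstable when the expected number of deaths is small, as it is for unit B here.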
Those who have developed the scores have made different decisions about the relative importance of predictive accuracy versus the complexity of the score. For example, is it better to have a score based on just five factors that accounts for 90% of the patient variation, or …
Footnotes
Competing interests: None.