It was an odd happenstance that Dr Foster – a gentleman best known for his rain-ruined nursery-rhyme expedition to Gloucester – should have his proverbial 15 minutes of contemporary news fame in the middle of last week’s heat wave.
The unfortunate doctor, you may recall, went to Gloucester in a shower of rain. Ignoring the truly excruciating rhyme ahead, he stepped in a puddle, right up to his middle, and never went there again.
The news to which this doggerel relates is, of course, allegedly failing hospital trusts, and specifically Hospital Standardised Mortality Ratios (HSMRs) – the widely used measures of hospital death rates developed, publicised, defended and refined by today’s equally fictional Dr Foster.
The doctor and his HSMRs took rather a beating in last week’s report by Prof Sir Bruce Keogh, National Medical Director for the NHS in England: “However tempting it may be, it is clinically meaningless and academically reckless to use such statistical measures to quantify actual numbers of avoidable deaths” (p.5).
Strong words, and justified – because this is precisely what had happened the previous weekend, with numerous media claims that the report would be about 13,000 ‘needless deaths’ at the 14 NHS hospitals selected, because of their high mortality rates, for special investigation. It wasn’t. The report contained no such numbers, and instead provided detailed, focused recommendations to help remedy the hospitals’ serious but not irremediable problems.
Sir Bruce’s report had been calculatedly hijacked, but whom he held chiefly responsible – Ministers and their advisers, the media, even some collusive involvement of Dr F himself – was unclear. The outcome, sadly, was unmistakably clear. Health Secretary Jeremy Hunt’s parliamentary presentation of the report became a shameful partisan blame-fest – so depressing for so important a topic that, as a completely non-expert observer but low-key Dr Foster fan, I was moved to attack my keyboard.
I remember well my own first encounter with Dr Foster in January 2001. I was teaching a course here at Birmingham University on policy research methods, and in, of all places, a two-part Sunday Times supplement, there appeared some near-perfect raw material for a student assignment: the first ever listing of standardised ‘death rates’ (HSMRs) for England’s or any other nation’s hospitals.
So what, my students discussed, were these ‘metrics’, and what did they really measure? What did they include, and exclude? Who’d collected and analysed the data? How did they relate to other possible measures of a hospital’s care and performance? What was the range, and where were the highest and lowest ratios – that ‘Where?’ question providing an additional reason for my recalling that first Dr Foster’s Good Hospital Guide.
A hospital’s Standardised Mortality Ratio is usually presented as a percentage: the recorded deaths in hospital from most (but not all) diseases, as a percentage of the number that would normally be expected, after taking account of, or standardising for, a wide range of factors concerning the patients and the nature and severity of their illnesses.
HSMRs’ other key feature, consistently misunderstood, is that they measure hospitals not against some objective clinical standard, but against each other. An HSMR of 100 is the national average; below 100 means fewer deaths than statistically expected; over 100 means more. Not needless, preventable or avoidable deaths, not deaths from incompetent care, simply more than statistically expected. Even if all hospitals were good, half would still have ratios of 100+ and look ‘bad’ – and vice versa.
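Stripped of the statistical machinery, the arithmetic behind the ratio is simple. A minimal sketch in Python, with invented figures purely for illustration:

```python
def hsmr(observed_deaths: int, expected_deaths: float) -> float:
    """Hospital Standardised Mortality Ratio: recorded in-hospital
    deaths as a percentage of those statistically expected, where
    100 represents the national average."""
    return 100 * observed_deaths / expected_deaths

# Invented figures, purely for illustration:
print(hsmr(450, 500))  # 90.0  -> fewer deaths than expected
print(hsmr(550, 500))  # 110.0 -> more than expected: a warning
                       #          sign, not proof of poor care
```

The point the ratio cannot make for itself: a value over 100 says only ‘more than statistically expected’, nothing about why.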
The Dr Foster Guides and website emphasise these points scrupulously. A high HSMR should be treated as a warning: a risk, but not proof, of failings in care, and reason for further investigation, with attention focusing mainly on ‘outliers’ – those outside, especially if repeatedly outside, the normal range. University Hospitals Birmingham NHS Foundation Trust’s HSMRs, though consistently over 100, are thus less immediately concerning than the 130+ ratios of Basildon & Thurrock (2005-09) and Mid Staffordshire (2005-07).
However, as Sir Bruce Keogh noted, in the dash for political advantage or media headlines, the temptation to elbow aside these literal health warnings is powerful indeed. So, although those first hospital ratios weren’t listed in league table format, they were quickly sorted into one and the range calculated.
It was wide and, although all mortality rates have fallen significantly in the past decade – and, of course, the HSMR baseline adjusted accordingly – it remains so today. Then, University College London Hospitals had the lowest ratio of 68, and most of the low ratios were in London and the South-East. But two of the three highest were on our proverbial doorstep in the West Midlands: Walsall Hospitals Trust with 119 and Sandwell with 117.
My recollection is that these hospitals and trusts, not to mention their patients, had little advance notification of their figures. Certainly, there were widespread protests – by those assuming that, if this was a ‘Good Hospital Guide’, high-ratio hospitals must be ‘bad’. However, despite their susceptibility to such misinterpretation, HSMRs were here to stay. Which raised the obvious question: who was this pioneering but troublesome Dr Foster?
As already indicated, there is no actual Dr Foster. The name was the whimsical invention of two journalists involved in producing the 2001 Sunday Times supplements. But, if there were a real doctor, the only possible candidate is someone you may well have seen recently on your TV screens, Professor Sir Brian Jarman.
A one-time GP who by the 1990s had become a distinguished Imperial College academic, he developed the ‘Jarman Index’ – a measure of social deprivation used to weight the distribution of government health funding – which gradually evolved into the HSMR, a formula comparing a hospital’s recorded deaths with those statistically expected. It was a major statistical advance, but the then Health Secretary was nervous and refused Jarman permission to publish individual hospitals’ HSMRs.
He took his stats, therefore, to two journalists rather more committed to the idea that transparent, debatable research findings and more informed patients had key roles to play in improving health care: the Sunday Times’ Tim Kelsey and the Financial Times’ Roger Taylor. The outcomes were swift and far-reaching: the first of the now annual Dr Foster Good Hospital Guides, and Dr Foster Intelligence – an initially private company that since 2006 has been half-owned by the Department of Health (another controversial development) and is today an internationally renowned provider of healthcare information.
And the drivers of almost all this growth, and indeed of the career progression of the key actors, have been HSMRs – which might surprise some of my 2001 students, who had no difficulty identifying what they saw as potential weaknesses.
Yes, HSMRs are a purely statistical exercise – no visits, inspections, interviews or case notes. Yes, if the indicators in the formula change, so too could the ratios. Yes, they record only in-hospital deaths, and not even all of them. Yes, they surely could be manipulated – by discharging terminally ill patients into hospices or ‘the community’, or (as three West Midlands trusts were later accused of doing) by stretching the ‘admitted for palliative care’ code and thereby raising the expected death rate. And yes, it does seem a rather blunt way of measuring quality of care – or indeed the overall performance of a large hospital.
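The coding point is worth making concrete. Because the expected figure sits in the denominator, inflating it – say, by coding more admissions as palliative – pulls the ratio down without a single outcome changing. A toy illustration, again with invented numbers:

```python
observed = 550                    # actual recorded in-hospital deaths
expected = 500                    # deaths the statistical model predicts
print(100 * observed / expected)  # 110.0 -> flagged as a high ratio

# Recode some admissions as palliative care: the model now 'expects'
# more deaths, though nothing about the care itself has changed.
expected_after_recoding = 560
print(round(100 * observed / expected_after_recoding, 1))  # 98.2 -> looks average
```

The same 550 deaths, two very different-looking ratios – which is why the coding practices of those three West Midlands trusts attracted such scrutiny.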
To their credit, many hospitals’ response to a high HSMR has been to work with the Dr Foster team, to try to understand better the causes and thereby bring the ratio down. Walsall, for example, reduced its HSMR in five successive years, down to 103 by 2005/06.
There have also, though, been continuous criticisms of both HSMR methodology and interpretation – from health care professionals, the media and academia – particularly after 2007, when some of Dr Foster’s statistical ratios contradicted the inspection-based assessments of the Care Quality Commission.
There followed the first Francis Inquiry into the Mid Staffordshire NHS Foundation Trust, and with it the development and official approval of a new, more comprehensive mortality measure – the Summary Hospital-level Mortality Indicator (SHMI) – covering all, instead of most, in-hospital patient deaths, plus those occurring up to 30 days after discharge from hospital.
The two measures sound similar, and frequently they produce broadly similar results, as shown in the 2012 Dr Foster Guide. Birmingham’s HSMR is 112, its SHMI 105; Sandwell & West Birmingham 99 and 97; Coventry & Warwickshire 103 and 107; Walsall 117 and 113; Royal Wolverhampton 100 and 103.
But they can differ significantly – and did for several of the 14 trusts investigated in the Keogh Report. You might think that the Government, having finally found in the SHMI a more comprehensive mortality measure than the HSMR – one that most statisticians and clinicians seem to accept as more reliable – would use it to select the hospital trusts it wished to have investigated.
Wrong! The supposedly failing trusts were picked because of being high ‘outliers’ for two consecutive years (2010/11 and 2011/12) on either of the two measures. So Tameside and Basildon/Thurrock, for example, were included apparently because of their higher than expected SHMIs, but Burton and Sherwood because of higher than expected HSMRs.
We’re into circumstantial evidence here. But, suppose you were a Government keen to rubbish Labour’s NHS record and frighten patients and electors into viewing further privatisation more favourably. It surely wouldn’t seem a bad tactic to maximise the number of allegedly failing ‘killer’ hospitals – 14 is nearly one in 10 of England’s acute hospital trusts – and feed the media scare stories about thousands of ‘avoidable’ deaths. Or has my imagination run away with me?
Chris Game is a Visiting Lecturer at INLOGOV interested in the politics of local government; local elections, electoral reform and other electoral behaviour; party politics; political leadership and management; member-officer relations; central-local relations; use of consumer and opinion research in local government; the modernisation agenda and the implementation of executive local government.