The rule of ignorance: a polemic on medicine, English health service policy, and history.
by John V Pickstone, research professor, Centre for the History of Science, Technology and Medicine, University of Manchester, Manchester M13 9PL, UK
Over the past two centuries dogmatism and quackery have been substantially reduced in clinical medicine, albeit unevenly. Over recent decades especially, British medicine has learnt to use clinical evidence well and to respect the wishes of the treated. Quite the reverse seems the case for NHS policy in England—as the present blind “revolution” underlines. Though the lessons from clinical medicine are manifestly available, political leaders see no need to learn. In the absence of democratic, evidential, or professional controls to protect the public interest effectively, policy dogmatism and quackery flourish as never before.
How medicine learnt
In 18th century Britain, in the reign of mad King George, there were two main schools of doctors. Some tried to sell remedies that had seemed to work in similar cases; they were called empirics, or often “mere empirics.” Some doctors of higher status tried to apply first principles or dogmas—hence the dogmatists. But neither approach was reliable: it was hard to know which cases were similar, and dogmatists could choose from several sets of competing principles. The medical pamphlets promoting rival theories or expensive cure-alls would have sold well in modern airports, somewhere among the self-help books, the management pot boilers, and the discussions of whether national economies should be inflated or deflated.
In other technical enterprises at that time, a potential buyer or investor might have asked to see previous work or whether the engineer himself had invested, but neither test was easily applied to medicine. Worse still, some of the doctors who sold “heroic” remedies or dangerous operations were itinerants; and when the bandages came off and the pathological chickens came home to roost, the quack doctors had left town.
Victorians had other answers to problems of technical choice: distrust people with direct financial involvement, and ask an expert instead. Such experts had barely existed in the 18th century, but by the end of the 19th we had national organisations of doctors and engineers that were developing training programmes, formal qualifications, and codes of ethics; keeping lists of the qualified, and threatening miscreants with exclusion. If you needed unbiased and informed judgments, the new professionals were to hand. Assuming, of course, that reliable answers were known—which remained very uncertain in medicine, and especially in therapeutics.
Yet by the late 19th century, doctors were more reluctant to bleed by the pint or purge by the pot full; they focused on the healing powers of the body and on more specific remedies, including the new aseptic surgery. Some collected careful case histories; others learnt how to analyse living functions and diseases in animals. From about 1900 came public and industrial research programmes to create and test new remedies; and this laboratory medicine blossomed by the second world war, most notably by producing antibiotics. From the 1950s, doctors were pushed to make increasing use of randomised controlled trials—careful experiments on the wards that assessed the safety and efficacy of new medicines. And since the 1980s, Britain has helped lead a worldwide movement to assess old procedures as well as new. For much of medicine, though by no means all, we now have accessible collective judgments on the relative efficacy of remedies.
Of course, such assessments do not solve all the problems of medicine, because patients are infinitely complicated and general rules may inhibit judgment as well as providing support—but the data help, and they now include costs as well as clinical effects. The National Institute for Health and Clinical Excellence is world renowned for assessing cost effectiveness as well as safety and efficacy and seems a natural complement to the National Health Service. Publicly accessible data, based on careful trials and calculations, make for better professional judgments; and they also help secure public confidence in an age where deference to professionals has generally declined and patients expect to help choose their treatments. Whereas before about 1970, the doctor took the decisions and the patient hoped for the best, patients now have to give informed consent, at least for operations and participation in research. And on the committees which oversee experiments in treatment, lay representatives are included with professionals.
Thus, over two centuries of uneven development, clinical medicine has become more secure, effective, transparent, and responsive. In the best of modern medicine, demonstrable principles are combined with sophisticated empiricism and openness to patients. There is always room for debate and improvement, but critical assessments of well-defined programmes have made for incremental progress, both in treatments and in the design of particular services.
How politicians refused to learn
But consider now the authorities responsible for the general structures of the health services, and especially the politicians who regularly reorganise the NHS in England (since 1997, Scotland, Wales, and Northern Ireland have worked differently). The English restructurings are often exceedingly expensive, and they affect far more patients than receive any particular kind of medicine. Do those systems have to be tested and publicly assessed? Alas, no. Since the 1980s, reorganisation of the English NHS has become endemic, hectic, and essentially out of control. Nowhere can you find reasoned, detailed accounts of the links between supposed diagnoses and prognoses, still less statistics about effects, or experimental results against which various structural variants might properly be assessed.
No one would deny that much has improved in the NHS, especially over the past decade. Britain’s health expenditure is getting back to the levels we would expect from European comparisons; and though some “attainments” may be artefacts of the target and accounting systems, some services have certainly been substantially improved, often by linking expert clinicians with managers skilled in facilitating change. But there is no good reason to believe that the improvements depended on the reorganisations, or on the very expensive “purchaser-provider split.” There is much good reason to suppose that reorganisations were impediments to real improvement.
Most recent restructuring principles—such as internal markets or more patient choice—have been cure-alls adopted in private by a handful of politicians and sold by arguments of such banality as would equally justify a national purging or bleeding:
“Believe me, citizens, this is a once in a lifetime opportunity to take my laxatives, and you must swallow them quickly if you want better results than those delivered by your former political doctors. Purging has worked several times recently, and some people like it, so let it be made a general requirement. What—you think you are making good progress as a health system? Then pray look harder: here you will see a defect, and one which my reorganisation is sure to remove. I just know it.”
Such abysmal reasoning might be justified if the suggestions were voluntary—if local bodies, professional groups, or patient associations were asked for informed consent. But there is no more consent than there is evidence. Health professionals are honour bound to respect patients, but governments regularly disregard the judgments of these same professionals about, for instance, their best modes of work. Quack policies are effectively compulsory.
This is not just a matter of degraded political practice, for the consequences of this mode of decision making are plain and painful. Especially since 1990, NHS staff have endured several major and many minor reorganisations. Clinicians have suffered disruptions and uncertainties, and thousands of managers have spent their time (and tax monies) anticipating or following up on structural initiatives. There was never time to assess any of the changes properly; nor was there any reason to expect any new arrangements to last. And so it went on—except that long term commercial contracts proliferated, and since these are harder to revise than public arrangements, the public is increasingly tied into long term repayments, especially for buildings. In that respect, we can see the future—and it is very expensive.
While British clinical medicine has become so much better, both in assessment and in securing consent, politicians and policy makers, at least in English public services, have got worse in both respects. Royal commissions, which once ensured that evidence was collected and issues debated, have been abandoned—replaced by policy wonks with little experience and by management consultants who claim expertise in process rather than content. Such claims to processual expertise often come with a commitment to market economics so deep that the bias goes unrecognised, and the historical record of changing services goes unstudied. In health services, as in much corporate and financial culture, the past is a burden, not a means of guiding judgment. If you know all the answers from simple principles such as competition, then why search back to learn?
A year ago, as the NHS recovered from the last round of disruptions, patient satisfaction was high, and there was much agreement about the path ahead: use general medical practitioners, hospital consultants, and skilled managers to help drive service improvements; empower patients, as the owners of the system, not just as its patients. Most of the professionals, and indeed the politicians, seemed to agree on the need for evidence based evolution rather than revolution. But then, without warning or serious analysis, the NHS was plunged into yet another massive reorganisation—the most radical so far, with bamboozlement to match.
Dogmatically inclined, unmindful of evidence, and casting about for mechanisms that might deliver quickly, recent governments have proved easy prey to personal enthusiasms, management consultants, sectional interests, and the agents of private companies looking for business. Such were and are the conditions for political quackery, and thus “heroic” policy making is now ascendant. Though vastly better methods of service development have become available, and the future of a vital institution is at stake, the NHS is treated as if it were George III, when too distracted to reason.
Unless professionals, patients, and parliamentarians now call a halt—in the name of incremental development, refined empiricism, and proper public involvement—we shall remain in a land of policy quackery and political chamber pots. The NHS deserves so much better. Do not our politicians, like our clinicians, have a duty of meticulous care?
BMJ 2011;342:d997