The snake oil salesman, newly arrived in town and frequently presenting himself as a 'doctor', would begin pitching his concoction with grandiose claims that it was a panacea or perhaps a cure for a disparate and extensive list of ailments. With his claims enthusiastically verified by a surreptitiously placed shill among the assembled audience, he could rely on the invisible catalysts of crowd psychology and peer pressure to gradually take hold. He and his accomplice would then leave town well before any unsuspecting consumers discovered they had been victims of an elaborate and often costly hoax (1).
The example of the snake oil salesman, taken from popular folklore, illustrates the need for robust methods of assessing the efficacy of health care interventions and trustworthy ways of disseminating this knowledge. Most progress has occurred over the last 50 years. The first modern randomised trial (of streptomycin) was reported in 1948, and it was not until 1972 that Archie Cochrane's monograph, 'Effectiveness and Efficiency: Random Reflections on Health Services' (2), began to lay the foundations for the modern evidence-based movement. Cochrane's thesis was that, as resources would always be limited, they should be used to provide equitably those forms of health care which had been shown in properly designed evaluations to be effective. In particular, he stressed the use of randomised controlled trials (RCTs), as he felt these were likely to provide much more reliable information than other sources of evidence.
Almost simultaneously, John Wennberg's work in the U.S. led to methods for determining population-based rates for the utilization and distribution of health care services, demonstrating large variations in health care across different geographical areas. David Eddy's doctoral thesis also imparted vital early momentum to the nascent discipline; he introduced the term 'evidence-based' in 1990 (3), which Gordon Guyatt and colleagues extended to 'evidence-based medicine' in 1992 (4). Eddy's analysis challenged the value of routine chest x-rays and of annual Pap smears for women at low risk of cervical cancer, overturning the prevailing medical dogma. The implication was clear: medicine was (and still is) susceptible to what John Eisenberg termed 'eminence based medicine'. Since then, international variations in health care processes and outcomes have been documented, with the realization that increased health care spending is not uniformly associated with improvements in patients' health status. Variation is a constant.
Identifying 'what works' and disseminating this information to health care professionals has been made increasingly difficult by the explosion in scientific knowledge and by an epidemiological transition that has increased demands on clinicians while increasing the complexity of health care. Novel drugs are now released at a rate of almost one every one and a half weeks (5), over 30,000 biomedical journals are published annually, and more than 17,000 new medical books are published each year (6). Systematic reviews of the literature have demonstrated that many studies are grossly inadequate and thus potentially misleading, and that over 95% of articles in medical journals do not meet minimal standards of critical appraisal (7).
Clinical guidelines have become increasingly popular as a tool for synthesizing the biomedical literature. They have also attracted the attention of policy makers in light of their potential to reduce the delivery of inappropriate care and to support the timely introduction of new knowledge into clinical practice. Proponents of clinical guidelines have claimed that use of the available scientific evidence can be increased by putting in place the infrastructure required to assure the systematic implementation of practice guidelines (8).
However, the quality of clinical guidelines is also susceptible to variation. Biases or conflicts of interest may affect the interpretation of evidence, and the views of guideline authors may not represent those of the full multidisciplinary range of professionals. Their recommendations may also be harmful if they are incorrect or fail to consider the impact on the resources available for other services. Guidelines have also been described as ideal vehicles for the rapid market dissemination of the pharmaceutical industry's products, particularly if they avoid mention of cost altogether (9). For these reasons, there have been attempts to introduce quality criteria for the process of developing clinical guidelines.
During the 1990s, there was widespread public discontent in England with variations in care that came to be termed 'postcode' variations, analogous to 'zip code' variations in the U.S. These variations meant that patients in adjoining geographical areas, whose care came under the responsibility of different funding bodies within the single payer system, faced different and sometimes contradictory coverage policies (10). In the case of life extending treatments, this soon became publicly and politically unacceptable.
This issue eventually led to the decision to establish a centralized, government-funded body, the 'National Institute for Health and Clinical Excellence' (NICE), whose role would be to provide national guidance on which 'health technologies' should be available, for whom, and in which circumstances. In addition to appraising evidence of clinical effectiveness, NICE's remit included consideration of cost effectiveness, on the premise that without explicit consideration of cost, recommendations could have no implications for policy. This process is worth reviewing in more detail.
The essence of the NICE approach to resource allocation is a utilitarian perspective that seeks to maximize the efficiency of the pharmaceutical budget ('the greatest good for the greatest number') by estimating the value for money obtained from particular treatments. An independent advisory committee of health service professionals and lay people reviews clinical evidence submitted by an academic centre, which also undertakes a health economic assessment in addition to that submitted by the manufacturer of the technology under evaluation. Stakeholder engagement plays a significant role in the process, and testimony is also provided by clinical specialists and lay people selected from professional and voluntary sector bodies.
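The commentary does not spell out how 'value for money' is computed, but the standard metric in health economic assessments of this kind (an assumption here, not a detail given above) is the incremental cost-effectiveness ratio (ICER): the extra cost of a new treatment divided by the extra health gained, usually measured in quality-adjusted life years (QALYs). A minimal sketch with hypothetical figures:

```python
def icer(cost_new, cost_comparator, qaly_new, qaly_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_comparator) / (qaly_new - qaly_comparator)

# Hypothetical example: a new drug costs 22,000 pounds per patient and yields
# 6.0 QALYs; the existing comparator costs 10,000 pounds and yields 5.5 QALYs.
ratio = icer(22000, 10000, 6.0, 5.5)
print(ratio)  # 24000.0, i.e. 24,000 pounds per QALY gained
```

NICE has historically used a notional threshold range of roughly 20,000 to 30,000 pounds per QALY when judging cost effectiveness, so a ratio like the one above would fall in the zone where the committee's other value judgments carry significant weight.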
Both scientific value judgments ('what is good about the evidence') and social value judgments ('what is good for society') are involved in weighing evidence. As NICE's appraisal committee has no legitimacy in making social value judgments, a 'citizens council' of 30 lay people, demographically representative of the population, has been established. This council has been consulted on the bioethical principles underpinning the challenge of ensuring distributive justice (or 'fairness' in allocating resources). How generalisable the council's views are to the wider population of the U.K. (around 60 million people) is a moot point.
NICE's advisory committees may produce binding recommendations dealing with a drug or several drugs within a therapeutic class (termed 'health technology guidance') or nonbinding recommendations spanning a complete pathway of care (termed 'guidelines'). Currently, 117 NICE guideline summaries are posted to the National Guideline Clearinghouse. The guidance and guidelines are produced according to the prerequisites of deliberative democracy, i.e., publicity, revision, relevance, and, in the case of binding guidance, the opportunity for stakeholders to lodge an appeal.
Recommendations by NICE that a particular drug should not receive funding within the single payer health care system of England and Wales can have important ramifications. These are particularly acute for individuals with life threatening illnesses and when a treatment that extends life is being considered. Unsurprisingly, NICE receives considerable public and media scrutiny. Its guidance may be subject to appeal by drug manufacturers; approximately half of these appeals are upheld and returned to the original appraisal committee for reconsideration. In some notable cases, the judgment of the appeals panel, which is overseen by NICE's chairman, has subsequently been tested by judicial review in the courts, with results so far largely in favor of NICE.
NICE's guidelines are generally less controversial, although this is not always the case. The main area where they have added value is in topics where there is no pre-existing guidance; where strong professional consensus already exists, existing specialist guidelines hold sway with clinicians. Establishing credibility with professional groups is a slow process that improves gradually as participation increases.
What evidence exists, then, for the effectiveness of NICE? NICE is successful in producing guidance, and most of the recognition it receives is based on its processes. In outcome terms, evidence is largely absent, and what evidence there is suggests a modest effect in specific therapy areas. It may therefore be best to conceive of NICE as having slowed the rate of health care expenditure on drugs (around 10% of health spending in both the U.K. and the U.S.) rather than as a tool for reducing it below baseline levels. This is partly because guidance primarily designed for synthesizing knowledge is not necessarily effective as a practical implementation tool, and guidance implementation is itself subject to variation. The increased use of guideline-derived indicators may help lay the foundations for further translating the value of guidelines into practice and for monitoring progress.
One criticism of NICE is that the time needed to produce guidelines (up to 2 years) and to undertake technology appraisals is too long. Funding may be withheld until NICE has produced guidance, or the diffusion of a particular innovation may already be substantially underway by the time guidance arrives. NICE is attempting to streamline its processes to remedy this and is considering evaluating products at the time of launch.
There is likely to be increased attention on learning from the experience of organizations like NICE following the $1.1 billion recently allocated to studying clinical effectiveness in the United States. Several caveats should be borne in mind. The utilitarian perspective is intrinsically population based, and so clinical excellence for the individual and cost-effective 'clinical excellence' for the population are not always the same. Also, as Alexis de Tocqueville noted over 170 years ago, cultural preferences in the United States place particular emphasis on the autonomy of the individual, the individual's freedom to choose, and a limited role for government (11). Culturally sensitive approaches will need to be considered.
Notwithstanding these differences, the NICE paradigm offers valuable insight into the application of evidence-based medicine and health economics, and the challenges facing that application, in pursuit of the current holy grail of resource-strapped health systems: better value.
Rubin Minhas, MB ChB
Santa Monica, CA
The views and opinions expressed are those of the author and do not necessarily state or reflect those of the National Guideline Clearinghouse™ (NGC), the Agency for Healthcare Research and Quality (AHRQ), or its contractor, ECRI Institute.
Potential Conflicts of Interest
Dr. Minhas states that he has chaired guidelines for the National Institute for Health and Clinical Excellence (NICE) (UK) and is an independent member of one of its advisory committees. He is a Harkness Fellow in Healthcare Policy and Practice, supported by the Commonwealth Fund.
- Snake oil salesmen were onto something. Scientific American. 2007 Nov 1.
- Cochrane AL. Effectiveness and Efficiency: Random Reflections on Health Services. London: Nuffield Provincial Hospitals Trust, 1972.
- Eddy DM. Practice policies: where do they come from? JAMA 1990;263(9):1265, 1269, 1272.
- Guyatt G, Cairns J, Churchill D, et al. (Evidence-Based Medicine Working Group). Evidence-based medicine: a new approach to teaching the practice of medicine. JAMA 1992;268:2420-5.
- The Pharmaceutical Price Regulation Scheme: an OFT market study. Office of Fair Trading, 2007. http://www.oft.gov.uk/shared_oft/reports/comp_policy/oft885.pdf
- Lowe HJ, Barnett GO. Understanding and using the medical subject headings (MeSH) vocabulary to perform literature searches. JAMA 1994;271:1103-8.
- Haynes RB. Where's the meat in clinical journals? ACP J Club 1993;119:A23-A24.
- Jencks SF, Huff ED, Cuerdon T. Change in the quality of care delivered to Medicare beneficiaries, 1998-1999 to 2000-2001. JAMA 2003;289:305-12.
- Haycox A, Bagust A, Walley T. Clinical guidelines: the hidden costs. BMJ 1999;318:391-3.
- NHS body to end postcode prescribing. BBC News. http://news.bbc.co.uk/2/hi/health/271522.stm
- de Tocqueville A. Democracy in America. New York: Alfred A. Knopf, 1948.