Evidence-Based Practice: Principal Components
In 1996, DL Sackett et al defined evidence-based medicine as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.”
A decade on, I Masic et al added, “It integrates clinical experience and patient values with the best available research data.”
Evidence-based practice has been part of medicine for centuries, at least conceptually. It was called clinical judgment and was based on what physicians had learned as apprentices* and what their experience had taught them. What was believed to be the science of medicine informed the art of medicine. Too often, however, the time-honoured and “classic” signs and clinical exams linked to a particular condition proved unreliable in establishing the diagnosis.
*By which I mean a junior doctor (intern, resident, or registrar) under the watchful eye of a senior clinician.
Old-school clinicians decry young doctors’ waning skills in examining patients and their increasing reliance on lab data and imaging reports. The rhetorical questions are:
(1) How reliable is a thorough clinical exam for diagnosing significant disease? Answer: Not very.
(2) Does one’s accuracy increase with experience? Answer: Not really.
When the clinical exam goes toe-to-toe with objective diagnostic tools, it falls short. As an example of the (un-) reliability of the clinical exam: a cohort of patients admitted to a Veterans Affairs emergency room was examined by three clinicians, and their diagnoses were compared to the gold standard, an AP chest film. The sensitivity of the clinical diagnosis was 58%, and the specificity 66%…i.e., not particularly good.
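Sensitivity and specificity fall straight out of a 2×2 confusion matrix. The sketch below uses hypothetical counts (the study’s raw numbers aren’t given here) chosen simply to reproduce the 58% and 66% rates above.

```python
# Hypothetical confusion matrix: clinical exam vs. gold-standard chest film.
# The counts are illustrative only, picked to land on the reported rates.
tp, fn = 58, 42   # patients WITH disease: exam positive / exam negative
tn, fp = 66, 34   # patients WITHOUT disease: exam negative / exam positive

sensitivity = tp / (tp + fn)   # chance the exam catches true disease
specificity = tn / (tn + fp)   # chance the exam clears the healthy

print(f"sensitivity = {sensitivity:.0%}")  # 58%
print(f"specificity = {specificity:.0%}")  # 66%
```

In other words, this hypothetical exam misses 42% of diseased patients and falsely flags 34% of healthy ones, which is why it compares so poorly with the chest film.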
As an example of accuracy (not) improving with experience, a group of internal medicine residents/registrars assessed heart sounds programmed into a mannequin: half couldn’t identify aortic or mitral regurgitation, and two thirds missed mitral stenosis. In a further study from the same group, residents/registrars were compared to medical students in identifying 12 different heart sounds recorded from real patients; both groups correctly identified only 20% of the sounds (for the record, the residents did slightly better than the medical students…hooray). The residents’ auscultation skills were just as abysmal as the students’.
Experts in the evolving arena of evidence-based practice believe that medicine’s transition towards greater objectivity must occur at two levels.
The first is at the macro scale, or what David M Eddy, a pioneer in evidence-based philosophy, viewed as population-level policies such as clinical practice guidelines and insurance coverage of new technologies. In a 1990 article in JAMA, Eddy laid out principles for formulating evidence-based guidelines and population-level policies. These principles explicitly delineate the available evidence pertaining to a policy and consciously anchor the policy, not to current practices or the beliefs of experts, but to experimental evidence…to hard data. The policy must be consistent with and supported by evidence. The pertinent evidence must be identified, described, and analyzed. The key stakeholders must determine whether the policy is justified by the evidence, and their rationale must be memorialized in writing, casting it in stone.
The second level is at the micro scale and refers to how individual physicians translate evidence into their daily practice. Gordon Guyatt at McMaster University first used the term evidence-based medicine in the early 1990s, specifically referring to a modern approach to teaching medical students. This was later echoed by Sackett and colleagues, who championed the integration of evidence in patient management decisions.
In 2005, Eddy neatly tied the macro and micro scales together: “Evidence-based medicine is a set of principles and methods intended to ensure that to the greatest extent possible, medical decisions, guidelines, and other types of policies are based on and consistent with good evidence of effectiveness and benefit.”
EBM is at the bleeding edge of medical progress and forces the healthcare establishment to justify policy and practice. Systematic reviews of published research inform decisions on management. The best known and most reliable systematic reviews are those released by Cochrane (formerly the Cochrane Collaboration; corporate motto, “Trusted evidence. Informed decisions. Better health.”), a UK-based NGO formed to organize medical research findings to facilitate evidence-based choices about health interventions faced by health professionals, patients, and policy makers. Cochrane has 37,000 volunteers in 130 countries. In 2004, the Canadian Medical Association Journal viewed Cochrane as the best single resource for methodologic research and for developing the science of meta-epidemiology.
Systematic reviews, which are a cornerstone of evidence-based practice, must be measured along several dimensions:
• Level of evidence: this has been semi-quantified by the U.S. Preventive Services Task Force (1989), the Oxford CEBM Levels of Evidence (2000, 2011), and GRADE (endorsed by the WHO, NICE, and the Canadian Task Force on Preventive Health Care)
• Risk of bias, imprecision, inconsistency, and indirectness
• Quality of evidence
For high-quality evidence, there is a very low probability that further research will completely change the presented conclusions
For very low-quality evidence, new research will probably completely change the presented conclusions