Practice Parameters for the Use of Spinal Cord Stimulation in the Treatment of Chronic Neuropathic Pain

Current Best Evidence

“. . . any statement to the effect that there is no evidence addressing the effect of a particular treatment is a non sequitur.  The evidence may be extremely weak—the unsystematic observation of a single clinician, or generalization from only indirectly related physiologic studies—but there is always evidence” (Guyatt et al., 2000).

Sackett et al. defined evidence-based medicine as “. . . the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.  The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research [emphasis added].  By individual clinical expertise we mean the proficiency and judgement that individual clinicians acquire through clinical experience and clinical practice” (Sackett et al., 1996).

Despite this clear emphasis on the integration of individual expertise and research evidence, some authors have adopted a definition of “evidence-based medicine” that privileges the results of clinical trials and discounts the weight of clinical experience (Alberta Research Centre for Health Evidence), dismissing even the conclusions of acknowledged experts as “mere consensus” (Bastian, 1996).

What, then, constitutes the proper hierarchy of evidence?  Some investigators assert that the evidence provided by meta-analyses is of the highest quality.  Shah (2000), for example, notes that Benson and Hartz (2000) claim that observational studies and randomized controlled trials (RCTs) “can produce similar estimates of the effects of treatment” and that Concato et al. (2000) have shown that “meta-analyses of observational studies produce results that are similar to meta-analyses of randomized trials.”  Guyatt et al. (2000), however, maintain that a single RCT provides better evidence than a systematic review of observational studies and point to a meta-analysis of observational studies (Stampfer and Colditz, 1991), which concluded that hormone replacement therapy in women would lead to “a 50% reduction in relative risk of coronary events,” whereas the Heart and Estrogen/progestin Replacement Study RCT (Hulley et al., 1998) found no such effect.

Despite their support for the practice of evidence-based medicine, Guyatt et al. (2000) acknowledge that, when making clinical decisions, “evidence is never enough” and the “hierarchy [of the strength of evidence for treatment decisions] is not absolute.”  Thus, as a practical matter, the type of evidence that informs clinical decisions ranges from the results of RCTs to expert opinion based on direct experience and on comparable experience (or techniques) used in other medical specialties.  The recommendations and options offered in this document, therefore, reflect the current status of evidence-based practice of SCS.  Because SCS elicits perceptible paresthesia, patients cannot be blinded to active stimulation; this evidence therefore does not include the results of “blinded” studies.

Our grading system sets a new standard for evidence-based clinical practice.  For a standard to be useful in the evaluation of the evidence that helps us make rational decisions about the clinical care of patients, the standard itself must be rational, realistic, and practical.  Thus, for example, a dictum from Medicare (the rationale for which is discussed separately) has the same practical value as the highest level of evidence.  A risk/benefit calculation in combination with expert clinical opinion might likewise compensate for a lacuna in the evidence supplied by the results of clinical trials.  In our grading system, therefore, we are not suggesting that an “A” equals the highest level of evidence; instead, we define an “A” as a recommended or required clinical action that is valid, useful, or non-negotiable.

Stampfer MJ, Colditz GA. Estrogen replacement therapy and coronary heart disease: a quantitative assessment of the epidemiologic evidence. Prev Med 20(1):47-63, 1991.

Shah NR. What is the best evidence for making clinical decisions? JAMA 284(24):3127-3128, 2000.

Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 312(7023):71-72, 1996.

Hulley S, Grady D, Bush T, Furberg C, Herrington D, Riggs B, Vittinghoff E. Randomized trial of estrogen plus progestin for secondary prevention of coronary heart disease in postmenopausal women. Heart and Estrogen/progestin Replacement Study (HERS) Research Group. JAMA 280(7):605-613, 1998.

Guyatt GH, Haynes RB, Jaeschke RZ, Cook DJ, Green L, Naylor CD, Wilson MC, Richardson WS. Users' Guides to the Medical Literature: XXV. Evidence-based medicine: principles for applying the Users' Guides to patient care. Evidence-Based Medicine Working Group. JAMA 284(10):1290-1296, 2000.

Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 342(25):1887-1892, 2000.

Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med 342(25):1878-1886, 2000.

Bastian H. Raising the standard: practice guidelines and consumer participation. Int J Qual Health Care 8(5):485-490, 1996.