Chapter 43 - Spinal, Epidural, and Caudal Anesthesia*
David L. Brown
Spinal, epidural, and caudal neuraxial blocks result in sympathetic
block, sensory analgesia, and motor block (depending on the dose, concentration, and volume
of local anesthetic) after insertion of a needle in the plane of the neuraxis. Despite
these similarities, there are significant physiologic and pharmacologic differences among the techniques.
Spinal anesthesia requires a small mass (i.e., dose) of drug, virtually devoid
of systemic pharmacologic effect, to produce profound, reproducible sensory analgesia.
In contrast, epidural anesthesia necessitates the use of a large mass of local anesthetic
that produces pharmacologically active systemic blood levels, which may be associated
with side effects and complications not encountered with spinal anesthesia. The introduction
of combined spinal-epidural techniques blurs some of these differences but also
adds flexibility to clinical care.
The blurring of differences between spinal and epidural anesthesia
began early, when Corning[1] reported in 1885 on spinal anesthesia and local medication of the cord (see Chapter 1). It remains unclear whether his injection of cocaine "between spinous
processes" produced spinal or epidural anesthesia. Bier understood that he was producing
spinal anesthesia in 1898, and through self-investigation he gained personal knowledge
of the symptoms of postdural puncture headache. These early
years were devoted principally to the advancement of spinal rather than epidural
anesthesia, for at least three reasons. First, the only practical local anesthetic
available until 1904 (when procaine was synthesized) was cocaine, which was better
suited to spinal than to epidural anesthesia because of the systemic side effects produced
by the larger doses required for epidural block. Second, the equipment available for neuraxial blocks favored
spinal anesthesia because the end point of cerebrospinal fluid (CSF) return was well
defined and did not demand the more sophisticated glass syringes and needles required for
epidural anesthesia. Third, muscle relaxants had not yet been introduced, and spinal
anesthesia produced superb skeletal muscle relaxation, facilitating surgical exposure.
These advantages of spinal anesthesia historically won it many
enthusiastic clinical advocates. Morton[2]
promoted high
spinal anesthesia for surgical procedures carried out on the head and neck, whereas
Koster[3]
used total spinal blockade for thoracic
and intracranial procedures. Spinal anesthesia was not limited to surgical procedures
but was also touted for the treatment of medical conditions (e.g., pulmonary edema) by
taking advantage of its venodilatory effect. However, a number of impediments prevented
more widespread use of spinal and epidural blocks.
* See American Society of Anesthesiologists Practice Guidelines for Acute Pain Management in the Perioperative Setting, Practice Guidelines for Chronic Pain Management, and Practice Guidelines for Obstetric Anesthesia.
Despite the many advantages and the safety of spinal anesthesia, Kennedy
and colleagues[4] described in 1950 "grave spinal cord paralysis"
accompanying its use; this report was followed by one
in 1954 detailing the well-publicized Woolley and Roe trial in England.[5]
In the latter instance, the two patients, Woolley and Roe, suffered neurologic injury after
spinal anesthetics administered in the same hospital,
on the same day, and by the same anesthetist in 1947. The exact cause of their neurologic
dysfunction remains unclear. Was it contaminated ampules or a toxic substance mistakenly
administered into the subarachnoid space?[6]
Anesthesiologists continue to face uncertainty in balancing the
risks and benefits of spinal anesthesia, specifically those involving continuous
spinal anesthesia and the use of 5% lidocaine. The U.S. Food and Drug Administration
(FDA) withdrew small-bore spinal catheters in 1992 because of concerns about a perceived
association between these catheters and the development of cauda equina syndrome.[7]
This decision appears to have been made with
as many political implications as scientific ones. Since that time, attention has turned
to questions about the appropriate use of intrathecal 5% lidocaine[8]
and about how to minimize the risks of neuraxial anesthesia with concomitant use of low-molecular-weight
heparin (LMWH).[9]
Other impediments to the effective use of neuraxial blocks are
the predictable decreases in arterial blood pressure and heart rate that result from the accompanying
sympathectomy, with its attendant vasodilation and blockade of cardioaccelerator fibers.
Maintaining arterial blood pressure and heart rate at normal values during these
blocks often requires administration of vasoactive drugs and intravenous fluids.
The extent to which these steps are necessary is discussed later.
Another clinically important impediment to the successful use of these
blocks is the idea that the block "should do it all." It seems unreasonable to
expect a single injection of local anesthetic into the subarachnoid or epidural space
to provide ideal conditions for all patients undergoing various surgical procedures,
even in the face of an adequate block. It seems likely that far more spinal and
epidural anesthetics have failed because of inadequate intravenous sedation and anxiolysis
than because of technically flawed blocks.
Evidence is accumulating that the use of neuraxial blocks, principally
continuous epidural techniques to provide postoperative analgesia, may
decrease perioperative morbidity. These techniques may also decrease the length of hospital
stay and allow more efficient use of our increasingly stretched health care monies.[10][11]
To gain maximum benefit and to minimize complications from these blocks, attention to technique
and anatomy is essential, and the blocks should be used when the risk-benefit equation
is favorable.[12][13]