
Responding to Simulated Critical Incidents

Using a realistic hands-on anesthesia simulator, Gaba and DeAnda [131] [248] [256] studied the response of anesthesia trainees and experienced anesthesia faculty and private practitioners to six preplanned critical incidents of differing type and severity:

  1. Breathing hoses too short to turn the table 180 degrees, as requested by the surgeon
  2. Endobronchial intubation (EI) resulting from surgical manipulation of the tube
  3. Intravenous tubing occlusion
  4. Atrial fibrillation (AF) with rapid ventricular response and hypotension
  5. Disconnection between the endotracheal tube and the breathing circuit
  6. Ventricular tachycardia/fibrillation

These investigators measured both the detection time (as described in the section on vigilance) and the correction time (the time from the event's onset until any one of a predefined set of corrective actions was first taken). They assessed the information sources by which subjects detected the incidents and then confirmed and diagnosed the problem. They asked subjects to "think aloud," to allow subjective analysis of their decision-making strategies. A summary of the data is shown in Figure 83-10. Major findings from this set of studies included the following:

  1. Events differed from each other in their inherent ease of solution. Some events (e.g., airway disconnection) were rapidly detected and corrected. Some problems (e.g., intravenous occlusion) were difficult to detect, but once they were detected, the diagnosis and therapy were rapidly achieved. Other problems (EI, AF) were easy to detect, using one of several redundant information sources as the first clue (six for EI, four for AF), but they required additional time (7 to 8 minutes for EI; 1.5 to 4.5 minutes for AF) to confirm the abnormality, to establish a diagnosis, and to initiate appropriate therapy. Diagnosis and the planning and monitoring of therapy used a large number of information sources (11 for EI; 9 for AF).
  2. For each incident, there was considerable interindividual variability in detection and correction times, in information sources used, and in the actions taken. In each experience group there were some who required excessive time to solve the problem or who never solved it. Also, in each experience group at least one individual made major errors that could have had a substantial negative impact on a patient's clinical outcome. For example, one faculty member never used electrical countershock to treat ventricular fibrillation. One private practitioner treated the EI as if it were "bronchospasm" and never assessed the symmetry of ventilation. One resident never found the airway disconnection.
  3. The average performance of the anesthetists tended to improve with experience, although this varied by incident. The performance of the experienced groups was not definitively better than that of the second-year residents (who were in their final year of training at that time). Many (but not all) novice residents performed indistinguishably from more experienced subjects.
  4. The elements of suboptimal performance were both technical and cognitive. Technical problems included choosing defibrillation energies appropriate for internal paddles when using external paddles, ampule swap, and failure to inflate the endotracheal tube cuff, resulting in a leak. Cognitive problems included failure to allocate attention to the most critical problems and fixation errors.
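The detection-time and correction-time metrics used in these studies can be illustrated with a minimal sketch. The event-log format, event names, and timestamps below are hypothetical, chosen only to show how the two intervals are derived from a timestamped record of a simulated incident:

```python
# Hypothetical timestamped event log (seconds, event label) from one
# simulated incident; names and times are illustrative, not study data.
log = [
    (0.0,  "event_onset"),         # e.g., breathing-circuit disconnection occurs
    (34.0, "verbalized_problem"),  # subject first responds to the abnormality
    (61.0, "reconnect_circuit"),   # one of the predefined corrective actions
]

# Assumed set of predefined corrective actions for this incident.
CORRECTIVE_ACTIONS = {"reconnect_circuit", "hand_ventilate"}

def onset_time(log):
    """Time at which the planned event was introduced."""
    return next(t for t, e in log if e == "event_onset")

def detection_time(log):
    """Seconds from onset until the subject first reacts to the abnormality."""
    t0 = onset_time(log)
    return next(t for t, e in log if t > t0 and e != "event_onset") - t0

def correction_time(log):
    """Seconds from onset until any predefined corrective action is first taken."""
    t0 = onset_time(log)
    return next(t for t, e in log if e in CORRECTIVE_ACTIONS) - t0

print(detection_time(log))   # 34.0
print(correction_time(log))  # 61.0
```

Note that correction time is measured against the first qualifying action, not against diagnosis, which is why easy-to-detect but hard-to-diagnose events (such as EI and AF above) show long gaps between the two times.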




Figure 83-10 Response times of anesthesiologists with different levels of experience to four simulated critical incidents: A, endobronchial intubation; B, intravenous occlusion; C, atrial fibrillation; and D, airway disconnection. Detection time is represented by open circles, and correction time by solid circles (see text for definitions of these times). Unless there is overlap between response times, each circle represents a single individual. The scale of response times is different for each event. There is substantial variability among incidents and among individuals. Although there was a trend to better performance with increased experience, major errors were made by individuals in all groups. (From DeAnda A, Gaba DM: Role of experience in the response to simulated critical incidents. Anesth Analg 72:308–315, 1991.)

Schwid and O'Donnell,[130] from the University of Washington, used the Anesthesia Simulator Consultant (ASC) screen-only simulator (Anesoft Corp., Issaquah, WA) (Table 83-8) to perform an experiment similar to those of Gaba and DeAnda using a realistic simulator. This method enabled them to evaluate some elements of anesthetist behavior more carefully, albeit with the limitations imposed by presenting the OR "on the screen." After working on several practice cases without critical incidents, each subject was asked to manage three or four cases involving a total of four serious critical events (esophageal intubation, myocardial ischemia, anaphylaxis, and cardiac arrest). The progression of each event was mediated by the interaction of physiologic and pharmacologic models with the actions taken by the subject. The anesthesiologists studied had varying experience levels. One group was made up of 10 anesthesia residents with at least 1 year of anesthesia training, whereas the other two groups contained 10 anesthesia faculty members and 10 private practitioners, respectively.

Major findings of the study included the following:

  1. Significant errors in diagnosis or treatment were made in every experience group, both in diagnosing problems and in deciding on and implementing appropriate treatment. For example, 60% of subjects did not make the diagnosis of anaphylaxis despite available information on heart rate, blood pressure, wheezing, increased peak inspiratory pressure, and the presence of a skin rash. In managing myocardial ischemia, there were multiple failures (Table 83-9).

  2. Thirty percent of subjects did not compensate for severe abnormalities while considering diagnostic maneuvers.
  3. Fixation errors, in which initial diagnoses and plans were never revised even when they were clearly wrong, were frequent.


TABLE 83-9 -- Failure rate in the management of simulated myocardial ischemia using the Anesthesia Simulator Consultant

Incident                            Anesthesia Residents (%)   Anesthesia Attendings (%)   Anesthesiologists in Practice (%)
Untreated tachycardia                          30                         50                            70
Untreated hypotension                          40                         60                            20
Inappropriate drug                             20                         10                             0
Unable to recall infusion dose                 50                         20                            10
Unable to calculate infusion rate              70                         40                            40

From Schwid HA, O'Donnell D: Anesthesiologists' management of simulated critical incidents. Anesthesiology 76:495–501, 1992.

Westenskow and associates[120] used a test lung and remotely activated faults in the breathing circuit to test anesthetists' ability to identify ventilation- and breathing-circuit-related faults after hearing an alarm. One group of subjects used the standard alarms of an anesthesia machine (which included a capnograph), set to factory defaults. The other group used the same anesthesia machine with its alarms disabled, relying instead on a neural network-based intelligent alarm and fault identification system. The mean "human response time" (the time between the sounding of the first alarm and the time of event identification) ranged from approximately 15 seconds for airway disconnection to approximately 90 seconds for an endotracheal tube cuff leak. The 10 anesthesiologists tested with the standard alarm setup were unable to identify the fault within 2 minutes on 11 occasions: 5 cuff leaks, 3 airway obstructions, and 3 stuck-open expiratory valves. However, in such circumstances, they did take appropriate compensatory actions while continuing to search for the cause (e.g., increasing the fresh gas flow to compensate for a cuff leak).

The intelligent alarm apparatus used data from three sensors (in-line capnograph, spirometer, and airway pressure). A neural network determined whether any of seven faults were present, and if so, displayed a text message specifying the fault as well as an animated diagram of the lung, airway, and anesthesia breathing circuit with the faulty component highlighted in red. It is interesting that the smart alarm system took slightly longer on average to detect a fault than did the conventional alarm system (25 versus 21 seconds), but the human response time was markedly reduced for three of the seven faults. There were no statistically significant differences between anesthesia residents and faculty members using either alarm system.
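The architecture just described (three sensor-derived inputs, a neural network scoring a fixed set of candidate faults, and a thresholded "no identified fault" fallback) can be sketched roughly as follows. The fault labels, network shape, weights, and threshold below are all placeholder assumptions, not the published system:

```python
import math

# Candidate fault labels (illustrative; the real system classified seven
# specific breathing-circuit faults).
FAULTS = [
    "disconnection", "cuff_leak", "airway_obstruction",
    "stuck_expiratory_valve", "fresh_gas_failure",
    "hypoventilation", "circuit_leak",
]

def forward(x, w1, b1, w2, b2):
    """One tanh hidden layer, softmax output over the fault classes."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    z = [sum(wi * hi for wi, hi in zip(row, h)) + b
         for row, b in zip(w2, b2)]
    m = max(z)                              # stabilize the softmax
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classify(etco2, tidal_volume, peak_pressure, params, threshold=0.5):
    """Return the most probable fault, or None if no score clears threshold."""
    probs = forward([etco2, tidal_volume, peak_pressure], *params)
    best = max(range(len(FAULTS)), key=probs.__getitem__)
    return FAULTS[best] if probs[best] >= threshold else None

# Placeholder parameters (3 inputs -> 4 hidden -> 7 outputs); a real system
# would train these on recorded sensor data from induced faults.
w1 = [[0.1, -0.2, 0.3], [0.0, 0.1, -0.1], [0.2, 0.2, 0.0], [-0.3, 0.1, 0.2]]
b1 = [0.0, 0.1, -0.1, 0.0]
w2 = [[0.1, 0.1, 0.1, 0.1] for _ in range(7)]
b2 = [0.0] * 7
```

The point of the design is downstream of the classifier: once a fault label is available, the display can name the faulty component directly (text message plus highlighted diagram) rather than leaving the anesthetist to infer the cause from a generic threshold alarm.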

The investigators suggested that the more specific alarm messages in their intelligent alarm system could direct the attention of the anesthetist to the occurrence of specific problems and in so doing would decrease workload and reduce the likelihood of fixating on inappropriate information. They stated that such a system's advantages would be even greater in a more realistic clinical environment in which the anesthetist has multiple complex tasks, and not just the detection and identification of ventilation-related events.

Loeb and Fitch[257] developed and tested an auditory display of six physiologic variables. Encouraged by the popularity of the pulse oximeter pulse-tone,[258] [259] [260] they investigated whether the addition of auditory cues would enhance the detection rate and speed of predefined events. The combined (visual plus auditory) display led to faster detection of events, although the rate of correct event identification was slightly higher with the visual-only display (88% versus 80%). There thus appears to be potential to improve the detection of changes in physiologic variables using more sophisticated display modalities, enhancing the "effective vigilance" of anesthetists.
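The core idea of such auditory displays can be sketched with a minimal variable-to-pitch mapping. The mapping below is illustrative only (it is not Loeb and Fitch's actual design); it mirrors the familiar pulse oximeter behavior in which falling oxygen saturation lowers the pulse-tone pitch:

```python
def tone_frequency(value, lo, hi, f_lo=220.0, f_hi=880.0):
    """Linearly map a physiologic value in [lo, hi] to a pitch in [f_lo, f_hi] Hz.

    Values outside [lo, hi] are clamped so the tone stays in an audible,
    interpretable range.
    """
    frac = (min(max(value, lo), hi) - lo) / (hi - lo)
    return f_lo + frac * (f_hi - f_lo)

# Example: SpO2 of 100% sounds at the top of the range; falling saturation
# lowers the pitch, giving a continuous cue without looking at the monitor.
print(tone_frequency(100, lo=80, hi=100))  # 880.0
print(tone_frequency(90,  lo=80, hi=100))  # 550.0
```

A continuous mapping like this conveys trend information in the background of attention, which is precisely the "effective vigilance" benefit the study was probing.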

Complex, Multiple Personnel Simulations of Anesthetic Crises

In the process of evaluating a new type of training for anesthetists concerning crisis management, Howard and colleagues[132] collected anecdotal data on the responses of teams of anesthetists, surgeons, and nurses to planned (and unplanned) critical events. These experiments largely confirmed the results of the studies described earlier and extended them to include more complex management issues and team interactions. Howard and colleagues found a substantial incidence of difficulties in managing multiple problems simultaneously, allocating attention to the most critical needs, acting as team leader, communicating with personnel, and using all available OR resources to best advantage.

Botney and coworkers[133] [134] analyzed similar videotapes from 18 different simulator training sessions on crisis management. In one event, a volatile anesthetic vaporizer had been left on at 4% and was hidden beneath a printout from the noninvasive blood pressure monitor. Simultaneously, there was a mechanical failure of the capnograph, making it impossible to confirm endotracheal intubation using CO2 measurements. This event purposefully presented an invitation to become fixated on the endotracheal tube while ignoring other relevant information. Five of 18 subjects never discovered the volatile anesthetic overdose despite catastrophic effects on blood pressure and heart rate and clear evidence that the endotracheal tube was correctly placed. Of those who did detect the vaporizer setting, the average time to detection was nearly 4 minutes, with some subjects taking more than 12 minutes.

In the second event studied, there was a loss of pipeline O2 supply while an anesthetist was assuming the care of a critically ill patient who required an FIO2 of 100% to achieve satisfactory blood oxygenation. The O2 cylinder on the machine was empty (i.e., it had not been checked by the initial anesthetist, who had left the case after becoming ill). The pipeline failure was quickly detected (19 seconds), but the responses to it were extremely variable and showed a variety of problems. Five of 18 anesthetists closed the anesthesia circuit (which preserves the existing oxygen in the circuit), but all 5 subsequently switched to ventilation with a self-inflating bag using room air or to mouth-to-tube ventilation. Five of 18 could not open the reserve oxygen cylinder because they could not locate the tank wrench attached to the machine (it tended to rest between two gas cylinders). Several teams had trouble mounting a new oxygen tank on the anesthesia machine; problems with the gasket disk were frequent. The individuals did not appear to have a well-formulated plan for managing this event, and they did not optimally coordinate their actions with their assistants or with the other OR personnel.

A study by Byrne and colleagues[216] examined differences in the performance of experienced and less experienced anesthetists. Using a self-developed patient simulator system, they measured time to treatment and deficiencies in patient care in 180 simulations. Significant differences were found only between first- and second-year trainees. As in other studies,[228] significant errors occurred at all levels of experience, and most of the anesthetists deviated from established guidelines. These studies underscore the importance of recurrent training for experienced anesthetists and the truism that experience is not a substitute for excellence.

The reader is referred to Chapter 84 for discussion of newer studies with the use of patient simulators and performance assessment.[25] [30] [210] [212] [218] [228] [261] [262] [263] [264] [265] [266] [267] [268] [269] [270]

Indirect Observation of Anesthetists Involved in Difficult Cases

An unusual approach involving indirect observation of actual cases was used by Cook and associates[63] at Ohio State University. Rather than collecting data on the case itself, these investigators transcribed the discussions of interesting cases occurring at a weekly quality assurance conference. They argued that this approach allowed them to apply a "neutral observer criterion" to the behavior of the anesthetist. The investigators acknowledged the risks of hindsight bias and selection bias with this methodology, but they suggested that their technique provided a unique window on human performance issues.

Fifty-seven cases were analyzed, of which 21 received a full cognitive analysis in the final report. From the presentation and discussion of each case, the investigators classified the evolution of events into one of five categories: acute incident, going-sour incident, inevitable-outcome incident, difficult-airway incident, and no-incident incident. For each case, the cognitive analysis was "based on using knowledge about the cognitive demands of the task domain and data about practitioner activities to analyze the practitioner's information-processing strategies and goals, given the resources and constraints of the situation." The investigators postulated a cognitive cycle, described by Neisser,[271] of data-driven activation of knowledge and knowledge-driven observation and action.

Cook and colleagues[63] called attention to several issues that surfaced in their cognitive analysis of these cases, including the following:

  1. Multiple themes. Many cases involved several lines of concern simultaneously, each of which could have interacted with another (e.g., tight coupling). Each theme had multiple means available to deal with it. Maintaining "situation awareness" was important. The multiple themes sometimes generated competing or conflicting goals. Adaptive planning (as described in the section on the abstract task analysis) was sometimes required.
  2. Unusual situations. The greatest expertise was seen with infrequent or unusual situations, rather than with typical situations.
  3. Allocation of attention. The allocation of attention to relevant stimuli or to the most important "theme" was an important issue. The attentional shifts were not always well supported by existing alarm and display technologies.
  4. Cognitive workload. Anesthetists attempted to reduce their cognitive workload whenever possible.
  5. Team interaction. Cooperative work, team interaction, and communications issues were problems in several cases. These stemmed from both individual and organizational failures to coordinate information and efforts from different organizational components (e.g., ICU and OR, surgeons and anesthesiologists).

Direct Observation of Anesthetists

A team of cognitive scientists and anesthesiologists at the University of Toronto[69] [70] conducted direct observations of anesthesiologists and obtained verbal "think aloud" protocols during actual case management. The group in Tuebingen also performed direct observations for their task analysis studies.[111] [112] [113] Devitt and coworkers[29] performed a study to assess the validity of performance assessment during simulated scenarios.

Video Analysis of Actual Trauma Resuscitations and Anesthetics

Mackenzie, Xiao, and colleagues[36] [272] [273] [274] [275] [276] [277] pioneered the analysis of actual clinical care by anesthetists captured on videotape, focusing on trauma resuscitations and anesthesia for trauma patients at the Maryland Shock Trauma Unit. Their sophisticated recording system captures audio, video, and vital-signs data and requires only that clinicians insert a videotape to start the whole system.[273] Analyses of these cases have revealed inadequacies in the availability and arrangement of monitoring equipment, as well as nonexistent or ambiguous communication.

Problems faced by all investigators are the lack of an accepted standard for objective or subjective evaluation of anesthetist performance and the absence of an agreed-on methodology for analyzing and describing that performance. Several of the previously mentioned groups are working on methodologies for evaluating both technical and behavioral aspects of performance. The measurement of complex performance is a difficult problem, and it is likely to be some time until there is a well-established metric for performance assessment (see also the section on performance evaluation using patient simulators in Chapter 84).
