Using a realistic hands-on anesthesia simulator, Gaba and DeAnda[131] [248] [256] studied the response of anesthesia trainees and experienced anesthesia faculty and private practitioners to six preplanned critical incidents of differing type and severity:
These investigators measured both the detection time (as described in the section on vigilance) and the correction time (the time from the event's onset until any one of a predefined set of corrective actions was first taken). They assessed the information sources by which subjects detected the incidents and then confirmed and diagnosed the problem. They also asked subjects to "think aloud" to allow subjective analysis of their decision-making strategies. A summary of the data is shown in Figure 83-10. Major findings from this set of studies included the following:
Figure 83-10. Response times of anesthesiologists with different levels of experience to four simulated critical incidents: A, endobronchial intubation; B, intravenous occlusion; C, atrial fibrillation; and D, airway disconnection. Detection time is represented by open circles, and correction time by solid circles (see text for definitions of these times). Unless there is overlap between response times, each circle represents a single individual. The scale of response times is different for each event. There is substantial variability among incidents and among individuals. Although there was a trend to better performance with increased experience, major errors were made by individuals in all groups. (From DeAnda A, Gaba DM: Role of experience in the response to simulated critical incidents. Anesth Analg 72:308–315, 1991.)
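For concreteness, these two response-time measures can be computed mechanically from a timestamped log of subject actions. The sketch below is illustrative only: the record layout, the "detected" marker, and the corrective-action set are assumptions for this example, not the coding scheme used by Gaba and DeAnda.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoggedAction:
    t: float      # seconds since the incident's onset
    label: str    # coded description of what the subject did

# Illustrative corrective-action set for one incident; the actual sets
# were predefined per incident in the original studies.
CORRECTIVE = {"hand_ventilate", "open_backup_O2", "reconnect_circuit"}

def detection_time(log: list[LoggedAction]) -> Optional[float]:
    """Time from incident onset to the first evidence of detection."""
    hits = [a.t for a in log if a.label == "detected"]
    return min(hits) if hits else None

def correction_time(log: list[LoggedAction]) -> Optional[float]:
    """Time from incident onset until any one of the predefined
    corrective actions is first taken."""
    hits = [a.t for a in log if a.label in CORRECTIVE]
    return min(hits) if hits else None

log = [LoggedAction(12.0, "detected"), LoggedAction(45.0, "hand_ventilate")]
print(detection_time(log), correction_time(log))  # 12.0 45.0
```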
Schwid and O'Donnell,[130] from the University of Washington, used the Anesthesia Simulator Consultant (ASC) screen-only simulator (Anesoft Corp., Issaquah, WA) (Table 83-8) to perform an experiment similar to the realistic-simulator studies of Gaba and DeAnda. This method enabled them to evaluate some elements of anesthetist behavior more carefully, albeit with the limitations imposed by presenting the OR "on the screen." After working on several practice cases without critical incidents, each subject was asked to manage three or four cases involving a total of four serious critical events (esophageal intubation, myocardial ischemia, anaphylaxis, and cardiac arrest). The progression of each event was mediated by the interaction of physiologic and pharmacologic models with the actions taken by the subject. The anesthesiologists studied had varying levels of experience: one group comprised 10 anesthesia residents with at least 1 year of anesthesia training, and the other two groups contained 10 anesthesia faculty members and 10 private practitioners, respectively.
Major findings of the study included the following:
| Incident | Anesthesia Residents (%) | Anesthesia Attendings (%) | Anesthesiologists in Practice (%) |
|---|---|---|---|
| Untreated tachycardia | 30 | 50 | 70 |
| Untreated hypotension | 40 | 60 | 20 |
| Inappropriate drug | 20 | 10 | 0 |
| Unable to recall infusion dose | 50 | 20 | 10 |
| Unable to calculate infusion rate | 70 | 40 | 40 |

From Schwid HA, O'Donnell D: Anesthesiologists' management of simulated critical incidents. Anesthesiology 76:495–501, 1992.
Westenskow and associates[120] used a test lung and remotely activated faults in the breathing circuit to test the anesthetist's ability to identify faults related to ventilation and the anesthesia breathing circuit after hearing an alarm. One group of subjects used the standard alarms of an anesthesia machine (which included a capnograph), set to factory defaults. The other group used the same anesthesia machine with its alarms disabled, relying instead on a neural network-based intelligent alarm and fault identification system. The mean "human response time," defined as the time between the sounding of the first alarm and event identification, ranged from approximately 15 seconds for airway disconnection to approximately 90 seconds for an endotracheal tube cuff leak. The 10 anesthesiologists tested with the standard alarm setup were unable to identify the fault within 2 minutes on 11 occasions: 5 cuff leaks, 3 airway obstructions, and 3 stuck-open expiratory valves. However, in such circumstances, they did take appropriate compensatory actions while continuing to search for the cause (e.g., increasing the fresh gas flow to compensate for a cuff leak).
The intelligent alarm apparatus used data from three sensors (in-line capnograph, spirometer, and airway pressure). A neural network determined whether any of seven faults were present, and if so, displayed a text message specifying the fault as well as an animated diagram of the lung, airway, and anesthesia breathing circuit with the faulty component highlighted in red. It is interesting that the smart alarm system took slightly longer on average to detect a fault than did the conventional alarm system (25 versus 21 seconds), but the human response time was markedly reduced for three of the seven faults. There were no statistically significant differences between anesthesia residents and faculty members using either alarm system.
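The classification step of such a system can be sketched as a small feedforward network mapping features derived from the three sensors to one of the fault classes. Everything concrete below is an assumption for illustration, not the study's actual implementation: the feature choices, the network size, the untrained placeholder weights, and the fault labels beyond the four named in the text.

```python
import numpy as np

# Eight output classes: "no fault" plus seven circuit faults. The four
# faults named in the study appear first; the remaining labels are
# placeholders, since the full set is not given here.
FAULTS = [
    "no_fault", "airway_disconnection", "ET_cuff_leak",
    "airway_obstruction", "stuck_open_expiratory_valve",
    "fault_placeholder_1", "fault_placeholder_2", "fault_placeholder_3",
]

def classify(x, W1, b1, W2, b2):
    """One-hidden-layer feedforward pass: sensor-derived features in,
    fault-class probabilities out."""
    h = np.tanh(x @ W1 + b1)              # hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

# Features would be derived from the three sensors (in-line capnograph,
# spirometer, airway pressure), e.g. end-tidal CO2, tidal volume, and
# peak inspiratory pressure, suitably normalized. The values and weights
# below are placeholders; a real system would use trained weights.
rng = np.random.default_rng(0)
x = np.array([0.2, -1.1, 0.7])
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 8)), np.zeros(8)

probs = classify(x, W1, b1, W2, b2)
print("suspected fault:", FAULTS[int(np.argmax(probs))])
```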
The investigators suggested that the more specific alarm messages in their intelligent alarm system could direct the attention of the anesthetist to the occurrence of specific problems and in so doing would decrease workload and reduce the likelihood of fixating on inappropriate information. They stated that such a system's advantages would be even greater in a more realistic clinical environment in which the anesthetist has multiple complex tasks, and not just the detection and identification of ventilation-related events.
Loeb and Fitch[257] developed and tested an auditory display of six physiologic variables. Encouraged by the popularity of the pulse oximeter pulse tone,[258] [259] [260] they investigated whether the addition of auditory cues would enhance the detection rate and speed for predefined events. The combined (visual plus auditory) display led to faster detection of events, although the rate of correct event identification was slightly higher with the visual-only display (88% versus 80%). These results suggest that more sophisticated display modalities have the potential to improve the detection of changes in physiologic variables, thereby enhancing the "effective vigilance" of anesthetists.
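One common way to build such a display is parameter mapping, in which each variable modulates an attribute of its assigned tone, much as the pulse oximeter's pitch tracks saturation. The sketch below illustrates only the idea; the variable list, baselines, and pitch mapping are assumptions and do not reproduce the display Loeb and Fitch actually tested.

```python
# Baselines and tone assignments are illustrative; six variables are
# used here only to match the count reported in the study.
BASELINES = {"heart_rate": 70.0, "SpO2": 98.0, "systolic_BP": 120.0,
             "etCO2": 38.0, "tidal_volume": 500.0, "temperature": 36.8}
BASE_PITCH = {"heart_rate": 440.0, "SpO2": 880.0, "systolic_BP": 330.0,
              "etCO2": 660.0, "tidal_volume": 550.0, "temperature": 220.0}

def tone_frequency(variable: str, value: float) -> float:
    """Scale the variable's assigned pitch by its ratio to baseline:
    doubling the value raises the tone an octave; halving lowers it one."""
    return BASE_PITCH[variable] * (value / BASELINES[variable])

print(f"SpO2 of 92% maps to a {tone_frequency('SpO2', 92.0):.0f} Hz tone")
```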
In the process of evaluating a new type of training for anesthetists concerning crisis management, Howard and colleagues[132] collected anecdotal data on the responses of teams of anesthetists, surgeons, and nurses to planned (and unplanned) critical events. These experiments largely confirmed the results of the studies described earlier and extended them to include more complex management issues and team interactions. Howard and colleagues found a substantial incidence of difficulties in managing multiple problems simultaneously, applying attention to the most critical needs, acting as team leader, and communicating with personnel and using all available OR resources to best advantage.
Botney and coworkers[133] [134] analyzed similar videotapes from 18 different simulator training sessions on crisis management. In one event, a volatile anesthetic vaporizer had been left on at 4% and was hidden beneath a printout from the noninvasive blood pressure monitor. Simultaneously, there was a mechanical failure of the capnograph, making it impossible to confirm endotracheal intubation using CO2 measurements. This event purposefully presented an invitation to fixate on the endotracheal tube while ignoring other relevant information. Five of 18 subjects never discovered the volatile anesthetic overdose despite catastrophic effects on blood pressure and heart rate and clear evidence that the endotracheal tube was correctly placed. Of those who
In the second event studied, there was a loss of pipeline O2 supply while an anesthetist was assuming the care of a critically ill patient who required an FIO2 of 100% to achieve satisfactory blood oxygenation. The O2 cylinder on the machine was empty (i.e., it had not been checked by the initial anesthetist, who had left the case after becoming ill). The pipeline failure was quickly detected (19 seconds), but the responses to it were extremely variable and showed a variety of problems. Five of 18 anesthetists closed the anesthesia circuit (which preserves the existing oxygen in the circuit), but all 5 subsequently switched to ventilation with a self-inflating bag using room air or to mouth-to-tube ventilation. Five of 18 could not open the reserve oxygen cylinder because they could not locate the tank wrench attached to the machine (it tended to rest between two gas cylinders). Several teams had trouble mounting a new oxygen tank on the anesthesia machine; problems with the gasket disk were frequent. The individuals did not appear to have a well-formulated plan for managing this event, and they did not optimally coordinate their actions with their assistants or with the other OR personnel.
A study by Byrne and colleagues[216] looked at differences in the performance of experienced and less experienced anesthetists. Using a self-developed patient simulator system, they measured time to treatment and deficiencies in patient care in 180 simulations. The results showed significant differences only between the first and second years of training. As seen in other studies,[228] significant errors occurred at all levels of experience, and most of the anesthetists deviated from established guidelines. These studies underscore the importance of recurrent training for experienced anesthetists and the truism that experience is not a substitute for excellence.
The reader is referred to Chapter 84 for discussion of newer studies with the use of patient simulators and performance assessment.[25] [30] [210] [212] [218] [228] [261] [262] [263] [264] [265] [266] [267] [268] [269] [270]
An unusual approach involving indirect observation of actual cases was used by Cook and associates[63] at Ohio State University. Rather than collecting data on the case itself, these investigators transcribed the discussions of interesting cases occurring at a weekly quality assurance conference. They argued that this approach allowed them to apply a "neutral observer criterion" to the behavior of the anesthetist. The investigators acknowledged the risks of hindsight bias and selection bias with this methodology, but they suggested that their technique provided a unique window on human performance issues.
Fifty-seven cases were analyzed, of which 21 received a full cognitive analysis in the final report. From the presentation and discussion of a case, the investigators classified the evolution of the events into one of five categories: acute incident, going-sour incident, inevitable outcome incident, difficult airway incident, and no-incident incident. For each case, the cognitive analysis was "based on using knowledge about the cognitive demands of the task domain and data about practitioner activities to analyze the practitioner's information-processing strategies and goals, given the resources and constraints of the situation." The investigators drew on the cognitive cycle described by Neisser,[271] in which data-driven activation of knowledge is coupled with knowledge-driven observation and action.
Cook and colleagues[63] called attention to several issues that surfaced in their cognitive analysis of these cases, including the following:
A team of cognitive scientists and anesthesiologists at the University of Toronto[69] [70] conducted direct observations of anesthesiologists and obtained verbal "think aloud" protocols during actual case management. The group in Tuebingen also performed direct observations for their task analysis studies.[111] [112] [113] Devitt and coworkers[29] performed a study to assess the validity of performance assessment during simulated scenarios.
Mackenzie, Xiao, and colleagues[36] [272] [273] [274] [275] [276] [277] pioneered the analysis of actual clinical care by anesthetists captured on videotape, focusing on trauma resuscitations and anesthesia of trauma patients at the Maryland Shock Trauma Unit. Their sophisticated recording system captures audio, video, and vital signs data and requires only that the clinicians insert a videotape to start the whole system.[273] Analyses of these cases have revealed inadequacies in the availability and arrangement of monitoring equipment, as well as nonexistent or ambiguous communication.
Problems faced by all investigators are the lack of an accepted standard for objective or subjective evaluation of anesthetist performance and the absence of an agreed-on methodology for analyzing and describing anesthetist performance. Several of the previously mentioned groups