Volume 4, No. 4 • Winter 1989

APSF Grant Program Supports Clinical Research

Jeffrey B. Cooper, Ph.D.

The APSF has provided support for 11 research projects over the past three years. These were selected from 86 applications reviewed by the Scientific Evaluation Committee. What is the process by which the grants are selected? What kinds of projects have we sponsored? What issues are targeted in the applications? What has been the product of our efforts?

The Scientific Evaluation Committee is composed of nine members. Each reviews every application, writes a brief critique, and assigns a priority score. The scores are tabulated and the top group (from 30% to 50% of all applications, depending on the distribution of scores) is ranked a second time, with each reviewer having the benefit of the others’ critiques (this is important given the diversity of interests of the committee). The top applications are funded depending upon the total funds available. The winners are announced at the annual meeting of the APSF Board of Directors, held on the Friday or Saturday of the Annual Meeting of the ASA. The turnaround time is fast: the application deadline is June 15, winners are announced in mid-October, and funds are available January 1.

The criteria used in judging applications are stated in the announcement of the grant program. The guidelines for application provided by the APSF administrative office also give examples of the types of projects that may be of interest: new clinical methods for prevention and/or early diagnosis of mishaps, evaluation of new and/or re-evaluation of old technologies for prevention and diagnosis, identification of predictors of patients and/or anesthetists at increased risk for mishaps, development of innovative methods for study of low-frequency events, and methods for measurement of the cost-effectiveness of techniques designed to increase patient safety.

The 86 applications were received from 57 institutions. The proposed topics define what this diverse set of investigators believes constitutes the essence of anesthesia patient safety; they can be classified within the following subjects: outcome assessment, monitoring, training, risk assessment, specific injury prevention (study or solution of a defined problem), alarms, simulators, quality assurance, incident study, clinical study, human factors, and device development. Most applications have elements involving at least two categories.

Many applications (26) addressed the prevention of injury associated with a specific complication, incident, outcome or clinical problem, e.g., myocardial infarction, disconnection, esophageal intubation, excessive blood loss, awareness, difficult airway, syringe swap. The foci included development of an improved monitoring approach, a new device or technique, or a method to better predict occurrence of the event. Twenty-one projects involved monitoring, most often assessment of efficacy or effectiveness (in general or for some specific problem) or development of a new or improved modality. Nine projects proposed to attack the ubiquitous problem of alarms, most often how to better integrate them or how to reduce the rate of false alarms.

Twenty projects proposed the study of outcome. Many were targeted at specific areas such as day surgery, pediatrics, the elderly, a specific anesthetic technique (e.g., spinal anesthesia), or an operative procedure (e.g., C-section). Also, ten applications proposed some form of study of anesthesia incidents: developing systems to collect them, establishing how often they happen, determining how training can prevent them, or studying how clinicians respond to them. Nine projects related to assessment of risk, i.e., predicting when problems or outcomes are more likely to occur, either in general or for specific problems or populations.

Nine applications related to the study or improvement of clinician training or the use of training to prevent a specific problem. Six projects addressed human factors issues, while seven projects dealt with simulators, specifically as a means to study human factors or training.

Of the 11 applications approved and funded (see Table), two categories of study stand out – outcome study and simulators (five each). Yet, the exact topic of an application was probably not the most important factor in its being selected for funding. Often, other reasons were responsible for applications being downgraded. Many simply were not well written, did not have adequate justification for the study, or did not have a sound research plan. Frequently, no hypothesis was stated, or statistical issues were inadequately appreciated or not sufficiently explained. It is likely that innovation is given more weight in the reviewers’ judgments (the concept of simulators has never been specifically mentioned in the list of topics of possible interest to the APSF). Perhaps the best advice that can be given to applicants is to seek the counsel of someone experienced in research and the pursuit of funding.

It is still too early to judge whether the APSF research grant program is directed at the right things. The earliest projects we have helped to support have produced good publications with new and innovative information; witness the work of Cohen et al. in Manitoba, Gaba et al. at Stanford, and Schwid in Seattle. Several ongoing projects are very promising.

The APSF encourages investigators and inventors to submit their ideas for review. Even if not funded by the APSF, the effort can be the basis of a search for funds elsewhere, often from a hospital or department. The proposal format is comparatively simple and the turnaround time is relatively fast (efforts will be made next year to return feedback on unsuccessful proposals even sooner, to hasten the process of improvement and resubmission to another potential funding source). If successfully funded by the APSF, investigators often seem to get a great deal of additional support from their institutions, perhaps because the approval of the highly critical Committee on Scientific Evaluation indicates that the work is meaningful and scientifically acceptable. In fact, the APSF may be an important mechanism for departments to obtain outside peer review of patient safety efforts, which have few other sources of support.

The Research Grants Program is seen as one of the more important functions of the APSF. It has clearly stimulated much new thinking about patient safety. While it is unfortunate that our funding is so limited, it appears to date that the resources have been and are being put to good use.

1990 Research Grants Awards

Mark Warner, M.D., Department of Anesthesiology, Mayo Clinic: Epidemiological Analysis of Perioperative Mortality and Major Morbidity

This project seeks to define predictive risk factors, etiologies, and incidence of perioperative mortalities and major morbidities by building upon the existing Mayo Anesthesia Database and integrating it with the ongoing Rochester Epidemiology Project and other institutional databases. These other databases are extensive, describing the population of the surrounding county, and should provide a set of information rarely available for analysis of anesthesia outcomes.

David Woods, Ph.D., Department of Industrial and Systems Engineering, Ohio State University: Human Performance in Anesthesia: A Corpus of Cases Analyzed Using Cognitive Science Techniques

Dr. Woods will bring his background in cognitive and experimental psychology to the study of human performance in anesthesia. Using descriptions from the Ohio State Department of Anesthesia mortality and morbidity conferences, he will apply a process-tracing methodology derived from cognitive science to trace the flow of information and the activation of knowledge during these events. The project will produce a collection of encoded protocols. This is an exciting opportunity for interdisciplinary study.

Howard Schwid, M.D., Department of Anesthesiology, University of Washington, School of Medicine: Evaluation of Anesthesiologists’ Responses to Simulated Critical Incidents Using the Anesthesia Simulator Recorder

Dr. Schwid, previously funded by the APSF Research Grants Program for work on his simulator/recorder, will now use that device to grade the skills of one hundred anesthesiologists in managing cases. The study will help determine how well anesthesiologists are prepared to manage a particular critical incident and examine the relationship between performance, years of experience, and retraining.

Dr. Cooper, Massachusetts General Hospital, Boston, is Chairman of the APSF Scientific Evaluation Committee.