
THE EVALUATION OF CONTINUING MEDICAL EDUCATION PROGRAMS*

S. E. SIVERTSON, M.D.
Associate Chairman, Department of Continuing Medical Education
University of Wisconsin
Madison, Wis.

*Presented as part of a Symposium on Continuing Medical Education held by the Committee on Medical Education of the New York Academy of Medicine, October 10, 1974.

In evaluating programs of continuing medical education (CME) we can appraise their content for accuracy, their rhetoric for perspicuity, and their technology for appropriateness. We can pretest, post-test, and late post-test in order to measure the knowledge gained. We can interview participants at the beginning to identify their objectives, at the midpoint to determine if the learners believe they are realizing their objectives, and at the end to see if their "happiness index" is high, medium, or low. All these have their place, but again and again we fall short in measuring the effect of this education on the health care delivered by providers. This is because the concept of process and outcome of a system for the delivery of health care is not yet taught and learned early in the continuum of medical education. In addition, those long established in the practice of medicine are struggling to grasp these new concepts and the terminology. They are faced with that most difficult task: unlearning the old, which seems efficient, and replacing it with the new, which, until learned, is inefficient. And we have yet to come to grips with the powerful influence of circumstance and environment on the care that must be delivered and on the new learning that must take root.

HISTORICAL PERSPECTIVE

There is historical precedent for this perspective. A slight thread beginning in dim antiquity is now weaving a dominant pattern with greater clarity. Echoing through 800 years are the words of Maimonides:1 "May there never develop a need or notion that my education is complete, but give me strength and leisure and zeal continually to enlarge my knowledge."



Was he possibly counseling the physician not to become so overwhelmed by service that the updating of his knowledge and skills is neglected? In 1935 Youmans2 underlined "a sympathetic appreciation of the problems and limitations of general practice" for the medical educator. In 1940 the report of the Commission on Graduate Medical Education3 said, "Because postgraduate medical education is so new and has been growing so rapidly there is no clear definition of its content and no accepted standards of its quality and length." In 1955 the American Medical Association (AMA) published its study of Postgraduate Medical Education in the United States4 (known as the Vollan Report), which briefly mentioned methods for evaluating programs of postgraduate medical education. Subsequently, in 1957, the AMA published A Guide Regarding Objectives and Basic Principles of Postgraduate Medical Education Programs, which in 1970 was retitled Essentials of Approved Programs in Continuing Medical Education.5 Thereafter, in 1961, Miller et al.6 published Teaching and Learning in Medical School. The next year, 1962, saw publication of Lifetime Learning for Physicians (known as the Dryer Report).7 Three years later came Planning for Medical Progress (called the Coggeshall Report).8 Because of two statements, 1967 was important. Miller9 said, "In a very practical sense the most important element of continuing education may be that of leading practitioners to a study of what they do, to an identification of their own educational deficits, to the establishment of realistic priorities for their own educational programs." Williamson said10 ". . . that in the evaluation of educational effectiveness measurement of what physicians actually do is more important than recording what they claim they should do." In 1971 the AMA invited state medical societies11 to assume the responsibility of accrediting (planning and evaluating) programs of CME within their borders. Hurst proclaimed in 1972,12 "We must find a way for the busy physician to learn from what he does all day and all night." The planning and evaluation of CME for the future have now become fairly clear, but they have emerged only within the last four decades of medical history.

PLANNING AND EVALUATION

Whether the program is a one-time conference, home study, or ongoing as in daily work, evaluation must evolve from a logical sequence of planning.


[Figure: Schema for Continuing Medical Education. The recoverable labels are circumstance, environment, knowledge, skill, attitude, performance, community, patient, and other.]

The schema in the accompanying figure can assist in understanding the planning process and also suggests some variables which have been ignored for too long. The identification of educational objectives must ultimately derive from the assessment of the performance of those who provide care. This requires accurate documentation of the outcome of that performance, whether it involves the patient, the community, or others, as it affects recertification and relicensure examination. The figure depicts interrelating feedback which, if effective, guides the planning and evaluation of learning that updates knowledge, adds new skills, changes attitudes, and improves performance. It reinforces an essential fact: the outcome of what physicians do is more important than what physicians proclaim they should do.

The basic educational methods for assessing knowledge and the knowledge gained from an educational program still hold. Pretesting, post-testing, and late post-testing have been enlivened by recent modifications and the addition of new technology.


Programmed instruction, problems in the management of patients, and self-assessment examinations presented on tapes and slides, computers, films, television, videotapes, and satellites have added immeasurable fascination to independent study. The acquisition of knowledge is now garnished with a measure of relevance and convenience never before available.

Planning and evaluation in the attitudinal arena are difficult. Most methods are still in the developmental stage and of necessity measure attitude by inference, relying on skillfully designed questionnaires.13 A pragmatic alternative simply may be to ask a physician practicing in an up-to-date community if he thinks that the treatment of essential hypertension is worthwhile. If his answer is "no," a reasonable conclusion would be that his attitude and knowledge need some educational assistance. In addition, the observation of an individual's performance, of what he does and does not do, can reflect his attitude. All of these approaches are informative, but there is always a question of validity shading their interpretation.

For practical purposes, skill and performance must be treated as one. Methods for evaluating performance have been reviewed by Barro.14 Briefly restated they are: 1) direct observation, with the problem of expense and expenditure of time; 2) simulated encounters with patients, raising the question of validity; 3) matching the qualities of a practitioner to the attributes of the ideal physician, which again raises the question of validity; and 4) using the records of patients to assess the physician's performance, the most feasible method at present.

The patient's record, what is done to the patient and the patient's response to it, then becomes the key. How the information it contains is organized and utilized to assess the performance and outcome of care will determine whether valid educational objectives are obtained. If it adheres to a sound system of logic, pertinent objectives can be produced; anything less than this from the process of planning and evaluation faces the charge of irrelevance.

All these considerations are just and proper, but note the big arrows in the figure. They are exaggerated to emphasize that the circumstance and environment (the soil) of the practice have an important impact on new learning and performance. If the seeds of new knowledge, skills, and attitudes are sown on unreceptive soil they will not germinate and thrive. Consequently, it behooves both teacher and student to develop seeds adapted to the soil conditions; if the soil is totally unreceptive it must receive major attention and conditioning.


For longer than we care to admit, this poor matching has been the despair and the frustration of both academia and the practice sector. A challenge for the future, then, is whether we can help the physician to study his practice so that he can identify his educational needs and use his limited time for learning more accurately to fulfill those needs.

The Department of Continuing Medical Education at the University of Wisconsin has been moving slowly in that direction. In 1968 it developed a program titled Individual Physician Profile,15 which assists a physician in the study of his practice so that he can perceive his educational needs more clearly. To date approximately 180 individual practices have been studied: 124 in Wisconsin and 56 in 24 other states. All participants perceived their educational needs more clearly by the end of the program. In the process, both desirable and undesirable characteristics of the circumstances and environment of each practice were recognized. These could not be ignored in planning for the individual physician's CME.

A commonality of characteristics emerged. Generally, these characteristics conveniently arrange themselves into two categories: 1) encounters with patients, which included the number of encounters per day, the location or type of encounters, the problems the patients brought to the physicians, and the service given; and 2) organization and administration of the practice, which included the record of the patient and how it was used, delegation of tasks by the physician, his personal likes and dislikes, and the education of patients. Some of these matters will be discussed below.

It is probably still true that a physician learns best from his practice: i.e., from what he does to the patient and the patient's response to it. Beyond a certain number of encounters with patients per day, however (which each physician must recognize for himself), learning from his practice is threatened. A useful bench-mark figure for family practitioners in our experience is about 50 encounters per day.16 In terms of patients seen in the office, the number is about 30 for a family practitioner who also maintains a hospital practice. A family physician with a substantially greater number of encounters with patients, 70 or more per day, probably is in serious trouble with his CME as it relates to his daily work, but this also depends on how the practice is organized. Since individuals vary greatly in their abilities, these numbers are bench-marks only.
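As an illustration only, the sketch below applies these bench-marks to a single day's encounter counts. The thresholds of about 50 total encounters, about 30 office visits for a physician who also keeps a hospital practice, and 70 or more as a danger sign come from the discussion above; the function name, parameters, and sample figures are assumptions introduced for the example.

```python
# A hypothetical screening check based on the bench-marks discussed above.
# The thresholds come from the article; everything else is assumed.

def screen_daily_load(total_encounters: int,
                      office_visits: int,
                      keeps_hospital_practice: bool) -> str:
    """Return a rough flag for whether daily volume threatens practice-based learning."""
    if total_encounters >= 70:
        return "probably in serious trouble with CME as it relates to daily work"
    if total_encounters > 50:
        return "above the general bench-mark of about 50 encounters per day"
    if keeps_hospital_practice and office_visits > 30:
        return "above the office bench-mark of about 30 visits per day"
    return "within the suggested bench-marks"

if __name__ == "__main__":
    # Example: 58 encounters, 34 of them office visits, hospital practice maintained.
    print(screen_daily_load(58, 34, True))
```

Such a check is only a starting point; as the text notes, the interpretation still depends on how the practice is organized and on the individual physician's abilities.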


The type of encounter was meaningful also; for example, when a physician was personally receiving more than 10 phone calls per day, the educational consultant could always sense the frustration and resentment of the physician when the telephone interrupted his examination of a patient. The physician either refused or had not learned to delegate most of these calls to a properly trained non-M.D. associate. A practitioner in this dilemma needs a ranked ordering of the problems that patients call about and a written protocol for dealing with each. Such protocols would facilitate the training of non-M.D. associates and also could reduce exposure to malpractice suits. Further, uncontrolled telephone traffic, which produces unnecessary harassment and frustration, abrogates the unwritten contract between a physician and the patient in his office. If the number of encounters with patients, either in total or of a specific type, is excessive for an individual physician, this is a problem which probably commands top priority in his educational planning before other learning programs can be successfully introduced.

A few family practitioners in eastern states had, for various reasons, given up the care of hospitalized patients in their practices;17 their educational needs were obviously restricted to the care of ambulatory patients. An educational program planned for that kind of need is uncommon today and is currently best met with courses that focus on noninvasive techniques. Our experience at the University of Wisconsin further suggests that the emphasis for these physicians should be on learning the fine points of interviewing, physical diagnosis, and pharmacotherapy; for older physicians cardiac auscultation heads the list.

Of the 180 participants in the Individual Physician Profile, none had, as yet, a system of patients' records that could classify, cross-index, and produce in ranked order all the problems his patients were bringing to him. In addition, none had a record system that could analyze the outcome of care delivered in his own office, which we found constituted about 80% of the group's encounters with patients. And none had a record system that would identify a point in time when new learning was introduced into his practice so that any resultant change in performance, whether in the care of patients or in the organization and administration of the practice, could be evaluated months or years later. Currently, the most logical system for recording information in the patient's record (from which the delegation of tasks and job descriptions may also evolve for properly trained non-M.D. personnel) is represented by the Problem-Oriented Medical Record.18
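To make those three missing record capabilities concrete, here is a minimal sketch, in present-day terms, of how encounter records might be classified, ranked, and split around the date new learning was introduced. The record fields, problem labels, and function names are assumptions made for illustration; they are not part of the Individual Physician Profile or of the Problem-Oriented Medical Record as such.

```python
# A hypothetical sketch of the three record capabilities the article finds missing:
# (1) rank the problems patients bring, (2) tally outcomes of care delivered in the
# office, and (3) compare outcomes before and after a chosen date when new learning
# entered the practice. All field names and sample data are invented for illustration.

from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class Encounter:
    when: date
    problem: str   # the problem the patient brought
    setting: str   # "office", "hospital", "phone", ...
    outcome: str   # e.g. "resolved", "referred", "follow-up"

def ranked_problems(encounters):
    """Produce the patients' problems in ranked order of frequency."""
    return Counter(e.problem for e in encounters).most_common()

def office_outcomes(encounters):
    """Tally outcomes for care delivered in the physician's own office."""
    return Counter(e.outcome for e in encounters if e.setting == "office")

def before_and_after(encounters, new_learning_date):
    """Split office-outcome tallies around the date new learning was introduced."""
    encounters = list(encounters)
    before = office_outcomes(e for e in encounters if e.when < new_learning_date)
    after = office_outcomes(e for e in encounters if e.when >= new_learning_date)
    return before, after

if __name__ == "__main__":
    sample = [
        Encounter(date(1973, 3, 1), "essential hypertension", "office", "follow-up"),
        Encounter(date(1973, 3, 8), "upper respiratory infection", "office", "resolved"),
        Encounter(date(1974, 5, 9), "essential hypertension", "office", "resolved"),
    ]
    print(ranked_problems(sample))
    print(before_and_after(sample, date(1974, 1, 1)))
```

The ranked list corresponds to the educational needs the article says should drive planning, and the before-and-after tallies correspond to evaluating a change in performance months or years after new learning is introduced.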


From the beginning we have asked every participant in our Individual Physician Profile if he knew of the Problem-Oriented Medical Record. In addition, it was recommended that he become familiar with it and implement such a record system in his practice. From 1968 to the present we have observed a significant transition in the knowledge about and the implementation of this type of recording system. During the first three years 54% of our 63 participants had never heard of the problem-oriented record. Since 1971 all participants have been familiar with it and many were beginning to use it. Several participants were using it extremely effectively; of them it could be said that there are some physicians who are remarkable independent learners whose practices become living textbooks of medicine. Perhaps this should be the ultimate goal of all stages of medical education.

The dimensions of circumstance and environment can be understood further from an historical perspective. World War II is usually identified as a milestone in medical research, education, and service. Stimulated by large governmental spending during that war and increasingly greater expenditures after it, the knowledge explosion reportedly reached megaton proportions. Medical schools and training programs struggled continuously to implant the new knowledge, skills, and attitudes in their students, with resultant repercussions. The increasing gap between the fruits of research and their application to clinical medicine was one example, which in large part stimulated the federal government's Regional Medical Program. Other reverberations found their expression at the community level. Impaction intervened when the new graduates began to blend their fresh knowledge and skills with those of the older generation of practitioners, who generally saw their patients on a first-come, first-served basis; convenience and availability of care reigned supreme. The new physician soon recognized that this system for meeting the demand for medical service did not permit enough time with a patient to apply the new knowledge and skills. The system of appointments then came to the fore. A conflict developed between the younger generation, who fully embraced the system of appointments, and the older practitioners who, although recognizing the wisdom of appointments, had great difficulty postponing the care of long-time friends and patients. In addition, the patient had to learn to wait his turn for nonemergency care. It took two to three decades for this transition to occur; in some areas it is still in progress.


Undergraduate, graduate, and continuing medical education never formally recognized or directly dealt with the concept of a system in which enough time with the patient was an important characteristic of a soil receptive to the seeds of new knowledge, skills, and attitudes. Now we are joined in a new struggle between generations of physicians and consumers who demand accountability in health care. The new watershed in the history of medicine is the patient's record, which must contain all necessary information arranged in an accurate, thorough, and logical sequence to respond to these challenges. As such, the evaluation of CME must take its directions from the patient's record and, if necessary, devote more effort to its implementation to permit its use in the assessment of performance and the identification of relevant learning objectives.

In conclusion, the evaluation of programs of CME can be summed up as follows:

No objective, not even one,
can equal the deeds we have done.
It is deeds, then, we must peruse,
to find objectives we can use.
Deeds of outcome large or small,
will test the learning of us all.
Environment and circumstance,
shun them not with looks askance.
For in that soil we brew the deeds
that illumine our learning needs.
Teacher! Learner! like you and me.
This is the guts of CME.

REFERENCES

1. Maimonides, quoted by Vollan, D. D.: Postgraduate Medical Education in the United States. Chicago, Amer. Med. Ass., 1955.
2. Youmans, J. B.: Experience with a postgraduate course for practitioners: Evaluation of results. J. Ass. Med. Coll. 10:154-73, 1935.
3. Commission on Graduate Medical Education: Graduate Medical Education. Chicago, University of Chicago Press, 1940.


4. Vollan, D. D., op. cit.
5. Council on Medical Education: Essentials of Approved Programs in Continuing Medical Education. Chicago, Amer. Med. Ass., 1970.
6. Miller, G. E., Graser, H. P., Abrahamson, S., Harnack, R. S., Cohen, I. S., and Land, A.: Teaching and Learning in Medical School. Cambridge, Mass., Commonwealth Fund, Harvard University Press, 1961.
7. Dryer, B. V.: Lifetime learning for physicians. J. Med. Educ. 37:120-27, 1962.
8. Coggeshall, L. T.: Planning for Medical Progress Through Education. Evanston, Ill., Ass. Amer. Med. Coll., 1965.
9. Miller, G. E.: Continuing education for what? J. Med. Educ. 42:320-26, 1967.
10. Williamson, J. W., Alexander, M., and Miller, G. E.: Continuing education and patient care research. J.A.M.A. 201:938-42, 1967.
11. Howard, R. W.: Accreditation of continuing medical education at the state level. Rhode Island Med. J. 56:362-65, 1973.
12. Walker, H. K., Hurst, J. W., and Wood, M. F.: Applying the Problem Oriented System. New York, MEDCOM, 1973.
13. Schofield, W.: Research Studies of Medical Students and Physicians Utilizing Standard Personality Instruments. Washington, D.C., Ass. Amer. Med. Coll. Div. Educ. Measurement Res., 1972.
14. Barro, A. R.: Survey and evaluation of approaches to physician performance measurement. J. Med. Educ. 48:1051-93, 1973.
15. Sivertson, S. E., Meyer, T. C., Hansen, R., and Schoenenberger, A.: Individual physician profile: Continuing education related to medical practice. J. Med. Educ. 48:1006-12, 1973.
16. Sivertson, S. E., Hansen, R. H., Shropshire, R. W., and Schoenenberger, A. O.: Family practices in Wisconsin: Implications for medical education and delivery of health care. Wis. Med. J. 73:170-74, 1974.
17. Unpublished data.
18. Weed, L. L.: Medical Records, Medical Education and Patient Care. Cleveland, Press of Case Western Reserve University, 1969.
