Advances in Health Sciences Education 1: 179-196, 1997. © 1997 Kluwer Academic Publishers. Printed in the Netherlands.
The Fate of Medical Students With Different Levels of Knowledge: Are the Basic Medical Sciences Relevant to Physician Competence? MOHAMMADREZA HOJAT, JOSEPH S. GONNELLA, JAMES B. ERDMANN and J. JON VELOSKI (Address correspondence to: Mohammadreza Hojat, Ph.D., Jefferson Medical College, Center for Research in Medical Education and Health Care, 1025 Walnut Street, Philadelphia, Pennsylvania 19107, USA. E-mail: [email protected])
Abstract. Purpose - This study was designed to test the hypothesis that an early gap in knowledge of the sciences basic to medicine could have a sustained negative effect throughout medical school and beyond. Method - A longitudinal prospective study of 4,437 students who entered Jefferson Medical College between 1972 and 1991 was conducted in which the students were divided into three groups. Group I consisted of 392 students who failed at least one of the basic sciences courses in the first year of medical school; Group II comprised 398 students who did not fail but had low first-year grade-point averages; and the remaining 3,647 students were included in Group III. The groups were compared on retention and dismissal rates, medical school assessment measures, scores on medical licensing examinations, ratings of clinical competence in residency, board certification rates, and faculty appointments. Results - Significant differences were observed among the three groups, confirming the hypothesis that students' level of knowledge in the sciences basic to medicine early in medical school can predict later performance during medical school and beyond. Implications for early diagnosis of academic deficiencies, better preparation of medical students, and the assessment of clinical competence are discussed. Key words: academic performance, knowledge and competence, medical education
A challenging issue in medical education research is studying the overlap, or interrelations, between measures of knowledge in the sciences basic to medicine and assessments of clinical competence (Gonnella, Hojat, Erdmann and Veloski, 1993). There is no doubt that knowledge lays the foundation for its application in terms of execution of clinical skills. Acquired factual information is presumed to serve as the framework for the development and execution of particular skills (Royer, Cisero and Carlo, 1993). Because of the fundamental role of knowledge in skill development, its acquisition has been described as the primary (declarative) stage of skill development in human cognition (Anderson, 1982, 1983). Despite isolated negative findings (Price, Taylor, Richards, and Jacobsen, 1964a, 1964b; Taylor and Albo, 1993), many medical education and evaluation researchers assume
that the role of knowledge in the development of competence is beyond question or proof (Miller, 1990; Hojat, Gonnella, Veloski and Erdmann, 1993; Lockyer, 1992; Hunt, Scott, Phillips, Yergen and Greig, 1987; Yindra, Rosenfeld and Donnelly, 1988). Although the argument regarding the relevance of factual information to professional performance seems tautological in other disciplines, such relevance has long been a subject of debate in medical education research (Gonnella et al., 1993). There are those who argue that knowledge is transitory and changing, and therefore irrelevant to professional performance. For example, Billings, the founder of Index Medicus, reported in 1889 that nine-tenths of medical knowledge (basic and clinical) becomes worthless within a decade of publication, and much of it is so even when it is published (cited in Taylor, 1992). In addition, the cry is often heard that much of what is laboriously learned in school is forgotten very shortly thereafter (Bahrick, 1979; Higbee, 1977). But the situation is not nearly so bleak as some studies suggest. First, recent studies have indicated that the mind can recall a substantial amount of the material learned in school for as long as 50 years (Semb and Ellis, 1994). While high-ability students learn and remember more than others, no evidence was found to indicate that forgetting occurs as rapidly as commonly assumed or described (Semb and Ellis, 1994). Second, it seems that the transitory nature of knowledge has been somewhat exaggerated. The factual information content of the sciences basic to medicine (body structure and function) has been far less subject to rapid change than technological advancements in medicine. In addition, the rate of change in knowledge cannot be an argument against its acquisition, since the most current knowledge develops as an extension of the old.
Therefore, teaching and learning current factual information cannot be stopped just because of the possibility of its future transformation (Gonnella, Hojat, Erdmann and Veloski, 1993c). Given these assurances, it is still difficult to identify which areas of factual information acquired in medical school are most "relevant" to which situations in practice (Vogel, 1993). Some of the material taught in medical school may never be directly applied in practice; some may change in the course of the advancement of knowledge; and some may become obsolete even as the medical student begins to apply what has been learned. But we cannot know in advance which part will prove to be useful and which will not. In addition, in the practice of medicine, seemingly "irrelevant" factual knowledge may yet prove valuable in an unforeseen situation (Vogel, 1993). Based on this "uncertainty principle", a wider net is deliberately cast to teach the broadest scope of information. The moderate relationships observed between physicians' knowledge attainment in medical school and later clinical competence can be explained by the fact that the activities of medical students in a controlled, supervised situation are unequal to the demands of the professional tasks of physicians (Gonnella, Hojat, Erdmann and Veloski, 1993c), and therefore, any relationship between measures
obtained in medical school and those in the practice of medicine is not expected to be large. In addition to these differences in demands between the acquisition of knowledge in medical school and the application of knowledge in clinical practice, certain methodological issues, such as different methods of evaluation (e.g., objective examination of information recall, subjective ratings of competence) and the time interval between the evaluations, can also spuriously reduce the true relationships. A combination of these conceptual and methodological issues, in fact, produces a "noise" that obscures the genuine "signal" in the relationships (Gonnella, Hojat, Erdmann and Veloski, 1993a). The apparent "shrinkage" of the overlap between evaluations of acquired knowledge and clinical competence (especially in the non-cognitive domain) has contributed to the development of a myth that the basic sciences as taught in medical school are irrelevant to clinical practice (see Gonnella et al., 1993a, for a literature review). The myth has been dispelled in a recently published document that provides empirical evidence from several medical schools in the United States and abroad, as well as from a medical licensing organization (Gonnella, Hojat, Erdmann and Veloski, 1993b). The document suggests that medical school faculty in the sciences basic to medicine should be confident that their evaluations do have significant validity in forecasting clinical competence in residency training, as well as in predicting certification status following residency training (Hojat, Gonnella, Veloski and Erdmann, 1993; Case and Swanson, 1993). It is true that in the continuum of medical education, assessments undergo a steady evolution from an early focus on knowledge acquisition, to a midstream preoccupation with clinical competence, and ultimately to a concern for performance outcomes (Miller, 1993).
If one accepts the premise that the current system of medical education is supported by a compelling logic (Gonnella, Hojat, Erdmann and Veloski, 1993b, p. xxv), then the evaluations of knowledge and competence, despite skeptical critique, should support the principles of continuity and connection. These principles of continuity have been supported by findings suggesting that a low level of knowledge or academic attainment prior to medical school (Hojat, Veloski and Zeleznik, 1985; Rosenfeld, Hojat, Veloski et al., 1992; Zeleznik, Hojat and Veloski, 1983; Glaser, Hojat, Veloski, Blacklow and Goepp, 1992) or during medical school (Veloski, Herman, Gonnella, Zeleznik and Kellow, 1979; Gonnella and Hojat, 1983) exerts a sustained negative effect over the continuum of medical education and practice. The present study was designed to further support the proposition that a solid grounding in knowledge of the sciences basic to medicine increases the likelihood of becoming a clinically competent physician, while weakness in basic sciences knowledge correspondingly decreases the likelihood of success in medical school and beyond. The following hypothesis was tested: Deficiency in knowledge of the sciences basic to medicine, detected early in medical school,
would be associated with enduring negative consequences during medical school, residency, and practice.
Method

The total sample of this study consisted of 4,437 students (26% women) who entered Jefferson Medical College over the two decades between 1972 and 1991, excluding 79 students who participated in the M.D.-Ph.D. program or transferred to Jefferson from other medical schools. The total sample was classified into three mutually exclusive groups based on performance in the first year of medical school. Group I consisted of 392 students (9% of the total) who had failed at least one of the basic sciences courses in the first year of medical school. Group II consisted of 398 students (9% of the total) who had not failed any course in their first year of medical school, but had obtained a first-year grade-point average at least one standard deviation below the mean of their respective classes. Group III consisted of the rest of the cohort (n = 3,647, 82% of the total sample).

VARIABLES:
The independent variable was membership in the three aforementioned groups. Three sets of dependent variables were used. 1. Assessments in medical school: The first set consisted of measures of academic attainment in medical school, including attrition, delayed graduation,3 and on-time graduation rates. Also included in this set were grades and evaluations on objective examinations in the basic sciences (second-year examinations) and in the clinical sciences (third-year objective examinations) in six core clerkships (family medicine, internal medicine, obstetrics/gynecology, pediatrics, psychiatry, and surgery). The objective examinations were mostly in multiple-choice format, with an internal consistency reliability of usually about 0.75, a maximum scale score of 100, and a failing cut-off of 70. Ratings of clinical competence in the six third-year core clerkships and medical school class rank were also included in this set of dependent variables. Ratings of clinical competence in each clerkship were given on a 4-point scale from superior (high honors) to marginal (barely passing). The medical school class rank was calculated with one-third weight given to combined basic sciences examination grades (in the first and second years of medical school) and two-thirds weight to the ratings of competence in the third-year core clinical clerkships (Blacklow, Goepp and Hojat, 1991a). The psychometric properties of the class ranks, and their predictive validity with respect to clinical competence ratings in residency programs, have been previously reported (Blacklow, Goepp and Hojat, 1991a, 1991b).
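As an illustration, the class-rank weighting described above can be sketched in a few lines of Python. This is a hypothetical sketch, not the study's actual procedure: the function and variable names are invented here, and only the one-third/two-thirds weights come from the text.

```python
def composite_score(basic_science_avg, clerkship_avg):
    """Weighted composite underlying class rank: one-third weight on
    combined basic sciences examination grades, two-thirds weight on
    third-year core clerkship competence ratings (weights per the text)."""
    return basic_science_avg / 3.0 + 2.0 * clerkship_avg / 3.0


def class_ranks(students):
    """Rank students (highest composite = rank 1) from a list of
    (student_id, basic_science_avg, clerkship_avg) tuples."""
    ordered = sorted(students,
                     key=lambda s: composite_score(s[1], s[2]),
                     reverse=True)
    return {sid: rank for rank, (sid, _, _) in enumerate(ordered, start=1)}
```

For example, a student averaging 75 in the basic sciences but 90 in the clerkships would receive a composite of 85, reflecting the heavier weight placed on clinical competence ratings.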
2. Scores on medical licensing examinations: The second set of dependent variables included scores on Parts I, II, and III of the examinations of the National Board of Medical Examiners (NBME, recently replaced by the United States Medical Licensing Examination, USMLE, Steps 1, 2, and 3, respectively1). For those who repeated the examinations, the first-attempt scores were used. 3. Measures beyond medical school: The third set of dependent variables included performance measures beyond medical school. These included ratings given by the directors (or supervisors) of the residency programs2 at the end of the first year of residency in four clinical competency areas: medical knowledge, clinical judgment, data-gathering skills, and professional attitudes (each based on a single item on a 4-point Likert-type scale comparing the residents against all graduates ever supervised by the director of the program: 1 = bottom quarter, 2 = lower middle quarter, 3 = upper middle quarter, and 4 = top quarter), and average ratings in three additional competency areas: data processing skills (16 items), interpersonal relationships (10 items), and socioeconomic aspects of patient care (7 items). Ratings on each of these items were given on the 4-point Likert-type scale described above and averaged within each of the three clinical competency areas. The psychometric properties of the rating form have been reported, and empirical evidence supports the factorial structure of these competency areas (Hojat, Veloski and Borenstein, 1986; Hojat, Borenstein and Veloski, 1988). The residency program director's willingness to extend residency past the first postgraduate year was included as one of the evaluation measures in this set of dependent variables: residency program directors (supervisors) were asked if they would be willing to offer further residency to the graduates.
Further residency in the second year is usually offered to those who solidly meet the first-year training standards. Postgraduate clinical competence data were available, with written permission, for the entering classes of 1972 through 1989 (about 73% of the cohort, n = 2,897). Finally, board certification rates and full-time faculty appointments were compared among the practicing physicians in the three groups. Data on board certification rates and faculty appointments were analyzed only for physicians who graduated prior to 1987, to allow for completion of training in medical and surgical sub-specialties.

PROCEDURES:
Data for the present study were retrieved from the database of the Jefferson Longitudinal Study of Medical Students and Graduates (for more description see Hojat, Gonnella, Veloski and Erdmann, 1996). Due to differing means and standard deviations of the grade-point averages, and of comprehensive examination scores in different years, all such grades or scores for each year were transformed to a uniform standard scale (T-scale with a mean of 50 and a standard deviation of 10) before dividing the sample into the three groups. Low grades or low scores were
operationally defined as at least one standard deviation below the class mean; ratings of postgraduate clinical competence of 2 or lower (on the 4-point scale) were also defined as "low" in the present study. With the exception of comparisons of the three groups on their ages by analysis of variance, the chi-square test for contingency tables was used in all comparisons. Since neither the magnitude of the obtained chi-square nor the statistical significance level (alpha) can reveal the practical significance of the group differences, a standardized effect size value was calculated for each obtained chi-square (Cohen, 1987, pp. 215-271). According to the operational definitions suggested by Cohen (1987, p. 227), any standardized effect size approaching 0.50 or greater is considered of practical importance, values around 0.30 are considered moderately important, and smaller values around 0.10 are considered of no practical significance even when the obtained differences are statistically significant. The three groups were compared on the dependent variables, and the results are presented in separate sections based on each set of dependent variables. The probability of Type I error (alpha) was set at 0.01 for all statistical analyses.
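The standardization and effect-size computations described in this section can be sketched as follows. This is a minimal illustration rather than the study's actual code; the effect size follows Cohen's definition w = sqrt(chi-square / N), and the T-scale transformation here assumes the population standard deviation of each class.

```python
import math


def t_scale(scores):
    """Transform raw scores to a T-scale (mean 50, SD 10), making
    grades from different years comparable."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / n)
    return [50 + 10 * (x - mean) / sd for x in scores]


def chi2_and_effect_size(table):
    """Pearson chi-square for an r x c contingency table, plus Cohen's
    standardized effect size w = sqrt(chi2 / N)."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    n = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2, math.sqrt(chi2 / n)
```

Under the benchmarks quoted above, a w near 0.50 would be of practical importance, a w near 0.30 moderately important, and a w near 0.10 of no practical significance.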
Results

ASSESSMENTS IN MEDICAL SCHOOL:
Results of group comparisons on delayed graduation, attrition, examination grades, and ratings of clinical competence in medical school are presented in the following sections. On-time and delayed graduation and attrition rates: The three groups were compared on their rates of on-time and delayed graduation and attrition (Table I). The lowest on-time graduation rate was observed for those in Group I (50%) and the highest for those in Group III (94%), with those in Group II in between (82%). These strikingly large differences in on-time graduation rates are in the predicted direction. Significant differences were also observed in the delayed graduation rates (more than four years to graduate) among the three groups. These rates were 33%, 9% and 3% in Groups I, II and III, respectively (Table I). A higher delayed graduation rate in Group I was expected because these students had failed courses, although they were given a second chance to pass the course without delayed graduation. As shown in the table, the major reason for delayed graduation in Groups I and II was academic (either failing a course or failing the comprehensive examinations), but for those in Group III it was mostly nonacademic (personal, medical, and graduate studies). The association between group membership and reasons for delayed graduation was statistically significant (χ²(4) = 84.1, p < 0.01), and the corresponding effect size of 0.56 is considered large and of practical importance. The attrition rate was also highest in Group I (16%), compared with 8% in Group II and 1% in Group III. The majority of those in Group I who did not complete medical school were
Table I. Retention and Attrition Among Three Comparison Groups

                                        Group I      Group II     Group III
                                        (n = 392)    (n = 398)    (n = 3,647)
                                        n    (%)     n    (%)     n    (%)
Retention
  Graduated on timeb
  Delayed graduationb                   131  (33)    35   (9)     107  (3)
    Delayed by academic problems        107  (27)    26   (6)     37   (1)
    Delayed for nonacademic reasons     9    (2)     3    (1)     61   (2)
    Mixed academic and non-academicc    15   (4)     6    (2)     9