Letters to the Editor

Towards Robust Validity Evidence for Learning Environment Assessment Tools

To the Editor: The methodology of Colbert-Getz and colleagues’1 recent literature review exploring the validity evidence for the interpretation of scores from learning environment (LE) assessment tools is far from robust. They identified articles in ERIC, PsycINFO, and PubMed, then “conducted another search using SCOPUS for articles citing the original article assessing each LE tool. [They] identified any cited article not already included.” Yet within their time frame, through 2012, SCOPUS lists 20 more studies reporting medical student responses to the Dundee Ready Education Environment Measure (DREEM) alone than PubMed does, half as many again as they report.

The authors identify 28 “tools.” Seven were published before 1990; 21 “were not used in subsequent studies published through 2012.” Yet the abstract states: “They also searched SCOPUS and the reference lists of included studies for subsequent studies that assessed the LE tools. From each study, the authors extracted descriptive, sample, and validity evidence (content, response process, internal structure, relationship to other variables) information. They calculated a total validity score for each tool.” Later, in the Method section, the authors describe how they “calculated the number of subsequent peer-reviewed publications in which each LE tool was used with a new sample of medical students and/or residents.” So how did they calculate the total validity evidence score for the 21 tools with no published studies using the tool after the primary publication?

The authors understand that “Any new tools would need robust validity evidence testing and sampling across multiple institutions with trainees at multiple levels to establish their utility.” Surely this is what the more than 200 DREEM studies currently listed in SCOPUS provide, since they all report robust psychometrics. Is it really appropriate to suggest that DREEM was disseminated so widely because our “network” operated on uncritical “name recognition” “rather than conducting a literature review to find a different tool”? The Korean Society of Medical Education has recently used DREEM in 40 of the 41 medical schools in South Korea, including more than 9,000 respondents.

In addition, the original publication of the Postgraduate Hospital Education Environment Measure (PHEEM) is not cited, but one of its subsequent uses is; a mini version as well as the primary Surgical Theatre Education Environment Measure are cited.

Finally, we note that the authors must have a very narrow view of the educational climate if they think that “Graduate medical education is purely a job training environment.” If that were so, the content of educational environment measures would be different from those included in DREEM, PHEEM, and related instruments, as would the content of most residency contracts, which cite education as the main objective of residency training.

Disclosures: S. Roff and S. McAleer were the primary researchers for the development of the Dundee Ready Education Environment Measure, the Postgraduate Hospital Education Environment Measure, and related instruments.

Sue Roff, MA
Part-time tutor, Centre for Medical Education, Dundee University Medical School, Dundee, United Kingdom, and educational consultant; s.l.roff@dundee.ac.uk.

Sean McAleer, PhD
Programme director and senior lecturer, Centre for Medical Education, Dundee University Medical School, Dundee, United Kingdom.

Reference
1 Colbert-Getz JM, Kim S, Goode VH, Shochet RB, Wright SM. Assessing medical students’ and residents’ perceptions of the learning environment: Exploring validity evidence for the interpretation of scores from existing tools. Acad Med. 2014;89:1687–1693.

To the Editor: Colbert-Getz and colleagues’1 review of learning environment (LE) assessment tools is both timely and relevant. The authors use four of the five categories of validity evidence from the American Psychological and Educational Research Associations to arrive at a total validity evidence score for each of the reviewed tools used to assess both medical student and resident perceptions of the LE. We wonder, however, whether the authors used only the initial publications to arrive at their total validity evidence scores.

For example, while the Medical Student Learning Environment Survey’s (MSLES) original publication received a total score of 3/8 (38%), subsequent publications from Australia and Canada examined the Internal Structure and Relationship to Other Variables criteria.2,3 Both studies used factor analysis to independently confirm that the individual scales of the MSLES represent one dominant factor. Clarke et al2 also examined retest reliability and internal consistency, while Rusticus et al3 correlated the MSLES with student satisfaction and academic performance. Applying the authors’ validity criteria, we would have given an additional rating of 2 (“strong” evidence) for Internal Structure and a score of 1 (“weak” evidence) for Relationship to Other Variables. The total validity evidence score of the MSLES would therefore increase to 6/8 (75%).

Similarly, for measuring the resident LE, the authors give the VA Learners’ Perception Survey (LPS) a total validity score of 2/8 (25%). The original publication by Keitz et al4 used focus groups of medical students and residents in the initial development of the LPS, and factor analysis was used to collapse the original 57 questions into four major domains. Internal consistency using a mixed-effects model was further verified in a subsequent publication by Cannon et al.5 We would have given an additional rating of 1 for Response Process and 2 for Internal Structure, increasing the total validity evidence score to 5/8 (63%).

Both of these updated scores for the MSLES and LPS would be the highest scores listed for validity evidence in undergraduate and graduate medical education, respectively. We also wonder
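The arithmetic behind these recalculations can be made explicit with a minimal sketch, assuming only that the total validity evidence score is the sum of the category ratings out of a maximum of 8 points (four categories rated 0 to 2); the helper name and half-up rounding convention below are illustrative, not taken from the review.

# Minimal sketch of the score recalculation described in the letter above.
# Assumes the total validity evidence score is the sum of category ratings
# out of a maximum of 8; the function name and rounding rule are illustrative.

def updated_validity_score(original_points, additional_points, max_points=8):
    """Add proposed category ratings to an original total and report x/8 (%).

    Percentages round half up, so 62.5% is reported as 63%, matching the
    figures quoted in the letter.
    """
    total = original_points + sum(additional_points)
    percent = int(100 * total / max_points + 0.5)
    return "%d/%d (%d%%)" % (total, max_points, percent)

# MSLES: original 3/8, plus Internal Structure (2) and
# Relationship to Other Variables (1) from the later studies.
print(updated_validity_score(3, [2, 1]))  # 6/8 (75%)

# LPS: original 2/8, plus Response Process (1) and Internal Structure (2).
print(updated_validity_score(2, [1, 2]))  # 5/8 (63%)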
