Letters to the Editor


Towards Robust Validity Evidence for Learning Environment Assessment Tools

To the Editor: The methodology of the recent literature review by Colbert-Getz and colleagues1 exploring the validity evidence for the interpretation of scores from learning environment (LE) assessment tools is far from robust. They identified articles in ERIC, PsycINFO, and PubMed, then “conducted another search using SCOPUS for articles citing the original article assessing each LE tool. [They] identified any cited article not already included.” Yet within their time frame, through 2012, SCOPUS lists 20 more studies reporting medical student responses to the Dundee Ready Education Environment Measure (DREEM) alone than PubMed does (half as many again as they report).

The authors identify 28 “tools.” Seven were published before 1990; 21 “were not used in subsequent studies published through 2012.” Yet the abstract states: “They also searched SCOPUS and the reference lists of included studies for subsequent studies that assessed the LE tools. From each study, the authors extracted descriptive, sample, and validity evidence (content, response process, internal structure, relationship to other variables) information. They calculated a total validity score for each tool.” Later, in the Method section, the authors describe how they “calculated the number of subsequent peer-reviewed publications in which each LE tool was used with a new sample of medical students and/or residents.” So how did they calculate the total validity evidence score for the 21 tools with no published studies using the tool after the primary publication?


The authors acknowledge that “Any new tools would need robust validity evidence testing and sampling across multiple institutions with trainees at multiple levels to establish their utility.” Surely this is what the more than 200 DREEM studies currently listed in SCOPUS provide, since they all report robust psychometrics. Is it really appropriate to suggest that DREEM was disseminated so widely because our “network” operated on uncritical “name recognition” “rather than conducting a literature review to find a different tool”? The Korean Society of Medical Education has recently used DREEM in 40 of the 41 medical schools in South Korea, with more than 9,000 respondents. In addition, the original publication of the Postgraduate Hospital Educational Environment Measure (PHEEM) is not cited, but one of its subsequent uses is; a mini version and the primary Surgical Theatre Education Environment Measure are cited.

Finally, we note that the authors must have a very narrow view of the educational climate if they think that “Graduate medical education is purely a job training environment.” If that were so, the content of educational environment measures would differ from that of DREEM, PHEEM, and related instruments, as would the content of most residency contracts, which cite education as the main objective of residency training.

Disclosures: S. Roff and S. McAleer were the primary researchers for the development of the Dundee Ready Education Environment Measure, the Postgraduate Hospital Educational Environment Measure, and related instruments.

Sue Roff, MA Part-time tutor, Centre for Medical Education, Dundee University Medical School, Dundee, United Kingdom, and educational consultant; [email protected].

Sean McAleer, PhD Programme director and senior lecturer, Centre for Medical Education, Dundee University Medical School, Dundee, United Kingdom.

Reference
1 Colbert-Getz JM, Kim S, Goode VH, Shochet RB, Wright SM. Assessing medical students’ and residents’ perceptions of the learning environment: Exploring validity evidence for the interpretation of scores from existing tools. Acad Med. 2014;89:1687–1693.

To the Editor: Colbert-Getz and colleagues’1 review of learning environment (LE) assessment tools is both timely and relevant. The authors use four of the five categories of validity evidence from the American Psychological and Educational Research Associations to arrive at a total validity evidence score for each tool they reviewed for assessing medical student and resident perceptions of the LE. We wonder, however, whether the authors used only the initial publications to arrive at their total validity evidence scores.

For example, while the original publication of the Medical School Learning Environment Survey (MSLES) received a total score of 3/8 (38%), subsequent publications from Australia and Canada examined the Internal Structure and Relationship to Other Variables criteria.2,3 Both studies used factor analysis to independently confirm that the individual scales of the MSLES represent one dominant factor. Clarke et al2 also examined test-retest reliability and internal consistency, while Rusticus et al3 correlated the MSLES with student satisfaction and academic performance. Applying the authors’ validity criteria, we would have given an additional rating of 2 (“strong” evidence) for Internal Structure and a rating of 1 (“weak” evidence) for Relationship to Other Variables. The total validity evidence score of the MSLES would therefore increase to 6/8 (75%).

Similarly, for measuring the resident LE, the authors give the VA Learners’ Perceptions Survey (LPS) a total validity score of 2/8 (25%). The original publication by Keitz et al4 used focus groups of medical students and residents in the initial development of the LPS, and factor analysis was used to collapse the original 57 questions into four major domains. Internal consistency, assessed using a mixed-effects model, was further verified in a subsequent publication by Cannon et al.5 We would have given an additional rating of 1 for Response Process and 2 for Internal Structure, increasing the total validity evidence score to 5/8 (63%). Both of these updated scores, for the MSLES and the LPS, would be the highest validity evidence scores listed for undergraduate and graduate medical education, respectively.
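As an aside for readers tracing the arithmetic, a minimal sketch of the recalculation is given below. The function and its name are hypothetical (ours, not the authors'); the only element carried over from the review is its 8-point scheme, in which each of the four validity evidence categories contributes at most 2 points.

```python
# Sketch of the score arithmetic described above (assumed 8-point scheme:
# four validity evidence categories, each rated 0, 1 "weak", or 2 "strong").

def updated_score(original_total, additional_ratings, maximum=8):
    """Add proposed category ratings to a tool's original total and
    return the new total with its percentage of the maximum."""
    total = original_total + sum(additional_ratings)
    return total, 100.0 * total / maximum

# MSLES: 3/8 in the review, plus 2 (Internal Structure) and
# 1 (Relationship to Other Variables) -> (6, 75.0), i.e., 6/8 (75%)
print(updated_score(3, [2, 1]))

# VA LPS: 2/8 in the review, plus 1 (Response Process) and
# 2 (Internal Structure) -> (5, 62.5), rounded above to 5/8 (63%)
print(updated_score(2, [1, 2]))
```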

We also wonder whether the authors assessed the interrater reliability of the ratings used to assemble the validity evidence, since their checklist was adapted from Beckman et al,6 who reported kappa values ranging from −0.10 to 0.96, with agreement particularly poor for the Response Process criterion.

Disclosures: None reported.

Lawrence K. Loo, MD Vice chair, Education and Faculty Development, Department of Medicine, and professor of medicine, Loma Linda University School of Medicine, Loma Linda, California; [email protected].

John M. Byrne, DO Associate chief of staff, Education, VA Loma Linda Healthcare System, and associate professor of medicine, Loma Linda University School of Medicine, Loma Linda, California.

References
1 Colbert-Getz JM, Kim S, Goode VH, Shochet RB, Wright SM. Assessing medical students’ and residents’ perceptions of the learning environment: Exploring validity evidence for the interpretation of scores from existing tools. Acad Med. 2014;89:1687–1693.
2 Clarke RM, Feletti GI, Engel CE. Student perceptions of the learning environment in a new medical school. Med Educ. 1984;18:321–325.
3 Rusticus S, Worthington A, Wilson D, Joughin K. The medical school learning environment survey: An examination of its factor structure and relationship to student performance and satisfaction. Learn Environ Res. 2014;17:423–435.
4 Keitz SA, Holland GJ, Melander EH, Bosworth HB, Pincus SH; VA Learners’ Perceptions Working Group. The Veterans Affairs Learners’ Perceptions Survey: The foundation for educational quality improvement. Acad Med. 2003;78:910–917.
5 Cannon GW, Keitz SA, Holland GJ, et al. Factors determining medical students’ and residents’ satisfaction during VA-based training: Findings from the VA Learners’ Perceptions Survey. Acad Med. 2008;83:611–620.
6 Beckman TJ, Cook DA, Mandrekar JN. What is the validity evidence for assessments of clinical teaching? J Gen Intern Med. 2005;20:1159–1164.

In Reply to Roff and McAleer and to Loo and Byrne: We appreciate the interest in our article from Roff and McAleer and from Loo and Byrne. Here, we will attempt to clarify a few points.

First, we did not assess the interrater reliability of our coding; however, in the few instances when ratings differed, we discussed them until we reached consensus.

Second, we hoped that readers would not interpret our comments as implying that job training in residency is not an educational pursuit. Instead, we wished to emphasize that residents have evolved from being exclusively learners to being trainees who are also employees with professional responsibilities.

Third, many of the articles that have cited Roff and colleagues’ Dundee Ready Education Environment Measure (DREEM) were excluded from the review because they did not meet the inclusion criteria, which were carefully considered by the authorship team and appreciated during the peer review of our article. Only 45 articles about the DREEM were published in English in peer-reviewed journals and provided new quantitative data from medical students or residents. Repeated citations of the DREEM suggest that it has filled an unmet need since its development in the 1990s, and use of the DREEM may reflect the high esteem that the community has for the accomplishments in medical education at Dundee. Had the validity evidence for the tool been more robust and compelling, it might have been included in more studies. The inclusion criteria also explain why the first article to mention the Postgraduate Hospital Educational Environment Measure (PHEEM)1 was not included in the review. Specifically, no quantitative data appeared in that article’s Results section, and thus validity evidence could not be established. We therefore elected to use, for the review, the first publication on the PHEEM in which quantitative data were provided.2

Finally, subsequent-use articles did not factor into the validity evidence scores that we generated. Instead, the score for each tool was calculated from the evidence provided in the first publication with quantitative data. It is not unreasonable to suggest that subsequent publications may provide additional validation and that they may serve as a proxy for acceptability within the educational community. Another review could be conducted to investigate the entire body of validity evidence associated with each scale. Although it would be interesting to see whether validity evidence is strengthened by subsequent publications, some threshold of evidence is needed in the first article presenting a new tool to justify publication.

Additionally, if ample validity evidence is not provided in the initial publication, how might scholars and educators know whether the tool should subsequently be used?

Disclosures: None reported.

Jorie M. Colbert-Getz, PhD Assistant professor, Department of Internal Medicine Administration, and director, Medical Education Research, University of Utah School of Medicine, Salt Lake City, Utah; [email protected].

Robert B. Shochet, MD Associate professor, Department of Medicine, and director, Colleges Advisory Program, Johns Hopkins University School of Medicine, Baltimore, Maryland.

Scott M. Wright, MD Professor and division chief, Division of General Internal Medicine, Johns Hopkins Bayview Medical Center, Johns Hopkins University School of Medicine, Baltimore, Maryland.

References
1 Roff S, McAleer S, Skinner A. Development and validation of an instrument to measure the postgraduate clinical learning and teaching educational environment for hospital-based junior doctors in the UK. Med Teach. 2005;27:326–331.
2 Clapham M, Wall D, Batchelor A. Educational environment in intensive care medicine—use of Postgraduate Hospital Educational Environment Measure (PHEEM). Med Teach. 2007;29:e184–e191.

How Do U.S. and Canadian Medical Schools Teach About the Role of Physicians in the Holocaust?

To the Editor: Almost every aspect of contemporary medical ethics is influenced by the history of physician involvement in the Holocaust. Most notorious is the unethical research by Nazi doctors, which led to the Nuremberg Code,1 but physicians and their organizations played many other roles, including rationalizing and implementing programs of forced sterilization and “euthanasia” of disabled individuals, and developing, testing, and refining the killing, cremation, and camouflage technologies used in the death camps.2,3 This history informs modern debates about economic and social forces in medical practice, genetic testing and therapies, public health research and practice, physician involvement in prisoner interrogations and executions, end-of-life decision making, and many other issues.1–4
