
Downloaded from www.ajronline.org by SUNY Downstate Medical Center on 03/27/15 from IP address Copyright ARRS. For personal use only; all rights reserved

Special Articles Review

Medical Education Research for Radiologists: A Road Map for Developing a Project Kara Gaetke-Udager 1 Corrie M. Yablon Gaetke-Udager K, Yablon CM

OBJECTIVE. Medical education research is challenging to do well, but researchers can develop a robust project with knowledge of basic principles. Thoughtful creation of a study question, development of a conceptual framework, and attention to study design are crucial to developing a successful project.
CONCLUSION. A thorough understanding of research methods and elements of survey design is necessary. Projects that result in changes to behavior, clinical practice, and patient outcomes have the most potential for success.


Keywords: conceptual framework, medical education research, pre- and posttest results, qualitative methods, quantitative methods, survey design
DOI:10.2214/AJR.14.13675
Received August 19, 2014; accepted after revision September 30, 2014.
1 Both authors: Department of Radiology, University of Michigan, 1500 E Medical Center Dr, B1 D520, Ann Arbor, MI 48109. Address correspondence to K. Gaetke-Udager ([email protected]).

This article is available for credit.
AJR 2015; 204:692–697
0361–803X/15/2044–692 © American Roentgen Ray Society


Medical educational research is a field that has historically been neglected by researchers and educators. In 1994, the amount of federal spending on health professions education research was less than 0.001% of the total amount of federal spending on graduate medical education [1]. A 2005 study in the Journal of the American Medical Association found that 75% of medical education studies were self-funded [2]. Although many universities provide faculty development programs and financial support for small pilot medical education projects, funding for larger education research projects involving multiple years or multiple institutions is more difficult to achieve [3, 4]. It is important to note that most research in radiology is performed by academic radiology departments and that this research is heavily funded by clinical revenue [5]. Thus, the recent cuts in clinical reimbursement diminish the resources available to fund research in academic departments. In light of the recent cuts in government funding to Medicare, and thereby to graduate medical education, researchers face additional significant challenges in finding funding for their work [6].

If the lack of funding were not barrier enough, medical education projects are challenging to perform and even more difficult to perform well. Researchers pursuing medical educational research face inherent obstacles, such as small sample sizes and the lack of a comparison or control group, factors that limit the statistical power of the study (i.e., the ability of the test to detect an effect that actually exists) and the ability to generalize findings to larger groups of learners [7, 8]. The significance and strength of medical educational research studies can be undermined when a curricular intervention is implemented in the absence of a rigorous study design and post hoc inferences are then made about how learners respond to it. Additional problems described in medical educational research include inappropriate methods, a lack of clearly defined interventions, and interventions that have many components [3, 9]. All of these factors can confound a study's results and conclusions. Although randomized controlled trials (RCTs) are considered the reference standard of medical research, they might not be the optimal study design for many medical education projects. By necessity, alternative study designs, such as case-control studies, observational studies, or case series, may be better suited to medical educational research questions. If these medical education studies are evaluated with the RCT as the reference standard, reviewers may be reluctant to recommend these works for publication [7]. Perhaps more difficult for medical education researchers to overcome are criticisms of the importance and generalizability of their projects [10]. Medical education research by definition deals with qualitative entities, such as knowledge, conduct, proficiencies, and attitudes, and thus shares some aspects

AJR:204, April 2015


with social sciences research. Although these qualitative entities can be quantified, many medical educational research projects fail to do so. This work may then be judged by those in the biomedical field, who favor quantitative research techniques and apply inappropriate evaluation criteria to medical educational research. Another common criticism of medical education research has been that the research is too concentrated on learners, focusing on narrow issues at the home institution that may not be applicable to a larger population. Instead, researchers should shift their focus from subjective learner perceptions to measurable behaviors and patient-oriented outcomes [7, 10, 11]. Although direct interactions between the learner and the patient may be limited in radiology, there is still potential for using medical educational research to change radiologists' performance and thereby improve patient outcomes. To address these concerns, medical education researchers, especially those in radiology, must take a focused approach when designing their projects. If evidence-based projects show clear outcomes and measurable results, they will have a better chance of receiving funding and a more favorable response from reviewers. Researchers must not only show that learners can learn. In addition, they must prove that a proposed intervention, program, or curriculum will change a behavior or attitude and, more important, translate into improved patient care. By following these suggestions, researchers will improve both the quality and the reputation of medical education research.

Planning Your Project

Literature Review
A comprehensive literature review serves as the basis for developing a project, and time spent on this step of the process will help the researcher develop a good study question and the defining concepts for the project.
The literature search shows what research has already been published on the topic and also allows the researcher to appreciate what has yet to be studied. In other words, the literature review helps the researcher decide what questions should be posed in the study and what research methods may work best to obtain the answers [3]. It may seem elemental that one would perform a thorough literature review before commencing a project, but it is surprising how many submitted manuscripts overlook this first step. An incomplete literature review is a major reason for manuscript failure [12], and medical education research articles in particular often have a limited literature review [13]. The results of the literature review will then form the basis of the introduction of the manuscript. When performing the literature search, having a method of scoring the results helps to organize the information. Each study can be analyzed for such factors as study size, inclusion and exclusion criteria, and variables analyzed, and the results can be compiled. This allows the researcher a more thorough critical analysis of the existing literature, which will help him or her develop a more targeted, relevant purpose for the survey.

Needs Assessment
A needs assessment is useful to help develop the research question or survey design, if a survey is being contemplated. Needs assessments are used in every profession to identify strategic priorities and results to be accomplished and to guide decisions as to what actions to take [14]. A need is actually a gap in results; it is the difference between what we currently have and what we desire to achieve. In medical educational research, the needs assessment helps the researcher define the solution, such as better performance, elimination of a knowledge gap, or improved patient care. A needs assessment is outcomes oriented in that it provides justification for the study design and a basis for a survey. The needs assessment outlines the evidence for the intervention, identifies the results to be accomplished, and shows the researcher how to achieve those results, turning facts into data. The needs assessment can be proactive (e.g., looking for opportunities to improve physician performance or patient care), retroactive (e.g., responding to a negative result or event), or part of a continuous quality improvement effort.
In the end, the researcher must create a project that yields information that will drive a meaningful change in the future; in many cases, this means a meaningful change in patient care or patient outcomes [3]. After the needs assessment has been performed and the knowledge gap has been defined, one should then decide on the target study population, what is to be asked and studied, and how it is to be measured [15]. The researcher should establish evaluation criteria for determining success or failure with the results; this is crucial, for example, when studying the assessment of resident competency or knowledge. It is important to

create a theoretic framework and define the research question; this helps create the appropriate study method [16]. Researchers might be able to find existing tools or survey instruments through a literature review.

Develop the Research Question
A successful research project is defined by a good research question. Whether the matter under study arises from reviewing cases at the workstation, lecturing to radiology residents, or reviewing the current literature, it then needs to be developed [17]. The researcher should consider whether the proposed study will produce a significant result that changes practice. When embarking on a new project, one must always contemplate whether the projected outcome merits the time and effort required to bear the expected results. Once the hypothesis is developed, well-defined objectives can be created for the study [3].

Conceptual Framework
Shields and Rangarajan [18] define a conceptual framework as "the way ideas are organized to achieve a research project's purpose." The conceptual framework for the project serves to unify the paper [17, 19]. It provides the means for moving from the identification of a problem requiring study to the formulation of a research question and a hypothesis. The research question and hypothesis then determine the means of data collection and data analysis. Once the research question is identified, the framework links the objective of the project to the literature review, research method, data gathering, and data evaluation. All of these components are then placed into the proper context. A clear conceptual framework will greatly improve the chances of executing a successful project [3].

Research Study Design
Once the researcher has developed the research question and a viable conceptual framework for the study, the appropriate study design to answer the question should be selected. There are many types of qualitative and quantitative research designs (Fig. 1).
It should be noted that medical educational research studies often benefit from using elements of both qualitative and quantitative study designs.

Qualitative Versus Quantitative Research
Qualitative studies are common in medical educational research and use surveys or



nonnumeric data to investigate the subjective aspects of events [16, 20]. The study question usually deals with the relationships among more complex sociologic phenomena and health care, such as social exchanges, thoughts and perceptions, ideas, and values [21]. For example, a qualitative study in radiology might ask, "What are the attitudes of practicing radiologists toward the process of Maintenance of Certification?" Qualitative research may use interviews, focus group discussion, and observation of behavior as research tools [3]. Qualitative methods can be combined with quantitative methods in research studies; this is one way to increase the legitimacy of a medical educational research project. Quantitative studies involve the analysis of numeric data to provide results that can be generalized to a larger group, and these types of studies form the backbone of most medical research. Quantitative studies include experimental or observational study designs. The main goal of experimental studies is to see whether an intervention actually has an intended effect on the target study population [16], whereas observational studies can use either retrospective or prospective techniques to describe outcomes. Quantitative research questions are usually closed ended, such as, "Do radiology resident physics board scores improve after a new series of dedicated physics lectures?" RCTs are considered the pinnacle of quantitative research design in the medical field [3].

Types of Qualitative Research
Qualitative research methods all focus on gathering information from people, groups, or systems that share a certain experience in common. Descriptive studies often describe a new curriculum, program, or testing method without a true investigative component; thus, they are not usually considered to be true research [16].
Many descriptive studies are nonetheless submitted as medical educational research articles; the addition of a qualitative or quantitative research component should increase the chances for publication. The grounded theory method involves the creation of a hypothesis after data are collected and examined. In other words, the experiences of study participants are documented and analyzed to form a more general theory of a phenomenon. An example of this type of study would be to conduct an observational study to evaluate the workflow of an on-call reading room, analyze the data, and


Fig. 1—Types of qualitative and quantitative research designs. [Diagram: qualitative designs (how, what, why) include field work and case studies; quantitative designs (numeric data, generalizable) include experimental designs such as randomized controlled trials, as well as case-control studies.]
create a theory about how to better organize the workflow [22, 23]. This type of study could be easily applied in medical educational research, because learners’ experiences can be quite informative as to the usefulness of an intervention. Field work similarly involves studying and observing local environments, such as reading rooms or procedure rooms, followed by analysis of the observations or interviews. Case studies are more limited in scope in that they analyze a particular radiology group or hospital system, for example, to better understand a broader phenomenon. A potential case study example in radiology would be to evaluate the experiences of chairpersons and fellowship directors in radiology to understand the complex factors underlying the current fellowship application process. Phenomenology is a similar method that assesses the viewpoints of those with a shared experience to better describe a certain social phenomenon or practice [24]. Thus, residents applying to fellowship in a given year could be placed in a focus group to gain insight into the factors that affect residents’ decisions about fellowship. Likewise, narrative research analyzes personal statements or reflections to help explain a certain situation; in this way, personal experiences can be used to enhance understanding of events, curricula, or programs. Similarly, program directors could be interviewed about their personal experiences or reflections to better understand

potential barriers to implementing the Accreditation Council for Graduate Medical Education milestones in their programs.

Types of Quantitative Research
Randomized Controlled Trials
RCTs use controlled variables to evaluate the effectiveness of a certain intervention [16]. To minimize selection bias, researchers randomly assign the study participants to either receive or not receive an intervention. Although this method cannot account for all variables, such as compliance, it prevents confounding factors (i.e., those factors not being studied but that have an effect on the outcome) as much as possible [3]. Despite their widespread use in clinical trials, RCTs are not always applicable to medical education research [7]. In an RCT of an educational intervention, for example, comparisons between the intervention and nonintervention groups will almost always yield better performance by the intervention group [16]. Measuring baseline knowledge with a pretest can also influence the posttest result, because learners can remember the test questions or even study for them. The same holds true for an intervention immediately followed by a posttest, because this tests immediate recall rather than long-term retention [16], and it does not account for the inevitable decay of skills and knowledge after a lapse in time. Likewise, it can be difficult to administer different interventions



for the experimental and control groups, because learners might have preexisting attitudes about the strengths or weaknesses of the proposed intervention [7]. It can be difficult to recruit enough subjects to participate in medical education RCTs, and those subjects found in training programs might not represent other groups in an educational setting [16]. Even when enough willing subjects are available, true randomization is difficult to accomplish in medical education research, and studies must often be altered slightly from the true RCT structure [17].

Other Quantitative Studies
Observational studies include cohort and case-control studies, survey studies, and reviews. These studies use existing groups of people, such as residency classes, rather than the randomized cohorts used in RCTs [16]. Although these studies are easier to design and execute than RCTs, researchers might confront more difficulty with bias and confounding factors. Along the same lines, a cause-and-effect relationship is more difficult to prove than with an RCT. Working with a statistician in the early stages of the study can help minimize these problems. The results of an observational study can suggest future pathways of inquiry that might use an RCT or another controlled design. A cohort study is popular in medical educational research; it uses groups that have either received or not received an intervention and compares outcomes. The exposure or intervention serves as a predictor variable with the potential to cause a certain outcome; the study can be prospective or retrospective. In a cohort study, subjects become part of a cohort by their membership in a group, rather than by selection (e.g., two consecutive residency classes might become the cohort for a study that compares problem-based learning vs lecture-based curricula in radiology residency) [25].
On the other hand, a case-control study selects subjects according to particular outcomes. Case subjects are those with the outcome of interest, such as learners who passed an examination or medical students who chose radiology as a career. The study is then performed in a backward manner, in which factors are identified that might have caused the result. Control subjects, the other group in the study, are those without the outcome of interest. This type of study is most useful when the outcome is binary (yes or no) and infrequent [3]. Medical

educational research can use a case-control study to show how educational interventions result in better patient outcomes [16]. Reviews can be especially helpful in medical educational research, because studies often have small numbers of subjects but address similar issues. Collating data from these studies refines the understanding of the phenomenon of interest and directs future research [16]. Systematic reviews rely on a thorough search for all relevant literature regarding an issue, a systematic method of selecting worthy articles, and statistical analysis to best answer the researchers' question [17, 26]. Critical reviews are another type of review in which researchers critically analyze the existing literature. The reviewers identify both what has been proven and what is not well understood, with the hope of identifying avenues for future work. Meta-analyses go a step further by accounting for the sample size of each article to estimate a more accurate total effect of an intervention. However, because of the vast differences in study designs, interventions, and outcome measures, meta-analyses can be difficult to conduct in medical education research [16]. Funnel plots and forest plots are statistical graphics that can help assess for publication bias by plotting treatment effect against study size [27, 28]. These tools help manage and interpret the studies used for a medical educational research meta-analysis. Much medical education research is derived from survey questionnaires that allow the researcher to search for relationships among variables. Correlational studies, which evaluate the association between two variables, often start with qualitative questions (the "what, how, and why") and use survey data to come to a conclusion; the result is a hybrid of quantitative and qualitative methods, in which the relationship between qualitative variables can be established statistically.
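As an illustration of the inverse-variance weighting that underlies the pooling step of a fixed-effect meta-analysis, the short Python sketch below combines hypothetical effect sizes from three small studies. The numbers and the function name are invented for illustration; a real meta-analysis would also assess heterogeneity and publication bias (e.g., with a funnel plot).

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance (fixed-effect) pooling of study effect sizes.

    effects   -- per-study effect estimates (e.g., mean score differences)
    variances -- per-study variances of those estimates
    Returns the pooled effect and its 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]            # larger (more precise) studies get more weight
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))                # standard error of the pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical effects from three small curriculum studies
effects = [0.40, 0.25, 0.60]       # e.g., standardized test-score gains
variances = [0.04, 0.09, 0.16]     # small studies -> large variances
pooled, ci = fixed_effect_meta(effects, variances)
print(f"pooled effect {pooled:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")
```

Note how the pooled estimate sits closest to the most precise study, which is exactly the sample-size weighting described above.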
Pre- and Posttest Design
Pre- and posttest randomized design is often held as a reference standard in medical education research [16, 17]. With this method, participants are randomized to two or more conditions (e.g., one group of residents receives a case-based curriculum and the other receives only lectures). All participants then take a pretest, undergo their intervention, and take a posttest. A randomized posttest-only design is another possibility, with the potential of having all subjects benefit from the intervention [7]. In a single-group pre- and posttest design, participants can also act as their own control group; this method is frequently used in medical education research and can be useful when the study group is too small to be divided. Pretests can likewise be useful when the test itself is an important part of the intervention or when participants are expected to be lost to follow-up [3]. Although pre- and posttests are an easily accessible way to evaluate educational methods, there are some drawbacks. Just as the use of similar pre- and posttests can influence performance by allowing studying for the posttest, the use of different pre- and posttests may confound the outcome if they have different levels of difficulty, and this design also allows measurement error twice. In addition, pretests do not always adequately correct for differences among study participants. There are many possible internal threats to the validity of the study, including the attitudes and background of the learners and the subjectivity of the testing itself, all of which limit the ability to draw appropriate conclusions [7, 17].

Surveys
Surveys are useful for analyzing both quantitative entities, such as radiology knowledge or interventional skills, and qualitative entities, such as behavior, attitudes, and beliefs. For this reason, surveys are useful in medical educational research for creating a hybrid of quantitative and qualitative methods; although medical educational research often deals with qualitative entities such as teaching methods, professionalism, or career choices, the quantitative aspect of the survey can be used to obtain meaningful data. That said, sometimes the qualitative data are the most interesting data discovered by the survey, and free-text comments or questionnaires can be a key component of the survey. Designing a good survey requires the same thoughtful planning as designing the research study, and a pilot survey can help in planning.
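For the single-group pre- and posttest design described above, the analysis often reduces to a paired comparison of each learner's score change. The following Python sketch computes a paired t statistic from hypothetical quiz scores; the data and function are illustrative only, and in practice the statistic would be compared against the t distribution with n − 1 degrees of freedom (or computed with a statistical package).

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(pre, post):
    """Paired t statistic for a single-group pre-/posttest design.

    Each learner serves as his or her own control: we test whether the
    mean per-learner score change differs from zero.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))   # compare with t distribution, df = n - 1
    return t, n - 1

# Hypothetical physics-quiz scores for eight residents, before and after new lectures
pre  = [55, 60, 48, 72, 66, 58, 63, 70]
post = [68, 65, 59, 75, 74, 64, 70, 78]
t, df = paired_t(pre, post)
print(f"t = {t:.2f} with {df} degrees of freedom")
```

The pairing matters: because the same learners take both tests, between-learner variation is removed from the comparison, which is what makes this design workable with small groups.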
Pilot testing for a survey would usually involve a qualitative study with a focus group and interviews. For example, one could send a survey to a test group to see whether participants think the survey is organized and easy to complete. Consider respondents’ answers to the following questions: Is the survey too long or too short? Are the questions clear? Are the response options appropriate or sufficient for



the question? The researcher can then make appropriate modifications and implement the survey in the larger study [29]. Keeping the target population in mind helps the writer gear the questions appropriately for, say, medical students or faculty. The survey writer should put himself or herself in the respondents' position when writing. When designing a survey, it is also important to anticipate how the data will be evaluated. Survey answers can be converted to numeric form and analyzed, and the type of statistical analysis will dictate the type of answer scale and the number of options [29]. A nominal scale has nonnumeric answers with no inherent logical order, such as yes-or-no questions, but these answers can be assigned numeric values (e.g., yes = 1, no = 0), whereas an interval scale has alternatives that are equally spaced across a spectrum (a Likert scale is a type of interval scale). Nominal questions apply well to demographic data, whereas an interval scale might be used to rate the difficulty of interpreting a knee MRI from 1 to 10 [16]. An ordinal scale is similar to an interval scale but orders data using a nonnumeric scale, such as "strongly agree, agree, neutral, disagree, and strongly disagree." Ordinal scales can be helpful in assessing attitudes. Survey questions should be aligned with the research objectives, and extraneous questions should be removed for the sake of the respondent. The survey can ask both quantitative and qualitative questions regarding a mix of facts, knowledge, attitudes, and beliefs, and all of these can be converted into usable data. When writing survey questions, language choice is crucial. Simple language without double negatives helps ensure that each respondent interprets the question in a similar fashion. Survey writers should ask only one question at a time, without biasing the respondent [29].
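To illustrate how survey answers can be converted to numeric form for analysis, the Python sketch below codes ordinal Likert responses as integers and summarizes them; the mapping and the responses are hypothetical examples, not data from any study.

```python
# Map ordinal Likert responses onto a 1-5 numeric scale for analysis;
# the mapping and the sample responses are illustrative only.
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]
scores = [LIKERT[r] for r in responses]

# Nominal yes/no items can be coded the same way (e.g., yes = 1, no = 0)
mean_score = sum(scores) / len(scores)
print(f"mean agreement: {mean_score:.1f} on a 1-5 scale")
```

Once coded, the responses can feed whatever statistical analysis the scale supports, which is why the answer format should be chosen with the planned analysis in mind.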
Giving the survey takers an empty comment box allows respondents to discuss issues or ask questions, which can potentially form the basis of future research. A survey should be accompanied by a brief introductory paragraph stating the five “Ws” of the survey: who is sponsoring the study, what is the topic being studied, why the study is being done, where the information will go (e.g., whether the study is confidential and anonymous and whether it will be published), and when the survey will


close. Respondents should be informed up front how they were chosen to participate, whether they may opt out, and how long the survey will take to complete [16]. This establishes trust with the respondents at the outset of the survey, which in turn helps to maximize the response rate. Some researchers advocate a goal of a response rate greater than 60% for mail and Internet surveys [17], although this can be difficult to achieve in practice; the typical response rate is in the 20–30% range. To encourage participation, simplification and clarification of the survey are crucial. This can be achieved by organizing questions into themes and by placing demographic and controversial items at the end. It is important to create a visually appealing survey with an uncluttered appearance. Because respondents often have "survey fatigue," keep the survey short (less than 5 minutes). It is also very important to make the respondents feel that they have a stake in the outcome of the research by telling them how you plan to use the data [3]. It should go without saying that an application should be made to the institutional review board (IRB) of one's home institution before a survey is sent to a study population, yet it is surprising how many researchers forget this step. Although most surveys are anonymous by nature, the respondents are still human subjects requiring protection, and thus the IRB must be notified. Most survey research projects will qualify for exempt status from the IRB.

Conclusion
Medical education research in radiology is a challenging and important, but often neglected, field. Mindful development of a study question and conceptual framework helps to form a solid foundation for the project. Careful attention to study design in the planning stages, including defining the intervention, the target population, the research method, and the outcome, will also increase the chances for success.
Authors should experience more success with publications and grants if the proposed study results in changes to behavior, practice, or patient outcomes.

References
1. Goldstein B. Where do we go from here? Acad Med 1994; 69:625–626
2. Reed DA, Kern DE, Levine RB, Wright SM. Costs and funding for published medical education research. JAMA 2005; 294:1052–1057

3. Yablon C. Types of educational research. Association of University Radiologists website. www.aur.org/uploadedFiles/Alliances/AMSER/Reserach/Yablon-Types-of-Educational-Research-AUR-2012.pdf. Published 2012. Accessed August 3, 2014
4. Carline JD. Funding medical education research: opportunities and issues. Acad Med 2004; 79:918–924
5. Dodd GD 3rd. Financing research and education: current challenges and future solutions—a summary of the 2009 Intersociety Conference. J Am Coll Radiol 2010; 7:684–689
6. Rich EC, Liebow M, Srinivasan M, et al. Medicare financing of graduate medical education. J Gen Intern Med 2002; 17:283–292
7. Gruppen LD. Improving medical education research. Teach Learn Med 2007; 19:331–335
8. Collins J. Medical education research: challenges and opportunities. Radiology 2006; 240:639–647
9. Cook DA, Beckman TJ. Reflections on experimental research in medical education. Adv Health Sci Educ Theory Pract 2010; 15:455–464
10. Chen FM, Burstin H, Huntington J. The importance of clinical outcomes in medical education research. Med Educ 2005; 39:350–351
11. Whitcomb ME. Research in medical education: what do we know about the link between what doctors are taught and what they do? Acad Med 2002; 77:1067–1068
12. Bordage G. Reasons reviewers reject and accept manuscripts: the strengths and weaknesses in medical education reports. Acad Med 2001; 76:889–896
13. Cook DA, Beckman TJ, Bordage G. Quality of reporting of experimental studies in medical education: a systematic review. Med Educ 2007; 41:737–745
14. Watkins R, West-Meiers M, Visser YL. A guide to assessing needs: essential tools for collecting information, making decisions, and achieving development results. Washington, DC: World Bank, 2012
15. Rattray J, Jones MC. Essential elements of questionnaire design and development. J Clin Nurs 2007; 16:234–243
16. Ringsted C, Hodges B, Scherpbier A. 'The research compass': an introduction to research in medical education—AMEE Guide no. 56. Med Teach 2011; 33:695–709
17. Beckman TJ, Cook DA. Developing scholarly projects in education: a primer for medical teachers. Med Teach 2007; 29:210–218
18. Shields PR, Rangarajan N. A playbook for research methods: integrating conceptual frameworks and project management. Stillwater, OK: New Forums Press, 2013:22–26
19. Ravitch SM, Riggan M. Reason & rigor: how conceptual frameworks guide research. Thousand Oaks, CA: Sage Publications, 2012



20. Pope C, Mays N. Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995; 311:42–45
21. Giacomini MK, Cook DJ. Users' guides to the medical literature. Part XXIII. Qualitative research in health care B: what are the results and how do they help me care for my patients? Evidence-Based Medicine Working Group. JAMA 2000; 284:478–482
22. Ginsburg S, Regehr G, Lingard L. Basing the evaluation of professionalism on observable behaviors: a cautionary tale. Acad Med 2004; 79(suppl 10):S1–S4
23. Glaser BG, Strauss AL. The discovery of grounded theory: strategies for qualitative research. Chicago, IL: Aldine Publishing Company, 1967
24. Goldenberg MJ. On evidence and evidence-based medicine: lessons from the philosophy of science. Soc Sci Med 2006; 62:2621–2632
25. Norman GR, Wenghofer E, Klass D. Predicting doctor performance outcomes of curriculum interventions: problem-based learning and continuing competence. Med Educ 2008; 42:794–799

26. Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review: a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy 2005; 10(suppl 1):21–34
27. Sterne JA, Egger M. Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. J Clin Epidemiol 2001; 54:1046–1055
28. Lewis S, Clarke M. Forest plots: trying to see the wood and the trees. BMJ 2001; 322:1479–1480
29. Fowler FJ. Improving survey questions: design and evaluation. Thousand Oaks, CA: Sage Publications, 1995


This article is available for CME and Self-Assessment (SA-CME) credit that satisfies Part II requirements for maintenance of certification (MOC). To access the examination for this article, follow the prompts associated with the online version of the article.
