Nurse Education Today 34 (2014) 667–669


Contemporary Issues

Mind the gap: Quantifying the performance gap between observed and required clinical competencies in undergraduate nursing students

Introduction

As nurse educators we sometimes hear nurses in practice areas suggest that graduates of today's nursing programs do not ‘hit the ground running.’ Setting aside the assumption that this was ever the case, there are certainly many misconceptions about the ‘readiness to practice’ of senior nursing students and newly graduated nurses. Nursing faculty and nurse managers may find themselves at loggerheads over what it entails to be ‘ready’ to practice at a novice level. Ultimately, the ‘readiness to practice’ debate entails a veritable philosophical quagmire in terms of whose interests nursing education should serve and to what degree. In practical terms, however, the degree to which nursing students are prepared for nursing practice can be re-framed as a question of the degree to which stakeholder (faculty, student, patient, and employer) expectations are being met with regard to demonstrated competence. By quantifying the gap between students' or graduates' observed level of competence and stakeholders' expectations of competence, all stakeholders will be better positioned to understand the nature of the gap and how it can be bridged through curricular enhancements and provisions in the clinical site and workplace. In this paper, we propose the use of gap analysis in nursing education to determine areas of curricular strength and weakness in the preparation of competent nurses. The terms ‘school of nursing’ and ‘nursing program’ refer to undergraduate, pre-registration nursing programs preparing students for a baccalaureate degree as entry to practice.
0260-6917/$ – see front matter © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.nedt.2013.09.017

Gap Analysis

Gap analysis arises out of performance analysis, the purpose of which is to identify discrepancies between observed and expected performance levels, originally in the area of customer service and, more recently, in the evaluation of competence and educational performance (Avila, 2011; Matzler et al., 2004; Rothwell et al., 2007). In essence, a performance gap is identified through the gathering of Likert scale data on the degree to which certain attributes or competencies are observed versus desired or expected. The performance gap for each item is calculated by subtracting the observed performance level from the expected level of the item in question (Rothwell et al., 2007). If observed performance exceeds expectations, satisfaction or positive confirmation results; if observed performance is lower than expectations, dissatisfaction or negative disconfirmation results; and if observed performance meets expectations, moderate satisfaction or indifference results (Matzler et al., 2004). An important type of gap analysis is importance-performance analysis (IPA) (Bacon, 2003). IPA is frequently used in service industry, tourism, management, education, and customer satisfaction studies

(Abalo et al., 2007; Lopes and Maia, 2012). IPA can help programs identify areas of curricular strength and priority areas for improvement by comparing how important certain attributes are to stakeholders with how well they are actually performed (Deng et al., 2008; Siniscalchi et al., 2008). IPA is used to analyze two dimensions of product or service attributes: performance level (the perceived degree to which the attributes are demonstrated, for example, by a graduate) and importance to the workplace (the perceived worth or value of the attributes in the practice environment). IPA can assist schools of nursing to identify improvement priorities for graduate competence as well as to determine areas of over-performance (Abalo et al., 2007; Siniscalchi et al., 2008). It is important to note, however, that IPA assumes attribute performance and attribute importance are independent variables and that the relationship between the two is linear and symmetrical; neither assumption entirely holds (Arbore and Busacca, 2011; Bacon, 2003; Eskildsen and Kristensen, 2006; Matzler et al., 2004). As such, a basic gap analysis that compares similar constructs (i.e. comparing expected or desired levels of performance with those actually observed) may yield less complicated results.

Using Gap Analysis in Nursing Education

The placement of nursing students and new graduates within real health care delivery institutions presents a unique evaluation need. Evaluation of student and graduate performance, that is, evaluation that examines the ‘product’ of the educational program, can provide useful information for all stakeholders in nursing education. Nursing students progress through their pre-registration programs, completing a variety of competence checkpoints and benchmarks, including academic coursework, clinical practice competence evaluations, graduation requirements, and licensure examinations.
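To make the basic calculation concrete, the expected-minus-observed gap score described in the Gap Analysis section can be computed in a few lines. The sketch below uses purely hypothetical competency names and Likert ratings; it is an illustration, not a validated instrument.

```python
# Minimal sketch of a gap-score calculation; competency names and ratings
# are hypothetical. Each item carries stakeholder Likert ratings (1-5) of
# the OBSERVED performance level and the EXPECTED (desired) level.
from statistics import mean

ratings = {
    "Medication administration": {"observed": [4, 4, 5, 3], "expected": [4, 4, 4, 4]},
    "Care planning":             {"observed": [2, 3, 2, 3], "expected": [4, 5, 4, 4]},
    "Documentation":             {"observed": [5, 4, 5, 4], "expected": [4, 4, 4, 4]},
}

def gap_scores(items):
    """Gap = mean expected - mean observed (Rothwell et al., 2007).

    A positive gap means performance falls short of expectations
    (negative disconfirmation); a negative gap means performance
    exceeds expectations (positive confirmation).
    """
    results = {}
    for name, r in items.items():
        gap = mean(r["expected"]) - mean(r["observed"])
        if gap > 0:
            verdict = "negative disconfirmation"
        elif gap < 0:
            verdict = "positive confirmation"
        else:
            verdict = "expectations met"
        results[name] = (gap, verdict)
    return results

for name, (gap, verdict) in gap_scores(ratings).items():
    print(f"{name}: gap = {gap:+.2f} ({verdict})")
```

In a real study each item's ratings would come from many respondents across stakeholder groups; the sign convention here follows Rothwell et al. (2007), with the Matzler et al. (2004) confirmation/disconfirmation labels attached to the result.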
Stakeholder feedback is often garnered concerning how well the students' education has prepared them for practice, through program evaluation surveys as well as through informal and formal communications between schools of nursing and their affiliated health care institutions. However, accurately describing nursing student and new graduate performance, and pinpointing areas for improvement, requires more than anecdotal evidence and disjointed surveying of stakeholders. Gap analysis is one method of quantifying the gap between the observed and expected competency achievement of new graduates, and it provides important information for curricular enhancement and for workplace supports for students and new graduates. Clark and Estes (2008) describe a gap analysis model consisting of seven steps. The first step in conducting a gap analysis for assessing the gap between desired and observed levels of competence is to define a performance goal: in the case of nursing education, a set of competencies that accurately describe and reflect the construct of
competence at a particular level, such as that of a new graduate. Developing a valid and reliable instrument can be facilitated through the use of existing competence frameworks and assessment tools, such as those developed by nursing regulatory bodies for credentialing (International Council of Nurses, 2009; World Health Organization, 2009). Established competencies “provide a sound basis on which to build curricula for initial education” and graduates should be able to demonstrate established competencies (World Health Organization, 2009, p. 19). A number of groups are involved in credentialing, including regulatory bodies governing entry into the profession, governments setting requirements for nursing practice, and professional nursing organizations and associations, whose wide range of activities often includes the setting and validation of standards and competencies (International Council of Nurses, 2009). Using established competencies for a gap analysis promotes uniformity between educational preparation and workplace expectations, and provides a common language between stakeholders. It may also be beneficial for all stakeholders to reflect on their values with regard to graduate nurse competence and to understand the values of other stakeholders. For example, there may be differing opinions regarding the degree to which nursing programs should be preparing students for practice, an issue perceived differently by each stakeholder (students, faculty, new graduates, government, practising RNs, and health care managers). The second step of the gap analysis process (Clark and Estes, 2008) is the determination of how students are performing now in relation to the performance goal. The competency statements can be listed in the form of a questionnaire that employs a five- or seven-point Likert scale for stakeholders (faculty, students, health care institutions, etc.)
to respond with their perceptions of both the observed and the desired performance levels of the identified competencies in clinical settings (Deng et al., 2008); in IPA, respondents would rate both the performance and the importance of the competencies (Arbore and Busacca, 2011). Construct validity of a gap analysis tool can be assessed using factor analysis on each competency domain set (Bacon, 2003). Pilot testing should be conducted to ensure the tool's measurement scales are sensitive enough to separate performance levels (Arbore and Busacca, 2011).
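As a hypothetical illustration of the kind of statistical comparison such paired questionnaire data permit, a paired-samples t-test can check whether respondents' observed ratings of a competency differ reliably from their desired ratings. The respondent ratings below are invented; in practice a statistics package (e.g. scipy.stats.ttest_rel) would be used rather than this standard-library sketch, and the t statistic would be compared against the t distribution with n − 1 degrees of freedom.

```python
# Hypothetical sketch: paired-samples t-test on observed vs. desired
# Likert ratings of ONE competency from n respondents, stdlib only.
from math import sqrt
from statistics import mean, stdev

def paired_t(observed, desired):
    """Return the paired-samples t statistic for (observed - desired).

    A large negative t suggests observed ratings fall reliably below
    desired ratings, i.e. a performance gap on this competency.
    """
    diffs = [o - d for o, d in zip(observed, desired)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Invented ratings from 8 respondents for one competency
observed = [3, 2, 4, 3, 3, 2, 3, 4]
desired  = [4, 4, 5, 4, 4, 4, 4, 5]
t = paired_t(observed, desired)
print(f"t({len(observed) - 1}) = {t:.2f}")  # t(7) = -7.64
```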

The third step in Clark and Estes' (2008) gap analysis process model is establishing the size of the gap. With a gap analysis comparing observed and desired performance, results are best displayed in a bar chart that reveals the gaps, or in side-by-side comparisons of observed and desired levels of competence (Anitha, 2011). Some studies report findings in the form of a report card, with letter grades assigned according to the size of the gap. In IPA, the mean importance measure is plotted on the vertical axis and the mean performance measure on the horizontal axis of a two-dimensional graph, and the resulting importance-performance (IP) space is divided into four quadrants (Abalo et al., 2007; Bacon, 2003; Martilla and James, 1977), as shown in Fig. 1. Data are thus plotted in the quadrant space, with the crosshairs placed meaningfully using a transparently articulated method (Bacon, 2003; Deng et al., 2008). Items in the top left quadrant indicate under-performance and the need for remedial attention; items in the bottom right quadrant indicate over-performance (Abalo et al., 2007; Bacon, 2003). Different stakeholders' satisfaction with nursing program graduates can be seen as related both to their expectations regarding certain important competencies and to their judgements of the competence observed in new nurses. Factor analysis can be used initially to discern patterns in responses concerning the performance and importance of competencies, for example, to understand overarching patterns of over- or under-performance, or patterns of observed versus expected competence levels. Analyses also include t-tests and repeated measures analysis of variance for comparing groups (for example, comparing observed competency ratings with importance ratings to find statistically significant differences), as well as simple correlation and regression models (Bacon, 2003).
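The quadrant logic can be sketched with hypothetical data, placing the crosshairs at the grand means of importance and performance, which is one common, though debated, placement (Bacon, 2003). All competency names and mean ratings below are invented for illustration.

```python
# Sketch of IPA quadrant classification with hypothetical data.
# Crosshairs sit at the grand means of importance and performance.
from statistics import mean

# Hypothetical mean ratings per competency: (importance, performance)
competencies = {
    "Medication administration": (4.8, 4.4),
    "Care planning":             (4.5, 2.9),
    "Documentation":             (3.2, 4.6),
    "Delegation":                (2.9, 2.5),
}

def ipa_quadrants(items):
    """Assign each item to one of the four Martilla and James (1977)
    quadrants, relative to the grand means of the two dimensions."""
    imp_bar = mean(i for i, _ in items.values())
    perf_bar = mean(p for _, p in items.values())
    labels = {}
    for name, (imp, perf) in items.items():
        if imp >= imp_bar and perf < perf_bar:
            labels[name] = "concentrate here (under-performance)"
        elif imp < imp_bar and perf >= perf_bar:
            labels[name] = "possible overkill (over-performance)"
        elif imp >= imp_bar and perf >= perf_bar:
            labels[name] = "keep up the good work"
        else:
            labels[name] = "low priority"
    return labels

for name, label in ipa_quadrants(competencies).items():
    print(f"{name}: {label}")
```

High-importance, low-performance items (the top left quadrant) surface as priorities for remedial attention, while low-importance, high-performance items flag possible over-investment.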
An adequate sample size is necessary to ensure statistical power, so that accurate estimates can be made regarding the impact of poor competency performance on dissatisfied stakeholders, as well as the impact of exceptional competency performance on satisfied stakeholders (Arbore and Busacca, 2011). Several excellent resources exist on how to maximize the value and accuracy of gap analysis (Abalo et al., 2007; Arbore and Busacca, 2011; Bacon, 2003; Deng, 2008; Deng et al., 2008; Matzler et al., 2004), and drawing on this expertise would be prudent.

Fig. 1. Sample gap or quadrant analysis of hypothetical nursing competencies.

There are several potential issues with gap analysis that are important to bear in mind. First, the placement of the grid, by which the X and Y crosshairs are determined, entails some degree of subjectivity (Bacon, 2003; Mount, 2000; Oh, 2001). Second, gap analysis of any kind that involves assessments of performance and ratings of importance involves the subjectivity associated with perception (Anitha, 2011; Deng, 2008; Deng et al., 2008), which can be problematic. Third, a list of competencies must be a meaningful, valid, and reliable tool for describing nursing practice, while not being so long that it is daunting for respondents (Abalo et al., 2007; Siniscalchi et al., 2008). Ultimately, gap analysis and IPA remain useful tools, and results should be interpreted and validated in close collaboration with key stakeholders before significant remedial adjustments are implemented.

Maximizing Gap Analysis for Nursing Education Improvement

There are several significant differences in the use of gap analysis in nursing education as opposed to customer satisfaction studies. In nursing education there are more stakeholders, with a greater diversity of perspectives on ‘readiness to practice.’ There is a greater necessity to collaborate with all stakeholders in the pursuit of excellence, and patient safety, rather than cost and competitiveness, is the primary concern. For these reasons, gap analysis is best carried out in consultation with all stakeholders associated with nursing education. Operationalizing gap analysis and IPA is somewhat complicated in health care services, due in part to the diverse interests of the stakeholders, who may have a wide range of agendas and performance expectations (Gomes and Yasin, 2013). This diversity, the need for patient safety, and the fact that nursing programs work so closely with clinical sites amplify the need for close collaboration with stakeholders as assessment tools are developed and results interpreted.
Once the data have been collected and patterns identified, causes and solutions need to be identified. Clark and Estes (2008) categorize causes into knowledge and skills, motivation, and organizational barriers (including materials, policies, and procedures). Interpreting the findings and developing solutions in consultation with stakeholders is important for nursing education because schools of nursing, regulatory bodies, students, patients, and health care agencies often value different things, including different competencies, even while patient safety remains the primary concern. The data from a competence gap analysis can then lead to strategic adjustments in the current practices of the school of nursing or the health care institution in its preparation of students and new graduates, enhancing the efforts of both schools of nursing and practice areas to foster smoother transitions from student to graduate nurse. Clark and Estes (2008) describe this phase as the implementation phase of gap analysis. Furthermore, openly sharing feedback results with all stakeholders to collaboratively arrive at workable solutions can help maximize student and new graduate opportunities to develop competence. Gap analysis in general, and its variant importance-performance analysis in particular, has the potential to help stakeholders in nursing education understand and amend the gap between nursing education and competent nursing practice. By using an established competency framework as the basis for assessment, the gap between students' or graduates' observed competence and desired or expected competence can be measured and analyzed, and remedial interventions developed appropriately. Gap analysis and importance-performance analysis, when used in consultation and collaboration with stakeholders, may provide valuable information for both schools of nursing and practice areas regarding what both should expect from graduate nurses and how both can facilitate new graduate competence.
References

Abalo, J., Varela, J., Manzano, V., 2007. Importance values for importance-performance analysis: a formula for spreading out values derived from preference rankings. J. Bus. Res. 60 (2), 115–121. http://dx.doi.org/10.1016/j.jbusres.2006.10.009.
Anitha, N., 2011. Competency assessment—a gap analysis. Interdiscip. J. Contemp. Res. Bus. 3 (4), 784–794.
Arbore, A., Busacca, B., 2011. Rejuvenating importance-performance analysis. J. Serv. Manag. 22 (3), 409–429. http://dx.doi.org/10.1108/09564231111136890.
Avila, C.R., 2011. Examining the Implementation of District Reforms Through Gap Analysis: Addressing the Performance Gap at Two High Schools (Doctoral dissertation). University of Southern California. Retrieved from ProQuest Dissertations & Theses A&I database.
Bacon, D.R., 2003. A comparison of approaches to importance-performance analysis. Int. J. Mark. Res. 45 (1), 55–71.
Clark, R.E., Estes, F., 2008. Turning Research Into Results: A Guide to Selecting the Right Performance Solutions, 2nd ed. Information Age Publishing, Scottsdale, AZ.
Deng, W.-J., 2008. Fuzzy importance-performance analysis for determining critical service attributes. Int. J. Serv. Ind. Manag. 19 (2), 252–270. http://dx.doi.org/10.1108/09564230810869766.
Deng, W.-J., Kuo, Y.-F., Chen, W.-C., 2008. Revised importance-performance analysis: three-factor theory and benchmarking. Serv. Ind. J. 28 (1), 37–51. http://dx.doi.org/10.1080/02642060701725412.
Eskildsen, J.K., Kristensen, K., 2006. Enhancing importance-performance analysis. Int. J. Product. Perform. Manag. 55 (1/2), 40–60. http://dx.doi.org/10.1108/17410400610635499.
Gomes, C.F., Yasin, M.M., 2013. An assessment of performance-related practices in service operational settings: measures and utilization patterns. Serv. Ind. J. 33 (1), 73–97. http://dx.doi.org/10.1080/02642069.2011.600441.
International Council of Nurses, 2009. Credentialing. Retrieved 08/14/13 from http://www.icn.ch/images/stories/documents/publications/fact_sheets/1a_FSCredentialing.pdf.
Lopes, S.D.-F., Maia, S.C.F., 2012. Applying importance-performance analysis to the management of health care services. China-USA Bus. Rev. 11 (2), 275–282.
Martilla, J.A., James, J.C., 1977. Importance-performance analysis. J. Mark. 41 (1), 77–79. http://dx.doi.org/10.2307/1250495.
Matzler, K., Bailom, F., Hinterhuber, H.H., Renzl, B., Pichler, J., 2004. The asymmetric relationship between attribute-level performance and overall customer satisfaction: a reconsideration of the importance–performance analysis. Ind. Mark. Manag. 33 (4), 271–277. http://dx.doi.org/10.1016/S0019-8501(03)00055-5.
Mount, D.J., 2000. Determination of significant issues: applying a quantitative method to importance-performance analysis. J. Qual. Assur. Hosp. Tour. 1 (3), 49–63. http://dx.doi.org/10.1300/J162v01n03_03.
Oh, H., 2001. Revisiting importance–performance analysis. Tour. Manag. 22 (6), 617–627. http://dx.doi.org/10.1016/S0261-5177(01)00036-X.
Rothwell, W.J., Hohne, C.K., King, S.B., 2007. Human Performance Improvement: Building Practitioner Performance. Routledge, New York, NY.
Siniscalchi, J.M., Beale, E.K., Fortuna, A., 2008. Using importance-performance analysis to evaluate training. Perform. Improv. 47 (10), 30–35. http://dx.doi.org/10.1002/pfi.20037.
World Health Organization, 2009. Global standards for the initial education of professional nurses and midwives. Retrieved 08/14/13 from http://www.who.int/hrh/nursing_midwifery/hrh_global_standards_education.pdf.

Em M. Pijl-Zieber⁎
Faculty of Health Sciences, University of Lethbridge, 4401 University Drive, Lethbridge, Alberta T1K 3M4, Canada
⁎Corresponding author. Tel.: +1 403 332 5232. E-mail address: [email protected].

Sylvia Barton
Faculty of Nursing, University of Alberta, Level 3, Edmonton Clinic Health Academy, 11405 87 Avenue, Edmonton, Alberta T6G 1C9, Canada

Jill Konkin
Faculty of Medicine, University of Alberta, WC Mackenzie Health Sciences Centre, Edmonton, Alberta T6G 2R7, Canada

Olu Awosoga
Faculty of Health Sciences, University of Lethbridge, 4401 University Drive, Lethbridge, Alberta T1K 3M4, Canada

Vera Caine
Faculty of Nursing, University of Alberta, Level 3, Edmonton Clinic Health Academy, 11405 87 Avenue, Edmonton, Alberta T6G 1C9, Canada
