Human Movement Science 42 (2015) 225–231


On the continuing problem of inappropriate learning measures: Comment on Wulf et al. (2014) and Wulf et al. (2015)

Mark G. Fischman *
Auburn University, USA

PsycINFO classification: 2330; 2343; 2240

Keywords: Assessment; Variability; Throwing; Motor learning

Abstract

Two recent studies in this journal (Wulf, Chiviacowsky, & Cardozo, 2014; Wulf, Chiviacowsky, & Drews, 2015) assessed the additive effects of autonomy support and enhanced expectancies (Wulf et al., 2014) and of autonomy support and external focus (Wulf et al., 2015) on learning a novel throwing skill. Participants learned to throw with their non-dominant arm at a target consisting of nine concentric circles with a center bull's eye. More points were awarded for throws landing closer to the bull's eye, but the precise landing location within each circle was ignored: all throws landing anywhere within a given circle received the same score. I comment on the inappropriateness of this assessment for determining performance variability, an important characteristic of skill learning. The standard errors reported by Wulf et al. (2014, 2015) are ambiguous with respect to performance as measured in the task; they do not reflect the precision of performance that one might expect. This problem is not limited to the two studies discussed in this commentary but remains a continuing one in many studies of motor learning. Questions are also raised concerning the absence of any kinematic or kinetic measures of throwing performance in Wulf et al. (2014, 2015).

© 2015 Elsevier B.V. All rights reserved.

* Address: Auburn University, School of Kinesiology, 301 Wire Road, Auburn, AL 36849-5323, USA. Tel.: +1 (334) 826 1427. E-mail address: fi[email protected]
http://dx.doi.org/10.1016/j.humov.2015.05.011
0167-9457/© 2015 Elsevier B.V. All rights reserved.


1. Introduction

An important goal of motor learning research is to identify key variables that affect how rapidly people learn motor skills, how well they learn them, and how well those skills are retained over time and possibly transferred to performance contexts different from those under which they were initially practiced. In the search for these variables, studies may differ in the extent to which theoretical issues are emphasized versus more applied, practical issues (see Christina, 1987, 1989 for a discussion of different levels of research in motor learning, and Christina & Bjork, 1991 for a discussion of variables affecting retention and transfer). While it is vital to select important independent variables to manipulate for studying motor skill learning, it is equally important to validly measure the relevant dependent variables if we are to draw reasonable conclusions. The purpose of this commentary is to highlight a motor skill assessment issue that was initially raised over 20 years ago by Reeve, Fischman, Christina, and Cauraugh (1994), but has persisted over time and, unfortunately, continues to plague the field.

2. The problem

My focus is on two recent articles in this journal (Wulf, Chiviacowsky, & Cardozo, 2014; Wulf, Chiviacowsky, & Drews, 2015), although, as I will show, the issues are not limited to those studies. Wulf et al. (2014) studied the individual and combined influences of autonomy support and enhanced expectancies in novice participants learning to throw overhand with their non-dominant arm. Autonomy support was manipulated by giving participants a choice of ball color during practice, and enhanced expectancies involved giving, or not giving, participants bogus positive social comparative feedback during practice.
These variables were shown in previous studies to individually facilitate motor learning, possibly by enhancing motivation and increasing self-efficacy (e.g., Sanli, Patterson, Bray, & Lee, 2013; Wulf, 2007; Wulf, Chiviacowsky, & Lewthwaite, 2012), so it made sense to study their possible combined effects to determine whether even greater learning advantages might accrue. In a similar study, Wulf et al. (2015) investigated the individual and combined effects of an external focus of attention and autonomy support, also in the overhand throwing task with the non-dominant arm. External focus was manipulated by having participants either focus, or not focus, externally on the target. Autonomy support was manipulated by giving, or not giving, participants a choice of practicing the throwing skill with their dominant arm on several practice blocks. These variables were also shown in previous research to individually facilitate learning, again by possibly enhancing motivation (autonomy support) or promoting automaticity (external focus; see Wulf, 2013 for a review), so it again made sense to study whether greater learning advantages might accrue from their combined effects.

The task involved throwing beach-tennis balls (blue, red, or yellow in Wulf et al., 2014; color unspecified in Wulf et al., 2015) at a target consisting of a circular bull's eye (radius = 10 cm) surrounded by 9 concentric circles whose radii increased from 20 to 100 cm in 10-cm increments. Performance accuracy was crudely assessed by a point system in which the bull's eye was awarded 100 points, with each subsequent circle receiving 10 points less. Any throws that completely missed the target received zero points. In a two-dimensional task such as this, one numerical value was assigned to represent every point on each circular band around the bull's eye. As noted by Reeve et al. (1994), such an assignment disregards the fact that individual trials could vary in 360° around the bull's eye as well as in the distance from the center of the bull's eye. Thus, the same score could be assigned to many different actual performances. There is no isomorphic relationship between the participant's score on a given trial and the actual performance response. At best, these scores may roughly approximate the absolute deviation from the bull's eye (i.e., absolute error), but they cannot accurately represent the locations of the responses to determine any potential bias, nor can they be used to represent the trial-to-trial variability of the responses.

In Fig. 2 of Wulf et al. (2014) and Wulf et al. (2015), the standard error bars do not reflect variability in participants' actual throws (i.e., in their performance). According to Thomas (2014, p. 447), "SE is a population estimate of the likelihood that the M value estimates the population value, not the variability of the M." Thus, the SE of this
"points" dimension gives the expected standard deviation between sample means, but it tells little about performance in the task: participants or samples could be performing quite differently while their point scores look the same. But even if Wulf et al. (2014) and Wulf et al. (2015) had reported their sample standard deviations, these still would not reflect the variability of participants' actual throws; they would simply reflect the variability of an imprecise scoring metric. Even worse, because the exact location of each throw is not precisely measured, the conclusion that one group learned the task better than another group may be suspect. For example, suppose that all the throws for one group of participants were tightly clustered (i.e., low variability) in the "50" circle, while all the throws for another group were spread out (i.e., high variability) all around the "70" circle. In Wulf et al.'s scoring system, the "70" circle group would be judged to have learned the task more effectively than the "50" circle group. Such a conclusion could be very misleading, especially from a practical teaching or coaching viewpoint, as highly consistent performance, even though not very accurate, may be easier to "fix" than performance characterized by high variability. See Schmidt and Lee (2011) for elaboration of this argument.

In studies of motor learning where participants practice a skill over many trials or several sessions, is performance variability, or consistency, an important variable? Contemporary accounts of motor learning suggest that it is a crucial variable in the study of skill learning. For example, Schmidt and Lee (2011) state that "The study of motor learning will show that the measure of error that is most sensitive to the effects of practice is consistency. . ." (p. 28). Magill and Anderson (2014) acknowledge consistency as one of six general performance characteristics of skill learning, stating ". . .as learning progresses, performance becomes increasingly more consistent" (p. 258). In a recent review of principles of sensorimotor learning, Wolpert, Diedrichsen, and Flanagan (2011) state, "A reduction in the variability for a given movement speed can be considered the hallmark of skill learning" (p. 743). Therefore, even if an independent variable is shown to affect performance accuracy, it is still important to accurately assess variability in order to obtain a richer view of learning. And even though there may be no reason to expect a performance bias in one direction or another, how can one know for sure without measuring and testing for it?

As I noted earlier, the issues raised here are not limited to Wulf et al. (2014) and Wulf et al. (2015). Over nearly the past 20 years, there have been numerous studies of different "target accuracy" tasks where performance variability was either not measured at all, or standard deviations and standard errors reflect only the variability of scores that are not isomorphically related to actual performances. Some representative examples of studies using these tasks include throwing various objects such as beanbags (Ávila, Chiviacowsky, Wulf, & Lewthwaite, 2012; Chiviacowsky & Wulf, 2007; Chiviacowsky, Wulf, Medeiros, Kaefer, & Tani, 2008; Chiviacowsky, Wulf, Wally, & Borges, 2009; Saemi, Wulf, Varzaneh, & Zarghami, 2011), darts (Marchant, Clough, & Crawshaw, 2007; Marchant, Clough, Crawshaw, & Levy, 2009; Radlo, Steinberg, Singer, Barba, & Melnikov, 2002), cricket balls (Hooyman, Wulf, & Lewthwaite, 2014), tennis balls (Pascua, Wulf, & Lewthwaite, 2015; Saemi, Porter, Ghotbi-Varzaneh, Zarghami, & Maleki, 2012), and the soccer throw-in (Weeks & Kordus, 1998); hitting tennis balls (Wulf, McNevin, Fuchs, Ritter, & Toole, 2000, Experiment 1); golf pitch shots (Wulf, Lauterbach, & Toole, 1999; Wulf et al., 2000, Experiment 2; Wulf & Su, 2007); and golf putting (Badami, VaezMousavi, Wulf, & Namazizadeh, 2011, 2012).
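The hypothetical "50"-circle versus "70"-circle contrast can be made concrete with a short simulation. The sketch below is my own illustration, not code from any of the cited studies, and the group parameters (cluster spread, number of throws) are invented for the example. It scores simulated landing positions with the ring system described above and then computes Hancock-style two-dimensional summaries: the centroid of the landing points (bias), the dispersion of the points about their centroid (a variable-error measure of consistency), and the mean radial distance from the target center (accuracy).

```python
import math
import random

def point_score(x, y):
    """Ring score as described in the commentary: bull's eye (r < 10 cm)
    = 100 points, each further 10-cm band worth 10 points less, throws
    landing beyond the outermost circle (100 cm) = 0. Boundary handling
    at the ring edges is an assumption; the original papers do not say."""
    r = math.hypot(x, y)
    return 0 if r >= 100 else 100 - 10 * int(r // 10)

def two_d_measures(throws):
    """Hancock-style summary of (x, y) landing points with the origin at
    the target center: bias = centroid; VE = root-mean-square distance
    of points from their own centroid (consistency); RE = mean radial
    distance from the target center (accuracy)."""
    n = len(throws)
    cx = sum(x for x, _ in throws) / n
    cy = sum(y for _, y in throws) / n
    ve = math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in throws) / n)
    re = sum(math.hypot(x, y) for x, y in throws) / n
    return {"bias": (cx, cy), "VE": ve, "RE": re}

random.seed(1)
# Group A: tightly clustered in the "50" band (radial distance about 55 cm).
group_a = [(55 + random.gauss(0, 1), random.gauss(0, 1)) for _ in range(30)]
# Group B: scattered all around the "70" band (30-40 cm, any direction).
group_b = []
for _ in range(30):
    r, theta = random.uniform(30, 40), random.uniform(0, 2 * math.pi)
    group_b.append((r * math.cos(theta), r * math.sin(theta)))

mean_a = sum(point_score(x, y) for x, y in group_a) / len(group_a)
mean_b = sum(point_score(x, y) for x, y in group_b) / len(group_b)
print("mean points:", mean_a, mean_b)
print("2-D measures A:", two_d_measures(group_a))
print("2-D measures B:", two_d_measures(group_b))
```

Under this simulation the scattered group outscores the clustered group on mean points, yet the two-dimensional variable error shows the clustered group to be far more consistent, and the centroids reveal that the scattered group's throws carry no systematic bias while the clustered group's do. All of that is exactly the information the ring score discards.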
And these studies have investigated topics in skill learning that have both theoretical and practical importance, such as attentional focus, self-controlled practice and feedback, and, of course, autonomy support and enhanced expectancies. It is important to note that my critique here does not apply to studies that used timing tasks and balance tasks, of which there are many (see Chen, Hendrick, & Lidor, 2002; Chiviacowsky & Wulf, 2002; Chiviacowsky, Wulf, Lewthwaite, & Campos, 2012; Hartman, 2007; McNevin, Shea, & Wulf, 2003; Wulf, Clauss, Shea, & Whitacre, 2001; Wulf, McNevin, & Shea, 2001 for examples). For these tasks, one-dimensional error measures are appropriate and can yield valuable information about skill learning.

3. A solution

After the issues were first raised by Reeve et al. (1994), a solution was proposed by Hancock, Butler, and Fischman (1995). They introduced a set of formulae for calculating and statistically analyzing accuracy, bias, and consistency of performance for two-dimensional tasks such as those using
concentric circle targets, both for single individuals and for groups. They also explain how specific information regarding the learning process may be missed if one uses only an accuracy measure with these types of tasks. The interested reader is referred to the Hancock et al. article for details. Their methods are based on multivariate statistics and involve imposing a set of perpendicular X- and Y-axes, with a desirable measurement scale, onto the target surface, with the origin (0, 0) set at the target center. In most cases this would be the bull's eye, although setting the center of the target at other coordinates would not compromise the error measures and analyses described by Hancock et al. To be sure, the Hancock et al. methods are far more labor intensive and not as convenient as the "point" system, because the precise location of each throw, putt, pitch, or shot has to be captured, but this does not appear to be a valid reason for not using them. Unless there are severe logistical constraints that make the collection of two-dimensional error data impossible, such data should always be collected. And with current technology, it is difficult to imagine how such constraints could arise in most laboratory environments.

Interestingly, and perhaps a bit ironically, the measures and analyses proposed by Hancock et al. (1995) have been advocated by a number of researchers who have not applied them in their own studies. For example, Wulf et al. (2000), who used hitting tennis balls and golf pitch shots, stated in their discussion:

"One question that should be addressed in future research is to what extent the performer's attentional focus affects the accuracy (bias) and variability of his or her performance. While the scoring system we used in these experiments has often been used for two-dimensional aiming tasks (e.g., Janelle, Barba, Frehlich, Tennant, & Cauraugh, 1997; Weeks & Kordus, 1998; Wulf et al., 1999), this system fails to capture the participants' actual performance characteristics (see Reeve et al., 1994, for a discussion of this problem). Future studies using two-dimensional aiming tasks should adopt the measures suggested by Hancock et al. (1995) to provide further insights into which aspects of performance are influenced by the attentional focus manipulation" [pp. 237–238].

Marchant et al. (2007), who studied novice dart throwing performance, stated in their discussion:

"A limitation to the analysis of these findings is the simplistic nature of the scoring system. As suggested by Wulf et al. (2000)1 and also Reeve et al. (1994), this simple assessment may miss vital clues to performance differences. Although the accuracy values obtained here reflect sport-specific assessment in accuracy sports, they miss valuable information regarding the location of darts hitting the target around the full 360° of the target. Future research should address such an approach" [p. 300].

1 Marchant et al. (2007) cited this quote as coming from "Wulf et al. (2001)." However, the quote actually appears in Wulf et al. (2000), which is cited correctly here.

But then, in a "future" study by these authors (Marchant et al., 2009), the same dart throwing task was used along with the same scoring system. Their discussion contains the following passage:

". . .the measurement of accuracy employed here represents absolute error and does not fully reflect the 2-dimensional nature of this aiming task. Key performance information is missing (e.g., direction of error and variability) and this therefore limits the conclusions that can be made (see Reeve et al., 1994), in particular, specific aspects of the throwing movement that may have been affected. Hancock et al. (1995) highlight methods of quantifying 2-dimensional directional bias and variability of scores that would allow a discussion of how attentional focusing instructions influence performance in 2-dimensional tasks such as this, and future research should utilize such methods" [p. 500].

Finally, from the discussion by Badami et al. (2012, p. 201), who used a golf-putting task with 14 concentric circles: "Future studies should consider using the two-dimensional performance variability error measure as well as overall accuracy. This assessment technique ensures more valid and accurate measures of performance (Hancock et al., 1995)."

I completely agree with the four quoted passages. But one can only wonder if the "future" is anywhere on the horizon. Researchers have acknowledged the limitations of the point scoring system,
called for adopting the measures developed by Hancock et al. (1995), but continue to use the flawed measures.

Is there any place for the point system in motor learning studies? I believe it can still be used for providing feedback to participants. Specifically, in Wulf et al. (2014) it could be used for the enhanced expectancies manipulation, since participants had to make a social comparison of their performance against others' performance, but it should not be used as a main dependent variable to assess performance or learning. Work by Janelle et al. (1997) provides a nice example here. In their study of feedback effectiveness in a self-controlled learning environment, right-handed participants learned to throw with their left hand at a target with 15 concentric circles. They used a point system to provide knowledge of results to participants, but they calculated two-dimensional error measures to quantify accuracy, bias, and consistency of performance. A more recent example of the appropriate use of two-dimensional error measures can be found in a golf-putting study by Land, Frank, and Schack (2014), where attentional focus was the variable of interest.

4. Learning the overhand throw

My final comment addresses the absence of precise measures of the overhand throw in Wulf et al. (2014) and Wulf et al. (2015). Participants were charged with learning to throw overhand with their non-dominant arm so as to achieve a high point total. Thus, throwing accuracy, a performance outcome measure, was the goal. Practice, retention, and transfer phases were included, which are appropriate components in motor learning research. Participants received only minimal basic instructions for the overhand throw: they were instructed to stay behind a starting line, throw with the left arm, and take a step forward with the right foot. However, the overhand throw is an extremely complex skill with a highly coordinated movement pattern (e.g., Urbin, Stodden, Fischman, & Weimar, 2011).
In the absence of any kinematic measures of the movement pattern, or kinetic measures such as force production or muscle activation patterns via EMG, we know very little from Wulf et al. (2014) and Wulf et al. (2015) regarding how well participants actually learned this skill. To be sure, I am not suggesting that all studies of motor skill learning should perform kinematic and kinetic analyses in order to be valid or useful. However, for skills as complex as the overhand throw, something more than the most rudimentary assessment of performance accuracy would seem necessary for evaluating learning. Work by Zachry, Wulf, Mercer, and Bezodis (2005) supports my point. These authors criticized previous studies of attentional focus for almost exclusively using performance outcome measures, such as accuracy. Zachry et al. used EMG in a basketball free throw shooting task to determine neuromuscular correlates of external versus internal foci of attention. They found that shooting performance was more accurate under an external focus of attention and, more importantly, EMG revealed that the external focus also enhanced movement economy.

5. Conclusion

Thirty years ago, in a critique of statistical analyses in science, Cooke and Brown (1985) stated, ". . .the application of statistics must always be subordinate to the application of principled scientific thinking. Good statistics can never rescue bad science" (p. 492). However, the converse is also true; that is, faulty statistics can often mask the true meaning of good science. I will take the liberty here of paraphrasing Cooke and Brown's admonition by replacing "statistics" with "measurement" or "assessment." That is, poor, imprecise measurement can also hide the true meaning of good science. The ideas investigated by Wulf et al. (2014) and Wulf et al.
(2015), as well as by many others cited in this commentary, may be conceptually rich, theoretically sound, and of potential value for practitioners, and they are certainly worthy of study. But if they are tested with flawed measures, can the conclusions be trusted, and do they truly advance our knowledge of motor skill learning?

Acknowledgements

I thank Keith Lohse, Matt Miller, and members of Auburn University's Performance and Exercise Psychophysiology Lab for helpful discussions of the issues raised in this commentary, and Robert Christina and two anonymous reviewers for comments on a previous draft of the manuscript.


References

Ávila, L. T. G., Chiviacowsky, S., Wulf, G., & Lewthwaite, R. (2012). Positive social-comparative feedback enhances motor learning in children. Psychology of Sport and Exercise, 13, 849–853.
Badami, R., VaezMousavi, M., Wulf, G., & Namazizadeh, M. (2011). Feedback after good versus poor trials affects intrinsic motivation. Research Quarterly for Exercise and Sport, 82, 360–364.
Badami, R., VaezMousavi, M., Wulf, G., & Namazizadeh, M. (2012). Feedback about more accurate versus less accurate trials: Differential effects on self-confidence and activation. Research Quarterly for Exercise and Sport, 83, 196–203.
Chen, D. D., Hendrick, J. L., & Lidor, R. (2002). Enhancing self-controlled learning environments: The use of self-regulated feedback information. Journal of Human Movement Studies, 43, 69–86.
Chiviacowsky, S., & Wulf, G. (2002). Self-controlled feedback: Does it enhance learning because performers get feedback when they need it? Research Quarterly for Exercise and Sport, 73, 408–415.
Chiviacowsky, S., & Wulf, G. (2007). Feedback after good trials enhances learning. Research Quarterly for Exercise and Sport, 78, 40–47.
Chiviacowsky, S., Wulf, G., Lewthwaite, R., & Campos, T. (2012). Motor learning benefits of self-controlled practice in persons with Parkinson's disease. Gait & Posture, 35, 601–605.
Chiviacowsky, S., Wulf, G., Medeiros, F., Kaefer, A., & Tani, G. (2008). Learning benefits of self-controlled knowledge of results in 10-year-old children. Research Quarterly for Exercise and Sport, 79, 405–410.
Chiviacowsky, S., Wulf, G., Wally, R., & Borges, T. (2009). Knowledge of results after good trials enhances learning in older adults. Research Quarterly for Exercise and Sport, 80, 663–668.
Christina, R. W. (1987). Motor learning: Future lines of research. In M. Safrit & H. Eckert (Eds.), The cutting edge in physical education and exercise science research. Academy Papers No. 20 (pp. 26–41). Champaign, IL: Human Kinetics.
Christina, R. W. (1989). Whatever happened to applied research in motor learning? In J. S. Skinner, C. B. Corbin, D. M. Landers, P. E. Martin, & C. L. Wells (Eds.), Future directions in exercise and sport science research (pp. 411–422). Champaign, IL: Human Kinetics.
Christina, R. W., & Bjork, R. A. (1991). Optimizing long-term retention and transfer. In D. Druckman & R. A. Bjork (Eds.), In the mind's eye: Enhancing human performance (pp. 23–56). Washington, DC: National Academy Press.
Cooke, J. D., & Brown, S. H. (1985). Science and statistics in motor physiology. Journal of Motor Behavior, 17, 489–492.
Hancock, G. R., Butler, M. S., & Fischman, M. G. (1995). On the problem of two-dimensional error scores: Measures and analyses of accuracy, bias, and consistency. Journal of Motor Behavior, 27, 241–250.
Hartman, J. M. (2007). Self-controlled use of a perceived physical assistance device during a balancing task. Perceptual and Motor Skills, 104, 1005–1016.
Hooyman, A., Wulf, G., & Lewthwaite, R. (2014). Impacts of autonomy-supportive versus controlling instructional language on motor learning. Human Movement Science, 36, 190–198.
Janelle, C. M., Barba, D. A., Frehlich, S. G., Tennant, L. K., & Cauraugh, J. H. (1997). Maximizing performance feedback effectiveness through videotape replay and a self-controlled learning environment. Research Quarterly for Exercise and Sport, 68, 269–279.
Land, W. M., Frank, C., & Schack, T. (2014). The influence of attentional focus on the development of skill representation in a complex action. Psychology of Sport and Exercise, 15, 30–38.
Magill, R. A., & Anderson, D. I. (2014). Motor learning and control: Concepts and applications (10th ed.). New York, NY: McGraw-Hill.
Marchant, D. C., Clough, P. J., & Crawshaw, M. (2007). The effects of attentional focusing strategies on novice dart throwing performance and their task experiences. International Journal of Sport and Exercise Psychology, 5, 291–303.
Marchant, D. C., Clough, P. J., Crawshaw, M., & Levy, A. (2009). Novice motor skill performance and task experience is influenced by attentional focusing instructions and instruction preferences. International Journal of Sport and Exercise Psychology, 7, 488–502.
McNevin, N. H., Shea, C. H., & Wulf, G. (2003). Increasing the distance of an external focus of attention enhances learning. Psychological Research, 67, 22–29.
Pascua, L. A. M., Wulf, G., & Lewthwaite, R. (2015). Additive benefits of external focus and enhanced performance expectancy for motor learning. Journal of Sports Sciences, 33, 58–66.
Radlo, S. J., Steinberg, G. M., Singer, R. N., Barba, D. A., & Melnikov, A. (2002). The influence of an attentional focus strategy on alpha brain wave activity, heart rate, and dart-throwing performance. International Journal of Sport Psychology, 33, 205–217.
Reeve, T. G., Fischman, M. G., Christina, R. W., & Cauraugh, J. H. (1994). Using one-dimensional task error measures to assess performance on two-dimensional tasks: Comment on "attentional control, distractors, and motor performance". Human Performance, 7, 315–319.
Saemi, E., Porter, J. M., Ghotbi-Varzaneh, A., Zarghami, M., & Maleki, F. (2012). Knowledge of results after relatively good trials enhances self-efficacy and motor learning. Psychology of Sport and Exercise, 13, 378–382.
Saemi, E., Wulf, G., Varzaneh, A. G., & Zarghami, M. (2011). Feedback after good versus poor trials enhances motor learning in children. Brazilian Journal of Physical Education and Sport, 25, 673–681.
Sanli, E. A., Patterson, J. T., Bray, S. R., & Lee, T. D. (2013). Understanding self-controlled motor learning protocols through the self-determination theory. Frontiers in Psychology, 3(Article 611), 1–17.
Schmidt, R. A., & Lee, T. D. (2011). Motor control and learning: A behavioral emphasis (5th ed.). Champaign, IL: Human Kinetics.
Thomas, J. R. (2014). Improved data reporting in RQES: From volumes 49, 59, to 84. Research Quarterly for Exercise and Sport, 85, 446–448.
Urbin, M. A., Stodden, D. F., Fischman, M. G., & Weimar, W. H. (2011). Impulse-variability theory: Implications for ballistic, multijoint motor skill performance. Journal of Motor Behavior, 43, 275–283.
Weeks, D. L., & Kordus, R. N. (1998). Relative frequency of knowledge of performance and motor skill learning. Research Quarterly for Exercise and Sport, 69, 224–230.
Wolpert, D. M., Diedrichsen, J., & Flanagan, J. R. (2011). Principles of sensorimotor learning. Nature Reviews Neuroscience, 12, 739–751.
Wulf, G. (2007). Self-controlled practice enhances motor learning: Implications for physiotherapy. Physiotherapy, 93, 96–101.


Wulf, G. (2013). Attentional focus and motor learning: A review of 15 years. International Review of Sport and Exercise Psychology, 6, 77–104.
Wulf, G., Chiviacowsky, S., & Cardozo, P. L. (2014). Additive benefits of autonomy support and enhanced expectancies for motor learning. Human Movement Science, 37, 12–20.
Wulf, G., Chiviacowsky, S., & Drews, R. (2015). External focus and autonomy support: Two important factors in motor learning have additive benefits. Human Movement Science, 40, 176–184.
Wulf, G., Chiviacowsky, S., & Lewthwaite, R. (2012). Altering mindset can enhance motor learning in older adults. Psychology and Aging, 27, 14–21.
Wulf, G., Clauss, A., Shea, C. H., & Whitacre, C. (2001). Benefits of self-control in dyad practice. Research Quarterly for Exercise and Sport, 72, 299–303.
Wulf, G., Lauterbach, B., & Toole, T. (1999). The learning advantages of an external focus of attention in golf. Research Quarterly for Exercise and Sport, 70, 120–126.
Wulf, G., McNevin, N. H., Fuchs, T., Ritter, F., & Toole, T. (2000). Attentional focus in complex skill learning. Research Quarterly for Exercise and Sport, 71, 229–239.
Wulf, G., McNevin, N. H., & Shea, C. H. (2001). The automaticity of complex motor skill learning as a function of attentional focus. The Quarterly Journal of Experimental Psychology, 54A(4), 1143–1154.
Wulf, G., & Su, J. (2007). An external focus of attention enhances golf shot accuracy in beginners and experts. Research Quarterly for Exercise and Sport, 78, 384–389.
Zachry, T., Wulf, G., Mercer, J., & Bezodis, N. (2005). Increased movement accuracy and reduced EMG activity as the result of adopting an external focus of attention. Brain Research Bulletin, 67, 304–309.
