Brain Stimulation 8 (2015) 493–498


Original Articles

Modulation of Gestural-verbal Semantic Integration by tDCS

Tania Cohen-Maximov a, Keren Avirame b, Agnes Flöel b, Michal Lavidor a,c,*

a Department of Psychology, Bar Ilan University, Ramat Gan, Israel
b Department of Neurology, Charité – Universitätsmedizin Berlin, Germany
c The Gonda Brain Research Center, Bar Ilan University, Israel

Article history: Received 18 June 2014; received in revised form 28 September 2014; accepted 4 December 2014; available online 1 January 2015.

Abstract

Background: Understanding actions based on either language or observation of gestures is presumed to involve the motor system and to reflect the engagement of an embodied conceptual network. The role of the left inferior frontal gyrus (IFG) in language tasks is well established, but the role of the right hemisphere is unclear, with some imaging evidence suggesting right IFG activation when gestures mismatch speech.
Objective: Using transcranial direct current stimulation (tDCS), we explored the hemispheric asymmetries in the assumed cognitive embodiment required for gestural-verbal integration.
Methods: Symbolic gestures served as primes for verbal targets. Primes were clips of symbolic gestures taken from a rich set of emblems and pantomimes. Participants performed a semantic relatedness judgment under 3 stimulation conditions: anodal tDCS (atDCS) over the left IFG, atDCS over the right IFG, and sham. There was also a non-semantic control task of attentional load.
Results: AtDCS of the right IFG generated faster responses to symbolic gestures than atDCS over the left IFG or sham stimulation. For the attentional load task, no differences were observed across the three stimulation conditions. These results support a right-lateralization bias of the human mirror neuron system in processing gestural-verbal stimuli.
Conclusion: Gesture comprehension may be enhanced by improved gesture and language integration.
© 2015 Elsevier Inc. All rights reserved.

Keywords: Symbolic gestures; Semantic priming; tDCS; Lateralization; Mirror neuron system

This study was supported by the I-CORE Program of the Planning and Budgeting Committee and the Israel Science Foundation (grant No. 51/11) and by Israel Academy of Sciences grant No. 100/10 awarded to ML.
* Corresponding author. Department of Psychology, Bar Ilan University, Ramat Gan 52900, Israel. Tel.: +972 3 5318171. E-mail address: [email protected] (M. Lavidor).
http://dx.doi.org/10.1016/j.brs.2014.12.001
1935-861X/© 2015 Elsevier Inc. All rights reserved.

Introduction

Theories of language evolution and brain asymmetry draw on the relationships between motor and language systems. As early as 1865, Broca, who investigated individuals with a damaged left inferior frontal lobe, concluded that the lateralization of language is related to left hemisphere (LH) control over the dominant right hand [1]. More recently, it was suggested that the correlated lateralization of handedness and language may indicate that the neural circuit subserving gestural communication is a precursor for the evolution of speech [2,3]. According to Decety and Grèzes [4], LH dominance during the perception of actions can be interpreted as the activation of semantic representations related to language, a conclusion that is consistent with LH specialization for language and motor control (in most individuals, [5]).

The pars opercularis region (Brodmann's area 44, BA44) in Broca's area in the left inferior frontal gyrus (IFG) is considered the human homolog of the ventral premotor cortex (area F5) of the macaque monkey ([6,7], but see an alternative proposal, [8]). Neurons in F5 were found to discharge during both observation and execution of the same grasping action [9]. These 'mirror neurons' have been hypothesized to mediate action understanding by mapping input onto output representations [6,10]. For instance, Mukamel et al. [11] found cells with properties of mirror neurons in the supplementary motor area and the hippocampus, using single-cell recordings in human brains. Since electrode placement was determined by clinical considerations in patients suffering from intractable epilepsy, no recordings were made from more lateral areas such as the inferior parietal lobule or the inferior frontal gyrus, the proposed mirror neuron system (MNS) in humans [12].

Several works support the claim of LH predominance in MNS functions. For example, positron emission tomography (PET) studies have shown that grasp observation led to activation in the inferior parietal lobule (IPL) and IFG of the LH [13]. In a recent study,


Lindenberg et al. [14] conducted fMRI assessments while subjects watched clips of emblematic hand gestures. Common areas of activation were found in the inferior frontal, medial frontal, and posterior temporal cortices, with left-hemispheric predominance. Direct support for the role of the left IFG in language-gesture integration was previously obtained with an inhibitory transcranial magnetic stimulation (TMS) protocol applied over Broca's area (left BA44). Magnetic pulses were delivered online while subjects observed and were asked to name a visual stimulus of an actress speaking or gesturing, an actress speaking and gesturing simultaneously, or a printed word [15]. Under right hemisphere (RH) or sham stimulation, voicing had a greater effect in the unimodal speech and gesture conditions compared to reading, and this was amplified in the bimodal speech-gesture condition. In contrast, rTMS over the LH abolished the behavioral enhancement effect. For this reason, anodal stimulation of the left IFG was selected here as the main stimulation site. However, there is also fMRI evidence suggesting that the right IFG could be involved in gesture processing [16]; hence the most parsimonious montage for us was one involving bilateral stimulation of the IFG, with reversed polarities.

Nevertheless, studies have challenged the premise of MNS left lateralization, or the concept of the human MNS as a whole. Certain findings from patients with IFG damage do not show deficits in understanding actions [17]. However, most studies support the human MNS, but not necessarily its left lateralization. For example, a carefully controlled fMRI study, in which both left and right hand stimuli and subjects' responses were lateralized, found activation consistent with MNS properties in the pars opercularis of the IFG bilaterally while participants observed, executed or imitated finger movements [18]. However, in a magnetoencephalography (MEG) study, Nakamura et al. [19] contrasted symbolic with meaningless gestures and found that the meaningful gestures primarily engaged the RH, which is consistent with suggestions of RH dominance in emotional and social recognition (e.g., [20]). Various bilateral activation patterns were also found recently in gesture and speech processing [21]. In sum, neuroimaging research has yet to provide a definitive answer regarding the hemispheric asymmetry of MNS functions.

We addressed this question by using transcranial direct current stimulation (tDCS) to modulate gesture recognition in healthy subjects. Our goal was to identify the lateralization of gestural-verbal integration. We used a semantic decision task with symbolic hand gestures as the main task, and an attentional load paradigm as a control task, where performance was not expected to be influenced by stimulation of Broca's area (BA44). There were 3 experimental groups in a between-subjects design: tDCS with the anode placed on the left IFG and the cathode on its right homolog, a reversed montage, and a sham (placebo) condition.

Methods

Participants

A total of 40 healthy participants (21 females; mean age = 24.7, SD = 2.4) took part in the study. All participants fulfilled the following inclusion criteria: healthy, right-handed, with normal or corrected-to-normal vision, no psychiatric history, and native Hebrew speakers. Prior to the tDCS study, all participants completed the Edinburgh Handedness Inventory [22]. Exclusion criteria comprised metallic implants, skin disease, neurological history, major head trauma, learning disabilities, attention deficit hyperactivity disorder (ADHD), and a first-degree relative with a psychotic disorder. All participants provided written informed consent. The study was approved by the local Ethics Committee.

The participants were randomly assigned to 3 groups: 14 in the sham condition (7 females, mean age 24), 13 with anodal stimulation of the right IFG (7 females, mean age 24.7), and 13 with anodal stimulation of the left IFG (7 females, mean age 25.5). Age means and gender frequencies did not differ between the three groups.

Stimuli

Symbolic gestures

By scanning popular and scientific media we gathered a large set of candidate symbolic gestural stimuli containing about 130 gestures. Using this material we filmed short (1520 ms) clips of hand movements together with the upper half of the actor's body, using a facial mask to neutralize facial expressions.

Pantomimes

The same procedure and parameters were used to generate a set of about 130 candidate pantomimes. We generated a set of non-metaphoric, action-like gesture clips, such as cutting, dusting, sewing, tooth brushing, and other examples of tool use (in pantomime, without an actual object).

Video shooting

Hand-arm movements together with the upper half of the body were recorded using a single actor, who manipulated the material while sitting at a table. He wore a mask to conceal facial expressions, and a long-sleeved black shirt (see Fig. 1). Gestures were edited as short video clips lasting 1520 ms each using Windows Movie Maker software with the following settings: Audio Video Interleave (AVI) file type, 30.0 Mbps bit rate, 25 frames per second.

Gesture ratings

Twenty-five healthy volunteers (16 women), aged 19–45 (M = 27, SD = 5.7), were recruited to serve as judges in a norming study. All were native speakers of Hebrew who had lived in Israel since infancy. For each video clip, the question "to what extent does this gesture have a conventional meaning?" was presented, and the subjects were asked to rate the degree of conventionality of the gesture on a 1–5 Likert scale (1 = complete absence of conventional meaning, 5 = very conventional). Next, if the gesture was rated 4 or 5, the judge was asked to type in a word that best described the meaning of the gesture. Gestures were selected for the main experiments only if at least 75% of judges rated them at least 4 on the conventionality scale. The terms suggested by the judges were matched on the following variables: (a) linguistic category (i.e., noun, verb, adjective, etc.); (b) length; (c) frequency: log10 of the number of search results on the Google website [23]. Denotations comprising more than a single word were searched within parentheses, to obtain the frequency of the full expression. Next, unrelated words were matched to each congruent clip–word pair, such that the word did not match the gesture (for example, "tired" paired with a stop-gesture clip). The unrelated words were identical to the congruent words in initial letter, phonological pattern, linguistic category and length (e.g., /MECHOAR/, "ugly", was matched to /MESHOGA/, "crazy"). Two independent judges examined each unrelated word while observing the associated gesture, to verify that no semantic connotation existed.

Gesture and word validation

We conducted a semantic priming experiment to select the most suitable pairs of gesture clips (serving as primes) and words (targets); that is, pairs that were most congruent as measured by response latencies.
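As a concrete illustration, the inclusion rule described above (at least 75% of judges giving a rating of 4 or 5) and the log10 frequency measure can be sketched as follows; the gesture names, ratings, and hit counts here are invented for illustration only:

```python
import math

# Hypothetical conventionality ratings: gesture -> list of 1-5 judge ratings
ratings = {
    "stop":      [5, 5, 4, 5, 4, 4, 5, 4],
    "thumbs_up": [4, 5, 5, 4, 5, 5, 4, 5],
    "wave_odd":  [2, 3, 4, 1, 2, 3, 2, 4],   # fails the criterion
}

def passes_criterion(scores, min_rating=4, min_fraction=0.75):
    """Keep a gesture only if >= 75% of judges rated it 4 or 5."""
    high = sum(1 for s in scores if s >= min_rating)
    return high / len(scores) >= min_fraction

selected = [g for g, scores in ratings.items() if passes_criterion(scores)]

# Frequency proxy, as in the paper: log10 of search-hit counts (counts invented)
hits = {"stop": 1_200_000, "thumbs_up": 850_000}
log_freq = {word: math.log10(n) for word, n in hits.items()}
```

Only "stop" and "thumbs_up" survive the filter here; "wave_odd" has just 2 of 8 ratings at 4 or above.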


Figure 1. Examples of gestural and verbal stimuli: A) A symbolic gesture with a word matched for meaning. B) A symbolic gesture with a word unmatched for meaning. C) A pantomime gesture with a corresponding word. D) A pantomime gesture with an unrelated word. The images are screen shots of the video clips.

Twenty-three healthy, right-handed subjects (17 females, mean age = 23, SD = 3.24) participated in the validation study. Following the presentation of a gestural clip and a word (prime and target), the subjects were asked to decide whether the word matched the clip or not. The reaction times and accuracy levels for each item were calculated, and only clips with mean accuracy levels above 75% were included in the final database.

Final gesture stimuli

Based on the mean accuracy and mean reaction time for each clip, we selected the most congruent pairs of gesture clips and words to be used in the main tDCS experiment. We used 44 instrumental and 32 symbolic gesture clips, half of which appeared with a congruent word and the other half with an incongruent word (see Fig. 1).

Attentional load task

We used the flanker task paradigm described elsewhere [24] as a control task, for which we predicted that IFG stimulation would not interfere with performance. Another reason for selecting this task was that, like the gesture task, it involves compatible and incompatible presentation conditions, which enabled a direct statistical comparison between the tasks. Participants searched for two possible target letters ("X" or "N") among central non-target letters (L, M, W, Z, V). Participants were asked to indicate whether one of the letters was "X" or "N" by pressing a mouse button. Attentional load was manipulated randomly between trials. In the low-load condition, the circle was composed of the target letter with no competing central letters (low competition). In the high-load condition, the circle was composed of the target letter and five additional competing letters (high competition). A flanker appeared to the right or left of the circle with equal probability. The flankers were X or N and could be compatible with the target letter or not. Participants were instructed to ignore the flankers. The task was administered with E-Prime version 1.1 software (Psychology Software Tools, Pittsburgh, PA) on a cathode ray tube (CRT) computer monitor (34 cm × 27 cm). Participants were seated 60 cm from the monitor. The central letters were in Miriam fixed font, 22 points, in white. The flanker letters were light gray, 26 points. Letters were presented in uppercase. Circle letters subtended 0.9° vertically and 0.6° horizontally. The flanker subtended 1.1° vertically and 0.9° horizontally. The distance from fixation to the circle subtended 2.1°, and from fixation to the flanker, 4.3°. Stable viewing was supported by a chin rest. Target position (1–6), target identity, and distractor compatibility were counterbalanced when the trials were constructed.
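A counterbalanced flanker trial list of this kind might be constructed as sketched below. This is our hypothetical reconstruction, not the authors' code: the factor names and the assumption that the 48 unique factor cells are repeated twice to reach the reported 96 trials are ours.

```python
import itertools
import random

# Hypothetical reconstruction of a counterbalanced flanker trial list
positions = range(1, 7)                          # six target positions in the circle
targets = ["X", "N"]
compatibility = ["compatible", "incompatible"]
loads = ["low", "high"]

trials = []
for pos, tgt, comp, load in itertools.product(positions, targets, compatibility, loads):
    # A compatible flanker repeats the target; an incompatible one shows the other target letter
    flanker = tgt if comp == "compatible" else {"X": "N", "N": "X"}[tgt]
    trials.append({"position": pos, "target": tgt, "flanker": flanker,
                   "compatibility": comp, "load": load,
                   "flanker_side": random.choice(["left", "right"])})

trials *= 2             # assumed: 48 unique cells x 2 repetitions = 96 trials, as reported
random.shuffle(trials)  # load then varies randomly between trials
```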


tDCS


tDCS was delivered by a battery-driven, constant current stimulator (Magstim, Carmarthenshire, UK), using a pair of saline-soaked synthetic sponge electrodes. The cathodal electrode measured 5 × 7 cm and the anodal electrode 4 × 4 cm. The larger size of the reference electrode renders the stimulation over the contralateral orbitofrontal cortex less effective without compromising the stimulation effect beneath the active electrode [25]. In the active tDCS condition the current was applied for 26 min with a fade-in/fade-out ramp of 30 s. The current intensity was 1.5 mA [25], impedance was kept below 5 kΩ, and the current density was 0.09 mA/cm². During the sham tDCS condition the constant current lasted only 30 s, with a fade-in/fade-out ramp of 10 s. Thus, participants felt the initial itching sensation at the beginning but received no further current. This procedure allowed us to blind participants to their respective stimulation condition [26]. The experimental tasks (gestures first or the attentional load task first, in counterbalanced order) were conducted online and started after 8 min of active or sham stimulation. The mean durations of the tasks were 10 min for the gestures and 8 min for the attentional load task.
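As a quick check of the reported parameters, dividing the current by each electrode's area reproduces the stated density of roughly 0.09 mA/cm² under the active electrode and shows why the larger reference electrode delivers a more diffuse, weaker stimulation:

```python
# Current density = current / electrode area, using the values in the Methods
current_mA = 1.5
anode_area_cm2 = 4 * 4       # 16 cm^2 (active electrode)
cathode_area_cm2 = 5 * 7     # 35 cm^2 (reference electrode)

anode_density = current_mA / anode_area_cm2      # ~0.094 mA/cm^2, reported as 0.09
cathode_density = current_mA / cathode_area_cm2  # ~0.043 mA/cm^2 -> less effective stimulation
```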


Procedure

Each subject was tested once, with one of the three stimulation montages. Subjects were seated with their eyes approximately 60 cm from a computer monitor (75 Hz refresh rate). For right IFG stimulation, the anodal electrode (atDCS) was placed over the right IFG and the reference (cathodal) electrode (ctDCS) over the left IFG. Localization was established using the 10–20 EEG system, where the left IFG was identified as the crossing point between T3-Fz and F7-Cz [27], and the right IFG as the crossing point between T4-Fz and F8-Cz. The reversed montage was employed for the left IFG group. The same montages were applied in the sham condition: 7 subjects with sham right IFG stimulation and seven with sham left IFG stimulation.

Gesture task

Trial presentation was randomized within blocks. Each subject observed 76 experimental trials. Subjects were informed that a short video clip of an actor would be shown, followed by the brief presentation of a word. Their task was to make a semantic decision, indicating whether the word was related to the gesture clip or not. Each trial began with the presentation of a fixation cross for 1000 ms, followed by the 1520 ms video clip (the prime). The fixation cross was then re-presented for 30 ms, followed by a 180 ms presentation of the target word, and then by a dark screen presented for 2000 ms. Subjects indicated whether the target word was related or unrelated to the clip by pressing a mouse button, and reaction time (RT) and accuracy were recorded. Prior to the experiment the subjects completed a short (8-trial) practice session and received feedback concerning their reaction time and accuracy.

Attentional load

Trial presentation was randomized within blocks. Each experiment had 96 experimental trials. Each trial in the attentional load task began with 1000 ms of a central white fixation cross. The stimulus was presented for 100 ms. A blank response screen was presented until the response, or for 2000 ms. A response after 2000 ms was coded as incorrect. Before the experiment began, the subjects completed a short (12-trial) practice session and received feedback concerning their reaction time and accuracy.
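The gesture-trial sequence described above can be summarized as a simple timeline; summing the stages gives the total duration of one trial:

```python
# Gesture-trial timeline (durations in ms, as listed in the Procedure)
GESTURE_TRIAL = [
    ("fixation", 1000),
    ("gesture_clip", 1520),   # the prime
    ("fixation", 30),
    ("target_word", 180),
    ("dark_screen", 2000),    # response window
]

total_ms = sum(duration for _, duration in GESTURE_TRIAL)  # 4730 ms per trial
```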

Results

Main experimental task (gestures)

Data analysis

Only correct answers with RTs below 2 s were included in the RT analysis (a total of 98.7% of correct responses). A mixed-design analysis of variance (ANOVA) was conducted with stimulation type (right IFG, left IFG, sham) as the between-subjects factor, and gesture type (symbolic, pantomime) and compatibility (compatible, incompatible) as the within-subject factors.

Accuracy measures

There were no main or interaction effects for the accuracy measures, probably due to a ceiling effect (mean accuracy = 94.2%). Accuracy for all gesture types was 95.2% under right IFG stimulation, 93.8% under left IFG stimulation, and 93.3% under sham stimulation.

Reaction time measures

There was a significant main effect of congruency in the RT measures (F(1,37) = 41.5, P < 0.005, partial eta squared = 0.52). Subjects were slower in the incongruent condition (mean = 707 ms, SD = 35) than in the congruent condition (mean = 566 ms, SD = 32), regardless of gesture type. There was a significant main effect of gesture type (F(1,37) = 55.6, P < 0.001, partial eta squared = 0.60), in that subjects responded faster to symbolic gestures (mean = 590 ms, SD = 30) than to instrumental gestures (mean = 684 ms, SD = 34), regardless of congruency. Crucially, there was a significant stimulation effect (F(2,37) = 3.49, P < 0.05, partial eta squared = 0.22). Post-hoc Bonferroni comparisons (P = 0.05) revealed that RT under atDCS of the right IFG (541 ms, SD = 56) was significantly faster than under the left IFG (660 ms, SD = 56) or sham conditions (711 ms, SD = 54). The stimulation effect did not interact with the within-subjects factors, as faster performance following right IFG stimulation was found for the two gesture types, whether presented with a congruent word or not. We discuss this when comparing stimulation effects in the control and gesture tasks.
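The RT pre-processing used in both tasks (keep only correct responses with RT below 2 s) can be sketched as a simple filter; the trial records below are invented for illustration:

```python
# Hypothetical trial records: (RT in ms, whether the response was correct)
trials = [
    {"rt_ms": 640,  "correct": True},
    {"rt_ms": 2150, "correct": True},   # too slow (>= 2 s) -> excluded
    {"rt_ms": 580,  "correct": False},  # error -> excluded
    {"rt_ms": 702,  "correct": True},
]

# Keep only correct responses faster than 2000 ms, as in the Data analysis
kept = [t["rt_ms"] for t in trials if t["correct"] and t["rt_ms"] < 2000]
mean_rt = sum(kept) / len(kept)  # mean RT over the retained trials
```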
Control task (attentional load)

Data analysis

Only correct answers with RTs below 2 s were included in the RT analysis (a total of 98.6% of correct responses). A mixed-design ANOVA was conducted with stimulation type (right IFG, left IFG, sham) as the between-subjects factor, and attentional load (high, low) and compatibility (incompatible, compatible) as the within-subject factors.

Accuracy measures

There was a main effect of load (F(1,37) = 50.3, P < 0.001, partial eta squared = 0.592); subjects were more accurate in the low-load condition (mean = 77%, SD = 4) than in the high-load condition (mean = 58%, SD = 3). There was also a main effect of compatibility (F(1,37) = 12.9, P < 0.01, partial eta squared = 0.292); subjects were more accurate in the compatible condition (mean = 79%, SD = 3) than in the incompatible condition (mean = 59%, SD = 7). There were no main or interaction effects of tDCS stimulation.

Reaction time measures

There was a main effect of load (F(1,37) = 205.6, P < 0.0001, partial eta squared = 0.850); subjects were slower in the high-load condition (mean = 1066 ms, SD = 28) than in the low-load condition (mean = 842 ms, SD = 24). In addition, a main effect of compatibility emerged (F(1,37) = 12.55, P < 0.001, partial eta squared = 0.423); participants responded more slowly in the incompatible condition than in the compatible condition. The expected load × compatibility interaction was also found (F(1,37) = 7.24, P < 0.001, partial eta squared = 0.293): participants responded more slowly to low load with an incompatible flanker. There were no main or interaction effects of tDCS stimulation.

Comparing the two tasks – gesture and control tasks

In order to compare the stimulation effects between the two tasks directly, we calculated Z scores for accuracy and RT for each task separately. For the gestures, we computed the Z score of the mean performance across all gestures (since there were no interactions with gesture type or congruency) to generate a single Z score per stimulation condition. For the attentional load task, we used the congruency differences between low and high load as the typical attentional load index, separately per stimulation condition. Figure 2 presents the mean Z scores for accuracy and RT for the two tasks and the 3 stimulation conditions. A mixed ANOVA with task as the within-subjects factor and stimulation condition as the between-subjects factor revealed no effects on the accuracy Z scores. For the RT Z scores, there was a significant interaction between task (gestures, control) and stimulation (right IFG, left IFG, sham) (F(2,37) = 5.2, P < 0.01, partial eta squared = 0.171). Post-hoc Bonferroni comparisons (P = 0.05) indicated that right IFG atDCS significantly improved the RTs of gesture processing, but did not affect attentional load. Left IFG stimulation did not differ from sham on either task.

Figure 2. Summary accuracy and RT Z-scores for gestures and the control task under the 3 stimulation conditions. The * denotes a significant difference (P < .05) between the right IFG stimulation condition and the other two stimulation conditions.

Discussion

The main aim of the current study was to explore the lateralization of gestural-verbal integration. We compared left to right IFG anodal stimulation and found that only right IFG stimulation modulated gestural-verbal processing. A control task of attentional load was not affected by the stimulation, and the selective stimulation effects differed from the sham condition, supporting the main finding of a right-lateralization bias of the human MNS in processing gestural-verbal stimuli. Recognition of hand gestures, as assessed by reaction time, was facilitated: subjects who received anodal stimulation over the right IFG were significantly faster than subjects who received sham or left IFG stimulation, but this was observed only for the gesture task, and not for the attentional load task. Thus, the stimulation affected gesture recognition selectively, and did not affect performance on an unrelated (control) task.

Brain lateralization in the processing of gestures is still unclear, though previous findings have provided the first indications of left hemisphere dominance in the processing of symbolic gestures [15]. However, these studies differed from the current study in crucial respects, rendering comparability difficult, and were also less comprehensive in their assessments. The stimuli used in previous studies were usually restricted to a small number of basic symbolic or meaningless gestures. In contrast, our study was based on a much more extensive database of gestures, both symbolic and pantomimes. Moreover, previously employed methods are heterogeneous, with some studies using naming and others imitation as their dependent variables, whereas we used semantic priming, measured by reaction times. We recently showed [28] that semantic priming, compared to naming and lexical decision, requires controlled processing. The interaction between gesture and language under controlled and attentive processing involves an "active" form of processing, and operates at a post-lexical level [28].
These interactions may enable the interplay of analytical and motor thinking, an idea put forward by Kita [29] in the study of gesture production. Here we reached a similar conclusion with respect to language comprehension, which can be supported either by speech or, as shown in the present study, by emblematic actions. The results of this experiment suggest that the right IFG could be more crucial for hand gesture processing than the left IFG (Broca's region). Certain studies lend indirect weight to this possibility. Ross and Mesulam [30] reported that the ability to convey emotional affect by means of supplementary hand gestures while speaking was lacking in right-hemisphere-damaged patients. Nakamura et al. [19] contrasted symbolic with meaningless gestures and found that meaningful gestures primarily engaged the RH, in line with suggestions of RH predominance in emotional and social recognition (e.g., Ref. [20]). The neural network associated with the processing of hand gestures may be an evolutionary precursor of the neural systems associated with language. However, language is predominantly lateralized to the left hemisphere (though aphasic patients might activate right hemisphere areas homotopic to the LH language network, see Ref. [31]), whereas the degree of lateralization of hand gesture processing in humans is unclear. In monkeys, the mirror neurons involved in processing hand gestures have been found in the F5 region of both hemispheres [6]. In humans, functional imaging studies of the mirror neuron system have not properly controlled for laterality [18]. We suggest that instrumental gestures are processed predominantly in the right hemisphere, which specializes in analyzing spatial location and visual-spatial properties of stimuli [32]. However, this possible lateralization requires further experiments to determine the direction of the bias.
The significant facilitation of gesture processing we found following stimulation of the IFG was observed for all gestures, whether the target words matched the gesture or not. We interpret this general boosting effect as reflecting the fundamental role of the IFG in gesture observation, an argument consistent with Liew et al.'s findings [16]. Finer semantic processing of gestures might resemble that of semantic judgment tasks as a whole; hence we predict that stimulation of Wernicke's area might generate different effects for congruent and incongruent gesture-word pairs, similar to what we found for semantic priming with words [33]. Given the low spatial resolution of tDCS, future studies should verify the contribution of the IFG to gesture processing (and action observation in general) with complementary methods, such as EEG recording during stimulation or measurement of excitability changes with TMS, both of which are beyond the scope of the present study. Moreover, the focality of stimulation is most likely enhanced when it is combined with a specific task [34,35]. For example, an electrode covering both the ventral and dorsal IFG, in combination with a task specifically engaging the ventral IFG (semantic word generation), induced activity modulations specifically in the ventral but not the dorsal IFG [35]. In future studies, gesture comprehension should be assessed while concomitantly stimulating another brain region, BA40 (bilaterally or unilaterally). This region is also part of the MNS interface that presumably controls gesture processing. Interestingly, a recent study applied TMS over the left motor cortex and concluded that it is automatically activated when processing meaningful gestures [36]. It is conceivable that motor systems are co-activated with prefrontal and parietal regions to integrate symbolic gestures and words. The next step would then be to test auditory-gesture integration, since many hand gestures are observed in association with speech. What are the implications of these results for a more complete understanding of gesture-language integration? Kelly, Creigh, and Bartolotti [37] argued that gesture and speech interact to enhance language comprehension in a reciprocal and obligatory manner.
Here we show for the first time that it is possible to enhance this integration. Given that individuals on the autism spectrum may experience difficulties in processing symbolic gestures [38], these findings might have important therapeutic extensions.

References

[1] Saffran EM. Evidence from language breakdown: implications for the neural and functional organization of language. In: Banich MT, Mack MA, editors. Mind, brain, and language: multidisciplinary perspectives. Lawrence Erlbaum; 2003. p. 251–82.
[2] Corballis MC. Cerebral asymmetry: motoring on. Trends Cogn Sci 1998;2:152–8.
[3] Rizzolatti G, Arbib MA. Language within our grasp. Trends Neurosci 1998;21:188–94.
[4] Decety J, Grèzes J. Neural mechanisms subserving the perception of human actions. Trends Cogn Sci 1999;3:172–8.
[5] Flöel A, Ellger T, Breitenstein C, Knecht S. Language perception activates the hand motor cortex: implications for motor theories of speech perception. Eur J Neurosci 2003;18:704–8.
[6] Gallese V, Fadiga L, Fogassi L, Rizzolatti G. Action recognition in the premotor cortex. Brain 1996;119:593–609.
[7] Rizzolatti G, Fogassi L, Gallese V. Motor and cognitive functions of the ventral premotor cortex. Curr Opin Neurobiol 2002;12:149–54.
[8] Morin O, Grèzes J. What is "mirror" in the premotor cortex? A review. Clin Neurophysiol 2008;38:189–95.
[9] di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G. Understanding motor events: a neurophysiological study. Exp Brain Res 1992;91:176–80.

[10] Rizzolatti G, Fogassi L, Gallese V. Neurophysiological mechanisms underlying the understanding and imitation of action. Nat Rev Neurosci 2001;2:661e70. [11] Mukamel R, Ekstrom AD, Kaplan J, Iacoboni M, Fried I. Single-neuron responses in humans during execution and observation of actions. Curr Biol 2010;20:750e6. [12] Rizzolatti G, Fabbri-Destro M, Cattaneo L. Mirror neurons and their clinical relevance. Nat Clin Pract Neurol 2009;5:24e34. [13] Grafton ST, Arbib MA, Fadiga L, Rizzolatti G. Localization of grasp representations in humans by positron emission tomography. Exp Brain Res 1996;112:103e11. [14] Lindenberg R, Uhlig M, Scherfeld D, Schlaug G, Seitz RJ. Communication with emblematic gestures: shared and distinct neural correlates of expression and reception. Hum Brain Mapp 2012;33:812e23. [15] Gentilucci M, Bernardis P, Crisi G, Volta RD. Repetitive transcranial magnetic stimulation of Broca’s area affects verbal responses to gesture observation. J Cogn Neurosci 2006;18:1059e74. [16] Liew SL, Sheng T, Margetis JL, Aziz-Zadeh L. Both novelty and expertise increase action observation network activity. Front Hum Neurosci 2013;7. [17] Hickok G. Eight problems for the mirror neuron theory of action understanding in monkeys and humans. J Cogn Neurosci 2009;21:1229e43. [18] Aziz-Zadeh L, Koski L, Zaidel E, Mazziotta J, Iacoboni M. Lateralization of the human mirror neuron system. J Neurosci 2006;26:2964e70. [19] Nakamura A, Maess B, Knosche TR, Gunter TC, Bach P, Friederici AD. Cooperation of different neuronal systems during hand sign recognition. Neuroimage 2004;23:25e34. [20] Adolphs R. Social cognition and the human brain. Trends Cogn Sci 1999;3:469e79. [21] Andric M, Solodkin A, Buccino G, Goldin-Meadow S, Rizzolatti G, Small SL. Brain function overlaps when people observe emblems, speech, and grasping. Neuropsychologia 2013;51:1619e29. [22] Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 1971;9:97e113. 
[23] Blair IV, Urland GR, Ma JE. Using Internet search engines to estimate word frequency. Behav Res Methods Instrum Comput 2002;34:286–90.
[24] Lavie N, Cox S. On the efficiency of visual selective attention: efficient visual search leads to inefficient distractor rejection. Psychol Sci 1997;8:395–6.
[25] Nitsche MA, Paulus W. Transcranial direct current stimulation – update 2011. Restor Neurol Neurosci 2011;29:463–92.
[26] Gandiga PC, Hummel FC, Cohen LG. Transcranial DC stimulation (tDCS): a tool for double-blind sham-controlled clinical studies in brain stimulation. Clin Neurophysiol 2006;117:845–50.
[27] Monti A, Cogiamanian F, Marceglia S, et al. Improved naming after transcranial direct current stimulation in aphasia. J Neurol Neurosurg Psychiatry 2008;79:451–3.
[28] Vainiger D, Labruna L, Ivry RB, Lavidor M. Beyond words: evidence for automatic language–gesture integration of symbolic gestures but not dynamic landscapes. Psychol Res 2014;78:55–69.
[29] Kita S. How representational gestures help speaking. In: McNeill D, editor. Language and gesture. Cambridge: Cambridge University Press; 2000. p. 162–85.
[30] Ross ED, Mesulam MM. Dominant language functions of the right hemisphere? Prosody and emotional gesturing. Arch Neurol 1979;36:144–8.
[31] Turkeltaub PE, Messing S, Norise C, Hamilton RH. Are networks for residual language function and recovery consistent across aphasic patients? Neurology 2011;76:1726–34.
[32] Beaumont JG. Introduction to neuropsychology. 2nd ed. The Guilford Press; 2008.
[33] Weltman K, Lavidor M. Modulating lexical and semantic processing by transcranial direct current stimulation. Exp Brain Res 2013;226:121–35.
[34] Holland R, Leff AP, Josephs O, et al. Speech facilitation by left inferior frontal cortex stimulation. Curr Biol 2011;21:1403–7.
[35] Meinzer M, Antonenko D, Lindenberg R, et al. Electrical brain stimulation improves cognitive performance by modulating functional connectivity and task-specific activation. J Neurosci 2012;32:1859–66.
[36] Campione GC, De Stefani E, Innocenti A, et al. Does comprehension of symbolic gestures and corresponding-in-meaning words make use of motor simulation? Behav Brain Res 2014;259:297–301.
[37] Kelly SD, Creigh P, Bartolotti J. Integrating speech and iconic gestures in a stroop-like task: evidence for automatic processing. J Cogn Neurosci 2010;22:683–94.
[38] Baron-Cohen S. Social and pragmatic deficits in autism: cognitive or affective? J Autism Dev Disord 1988;18:379–402.