Neuroscience Letters 585 (2015) 43–47

Contents lists available at ScienceDirect

Neuroscience Letters journal homepage: www.elsevier.com/locate/neulet

Short communication

Emotional content of stimuli improves visuospatial working memory

Andrés Antonio González-Garrido a,b,∗, Adriana Liset López-Franco a, Fabiola Reveca Gómez-Velázquez a, Julieta Ramos-Loyo a, Henrique Sequeira c

a Instituto de Neurociencias (Universidad de Guadalajara), Mexico
b O.P.D. Hospital Civil de Guadalajara, Mexico
c Université de Lille I & Laboratoire de Neurosciences Fonctionnelles et Pathologies, France

Highlights

• Visuospatial working memory processing showed a facial emotion-related effect.
• Spatial working memory maintenance is affected by visual processing of happy faces.
• N170 is sensitive to happy faces while performing a VSWM task.
• P2 and LPP components showed happy face-triggered effects in a VSWM task.

Article info

Article history:
Received 2 June 2014
Received in revised form 17 October 2014
Accepted 9 November 2014
Available online 13 November 2014

Keywords: Visuospatial working memory; Emotional face; Memory load; ERP

Abstract

Processing and storage in visuospatial working memory (VSWM) seem to depend on attention-based mechanisms. In order to explore the effect of attention-attractive stimuli, such as emotional faces, on VSWM performance, ERPs were obtained from 20 young adults while they reproduced spatial sequences of six facial (happy and neutral) and non-facial control stimuli in inverse order. Behavioral performance revealed that trials with happy facial expressions yielded a significantly higher number of correct responses. For positive emotional facial stimuli, N170 amplitude was higher over right temporo-parietal regions, while P2 amplitude was higher over frontal and lower over parietal regions. In addition, LPP amplitude was also significantly higher for these stimuli. Both behavioral and electrophysiological results support the notion of a domain-general attention-based mechanism of VSWM maintenance, in which spatial to-be-remembered locations might be influenced by the emotional content of the stimuli.

© 2014 Elsevier Ireland Ltd. All rights reserved.

1. Introduction

The on-line manipulation of different environmental visual stimuli involves working memory. Working memory (WM) is a theoretical construct that refers to a limited-capacity system or mechanism underlying the maintenance of task-relevant information during the performance of a cognitive task [1,2]. In brief, WM allows us to use information that is not currently available in the immediate environment. The WM visuospatial sketchpad, a so-called slave system postulated by Baddeley and Hitch [1], was originally conceived as a spatial store. However, opposing perspectives have been

∗ Corresponding author at: Instituto de Neurociencias, Universidad de Guadalajara, Francisco de Quevedo 180, Col. Arcos Vallarta, Guadalajara, Jalisco 44130, Mexico. Tel.: +52 33 38 18 07 40/1718681; fax: +52 33 38 18 07 40. E-mail address: [email protected] (A.A. González-Garrido).
http://dx.doi.org/10.1016/j.neulet.2014.11.014
0304-3940/© 2014 Elsevier Ireland Ltd. All rights reserved.

assumed by Logie [3] and several other authors who have hypothesized that visuospatial WM includes both object-based and spatial types of information, but that they are processed by different visual and spatial components [4]. Alternatively, Barrouillet et al. [5] postulated the time-based resource-sharing (TBRS) model of WM, which states that a general attention resource has to be shared between processing and storage activities. According to this model, forgetting could be explained as an effect of central interference caused by processing activities that divert attention from refreshing decaying memory traces. In fact, it has been found that spatial WM maintenance could be disrupted by visual processing, and that visual maintenance could be disrupted by spatial processing [6], which contradicts the notion of the domain-based fractionation of the visuospatial system into a visual component and a spatial component. Since attention seems to be crucial for maintenance and load-related visuospatial working memory processes, the present study focuses on the effects that the attention-attractiveness

of the content of the stimuli has on their spatial manipulation in WM. Specifically, we aimed to evaluate whether strongly attention-attractive stimuli, such as emotional faces, could influence visuospatial WM maintenance and operation.

Emotional facial expressions are particularly salient stimuli for conveying other people's affective dispositions. Due to their social importance, emotional faces are identified more accurately and rapidly than other changing objects [7]. Evidence suggests that affective information contained in facial expressions is perceived involuntarily [8] and is capable of modifying the focus of attention [9]. In addition, several findings suggest that responses to facial emotional stimuli, especially fear and happiness, are modulated by attentional processes [10–12]. Furthermore, happy faces have been found to significantly affect the allocation of spatial attentional resources [13].

In this context, an experimental paradigm was designed in which stimuli were presented at different locations on a touch-screen monitor and the participants had to remember the sequential order and location of each stimulus. Then, after a delay period, subjects had to reproduce the spatial location of each stimulus in the opposite order of the original presentation. Trials consisted of 6 happy faces, 6 neutral faces, or 6 neutral squares, with simultaneous EEG recording. In order to better evaluate the temporal order of the cognition-related events, event-related potentials (ERPs) were obtained at the appearance of the last stimulus in each sequence category. Event-related potentials have been used successfully to evaluate working memory and face processing, where several components have been associated with different stages of face evaluation, encoding, and retrieval.
In the present study, given their relation to facial emotions and attentional processing, the analyses focus on early and late ERP components: N170, P2, and the late positive potential (LPP). N170, a negative component thought to originate from posterior-lateral occipito-temporal cortex [14], is considered to reflect structural facial encoding. Even though the face-specificity of N170 seems undeniable, the emotional modulation of this component appears to depend on the specifics of the experimental setup. P2 is an attention-related positivity peaking at approximately 200 ms; it might be sensitive to facial emotion and has been documented as reflecting learning and deeper processing of stimuli [15]. LPP, which has been shown to be prominent over midline centro-parietal regions during the attentive processing of positive emotional facial expressions, is thought to index sustained processing and encoding of emotional stimuli in fronto-parietal brain networks, as a probable expression of a continued and deeper evaluation of emotional stimuli [16].

Following the TBRS model, we hypothesized that attention-attractive stimuli such as emotional faces might unsettle the attentional processing required to maintain spatial information in WM, thus leading to poorer behavioral performance. Specifically, we predicted that, when performing a visuospatial WM task, trials including faces as stimuli would impair execution, leading to fewer correct responses and longer response times with respect to trials with non-facial stimuli. In addition, we expected this effect to be more pronounced when using emotional, easily recognizable facial stimuli, such as happy faces.
With regard to the ERP components, higher N170 amplitude for emotional faces, as well as higher P2 and LPP amplitudes, were initially expected, due to the task-related need to encode and retain the stimulus locations, which affects N170 [17], along with the effect of the attentional attractiveness of the emotional facial content on P2 and LPP [15,16].

2. Methods

2.1. Subjects

A total of 20 healthy university volunteers (13 males) participated in the experiment (mean age = 25.5, SD = 5.62 years). Inclusion criteria were right-handedness and normal or corrected-to-normal vision. Exclusion criteria were a personal or family history of drug abuse or psychopathology, epilepsy, head injury, and drug or alcohol use (within 24 h prior to testing), all of which were assessed through clinical interviews. All subjects gave their written consent to participate in the study after being fully informed of the experimental procedures. The study was approved by the ethics committee of the Neuroscience Institute.

2.2. Design and procedure

2.2.1. Behavioral data and experimental task

The task consisted of 90 trials corresponding to 3 experimental categories of visual stimuli: (1) neutral faces (30 trials); (2) happy faces (30 trials); and (3) squares (control condition, 30 trials). The facial categories consisted of 20 full-color, 16 × 13 cm photographs of Hispanic models (5 males, 5 females) with neutral and happy facial expressions. These facial expressions had been categorized correctly, with a hit rate above 90%, by a pool of 20 similar subjects in a previous pilot study, and they were subsequently used in other experiments [18]. Neutral squares (control; 10 images) were built by randomizing the pixels of all the facial image stimuli from the neutral and happy faces. Each image was presented 18 times to allow a pseudo-random assembly of 30 trials per condition. Each trial comprised 6 images from the same category (e.g., 6 neutral faces). The presentation order of the 90 trials was randomized and divided into 2 blocks of 45 trials each. After each block, subjects were allowed a brief rest period. The presentation order of the blocks was counterbalanced.
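The trial assembly just described (10 images per condition, each presented 18 times, grouped into 30 pseudo-random trials of 6 same-category stimuli) can be sketched as follows. This is an illustrative reconstruction, not the authors' presentation software; the function and condition names are ours, and pairing each stimulus with a screen region sampled without replacement is our assumption.

```python
import random

# Illustrative sketch (not the authors' stimulation software).
IMAGES_PER_CONDITION = 10   # 5 male + 5 female models per facial category
REPETITIONS = 18            # 10 images x 18 = 180 = 30 trials x 6 stimuli
TRIALS_PER_CONDITION = 30
STIMULI_PER_TRIAL = 6
N_REGIONS = 6               # screen divided into 2 rows x 3 columns

def assemble_trials(condition, seed=None):
    """Group 180 image presentations into 30 pseudo-random 6-item trials,
    pairing each stimulus with a random screen region (assumed scheme)."""
    rng = random.Random(seed)
    # Pool of 180 presentations; ('happy', 3) is the 4th happy-face image.
    pool = [(condition, i) for i in range(IMAGES_PER_CONDITION)
            for _ in range(REPETITIONS)]
    rng.shuffle(pool)
    trials = []
    for t in range(TRIALS_PER_CONDITION):
        images = pool[t * STIMULI_PER_TRIAL:(t + 1) * STIMULI_PER_TRIAL]
        # One of the 6 regions per stimulus; sampling without replacement
        # within a trial is our assumption, not stated in the text.
        regions = rng.sample(range(N_REGIONS), STIMULI_PER_TRIAL)
        trials.append(list(zip(images, regions)))
    return trials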
A spatial WM experimental design was used, consisting of the sequential presentation of a series of 6 stimuli, each shown in a different area of a 21-inch touch-screen monitor with a central white dot as the fixation point. Viewing distance was 60 cm. Each stimulus within a trial belonged to the same category (happy, neutral, or control). Unbeknownst to the participants, the screen area was divided by the software into 2 rows and 3 columns to define 6 identical regions where the stimuli could appear at random. Subjects were previously instructed and trained to reproduce the sequence in inverse order by pressing – as quickly as possible via the touch-screen device – the corresponding screen locations with their right index finger, as soon as the cue “RESPOND” appeared in the center of the screen.

Participants were seated comfortably in a quiet, dimly lit room. Visual stimuli were presented on an SVGA monitor (refresh rate: 100 Hz). Each stimulus was presented for 2000 ms with an inter-stimulus interval of 1500 ms. The command “RESPOND” lasted 1500 ms and was followed by a blank-screen period with a maximum duration of 5000 ms while responses were submitted. If the response was completed before this period elapsed, the next trial was triggered automatically. Fig. 1 illustrates the experimental flow chart.

2.2.2. ERP acquisition

EEGs were recorded and ERPs obtained in all the experimental categories from 100 ms before stimulus onset until 1000 ms after it. Stimulus-locked ERPs were recorded, only for the last stimulus in each sequence, from the Fp1, Fp2, F7, F8, F3, F4, C3, C4, P3, P4, O1, O2, T3, T4, T5, T6, Fz, Cz, and Pz scalp electrode sites, according to the 10–20 system. Electrooculograms (EOGs)

Fig. 1. Experimental flow chart showing the stimulus sequence within a trial.

were recorded from the outer canthus and infraocular orbital ridge of the right eye. Electrophysiological recordings were made using standard caps (Electro-Cap International, Inc.). All recording sites were referred to linked mastoids. Interelectrode impedances were below 5 kΩ. EEG and EOG signals were amplified with a bandpass of 0.05–30 Hz (3-dB cutoff points, 6 dB/octave roll-off) and a sampling period of 5 ms on the MEDICID-04 system. Single-trial data were examined off-line for averaging and analysis. Epochs were automatically excluded from the averages, on all channels, when the voltage in a given recording epoch exceeded 100 µV on any EEG or EOG channel. Twenty correct, artifact-free trials per category and subject were selected for the analysis. Data were recalculated to the common average reference prior to group-averaging, and baseline correction was applied to the 100 ms epoch preceding the stimulus. All scoring was conducted baseline-to-peak through visual inspection.

2.2.3. Data analysis

Repeated-measures analyses of variance (RM-ANOVAs) were applied to the behavioral data. Taking the appearance of the stimulus as the initial time instant (t0), ERP components were scored by locating the highest peak within predetermined time frames: N170 (150–200 ms) and P2 (150–250 ms). The LPP amplitude was scored by computing the average amplitude from 500 to 1000 ms. We first visually inspected the grand averages to determine whether this epoch effectively captured the LPP. Electrophysiological measures were assessed using repeated-measures general linear model (GLM) analysis with the within-subject factors type of stimuli and electrode. The analysis of recording sites was restricted to the topographic distribution of the main ERP components: N170 (P3, P4, T5, and T6); P2 (Fz, Cz, Pz, F3, F4, C3, C4, P3, P4); and LPP (Fz, Cz, F3, F4, C3, and C4). Greenhouse–Geisser corrections of the df were applied as needed. Post-hoc Tukey's HSD tests were used to explore any trends in the differences found.
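The scoring steps described above (100-ms pre-stimulus baseline correction, ±100 µV artifact rejection, baseline-to-peak scoring within the predetermined windows, and the 500–1000 ms mean LPP amplitude) can be sketched as follows. This is a hedged illustration with helper names of our own devising, not the MEDICID-04 software.

```python
import numpy as np

# Illustrative sketch of the scoring steps (assumed helper names; not the
# MEDICID-04 software). Epochs span -100 to +1000 ms at the reported 5-ms
# sampling period; t = 0 is the onset of the last stimulus in the sequence.
T = np.arange(-100, 1000, 5)   # ms; 220 samples per epoch

def baseline_correct(epochs):
    """Subtract the mean of the 100-ms pre-stimulus interval per channel."""
    base = epochs[:, T < 0].mean(axis=1, keepdims=True)
    return epochs - base

def is_artifact(epoch, threshold_uv=100.0):
    """Flag the epoch for exclusion if any channel exceeds +/-100 uV."""
    return bool(np.abs(epoch).max() > threshold_uv)

def peak_amplitude(channel, lo_ms, hi_ms, negative=False):
    """Baseline-to-peak score: the extreme value inside a fixed window."""
    window = channel[(T >= lo_ms) & (T <= hi_ms)]
    return window.min() if negative else window.max()

def mean_amplitude(channel, lo_ms=500, hi_ms=1000):
    """LPP score: average amplitude over the 500-1000 ms epoch."""
    return channel[(T >= lo_ms) & (T <= hi_ms)].mean()

# Usage on a fake single-channel average (one row = one channel):
erp = baseline_correct(np.random.randn(1, T.size))[0]
n170 = peak_amplitude(erp, 150, 200, negative=True)  # N170: 150-200 ms
p2 = peak_amplitude(erp, 150, 250)                   # P2: 150-250 ms
lpp = mean_amplitude(erp)                            # LPP: 500-1000 ms mean
```

In practice the paper scored peaks through visual inspection; the windowed extremum here is only the automated analogue of that step.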

3. Results

3.1. Behavioral results

The analysis of correct responses showed significant differences between the experimental categories (F(1.939, 36.835) = 5.975, p < .01, ηp² = .239). Post hoc analyses showed that the number of correct responses was higher for trials with happy faces than for those with neutral faces (t(19) = 2.76, p < .001, r = .536) and control stimuli (t(19) = 3.04, p < .001, r = .573), but there were no significant differences between the two latter categories, nor for the response times. Table 1 shows the behavioral performances during the task.

3.2. ERP data

N170. The amplitude was larger over right temporo-parietal areas, and there was a significant interaction between the type of stimuli and electrode (F(3.3, 62.7) = 3.131, p < .05, ηp² = .141). N170 amplitude was larger when trials contained happy faces than when they contained neutral faces (t(19) = −6.84, p < .001, r = .26) or control stimuli (t(19) = −5.73, p < .001, r = .43). N170 latency did not show any significant effect across the experimental categories. Fig. 2A shows the group-averaged ERPs at locations T5 and T6, as well as the topographic maps obtained for each condition at 170 ms.

P2. The amplitude analysis showed a significant interaction between the type of stimuli and electrode (F(2.72, 51.7) = 13.397, p < .001, ηp² = .414). At Fz, P2 was significantly larger when the stimuli were happy faces in comparison to neutral faces (t(19) = 4.18, p < .01, r ≥ .69) and control stimuli (t(19) ≥ 4.18, p < .01, r ≥ .69). However, post-hoc analysis showed that happy faces led to lower P2 amplitude at Pz with respect to the other two conditions. The latency of P2 was not significantly different across conditions. Fig. 2B shows the group-averaged ERPs at midline sites, and the topographic maps obtained for each condition at the P2 and LPP maxima.

Table 1
Behavioral performances in the VSWM task.

                        Happy faces       Neutral faces     Control stimuli    F value   p value
                        M (SD)            M (SD)            M (SD)
Correct responses       24.6 (0.9)        23.1 (0.8)        23.0 (1.1)         5.975     < .001
Response times (ms)     3252.52 (178.7)   3284.02 (199.2)   3270.18 (279.5)    0.88      > .05

M: mean; SD: standard deviation.

Fig. 2. (A) Grand-mean ERP waveforms at T5 and T6, and topographical maps obtained at 170 ms. (B) Grand-mean midline ERP waveforms and the topographical maps obtained at the indicated latencies.

LPP. The amplitude showed significant differences between conditions (F(1.821, 34.6) = 3.477, p < .05, ηp² = .155) but no relevant interactions. LPP amplitude tended to be larger when trials contained happy faces than when they contained control stimuli [happy (mean voltage) = 1.56 µV; control (mean voltage) = .44 µV], but not when compared to trials with neutral faces [neutral (mean voltage) = .717 µV]. The analysis of the LPP latency did not show any significant effects.

4. Discussion

Contrary to what we expected, performance on trials involving happy faces was significantly better than that achieved while processing sequences of either neutral faces or simple squares. Reaction times were not significantly different across conditions, probably due to the experimental design, in which the response was delayed in order to avoid the abundant artifacts recorded in previous pilot studies.

The issue of how emotion might modulate spatial memory remains to be elucidated. However, a possible mechanism involving attention, often thought of as the gatekeeper for short-term memory, might be implicated [19]. Recent findings suggest that attentional refreshing is the sole mechanism for visuospatial information maintenance [20]. Therefore, even the maintenance of any amount of visuospatial information should disrupt any kind of attention-demanding processing. Most previous studies on sequential presentation have used verbal memoranda, such as words, letters, or digits, which might be verbally rehearsed during retention. When nonverbal items, such as unconventional characters, have been used as stimuli, a loss of features results, because they cannot be refreshed or verbally rehearsed [21]. In the present experiment we used happy faces, neutral faces, and neutral squares in a visuospatial WM task that could be considered highly demanding.

Emotional faces appear to be particularly good at capturing attention even when they are task-irrelevant [22]; thus, it is conceivable that such faces could attract more attention to the spatial location and thereby influence spatial memory. In fact, emotional faces might subsequently be encoded more efficiently than neutral faces, given that emotion seems to facilitate sensory processing [23]. Previous findings on a

happiness detection advantage relative to both angry [24] and sad faces [25] could account for the present results. The facial distracting effect could also be interpreted in light of the load theory of selective attention [26], which assumes that higher cognitive demands may trigger perceptual adaptation mechanisms that seek to reduce distractor perception. Indeed, there is sufficient evidence that when visual attention is taxed by perceptually demanding tasks, irrelevant visual distractors are strongly suppressed [27]. This is not surprising, because the type of WM load manipulated could be critical in determining the consequences of distractor processing [28,29]. In the present study, the modality of the WM load did not completely overlap with that of the visual distractors, so their perceptual suppression was not facilitated. In addition, stimulus presentation lasted long enough to allow further perceptual analysis. Given the dissimilarities observed while processing happy versus neutral facial emotional content, these results must be explained on the basis of an emotion-related facial processing effect, since the simple appearance of facial stimuli does not account for the differences found.

Separate memory systems involving the parahippocampal gyri, the inferior parietal gyrus, the anterior cingulate gyrus, and the right caudate nucleus all seem to participate when spatial information is stored [30], or to depict a right fronto-parietal network when object-based processing is carried out [31]. Evidence from several fMRI studies using dual-task designs has shown that top-down signals related to WM may amplify neuronal responses associated with a perceptual task while suppressing task-irrelevant neural responses [32], thus supporting the above assumption. On the other hand, the electrophysiological results depicted the temporal course of VSWM operations.
The characteristics of the stimuli determined early ERP variations that likely reflect the encoding stage of newly incoming stimuli, as well as their linkage with preceding ones. In this context, no significant differences were found either for response time between conditions or for the main ERP peak latencies; thus, the stimulus-driven behavioral and ERP effects seem to disagree with what the TBRS model postulates.

We found N170 to be sensitive to the emotional facial content of the stimuli. However, the sensitivity of N170 to facial expression is controversial (for a review, see Rellecke et al. [33]). While some reports have shown increased N170 amplitude for emotional faces

[34], others have reported no effects (e.g., Eimer and Holmes [35]). N170 facial emotional effects, though, have mainly been reported for negative expressions (i.e., angry), and other studies have not found N170 amplitude differences between negative and happy facial stimuli [36]. In the present study, the higher N170 amplitudes for happy faces indexed a clear emotional effect, possibly boosted by the experimental setup.

It has been suggested that the P2 amplitude reflects the capture of attentional resources by the stimuli being processed [37]. In the present experiment, P2 was significantly higher over frontal locations when the most attention-attractive stimuli (emotional faces) were present. This also seems to agree with the previous notion that this component reflects the evaluation of the emotional relevance of a visual stimulus and attentional reorientation [38], while being involved in implicit configurational facial processing.

Finally, the later slow positivity was significantly higher for the emotional faces. In general, late positive waveforms have been associated with late executive processes, particularly those involving memory [39], in which target identification does not provide cognitive closure but, rather, prompts the performance of a second task. These previous findings might provide a reasonable explanation for the changes in the amplitude of the present LPP component, particularly considering that the main variations predominantly occurred over frontal regions, and that the more pronounced positive changes corresponded to happy faces. The latter results might be interpreted as part of a frontally controlled process that tends to facilitate directed forgetting of faces, as recently reported by Hauswald et al. [40].
5. Conclusions

The present results seem to support the assumption that VSWM maintenance relies on a domain-general attention-based mechanism, in which spatial to-be-remembered locations might be influenced by the content of the stimuli, as happened here with happy faces. Resource-sharing abilities might allow the system to recruit additional resources when attention-attractive stimuli are concurrently processed, which seems to contradict the initial TBRS model predictions.

References

[1] A.D. Baddeley, G. Hitch, Working memory, in: G.H. Bower (Ed.), The Psychology of Learning and Motivation: Advances in Research and Theory, Academic Press, New York, 1974, pp. 47–90.
[2] A.D. Baddeley, Working memory: looking back and looking forward, Nat. Rev. Neurosci. 4 (2003) 829–839.
[3] R.H. Logie, Visuo-spatial processing in working memory, Q. J. Exp. Psychol. 38A (1986) 229–247.
[4] A.D. Baddeley, R.H. Logie, Working memory: the multiple component model, in: A. Miyake, P. Shah (Eds.), Models of Working Memory, Cambridge University Press, New York, 1999, pp. 28–61.
[5] P. Barrouillet, S. Bernardin, C. Camos, Time constraints and resource sharing in adults' working memory spans, J. Exp. Psychol. Gen. 133 (2004) 83–100.
[6] E. Vergauwe, P. Barrouillet, V. Camos, Visual and spatial working memory are not that dissociated after all: a time-based resource-sharing account, J. Exp. Psychol. Learn. Mem. Cognit. 35 (2009) 1012–1028.
[7] A.A. Reinders, J.A. den Boer, C. Büchel, The robustness of perception, Eur. J. Neurosci. 22 (2005) 524–530.
[8] J.D. Eastwood, D. Smilek, P.M. Merikle, Differential attentional guidance by unattended faces expressing positive and negative emotion, Percept. Psychophys. 63 (2001) 1004–1013.
[9] M.J. Fenske, J.D. Eastwood, Modulation of focused attention by faces expressing emotion: evidence from flanker tasks, Emotion 3 (2003) 327–343.

[10] A. Holmes, P. Vuilleumier, M. Eimer, The processing of emotional facial expression is gated by spatial attention: evidence from event-related brain potentials, Brain Res. Cognit. Brain Res. 16 (2003) 174–184.
[11] L. Pessoa, M. McKenna, E. Gutierrez, L.G. Ungerleider, Neural processing of emotional faces requires attention, Proc. Natl. Acad. Sci. U. S. A. 99 (2002) 11458–11463.
[12] G. Pourtois, S. Schwartz, M.L. Seghier, F. Lazeyras, P. Vuilleumier, Neural systems for orienting attention to the location of threat signals: an event-related fMRI study, Neuroimage 31 (2006) 920–933.
[13] Z. Cattaneo, C. Lega, J. Boehringer, M. Gallucci, L. Girelli, C.C. Carbon, Happiness takes you right: the effect of emotional stimuli on line bisection, Cognit. Emot. 26 (2014) 325–344.
[14] S.R. Schweinberger, E.C. Pickering, I. Jentzsch, A.M. Burton, J.M. Kaufmann, Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions, Brain Res. Cognit. Brain Res. 14 (2002) 398–409.
[15] M. Latinus, M.J. Taylor, Holistic processing of faces: learning effects with Mooney faces, J. Cognit. Neurosci. 17 (2005) 1316–1327.
[16] M. Eimer, A. Holmes, F.P. McGlone, The role of spatial attention in the processing of facial expression: an ERP study of rapid brain responses to six basic emotions, Cognit. Affect. Behav. Neurosci. 3 (2003) 97–110.
[17] C.A. Brenner, S.P. Rumak, A.M. Burns, P.D. Kieffaber, The role of encoding and attention in facial emotion memory: an EEG investigation, Int. J. Psychophysiol. 93 (2014) 398–410.
[18] A.A. González-Garrido, J. Ramos-Loyo, A.L. López-Franco, F.R. Gómez-Velázquez, Visual processing in a facial emotional context: an ERP study, Int. J. Psychophysiol. 71 (2009) 25–30.
[19] J. Duncan, G. Humphreys, Visual search and stimulus similarity, Psychol. Rev. 96 (1989) 433–458.
[20] E. Vergauwe, P. Barrouillet, V. Camos, Do mental processes share a domain general resource? Psychol. Sci. 21 (2010) 384–390.
[21] T.J. Ricker, N. Cowan, Loss of visual working memory within seconds: the combined use of refreshable and non-refreshable features, J. Exp. Psychol. Learn. Mem. Cognit. 36 (2010) 1355–1368.
[22] S. Hodsoll, E. Viding, N. Lavie, Attentional capture by irrelevant emotional distractor faces, Emotion 11 (2011) 346–353.
[23] H.T. Schupp, M. Junghöfer, A.I. Weike, A.O. Hamm, Emotional facilitation of sensory processing in the visual cortex, Psychol. Sci. 14 (2003) 7–13.
[24] D.V. Becker, R. Neel, N. Srinivasan, S. Neufeld, D. Kumar, S. Fouse, The vividness of happiness in dynamic facial displays of emotion, PLoS One 7 (2012) e26551.
[25] P. Srivastava, N. Srinivasan, Time course of visual attention with emotional faces, Atten. Percept. Psychophys. 72 (2010) 369–377.
[26] N. Lavie, A. Hirst, J.W. de Fockert, E. Viding, Load theory of selective attention and cognitive control, J. Exp. Psychol. Gen. 133 (2004) 339–354.
[27] D.J. Yi, G.F. Woodman, D. Widders, R. Marois, M.M. Chun, Neural fate of ignored stimuli: dissociable effects of perceptual and working memory load, Nat. Neurosci. 7 (2004) 992–996.
[28] S.Y. Kim, M.S. Kim, M.M. Chun, Concurrent working memory load can reduce distraction, Proc. Natl. Acad. Sci. U. S. A. 102 (2005) 16524–16529.
[29] S. Park, M.S. Kim, M.M. Chun, Concurrent working memory load can facilitate selective attention: evidence for specialized load, J. Exp. Psychol. Hum. Percept. Perform. 33 (2007) 1062–1075.
[30] G. Janzen, C.G. Weststeijn, Neural representation of object location and route direction: an event-related fMRI study, Brain Res. 1165 (2007) 116–125.
[31] G. Galati, E. Lobel, G. Vallar, A. Berthoz, L. Pizzamiglio, D. Le Bihan, The neural basis of egocentric and allocentric coding of space in humans: a functional magnetic resonance study, Exp. Brain Res. 133 (2000) 156–164.
[32] J. Rissman, A. Gazzaley, M. D'Esposito, The effect of non-visual working memory load on top-down modulation of visual processing, Neuropsychologia 47 (2009) 1637–1646.
[33] J. Rellecke, W. Sommer, A. Schacht, Emotion effects on the N170: a question of reference? Brain Topogr. 26 (2013) 62–71.
[34] M. Batty, M.J. Taylor, Early processing of the six basic facial emotional expressions, Brain Res. Cognit. Brain Res. 17 (2003) 613–620.
[35] M. Eimer, A. Holmes, Event-related brain potential correlates of emotional face processing, Neuropsychologia 45 (2007) 15–31.
[36] M.G. Calvo, D. Beltran, Recognition advantage of happy faces: tracing the neurocognitive processes, Neuropsychologia 51 (2013) 2051–2060.
[37] H.T. Schupp, M. Junghöfer, A.I. Weike, A.O. Hamm, Attention and emotion: an ERP analysis of facilitated emotional stimulus processing, NeuroReport 14 (2003) 1107–1110.
[38] L. Carretié, M. Martín-Loeches, J.A. Hinojosa, F. Mercado, Emotion and attention interaction studied through event-related potentials, J. Cognit. Neurosci. 13 (2001) 1109–1128.
[39] J. Folstein, C. Van Petten, After the P3: late executive processes in stimulus categorization, Psychophysiology 48 (2011) 825–841.
[40] A. Hauswald, H. Schulz, T. Iordanov, J. Kissler, ERP dynamics underlying successful directed forgetting of neutral but not negative pictures, Soc. Cognit. Affect. Neurosci. 6 (2011) 450–459.
