Exp Brain Res DOI 10.1007/s00221-015-4280-2
Visuospatial transformations and personality: evidence of a relationship between visuospatial perspective taking and self‑reported emotional empathy

Valentina Sulpizio1,3 · Giorgia Committeri2 · Emilia Metta1 · Simon Lambrey4,5 · Alain Berthoz4 · Gaspare Galati1,3
Received: 15 July 2014 / Accepted: 6 April 2015 © Springer-Verlag Berlin Heidelberg 2015
Abstract In the visuospatial domain, perspective taking is the ability to imagine how a visual scene appears from an external observer’s viewpoint. It can be studied by asking subjects to encode object locations in a visual scene in which another individual is present, and then to detect object displacements when seeing the scene from that individual’s viewpoint. In the current study, we explored the relationship between visuospatial perspective taking and self-report measures of the cognitive and emotional components of empathy in young adults. To this aim, we employed a priming paradigm in which the presence of an avatar allowed participants to anticipate the next perceived perspective on the visual scene. We found that the emotional dimension of empathy was positively correlated with the behavioral advantage provided by the presence of the avatar, relative to unprimed perspective changes. These data suggest a link between the tendency to vicariously experience others’ emotions and the ability to perform self–other spatial transformations.
Electronic supplementary material The online version of this article (doi:10.1007/s00221-015-4280-2) contains supplementary material, which is available to authorized users.

* Valentina Sulpizio [email protected]
1 Department of Psychology, Sapienza University, Rome, Italy
2 Department of Neuroscience, Imaging and Clinical Sciences, University G. d’Annunzio, and ITAB, Institute for Advanced Biomedical Technologies, Chieti, Italy
3 Laboratory of Neuropsychology, Foundation Santa Lucia IRCCS, Rome, Italy
4 LPPA, Collège de France-CNRS, Paris, France
5 Service de Psychiatrie adulte, Groupe Hospitalier Pitié-Salpêtrière, Paris, France
Keywords Spatial memory · Mental transformations · Perspective taking · Empathy · Personality traits · Virtual reality
Introduction

Perspective taking, in its broadest sense, refers to the process of projecting oneself “into the shoes” of another person and is considered a fundamental ability for monitoring social intentions in everyday life. In a more restricted sense, perspective taking refers to the visuospatial ability of imagining a spatial transformation of the self (Zacks and Michelon 2005; Wraga and Shepard 2005) into another person’s perspective. It has been suggested that adopting another person’s perspective in a visuospatial sense is closely related to empathy (Baron-Cohen and Wheelwright 2004), which similarly involves the capacity to project one’s own body into the place of another (using a heterocentered framework: see Thirioux et al. 2014). This observation is supported by neuroimaging reports showing the involvement of overlapping brain structures in these functions. For example, the involvement of the temporoparietal junction has been reported during both cognitive (Blanke et al. 2005; Ruby and Decety 2003) and visuospatial (Thirioux et al. 2014) perspective taking, as well as during empathic demands (Decety and Lamm 2007; Lawrence et al. 2006). Several studies have tested the putative link between visuospatial perspective taking and self-reported empathy. Most have used the “own-body transformation” (OBT) task to capture the mental processes involved in the spatial transformation of one’s own perspective (e.g., Blanke et al. 2005; Mohr et al. 2006). This task requires participants to mentally rotate themselves into the position of a
human figure on a screen and to make judgments about the location of its body parts. Participants are faster and more accurate when deciding about back-facing rather than front-facing figures, because their own body position matches that of the back-facing figures (Thakkar et al. 2009; Thakkar and Park 2010; Mohr et al. 2010). In Mohr et al. (2010), the OBT task and self-report measures of empathy (Baron-Cohen and Wheelwright 2004) were administered to test the hypothesis that performance improves with increasing empathy scores. Higher empathy was associated with faster performance on the OBT task in females, although males performed better in the spatial task overall. In contrast to Mohr et al. (2010), Thakkar et al. (2009) observed an association, restricted to females, whereby higher scores on empathy questionnaires were associated with decreased efficiency on the OBT task. In Thakkar and Park (2010), no relationship between empathy and the speed of imagined perspective transformations was found, either across or within genders. A later study outlined the role of the subject’s strategy, finding a positive relationship between visuospatial perspective taking and empathy only in those subjects who solved the OBT task by adopting an empathy-oriented strategy (mentally transforming their own body to align with the body of the figure) rather than a spatially oriented (left–right transposition) strategy, regardless of gender (Gronholm et al. 2012). Beyond these discrepancies in the literature, a further constraint in studying empathy comes from its multifaceted nature. According to a multidimensional model (Davis 1980; Berthoz 2004), empathy incorporates at least emotional and cognitive components. Cognitive empathy refers to a controlled process in which individuals understand the mental states of others while adopting their psychological point of view.
Emotional empathy commonly refers to the more automatic affective response to the experience of others, which can motivate concern and subsequent helping behavior. It has been argued that a sense of shared interpersonal space, or self–other equivalence, is a basic prerequisite for empathy (Gallese 2003). In such a shared social space, visuospatial perspective taking can be understood as adopting the other’s perspective on the scene. Amorim (2003) provided a conceptual link between social perception and the act of understanding what another individual sees. Imagining how the scene appears from another person’s viewpoint indeed requires the coordination of one’s own perspective and the perspective of the other person, reflecting the mental translocation of the egocentric perspective into the heterocentered (centered on the other’s body) perspective (Berthoz 2004; Berthoz and Thirioux 2010). Amorim (2003) asked subjects to encode the location of an object in a virtual scene where a virtual avatar
was present, and then to detect its displacement when seeing the scene from the avatar’s perspective (see also Lambrey et al. 2008, 2011). Although a linear modulation of reaction times (RTs) by the angle of viewpoint rotation is usually observed when testing memory across viewpoint changes (i.e., when observers study a scene from a given viewpoint and then detect whether a given object has been moved after a viewpoint rotation; Diwadkar and McNamara 1997), this is not necessarily the case for visuospatial perspective taking. Imagining taking the avatar’s visual perspective enabled subjects to respond faster when they later observed the scene from the avatar’s perspective (primed condition), relative to when the subsequent perspective was not anticipated or was wrongly anticipated (unprimed condition). Furthermore, the primed condition abolished the linear modulation of RTs as a function of viewpoint rotation that is typically observed when testing memory across different views. Here, we used a priming paradigm, similar to Amorim (2003), to assess the priming effect as a quantitative measure of the visuospatial perspective taking advantage in three-dimensional space. We then examined the relationship between this behavioral effect and empathy by administering self-report measures of empathy to a sample of 148 participants. We reasoned that, if empathy is associated with visuospatial perspective taking in space (Blanke et al. 2005; Thakkar and Park 2010; Thirioux et al. 2014), in line with the idea that a sense of shared interpersonal space is a prerequisite for empathy (Gallese 2003), then performance in the priming task should be modulated by empathic abilities, with larger priming effects for more empathic individuals. We further explored this relationship by looking at potential gender effects, thus shedding more light on previously inconsistent findings (Mohr et al. 2010; Thakkar et al.
2009; Thakkar and Park 2010), and by assessing both emotional and cognitive dimensions, according to the multifaceted nature of empathy (Davis 1980; Berthoz 2004).
Methods

Participants

A total of 148 volunteers (85 females; mean age = 25 years, SD = 4.4) took part in the study. All subjects were right-handed, as assessed by the Edinburgh Handedness Inventory (Oldfield 1971) (mean index = 0.65; SD = 0.27). The study protocol was approved by the Ethics Committee of the Santa Lucia Foundation, Rome. Written informed consent was obtained from all subjects prior to testing, and the experiments were conducted in accordance with the Declaration of Helsinki.
Stimuli

We adopted the virtual environment used by Sulpizio et al. (2013, 2014) (Fig. 1a). During the experiment, participants were shown snapshots of the virtual environment, each simulating a photograph taken with a 24-mm lens (74° by 59° simulated field of view) from one of eight different perspectives (corresponding to the camera positions shown in Fig. 1a). In each snapshot, a plant, used as the target object, was visible at one of the eight positions shown in Fig. 1a. In some conditions (see below), a standing virtual avatar was visible (Fig. 1a), with his line of sight directed toward the center of the room. The avatar was always located at either 45° or 135° to either the right or the left of the observer.

Spatial memory task

Participants detected spatial displacements of the target object (the plant) across pairs of consecutive views of the environment (Fig. 1b–d). In each trial, when observing the first view (study phase, 4000 ms), they encoded the position of the plant in the room, either after taking the visual perspective of a virtual avatar standing in the room (primed condition; Fig. 1b) or in the absence of such an avatar (unprimed condition; Fig. 1c). After a 1-s delay, participants were presented with a second view of the environment without the avatar (test phase). In this phase, the plant’s position in the room changed in half of the trials, and participants decided whether its position remained the same as in the study phase or not. The test picture was shown until subjects gave their response by means of the keyboard (see Fig. 1e). In the primed condition (Fig. 1b), the test perspective always corresponded to the perspective primed by the avatar (either 45° or 135° away from the study perspective). In the unprimed condition (Fig. 1c), the test perspective was randomly chosen between 45° and 135° of angular displacement. A further control condition (Fig.
1d) had no avatar during the study phase, and the test perspective was the same as the study perspective, thus requiring no spatial transformation.

Procedure

Participants were seated in front of a monitor at a distance of 50 cm. To familiarize them with the virtual environment, a 52-s movie consisting of a 360° tour of the room was shown before the experiment started. The experimental task consisted of two blocks of 48 trials without any feedback, preceded by 24 training trials. Practice trials were repeated until participants reached a threshold accuracy of at least 70 % in both the primed and the unprimed conditions. Trials were administered in a pseudorandom
sequence. After completing the spatial memory task, participants were administered personality questionnaires assessing several components of empathy.

Self‑reported empathy

Participants completed the Interpersonal Reactivity Index (IRI; Davis 1980) and the Balanced Emotional Empathy Scale (BEES; Mehrabian and Epstein 1972), two of the most frequently used questionnaires in empathy assessment. The IRI is a measure of dispositional empathy that assumes empathy to be the sum of a set of separate but related constructs. It contains four seven-item subscales. The Perspective Taking scale measures the reported tendency to spontaneously adopt the psychological point of view of others in everyday life (“I sometimes try to understand my friends better by imagining how things look from their perspective”). The Empathic Concern scale assesses the tendency to experience feelings of sympathy and compassion for unfortunate others (“I often have tender, concerned feelings for people less fortunate than me”). The Personal Distress scale taps the tendency to experience distress and discomfort in response to extreme distress in others (“Being in a tense emotional situation scares me”). The Fantasy scale measures the tendency to imaginatively transpose oneself into fictional situations (“After seeing a play or movie, I have felt as though I were one of the characters”). While the Perspective Taking and Fantasy subscales assess cognitive empathy, the Empathic Concern and Personal Distress subscales assess emotional empathy. Participants responded to each item on a 5-point Likert scale. The IRI has been validated on the Italian population by Bonino et al. (1998) and Albiero et al. (2006). Sartori and Meneghini (2007), however, found that the emotional and cognitive components are not fully segregated in the Italian version of the IRI; therefore, we also measured the emotional dimension of empathy through the Italian version of the BEES (Meneghini et al. 2006).
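As an illustration of how such subscale scores are derived, the sketch below sums 5-point Likert responses (coded 0–4) over each subscale’s seven items. The item-to-subscale assignment is a placeholder, not the published IRI scoring key, and reverse-scored items are omitted for brevity:

```python
# Hypothetical scoring sketch for a four-subscale, 28-item questionnaire
# answered on a 5-point Likert scale (responses coded 0-4, as in the IRI).
# The item groupings below are placeholders, not the actual IRI key;
# the real instrument also reverse-scores some items (omitted here).
SUBSCALES = {
    "perspective_taking": range(0, 7),
    "fantasy": range(7, 14),
    "empathic_concern": range(14, 21),
    "personal_distress": range(21, 28),
}

def score_iri(responses):
    """Sum each subscale's seven items; `responses` is a list of 28 ints."""
    if len(responses) != 28 or any(not 0 <= r <= 4 for r in responses):
        raise ValueError("expected 28 Likert responses coded 0-4")
    return {name: sum(responses[i] for i in items)
            for name, items in SUBSCALES.items()}
```

Each subscale score thus ranges from 0 to 28, with higher values indicating a stronger self-reported tendency on that dimension.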
The BEES is one of the most popular self-report questionnaires for emotional empathy, assessing “one’s vicarious experience of another’s emotional experiences” (Mehrabian and Epstein 1972). It consists of 30 items (“Unhappy movie endings haunt me for hours afterward”). Participants responded to each item on a scale ranging from −3 (very strong disagreement) to +3 (very strong agreement). Higher scores represent higher levels of emotional empathy. The BEES has been used to explore the relationship between empathic personality traits and subjects’ emotional responsiveness in a range of interpersonal situations, such as face recognition (Balconi and Bortolotti 2013), emotional contagion, prosocial behavior (Balconi
and Canavesio 2012), and motor identification with imagined agents (Marzoli et al. 2011).

Data analysis

Separate mixed-effects ANOVAs were conducted on mean response times (RTs) and on the percentage of correct responses in the spatial memory task. Both measures were analyzed as a function of gender (male, female) and condition (primed, unprimed, and control). The effect of perspective displacement was explored with a 2 × 2 repeated-measures ANOVA, with perspective taking condition (primed, unprimed) and angle (45°, 135°) as factors. The use of two angular steps allowed us to test the angular dependency associated with perspective transformations: Larger angles of mental rotation should be accompanied by longer RTs and reduced accuracy (Diwadkar and McNamara 1997). The control condition was not included in this analysis because the perspective did not change between consecutive views. To investigate the relationship between visuospatial perspective taking and empathy, we computed Pearson’s correlations between behavioral performance during the visuospatial memory task and the scores on the IRI and BEES, used to assess the cognitive and emotional (or affective) components of dispositional empathy. The Shapiro–Wilk test indicated that all the empathy subscales were normally distributed (p > 0.05). For these analyses, a Bonferroni correction was used to account for the number of multiple comparisons. In addition, we used a multiple linear regression procedure to determine whether the four IRI subscales and the BEES scores predicted the ability to solve the spatial memory task from both primed and unprimed perspectives (Supplementary Table 2, regression model 1). Inspired by previous studies (Mohr et al. 2010; Thakkar et al.
2009; Thakkar and Park 2010), we also looked for a potential gender effect, adding gender and its interactions with each empathy subscale as further predictors in the regression model (see also Supplementary Table 2, regression model 2). The regression analysis was conducted with a probability of F-to-enter of 0.05 and a tolerance value of 0.0001; the intercept was included in the model. For the correlation and regression analyses, behavioral performance was indexed as the cost (in terms of both increased response times and increased error rates) of solving the task in the most difficult condition (i.e., with a 135° perspective change) relative to the control condition, where no perspective change was required. In particular, we computed four indices: a time cost and a performance cost relative to the unprimed condition (i.e., 135° unprimed condition minus control condition) and a time cost and a performance cost relative to the primed condition (i.e., 135° primed condition minus control condition). Note that, while the indices relative to the unprimed condition reflect the ability to detect object displacements from different viewpoints, only the indices relative to the primed condition reflect the ability to take advantage of the presence of the avatar for solving the task; they can thus be considered a quantitative measure of the ability to imagine the scene from the avatar’s perspective (visuospatial perspective taking).

Fig. 1 Virtual environment and experimental paradigm. a A survey perspective of the virtual environment used in the experiment. The different virtual cameras (in blue) were distributed at 45° intervals along a circle whose center corresponded to the middle of the virtual room. Similarly, the target (plant) positions were distributed every 45° along a smaller concentric circle. The avatar could occupy eight different positions (in increments of 45°) around the center of the room. b–d Examples of stimuli used in the primed (b), unprimed (c), and control (d) conditions. Participants detected spatial displacements of a target object (the plant) across pairs of consecutive views of a familiar virtual room. In each trial, when observing the first view (study phase), participants encoded the position of the target in the room either b after taking the visual perspective of a virtual avatar (primed condition) or c, d in the absence of such an avatar (unprimed and control conditions). In the second picture (test phase), participants decided whether the position of the plant in the room was the same as in the study phase or not. In d, participants did not engage in any spatial transformation, since their perspective was the same across views (control condition). The sketch shows example trials in which the target did not change its position and the amount of perspective taking (b) or perspective change (c) was 45°. e Experimental timeline: Each trial started with a 2-s written instruction, followed by a first view (study phase) of the environment lasting 4 s. After a fixed delay period (1 s), the second picture (test phase) was presented until the subject gave a response. The next trial began after an inter-trial interval of 1 s.
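The four cost indices and their use in the correlation analysis can be sketched as follows. This is a minimal illustration under stated assumptions: the variable names, data layout, and the pure-Python Pearson implementation are ours, not the authors’ analysis code.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def cost_indices(rt, err):
    """Per-subject cost of the hardest (135-degree) conditions relative to
    the control condition (no perspective change).

    rt, err: dicts mapping condition name -> list of per-subject means
    (RT in ms, error rate in %); condition names are assumptions."""
    diff = lambda a, b: [x - y for x, y in zip(a, b)]
    return {
        "time_cost_unprimed": diff(rt["unprimed_135"], rt["control"]),
        "perf_cost_unprimed": diff(err["unprimed_135"], err["control"]),
        "time_cost_primed": diff(rt["primed_135"], rt["control"]),
        "perf_cost_primed": diff(err["primed_135"], err["control"]),
    }
```

Each index would then be correlated (via `pearson_r`) with each empathy score, with the Bonferroni-corrected significance threshold obtained by dividing 0.05 by the number of comparisons performed.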
Results

Self‑reported empathy

The means for the IRI subscales and the BEES are displayed in Table 1. t tests were used to compare scores across males and females. Gender effects were observed for almost all the empathy scales: Females scored higher than males on the Empathic Concern (t146 = 4.53; p