


Research Article

The Value of Exercising Control Over Monetary Gains and Losses

Psychological Science, 2014, Vol. 25(2), 596–604. © The Author(s) 2014. DOI: 10.1177/0956797613514589

Lauren A. Leotti and Mauricio R. Delgado
Rutgers University

Abstract

Using functional MRI, we examined how the affective experience of choice, the means by which individuals exercise control, is modulated by the valence of potential outcomes (gains, losses). When trials involved potential gains, participants reported liking cues predicting a choice opportunity better than cues predicting no choice opportunity—an effect that corresponded with blood-oxygen-level-dependent (BOLD) increases in ventral striatum (VS) activity. Surprisingly, no differences were observed between choice and no-choice cues when participants anticipated potential losses. Individual differences in subjective choice preference in the loss condition, however, corresponded to choice-related BOLD activity in VS. We conducted a second experiment to examine whether monetary losses were perceived differently in the context of simultaneous gains. When losses occurred in the absence of gains, participants showed an increased affective experience of choice—they reported greater liking of choice than no-choice trials, and VS activity was greater for choice than for no-choice cues. Collectively, the findings suggest that the affective experience of choice involves reward-processing circuitry when people anticipate appetitive and aversive outcomes, but the choice experience may be sensitive to context and individual differences.

Keywords: functional MRI, striatum, anticipation, choice, perceived control, reward, cognitive neuroscience, neuroimaging

Received 6/29/13; Revision accepted 10/23/13

Imagine you are driving to work on the highway when traffic suddenly comes to a standstill—you see the nearest exit and consider the choice between sitting in traffic or getting off the highway to take a longer route. Now imagine you are sitting on a bus in that traffic, and you are at the will of the driver. In the first scenario, neither option is necessarily better; yet somehow the situation seems less stressful when a choice is available and you are the one in control. Is there something inherently valuable in exercising control through choice? Belief in one’s ability to exert control over the environment is essential for an individual’s well-being. Converging evidence suggests that perception of control enhances positive emotions and can also buffer negative emotional responses to aversive events (e.g., Leotti, Iyengar, & Ochsner, 2010; Ryan & Deci, 2006; Shapiro, Schwartz, & Astin, 1996). For instance, impaired coping ability and exacerbated stress responses to aversive events are observed in the absence of perceived control (e.g., Bandura, 1992; Maier, 1986), which suggests that perceived control may mitigate the negative emotional response
elicited by anticipating potentially negative outcomes. Because perception of control is integral for adaptive behavior and emotion regulation, it is important to identify the behavioral and brain mechanisms by which perceived control exerts its modulatory effects. Every day, individuals make countless choices aimed to approach positive outcomes and avoid negative outcomes. When a causal action (i.e., a choice) results in a positive consequence, future opportunity to exercise choice becomes desirable and may be experienced as inherently rewarding. Thus, expectancies of control through choice may serve an important role in regulation of affect and motivated behavior. However, it is unknown whether the affective experience of choice and control depends on the valence of the resulting outcome.

Corresponding Author: Mauricio R. Delgado, Rutgers University, Department of Psychology, Smith Hall, Room 340, 101 Warren St., Newark, NJ 07102 E-mail: [email protected]


Recently, we demonstrated that when people anticipate positive outcomes, choice opportunity recruits brain regions involved in affective and motivational processes (Leotti & Delgado, 2011). In the experiments reported here, we examined whether similar neural circuitry is involved when people anticipate a choice opportunity in which the choice results in avoidance of a negative outcome. Previous research has demonstrated increased activity in reward circuitry during the anticipation of reward and the avoidance of negative outcomes (Kim, Shimojo, & O’Doherty, 2006; Knutson, Adams, Fong, & Hommer, 2001), as well as for rewards contingent on behavior (Bjork & Hommer, 2007; O’Doherty et al., 2004; Tricomi, Delgado, & Fiez, 2004). However, the present study is unique because, rather than examining actual choice behavior or anticipation of a positive or negative monetary outcome (as prior studies have done), we investigated the affective experience involved when people merely anticipate the opportunity to exercise control through choice. Our hypothesis was that perceiving control carries motivational significance regardless of whether the choice is followed by a potentially positive or negative outcome. If this is the case, we expected cues that predict choice to produce increased activity in regions involved in reward anticipation, such as ventral striatum (VS; Knutson, Taylor, Kaufman, Peterson, & Glover, 2005). An alternative hypothesis, however, is that choice is undesirable in the context of potential losses, because incorrect choices may lead to feelings of regret, and individuals may prefer to accept the status quo rather than make errors of commission (Samuelson & Zeckhauser, 1988).

In the present study, we tested these hypotheses using a simple choice paradigm (adapted from Leotti & Delgado, 2011) designed to examine the affective experience of anticipating choice in the context of positive and negative outcomes. In our experimental paradigm (Fig. 1), participants could choose between two keys (choice condition) or accept a computer-selected key (no-choice condition). Both conditions, however, led to similar outcomes. Symbolic cues predicted upcoming choice or no-choice trial types and whether key presses would result in a potential monetary gain (gain condition) or monetary loss (loss condition). Participants provided subjective ratings of these symbolic cues, and functional MRI (fMRI) analyses focused on blood-oxygen-level-dependent (BOLD) activity in response to these cues during the anticipation of choice. Experiment 1 had a 2 × 2 design that allowed us to examine the anticipation of choice opportunity (choice vs. no choice) as a function of outcome valence (gain vs. loss). In Experiment 2, we examined choice effects for losses occurring in the absence of potential gains to determine the role of context in the affective experience of choice.

[Figure 1: trial schematics for a choice trial in the gain condition and a no-choice trial in the loss condition, showing the cue (2 s), ISI (2–5 s), response (2 s), outcome (e.g., $50 or –$50; 2 s), and ITI (4–7 s) phases.]

Fig. 1.  Example of a choice trial in the gain condition and a no-choice trial in the loss condition. At the start of each trial, a symbolic cue indicated whether the trial was a choice or no-choice trial, and the cue’s orientation (pointing upward or downward) indicated whether the trial was a gain or loss trial. Following the cue, an interstimulus interval (ISI) was presented for a randomly jittered duration. This was followed by the response phase, in which participants saw two response keys located side by side. On choice trials, they indicated which key they wished to choose; on nochoice trials, participants indicated the location of the key selected by the computer. Immediately after the response phase, the monetary outcome was presented. This was followed by an intertrial interval (ITI), which was presented for a randomly jittered duration.
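To make the trial structure in Figure 1 concrete, here is a minimal Python sketch that assembles one jittered trial timeline. The phase durations come from the figure; the function name, the uniform sampling of the jittered intervals, and the printed example are illustrative assumptions rather than details of the authors' E-Prime implementation.

```python
import random

# Phase durations (s) taken from Figure 1; the jittered intervals are sampled
# uniformly here, which is an assumption -- the task only specifies the ranges.
CUE_S, RESPONSE_S, OUTCOME_S = 2.0, 2.0, 2.0
ISI_RANGE_S, ITI_RANGE_S = (2.0, 5.0), (4.0, 7.0)

def build_trial(condition, valence):
    """Return an ordered list of (phase, duration in s) for one trial."""
    return [
        (f"cue:{condition}/{valence}", CUE_S),
        ("isi", random.uniform(*ISI_RANGE_S)),
        ("response", RESPONSE_S),
        ("outcome", OUTCOME_S),
        ("iti", random.uniform(*ITI_RANGE_S)),
    ]

if __name__ == "__main__":
    for phase, dur in build_trial("choice", "gain"):
        print(f"{phase:<18} {dur:4.1f} s")
```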




Experiment 1

Method

Participants.  Data were obtained from 24 healthy right-handed individuals (12 males, 12 females; median age = 21 years) recruited from the Rutgers University–Newark campus. All gave informed consent according to the procedures of the institutional review boards at Rutgers and the University of Medicine and Dentistry of New Jersey (see Participants in the Supplemental Material available online for further details).

Procedure.  Participants were presented with gain and loss trials, which were separated into blocks to minimize the cost of switching between trial types. At the start of a new block, participants saw “Gain Next” or “Loss Next” on the screen (presented for 2 s). On each trial (Fig. 1), a cue announced the type of condition (choice or no choice). Then two colored keys (in the shape of rectangles) appeared. In the choice condition, participants could freely choose between the two keys, and in the no-choice condition, participants were forced to accept a computer-selected key. On choice trials in the gain blocks, participants were instructed to select the key (a choice between blue and yellow keys) they thought would win them the most money, and key presses could result in potential monetary gains ($0, $50, or $100 experimental dollars). On choice trials in the loss blocks, participants were instructed to select the key (a choice between cyan and magenta keys) they thought would lose them the least money, and key presses could result in potential monetary losses (–$0, –$50, –$100). Within the gain and loss conditions, rewards and penalties were assigned to the two colored keys equally, regardless of the specific key selected (see the Supplemental Material for more details). Participants were not informed of reward probabilities in advance. Participants were told that the overall goal was to earn as many experimental dollars as possible, which would be translated into a monetary bonus at the end of the experiment. Because both keys (within a gain or loss condition) had the same average probability of being associated with reward or punishment, all participants earned the same range of experimental dollars, which was translated into real monetary compensation of $50 at the end of the experiment. Data were collected and stimuli were presented using E-Prime software (Version 2.0; Schneider, Eschman, & Zuccolotto, 2001).

There were four types of symbolic cues presented. Each cue marked the beginning of a new trial and indicated which one of four trial types would occur: (a) choice trials that could result in a gain (e.g., $50 or $100) or nongain ($0), (b) choice trials that could result in a loss (e.g., –$50 or –$100) or a nonloss (–$0), (c) no-choice trials that could result in a gain or nongain, and (d) no-choice trials that could result in a loss or nonloss. Choice trials were differentiated from no-choice trials by cue shape (triangle vs. horseshoe, respectively), and the gain and loss conditions were differentiated by the orientation of the shape (pointing upward or downward, respectively). Associations between cues and trial types were learned explicitly prior to the scanning session. Each of the four trial types occurred 32 times throughout the task, and trial order was pseudorandomized within four valence blocks (four choice and four no-choice trials per block) in each of four separate fMRI runs. The order of the valence blocks varied within each functional run, and the order of the runs varied across participants. Immediately following the scanning session, participants were asked to rate how much they liked or disliked each of the trial types on a scale from 1 (disliked a lot) to 5 (liked a lot). A rating of 3 indicated that the trial was neither liked nor disliked.

fMRI data acquisition and analyses.  A 3T Siemens Allegra head-only scanner and a Siemens standard head coil were used for data acquisition at Rutgers’s University Heights Center for Advanced Imaging. Imaging data were analyzed using BrainVoyager software (Version 2.3; Brain Innovation, Maastricht, The Netherlands). A random effects analysis was performed on the fMRI data using a general linear model; this analysis included the four cue conditions, as well as the response and outcome phase and nuisance variables (see the Supplemental Material for details). To explore our a priori hypotheses, we first conducted a region-of-interest (ROI) analysis, with ROIs in the midbrain and VS defined by coordinates reported in the literature as selective to the anticipation of monetary reward (Knutson et al., 2005). We used these areas previously to investigate the affective value of anticipating choice (Leotti & Delgado, 2011). Parameter estimates (βs) extracted from these regions were compared across cue types using paired t tests, and significant statistical tests are reported at a corrected threshold of p < .05. To complement the use of functionally defined ROIs, we confirmed and expanded these results with a whole-brain analysis examining differences in anticipatory BOLD activity associated with the four cue types using a 2 (choice opportunity: choice vs. no choice) × 2 (outcome valence: gain vs. loss) repeated measures analysis of variance (ANOVA; p < .005, k = 6 contiguous voxels for a corrected threshold of p < .05). See fMRI Data Acquisition and Analyses in the Supplemental Material for additional details.
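As an illustration of the ROI statistics described in this section, the sketch below applies paired t tests to hypothetical per-participant parameter estimates for the four cue types and tests the 2 × 2 interaction as a difference-of-differences contrast. The arrays are simulated placeholders, not the study's data, and this is not the authors' BrainVoyager pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 24  # participants in Experiment 1

# Hypothetical z-scored betas from a ventral striatum ROI, one value per
# participant for each of the four cue types (placeholder data).
betas = {
    ("choice", "gain"): rng.normal(0.08, 0.1, n),
    ("nochoice", "gain"): rng.normal(0.02, 0.1, n),
    ("choice", "loss"): rng.normal(0.03, 0.1, n),
    ("nochoice", "loss"): rng.normal(0.03, 0.1, n),
}

# Paired t test: choice vs. no-choice cues within the gain condition.
t, p = stats.ttest_rel(betas[("choice", "gain")], betas[("nochoice", "gain")])
print(f"gain condition, choice vs. no choice: t({n - 1}) = {t:.2f}, p = {p:.3f}")

# In a 2 x 2 within-subject design, the interaction can be tested as a paired
# t test on the difference-of-differences contrast (choice minus no choice,
# compared between the gain and loss conditions); F equals t squared.
gain_diff = betas[("choice", "gain")] - betas[("nochoice", "gain")]
loss_diff = betas[("choice", "loss")] - betas[("nochoice", "loss")]
t_int, p_int = stats.ttest_rel(gain_diff, loss_diff)
print(f"choice x valence interaction: t({n - 1}) = {t_int:.2f}, p = {p_int:.3f}")
```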

Results

Behavioral results.  Participants reported liking cues predicting choice more than cues predicting no choice when they were anticipating monetary gains. However, when they were anticipating monetary losses, the difference in cue ratings was not significant (see Fig. 2a).



[Figure 2: (a) behavioral results, cue ratings (scale 1–5) for choice and no-choice cues in the gain and loss conditions; (b) parameter estimates (β; z score) in the left ventral striatum ROI (x = –9, y = 10, z = 0) for the gain and loss conditions.]

Fig. 2.  Results from the gain and loss conditions of Experiment 1: mean rating of choice cues (a) and mean z-scored parameter estimate (b) as a function of whether participants anticipated a potential monetary gain or loss and whether they could choose or not choose a particular response key. Error bars show standard errors of the mean. The brain image shows the location of the left ventral striatum, which was the a priori region of interest (ROI) sensitive to monetary gains. Talairach coordinates are shown.

The full factorial analysis revealed a significant main effect of choice, F(1, 23) = 7.78, p = .01. There was also a significant main effect of valence, such that ratings were positive (above a score of 3) for the gain condition, and ratings were negative (below 3) for the loss condition, F(1, 23) = 38.4, p < .0005. In addition, the interaction term between choice opportunity and outcome valence was marginally significant, F(1, 23) = 4.136, p = .054, which illustrates the larger preference for choice cues in the gain condition than in the loss condition. Participants also rated the cues on several other dimensions and reported preferences for specific choice options (see Results and Table S1 in the Supplemental Material).

Reaction times (RTs) were collected during the response phase of the choice task. There was a significant main effect of choice opportunity, F(1, 23) = 28.025, p < .001, such that responses were slower when participants made a decision between two keys (choice condition) than when they just responded to the location of a key (no-choice condition). There was a main effect of outcome valence, F(1, 23) = 26.6, p < .001, which was evidenced by slower overall responses when participants made decisions leading to potential monetary losses than when they made decisions leading to potential monetary gains. There was also a significant interaction between choice opportunity and outcome valence, F(1, 23) = 5.01, p = .035, such that there was a greater difference in response time between the choice and no-choice conditions in the loss condition than in the gain condition; this suggests that making a choice was more difficult when participants anticipated potential losses instead of potential gains, perhaps because of anticipated regret. RT differences between choice conditions were not correlated with differences in BOLD activity.

Neuroimaging results.  Within the gain condition, parameter estimates associated with choice cues were significantly larger than those for no-choice cues in both left VS, t(23) = 3.9, p < .05 (see Fig. 2b), and right VS, t(23) = 4.78, p < .05, as well as midbrain, t(23) = 2.7, p < .05. This result replicates previous findings of choice opportunity in the context of potential rewards (Leotti & Delgado, 2011). However, within the loss condition, there were no significant differences between cue types in any of the ROIs. Results of the full factorial analysis are provided in the Supplemental Material.

We also conducted a Choice Opportunity × Outcome Valence ANOVA in the whole brain and observed significant main effects of choice opportunity. Activity was greater for choice cues than for no-choice cues in left VS, midbrain, dorsal anterior cingulate cortex (dACC), and supplementary motor area. Activity was greater for no-choice cues than for choice cues in right ventrolateral prefrontal cortex (PFC), bilateral inferior parietal lobule, and right posterior middle temporal gyrus. Main effects of outcome valence were found only in the precuneus. We observed significant Choice Opportunity × Outcome Valence interactions in dorsal striatum and VS, midbrain, dACC, ventromedial PFC, and bilateral hippocampus. For all interactions, activity was greater for choice than for no choice in the gain condition but greater for no choice than for choice in the loss condition, though given the individual differences in choice preference (see the following analysis, as well as the Supplemental Material), it is difficult to interpret these interactions. Table S2 in the Supplemental Material lists all main effects and interactions.

Contrary to our primary hypothesis that choice would be rewarding independent of outcome valence, we observed no significant effect of choice in the loss condition. However, given that individuals vary in their sensitivity to losses in the context of simultaneous gains, we explored whether individual differences in the perceived value of choice in the loss condition explained why we did not observe main effects of choice opportunity. We looked at posttask cue ratings to establish whether individuals liked cues predicting choice better than those predicting no choice. In the gain condition, most of the participants reported an explicit preference for choice cues. In contrast, in the loss condition, only half of the sample (n = 12) reported liking choice cues better than no-choice cues, whereas the remaining participants reported a preference for the no-choice cues. Participants were categorized into two groups based on their preference for choice in the loss condition to enable us to examine individual differences in valuation of choice (see Fig. 3a). Individuals who preferred choice (n = 12) reported a preference for choice cues over no-choice cues (choice rating – no-choice rating: M = 1.25), and individuals who preferred no choice (n = 12) reported no preference or an explicit preference for no-choice cues (M = −0.92). A Choice Opportunity × Group ANOVA demonstrated no significant main effect of choice, as expected, but a significant interaction, F(1, 22) = 55.5, p < .001.

Because there was considerable individual variation in reported preference for choice in the loss condition, we compared BOLD activity for cues predicting potential losses for the groups who preferred choice and preferred no choice. Activity in VS was consistent with reported preference for cue types, such that participants who preferred choice demonstrated greater activity in VS when anticipating choice than when anticipating no choice, and participants who preferred no choice demonstrated greater activity in VS when anticipating no choice than when anticipating choice (see Fig. 3b). This Choice Opportunity × Group interaction was significant in left VS, F(1, 22) = 7.1, p = .014, and marginally significant in right VS, F(1, 22) = 3.7, p = .067, but was not significant in the midbrain. Notably, there were no significant interactions for cue ratings (p = .4) or for any of the ROIs in the gain condition (p > .4), which suggests that the interaction was specific to the loss condition (see Results in the Supplemental Material for further analysis).

[Figure 3: (a) behavioral results, cue ratings for choice and no-choice cues in the loss condition for participants who preferred choice versus those who preferred no choice; (b) Group × Cue interactions in left ventral striatum, parameter estimates (β; z score).]
Fig. 3.  Results from the loss condition in Experiment 1: mean rating of choice cues (a) and mean z-scored parameter estimate (b) as a function of whether participants preferred choice or preferred no choice and whether they could choose or not choose a particular response key. Error bars show standard errors of the mean.
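The individual-differences analysis reported above can be pictured with the following sketch: participants are split by the sign of their loss-condition cue-rating difference, and the Group × Cue interaction on VS parameter estimates is tested as an independent-samples t test on within-participant difference scores. All variable names and values are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 24

# Hypothetical loss-condition data: post-task cue ratings (1-5 scale) and
# z-scored VS betas for choice and no-choice cues (placeholders).
rating_choice = rng.integers(1, 6, n).astype(float)
rating_nochoice = rng.integers(1, 6, n).astype(float)
beta_choice = rng.normal(0.0, 0.1, n)
beta_nochoice = rng.normal(0.0, 0.1, n)

# Group participants by whether they rated choice cues above no-choice cues.
prefers_choice = (rating_choice - rating_nochoice) > 0

# Within-participant difference in VS activity (choice minus no choice).
beta_diff = beta_choice - beta_nochoice

# With one between-subjects and one within-subject factor, the Group x Cue
# interaction reduces to an independent-samples t test on the difference scores.
t, p = stats.ttest_ind(beta_diff[prefers_choice], beta_diff[~prefers_choice])
print(f"Group x Cue interaction: t = {t:.2f}, p = {p:.3f} "
      f"({prefers_choice.sum()} prefer choice, {(~prefers_choice).sum()} prefer no choice)")
```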

Experiment 2

Experiment 1 replicated previous findings regarding the anticipation of choice opportunity in the context of potential monetary gains (Leotti & Delgado, 2011). However, results were more variable when participants anticipated choice in the context of potential monetary losses. One explanation for this variability in brain and behavioral responses in choice valuation under aversive conditions (i.e., loss) lies in individual differences in choice preference, as suggested by the results of Experiment 1. A contributing factor could be the context effect inherent in our experimental design, in which losses co-occurred with gains, which may have magnified the individual differences observed in Experiment 1. Indeed, the impact of context effects (e.g., endowment effects, shifts in reference point) on loss aversion and decision making has been well established (De Martino, Kumaran, Holt, & Dolan, 2009; De Martino, Kumaran, Seymour, & Dolan, 2006; Kahneman & Tversky, 2000). Experiencing losses concurrently with gains may have made the anticipation of losses more salient for some participants. We therefore hypothesized that if losses were experienced in the absence of monetary gains, the subjective reports of preference and the brain correlates of choice observed in Experiment 1 would be more consistent. To address this hypothesis, we conducted a follow-up experiment in which choice anticipation was examined in the context of monetary losses only.

Method

Participants.  Data were obtained from 14 right-handed individuals (6 males, 8 females; median age = 21 years), who gave informed consent to participate in this experiment (see the Supplemental Material for further details).

Procedure.  The task used in Experiment 2 was identical to that in Experiment 1, except that the gain trial blocks were removed. Thus, the task consisted of 32 choice trials and 32 no-choice trials, all of which could lead to potential monetary losses (–$0, –$50, –$100). Participants were first endowed with $5,000 in play money and instructed to keep as much money as possible. As in Experiment 1, all participants ended up losing the same range of experimental dollars, which resulted in a final actual payment of $50 (see the Supplemental Material for further details). After the scanning session, participants rated how much they liked or disliked the cues on a scale from 1 (disliked a lot) to 7 (liked a lot); a rating of 4 indicated that the trial was neither liked nor disliked.

fMRI data acquisition and analyses.  Data were acquired using a Siemens 3T Magnetom Trio whole-body scanner at Rutgers University Brain Imaging Center and analyzed using BrainVoyager software (Version 2.3). Choice and no-choice cues were compared in the previously defined ROIs and across the whole brain (see the Supplemental Material for further details).
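A minimal sketch of the Experiment 2 payoff structure follows, under the assumption that the three loss outcomes were drawn with equal probability on every trial; the paper states only that both keys carried the same average outcome probabilities, so the uniform draw and the trial loop are illustrative.

```python
import random

random.seed(0)

ENDOWMENT = 5000           # play-money endowment described in the Method
OUTCOMES = (0, -50, -100)  # possible outcomes on every Experiment 2 trial

balance = ENDOWMENT
for _ in range(64):  # 32 choice + 32 no-choice trials
    # Outcomes were decoupled from the specific key pressed; a uniform draw is
    # an assumption standing in for the task's actual (unreported) schedule.
    balance += random.choice(OUTCOMES)

print(f"final play-money balance: ${balance}")
```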

Results

Behavioral results.  Participants reported liking cues predicting an opportunity for choice (M = 5, SD = 1.2) significantly more than cues predicting no opportunity for choice (M = 3, SD = 1.5), t(13) = 4.1, p = .001 (Fig. 4a). This finding is in contrast to the results of Experiment 1, which suggests that choice opportunity was valued differently in the context of negative outcomes alone (see Results and Table S1 in the Supplemental Material for a comparison of the two experiments).

Neuroimaging results.  Linear contrasts of choice effects were analyzed within the a priori ROIs defined in Experiment 1 (Fig. 4b). Consistent with findings from our previous experiment (Leotti & Delgado, 2011), from the gain condition in Experiment 1, and from the loss condition for the group who preferred choice in Experiment 1, parameter estimates associated with choice cues in Experiment 2 were significantly larger than those for no-choice cues in left VS, t(13) = 3.5, p = .004, and right VS, t(13) = 2.8, p = .014, but not in the midbrain. In the whole-brain analysis, a linear contrast of BOLD activity for choice cues versus no-choice cues revealed increased choice-related activity in bilateral VS and anterior insula (Fig. 4c; see Results and Table S3 in the Supplemental Material).

[Figure 4: (a) behavioral results, cue ratings (scale 1–7) for choice and no-choice cues; (b) parameter estimates (β; z score) in the left ventral striatum ROI; (c) whole-brain choice > no-choice contrast, t(13), color scale 3.37–8.0.]
Fig. 4.  Results from Experiment 2: mean rating of choice cues (a) and mean z-scored parameter estimate (b) as a function of whether participants could choose or not choose a particular response key. Error bars show standard errors of the mean. The image from the whole-brain analysis (c) shows blood-oxygen-level-dependent activity in ventral striatum in response to the choice cues > no-choice cues contrast. R = right, L = left, ROI = region of interest.
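For readers unfamiliar with the whole-brain procedure, the sketch below shows the general logic of a voxelwise contrast thresholded with a cluster-extent cutoff (p < .005, k = 6 contiguous voxels, as reported for Experiment 1). The random data and the 6-connectivity labeling are stand-ins and assumptions; the published analyses were run in BrainVoyager, not with this code.

```python
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(2)
n_subjects = 14

# Placeholder stat map: per-voxel data for a choice > no-choice contrast
# (random values standing in for real per-subject contrast images).
contrast = rng.normal(0.0, 1.0, size=(20, 20, 20, n_subjects))
t_map, p_map = stats.ttest_1samp(contrast, 0.0, axis=-1)

# Voxelwise threshold (p < .005, positive t) combined with a cluster-extent
# cutoff of k = 6 contiguous voxels.
suprathreshold = (p_map < 0.005) & (t_map > 0)
labels, n_clusters = ndimage.label(suprathreshold)
sizes = ndimage.sum(suprathreshold, labels, index=range(1, n_clusters + 1))
surviving = [i + 1 for i, size in enumerate(sizes) if size >= 6]
print(f"{len(surviving)} cluster(s) survive p < .005, k >= 6")
```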

Discussion

Consistent with our previous study (Leotti & Delgado, 2011), results of the present study showed that participants in Experiment 1 reported greater preference for choice opportunity and demonstrated increased activity in VS and midbrain when anticipating choices leading to potential monetary gains than when anticipating choices leading to potential monetary losses. The recruitment of neural circuitry involved in reward-related processes, such as VS (Delgado, 2007; Montague & Berns, 2002; O’Doherty, 2004; Rangel, Camerer, & Montague, 2008; Robbins & Everitt, 1996), suggests that in the context of positive outcomes, choice opportunity may be inherently valuable. However, in the context of negative outcomes, specifically monetary losses, there was greater variability in the behavioral and brain correlates of the value of choice. Individuals who reported liking choice in the context of potential losses demonstrated increased activity in VS in response to choice cues relative to no-choice cues, whereas those who reported disliking choice in the loss condition demonstrated the opposite pattern. The findings from Experiment 2 suggest that this variability is reduced when potential losses occur in the absence of potential gains. Specifically, when there was only a loss condition, participants more consistently reported a behavioral preference for choice over no choice and
demonstrated a corresponding increase in striatal activity when anticipating choice opportunity. The discrepancy between the two experiments when examining choice in the presence of loss suggests the important role of context in the affective experience of choice and control, and it highlights individual differences in the perceived value of choice. A major difference between the two experiments presented here was the interpretation of the potentially negative outcomes. In Experiment 1, avoiding a monetary loss may have been perceived as a negative outcome relative to receipt of a monetary gain. As a result, the value of choice was more variable across participants, depending on possible individual differences in loss aversion and perceived control in the face of failure, which people tend to attribute to external sources beyond their control (Brewin & Shapiro, 1984; Rotter, 1966). In contrast, in Experiment 2, avoiding a monetary loss may have been perceived as a positive outcome (Kim et al.,
2006), thus enhancing perceived control (Thompson, Armstrong, & Thomas, 1998) and producing greater reward-related activity. This may explain the differences in choice-related VS activity across the two experiments, consistent with previous findings demonstrating that VS is particularly susceptible to context effects from a shift in reference points (De Martino et al., 2009). A potential consequence of the context effect present in our design is that it may also have amplified individual differences, in turn contributing to the variability in behavioral and brain responses observed in the loss condition. The act of choosing may be more stressful to some individuals than to others when there is not sufficient information to make an informed selection (Paterson & Neufeld, 1995); this was the case in the current experiments, in which there was no real difference between options, and outcomes occurred randomly (see Supplementary Discussion in the Supplemental Material). In fact, increased activity in lateral PFC has been
associated with a tendency to avoid cognitive demand in decision making (McGuire & Botvinick, 2010). This may explain why, in the context of potential losses in Experiment 1, we observed greater activity in the ventrolateral PFC across participants for the no-choice condition than for the choice condition. Furthermore, individuals may vary in their tendency to accept the status quo, as opposed to overtly acting in a way that could lead to an error and potential regret (Baron & Ritov, 1994; Samuelson & Zeckhauser, 1988). Additionally, there may be individual differences in the perceived threat of monetary losses (Sokol-Hessner et al., 2009) or loss aversion (Tom, Fox, Trepel, & Poldrack, 2007) that could influence neural activity in VS and the subjective value of choice opportunity.

Collectively, these findings suggest that the opportunity to exercise control through choice is desirable for choices that lead to both obtaining rewards and avoiding punishment, and such choice opportunities recruit common neural circuitry involved in reward processing. Though increased activation in VS is commonly reported in the reward literature, it has also been linked to regulation of negative emotion, including avoidance learning (Delgado, Jou, Ledoux, & Phelps, 2009), reappraisal of negative affect (Schiller & Delgado, 2010; Wager, Davidson, Hughes, Lindquist, & Ochsner, 2008), and reevaluation of aversive events (Sharot, Shiner, & Dolan, 2010). In the current experiments, choice-related activity in VS following the cue may suggest that reward processes are activated when people anticipate an opportunity to choose.

The recruitment of VS has been demonstrated in many other affective and motivational processes beyond reward. For example, other researchers have found that VS responds to the anticipation of action when execution and withholding of responses are contrasted (Guitart-Masip et al., 2011). However, although the choice condition may be seen as more engaging than the no-choice condition in the current study, anticipation of action is unlikely to account for our findings because both conditions involved active responding following the cue (see Supplementary Discussion in the Supplemental Material). The current study is also distinguishable from other studies investigating action-outcome contingency (e.g., O’Doherty et al., 2004; Tricomi et al., 2004). Previous studies focused on outcomes or on action-contingent cues predicting such outcomes, whereas the present study focused on the cue that predicted having control (irrespective of outcome), that is, the affective experience when people anticipate an opportunity to choose.

Although the expected value associated with choice and no-choice trials was equated in the current study, it is nonetheless possible that participants perceived a difference. More specifically, participants may have formed preferences about the value of the two colored keys (i.e., which option delivered the best chance for a positive outcome) and thus believed that having a choice afforded the greatest probability that the best available option would be selected (see the Supplemental Material for further discussion). The present study characterizes the affective experience of choice, and future research will be necessary to understand what is driving the perceived value of choice.

In conclusion, by exploiting choice opportunity as the simple yet fundamental basis of perceiving control, the current study provides the basis for understanding how the perception of control influences people’s ability to regulate emotional responses to appetitive and aversive stimuli. Across two experiments, we observed involvement of striatum in processing the affective experience of choice, which reveals that expectations of control recruit neural circuitry important in affective and motivational processing. Furthermore, our findings demonstrate that context effects previously observed in VS (De Martino et al., 2009) also extend to the mere opportunity for choice. Because deficits in perceived control are believed to be at the core of many psychiatric diseases (Beck, Emery, & Greenberg, 1985; Maier & Seligman, 1976; Shapiro et al., 1996), fully characterizing the affective experience of choice will be critical for learning about both the development and effective therapeutic treatment of many psychiatric disorders.

Author Contributions

L. A. Leotti and M. R. Delgado conceived and designed the study. Testing and data collection were performed by L. A. Leotti. L. A. Leotti and M. R. Delgado wrote the manuscript and approved the final version of the manuscript for submission.

Acknowledgments

We thank Vicki Lee and Meg Speer for assistance with data collection and figures.

Declaration of Conflicting Interests

The authors declared that they had no conflicts of interest with respect to their authorship or the publication of this article.

Funding

This research was supported by funding from the National Institute on Drug Abuse to M. R. Delgado (DA027764) and to L. A. Leotti (F32-DA027308).

Supplemental Material

Additional supporting information may be found at http://pss.sagepub.com/content/by/supplemental-data

References

Bandura, A. (1992). Self-efficacy mechanism in psychobiologic functioning. In R. Schwarzer (Ed.), Self-efficacy: Thought control of action (pp. 355–394). London, England: Hemisphere.



Baron, J., & Ritov, I. (1994). Reference points and omission bias. Organizational Behavior and Human Decision Processes, 59, 475–498.
Beck, A. T., Emery, G., & Greenberg, R. L. (1985). Anxiety disorders and phobias. New York, NY: Basic Books.
Bjork, J. M., & Hommer, D. W. (2007). Anticipating instrumentally obtained and passively-received rewards: A factorial fMRI investigation. Behavioural Brain Research, 177, 165–170.
Brewin, C. R., & Shapiro, D. A. (1984). Beyond locus of control: Attribution of responsibility for positive and negative outcomes. British Journal of Psychology, 75, 43–49.
Delgado, M. R. (2007). Reward-related responses in the human striatum. Annals of the New York Academy of Sciences, 1104, 70–88.
Delgado, M. R., Jou, R. L., Ledoux, J. E., & Phelps, E. A. (2009). Avoiding negative outcomes: Tracking the mechanisms of avoidance learning in humans during fear conditioning. Frontiers in Behavioral Neuroscience, 3, Article 33. Retrieved from http://www.frontiersin.org/journal/10.3389/neuro.08.033.2009/abstract
De Martino, B., Kumaran, D., Holt, B., & Dolan, R. J. (2009). The neurobiology of reference-dependent value computation. The Journal of Neuroscience, 29, 3833–3842.
De Martino, B., Kumaran, D., Seymour, B., & Dolan, R. J. (2006). Frames, biases, and rational decision-making in the human brain. Science, 313, 684–687.
Guitart-Masip, M., Fuentemilla, L., Bach, D. R., Huys, Q. J., Dayan, P., Dolan, R. J., & Düzel, E. (2011). Action dominates valence in anticipatory representations in the human striatum and dopaminergic midbrain. The Journal of Neuroscience, 31, 7867–7875.
Kahneman, D., & Tversky, A. (2000). Choices, values, and frames. Cambridge, England: Cambridge University Press.
Kim, H., Shimojo, S., & O’Doherty, J. P. (2006). Is avoiding an aversive outcome rewarding? Neural substrates of avoidance learning in the human brain. PLoS Biology, 4(8), e233. Retrieved from http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0040233
Knutson, B., Adams, C. M., Fong, G. W., & Hommer, D. (2001). Anticipation of increasing monetary reward selectively recruits nucleus accumbens. The Journal of Neuroscience, 21(16), Article RC159. Retrieved from http://www.jneurosci.org/content/21/16/RC159.full.pdf
Knutson, B., Taylor, J., Kaufman, M., Peterson, R., & Glover, G. (2005). Distributed neural representation of expected value. The Journal of Neuroscience, 25, 4806–4812.
Leotti, L. A., & Delgado, M. R. (2011). The inherent reward of choice. Psychological Science, 22, 1310–1318.
Leotti, L. A., Iyengar, S. S., & Ochsner, K. N. (2010). Born to choose: The origins and value of the need for control. Trends in Cognitive Sciences, 14, 457–463.
Maier, S. F. (1986). Stressor controllability and stress-induced analgesia. In D. D. Kelly (Ed.), Stress-induced analgesia: Annals of the New York Academy of Sciences (Vol. 467, pp. 55–72). New York, NY: New York Academy of Sciences.
Maier, S. F., & Seligman, M. E. (1976). Learned helplessness: Theory and evidence. Journal of Experimental Psychology: General, 105, 3–46.

McGuire, J. T., & Botvinick, M. M. (2010). Prefrontal cortex, cognitive control, and the registration of decision costs. Proceedings of the National Academy of Sciences, USA, 107, 7922–7926.
Montague, P., & Berns, G. (2002). Neural economics and the biological substrates of valuation. Neuron, 36, 265–284.
O’Doherty, J. P. (2004). Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769–776.
O’Doherty, J. P., Dayan, P., Schultz, J., Deichmann, R., Friston, K., & Dolan, R. J. (2004). Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science, 304, 452–454.
Paterson, R. J., & Neufeld, R. W. (1995). What are my options? Influences of choice availability on stress and the perception of control. Journal of Research in Personality, 29, 145–167.
Rangel, A., Camerer, C., & Montague, P. R. (2008). A framework for studying the neurobiology of value-based decision making. Nature Reviews Neuroscience, 9, 545–556.
Robbins, T. W., & Everitt, B. J. (1996). Neurobehavioural mechanisms of reward and motivation. Current Opinion in Neurobiology, 6, 228–236.
Rotter, J. (1966). Generalised expectancies for internal versus external control of reinforcement. Psychological Monographs, 80(1), 1–28.
Ryan, R. M., & Deci, E. L. (2006). Self-regulation and the problem of human autonomy: Does psychology need choice, self-determination, and will? Journal of Personality, 74, 1557–1585.
Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1, 7–59.
Schiller, D., & Delgado, M. R. (2010). Overlapping neural systems mediating extinction, reversal and regulation of fear. Trends in Cognitive Sciences, 14, 268–276.
Schneider, W., Eschman, A., & Zuccolotto, A. (2001). E-Prime user’s guide. Pittsburgh, PA: Psychology Software Tools.
Shapiro, D. H., Jr., Schwartz, C. E., & Astin, J. A. (1996). Controlling ourselves, controlling our world: Psychology’s role in understanding positive and negative consequences of seeking and gaining control. American Psychologist, 51, 1213–1230.
Sharot, T., Shiner, T., & Dolan, R. J. (2010). Experience and choice shape expected aversive outcomes. The Journal of Neuroscience, 30, 9209–9215.
Sokol-Hessner, P., Hsu, M., Curley, N. G., Delgado, M. R., Camerer, C. F., & Phelps, E. A. (2009). Thinking like a trader selectively reduces individuals’ loss aversion. Proceedings of the National Academy of Sciences, USA, 106, 5035–5040.
Thompson, S. C., Armstrong, W., & Thomas, C. (1998). Illusions of control, underestimations, and accuracy: A control heuristic explanation. Psychological Bulletin, 123, 143–161.
Tom, S. M., Fox, C. R., Trepel, C., & Poldrack, R. A. (2007). The neural basis of loss aversion in decision-making under risk. Science, 315, 515–518.
Tricomi, E. M., Delgado, M. R., & Fiez, J. A. (2004). Modulation of caudate activity by action contingency. Neuron, 41, 281–292.
Wager, T. D., Davidson, M. L., Hughes, B. L., Lindquist, M. A., & Ochsner, K. N. (2008). Prefrontal-subcortical pathways mediating successful emotion regulation. Neuron, 59, 1037–1050.

