European Journal of Neuroscience, Vol. 39, pp. 1370–1383, 2014

doi:10.1111/ejn.12511

COGNITIVE NEUROSCIENCE

Decision and action planning signals in human posterior parietal cortex during delayed perceptual choices

Annalisa Tosoni,1,2 Maurizio Corbetta,3,4,5 Cinzia Calluso,1,2 Giorgia Committeri,1,2 Giovanni Pezzulo,6 G. L. Romani1,2 and Gaspare Galati7,8

1 Department of Neuroscience and Imaging, G. D’Annunzio University, Chieti, Italy
2 Institute for Advanced Biomedical Technologies, G. D’Annunzio Foundation, Chieti, Italy
3 Department of Neurology, Washington University School of Medicine, St Louis, MO, USA
4 Department of Radiology, Washington University School of Medicine, St Louis, MO, USA
5 Department of Anatomy and Neurobiology, Washington University School of Medicine, St Louis, MO, USA
6 Institute of Cognitive Sciences and Technologies, CNR, Roma, Italy
7 Department of Psychology, Sapienza University of Rome, Roma, Italy
8 Laboratory of Neuropsychology, Santa Lucia Foundation, Roma, Italy

Keywords: action planning, decision-making, delayed choice, evidence accumulation, stimulus unmasking

Abstract

During simple perceptual decisions, sensorimotor neurons in monkey fronto-parietal cortex represent a decision variable that guides the transformation of sensory evidence into a motor response, supporting the view that mechanisms for decision-making are closely embedded within sensorimotor structures. Within these structures, however, decision signals can be dissociated from motor signals, thus indicating that sensorimotor neurons can play multiple and independent roles in decision-making and action selection/planning. Here we used functional magnetic resonance imaging to examine whether response-selective human brain areas encode signals for decision-making or action planning during a task requiring an arbitrary association between face pictures (male vs. female) and specific actions (saccadic eye vs. hand pointing movements). The stimuli were gradually unmasked to stretch the time necessary for decision, thus maximising the temporal separation between decision and action planning. Decision-related signals were measured in parietal and motor/premotor regions showing a preference for the planning/execution of saccadic or pointing movements. In a parietal reach region, decision-related signals were specific for the stimulus category associated with its preferred pointing response. By contrast, a saccade-selective posterior intraparietal sulcus region carried decision-related signals even when the task required a pointing response. Consistent signals were observed in the motor/premotor cortex. Whole-brain analyses indicated that, in our task, the most reliable decision signals were found in the same neural regions involved in response selection. However, decision- and action-related signals within these regions can be dissociated. Differences between the parietal reach region and posterior intraparietal sulcus plausibly depend on their functional specificity rather than on the task structure.

Introduction

Electrophysiological recordings of neuronal activity in non-human primates have significantly advanced our understanding of the mechanisms underlying simple forms of perceptual decision-making, i.e. the process of converting an ambiguous sensory input into a categorical outcome (Gold & Shadlen, 2007). These studies have supported an ‘intentional’ or ‘action-based’ framework of perceptual decision-making (Shadlen et al., 2008; Cisek & Kalaska, 2010). According to this view, the process of converting sensory evidence into a choice would be mediated by neurons specialised in motor planning. For example, in tasks in which monkeys are trained to

Correspondence: Annalisa Tosoni, 1Department of Neuroscience and Imaging, as above. E-mail: [email protected]

Received 23 July 2013, revised 20 December 2013, accepted 10 January 2014

discriminate the motion direction of a random dot pattern and report their decision by making a saccade toward one of two visual targets, neurons in several oculo-motor areas, including the lateral intraparietal area (LIP), show signals consistent with the accumulation of sensory evidence from areas involved in sensory analysis of the stimulus (e.g. middle temporal), which may represent a decision variable encoding evidence but also other information such as prior probability. When a threshold level of activity is crossed, circuits that activate an action (e.g. a saccadic response) or even a more arbitrary response rule are turned on. The generality of the intentional/action-based framework, however, has recently been challenged by new findings showing that signals reflecting evidence accumulation can be dissociated from signals reflecting action planning even within the same oculo-motor neurons of the LIP (Bennur & Gold, 2011). In particular, when decisions are not linked to a specific action, as part of a task set,


representations for perceptual decisions appear to be independent of representations for action selection. Furthermore, studies on categorisation in the monkey brain indicate that LIP neurons can represent abstract categorical outcomes in a way that is largely independent of the motor response (Freedman & Assad, 2011). These findings indicate that cortical areas such as the LIP do not invariably encode decisions in terms of the motor plans used to report them, but can also encode the perceptual category of the stimulus, independent of the associated action, or even a categorical decision variable. In human observers, mechanisms of perceptual decision-making have been investigated in several studies using functional magnetic resonance imaging (fMRI) and electroencephalography/magnetoencephalography methods (Heekeren et al., 2006; Philiastides & Sajda, 2007; Ploran et al., 2007; Tosoni et al., 2008; Donner et al., 2009; Ho et al., 2009; Kayser et al., 2009; Liu & Pleskac, 2011; Gould et al., 2012; Hebart et al., 2012; O’Connell et al., 2012; Wyart et al., 2012; Erickson & Kayser, 2013; Filimon et al., 2013). Several studies have focused on the issue of whether perceptual decisions rely on response-related mechanisms or are mediated by high-level mechanisms independent of the motor response (Heekeren et al., 2006; Ho et al., 2009; Liu & Pleskac, 2011; Hebart et al., 2012; O’Connell et al., 2012; Filimon et al., 2013). Some evidence (Tosoni et al., 2008; Donner et al., 2009; Gould et al., 2012; Erickson & Kayser, 2013) indicates that perceptual decision-related signals occur in regions involved in planning and executing actions, consistent with the intentional framework, but the slow temporal resolution of the blood-oxygenation-level-dependent (BOLD) signal has prevented a clear separation of decision from motor planning signals.

In this study, we developed a novel perceptual decision-making paradigm in which observers categorised male and female faces. The sensory stimulus was gradually unmasked to maximise the temporal separation between signals for perceptual decision-making vs. action planning and execution. The slow unmasking was compatible with the temporal resolution of the BOLD signal and the response time of human observers on this paradigm. Observers reported their decision by performing either a saccadic eye movement or a hand pointing movement to a stimulus location depending on the stimulus category (male or female). We examined two main questions. First, whether cortical regions showing a preference for the planning and execution of eye or hand pointing movements exhibited BOLD signal modulations reflecting perceptual decision-making or motor planning. Second, whether decision signals in these regions are specific to or independent of the preferred motor response. Our findings show that decision-related signals occur in both saccade- and pointing-selective posterior parietal regions, and to some extent also in motor/premotor regions. However, whereas in a pointing-selective medial parietal region activity increased only for the category associated with the preferred response, in a saccade-selective posterior intraparietal sulcus (pIPS) region activity “ramped up” during the decision period for pictures associated with both the preferred and non-preferred response.
Therefore, we find both response-dependent and response-independent decision signals in two effector-selective regions of the human posterior parietal cortex (PPC) within the same paradigm.

Materials and methods

Subjects

Fifteen right-handed subjects participated in the study (nine females, mean age 25 ± 4 years). The study was conducted with the understanding and written consent of each subject in accordance with The

Code of Ethics of the World Medical Association (Declaration of Helsinki, 1964) and following approval by the Human Studies Committee of G. D’Annunzio Chieti University. Each participant completed a psychophysical session for the selection of individual levels of sensory evidence and three fMRI sessions: one for localising occipital regions selective for face stimuli (compared with place) as well as parietal and motor/premotor regions specialised for planning and executing eye and hand pointing movements (effector-selective regions) and two for the male/female discrimination task.

Functional magnetic resonance imaging paradigm

On each trial, a noise-occluded face picture was either gradually unmasked until an individually selected level of sensory evidence was reached or remained substantially noisy for 9.345 s. Following the offset of the face picture (go signal), subjects were instructed to indicate whether they had seen a male or female picture by making an eye or pointing movement to the remembered location of a visual target presented at the beginning of the trial on the right or left of the visual screen (Fig. 1A). During stimulus unmasking, subjects were required to hold down a button with their right hand while maintaining central fixation and, only after the image had disappeared, they were instructed to either release the button and rotate their wrist to point in the direction of the target while keeping central fixation (pointing decisions), or to move the eyes in the direction of the target while continuing to hold the button (saccade decisions), and then immediately return to the starting point. Pointing movements were associated with female pictures in half of the subjects, and with male pictures in the other half. The scanning sessions included a total of 26 scans of 388 s duration each.

Psychophysical study

Before fMRI, we collected behavioral data on a reaction-time version of the decision task. The main purpose was to stretch average decision times over a relatively long interval compatible with the temporal resolution of fMRI and approximately corresponding to the duration of the unmasking interval (approximately 9 s), while maintaining a relatively accurate level of performance (approximately 85%). We used a staircase procedure in which the final level of sensory evidence was progressively adjusted until a stable discrimination performance of ~85% accuracy and a decision time of approximately 9 s was reached. This procedure ensured that neural signals during stimulus unmasking reflected decision formation (i.e. evidence accumulation) rather than postdecision components such as maintenance in working memory of the selected action plan. All subjects were trained on the fMRI paradigm prior to scanning to ensure that they fully understood the instructions and could perform the task.

Stimuli and unmasking procedure

Images were 240 × 240 pixel gray-scale digitised photographs of faces selected from a larger set developed by Neal Cohen, and used in previous experiments (Tosoni et al., 2008). Noisy pictures were generated using a function that adds a certain weight of white noise to the images in the form of pixels of 4 × 4 mm. Images were gradually unmasked according to a linear function of time (y = t) that distributes the total amount of noise (y = 0–1) evenly over the normalised length of the unmasking interval (t = 0–1). Visual stimuli were generated using an in-house toolbox for Matlab (The Mathworks). For



Fig. 1. (A) Each trial began with the presentation of a central fixation cross and a peripheral target in one of four possible locations and was followed by stimulus unmasking for 9.345 s. The offset of the picture instructed subjects to indicate whether they had seen a female or male picture by making an eye or a hand pointing movement to the location of the visual target shown at the beginning of the trial. A variable intertrial interval (ITI) of 3.7, 5.6, or 7.5 s preceded the next trial. (B) Mean decision accuracy and reaction times (±SEM) for evidence and noise trials in the psychophysical session before scanning. (C) Behavioral accuracy and d′ (±SEM) for evidence and noise trials during the fMRI experiment. MR, magnetic resonance.

the psychophysical session before scanning, images were presented on a 17-inch LCD computer monitor (1280 × 796 pixels, refresh rate 60 Hz). Subjects viewed the display from a distance of 60 cm, with their head stabilised by a chin-rest. In the scanner, images were projected onto an LCD screen positioned at the back of the magnet bore and were visible to the subjects through a mirror attached to the head coil.
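The linear unmasking rule described above can be summarised in a few lines. The sketch below is an illustrative Python/NumPy re-implementation, not the authors' in-house Matlab toolbox: the function name, the use of 4 × 4 blocks of uniform noise and the placeholder image are assumptions made for demonstration; only the linear time course (y = t over the 9.345-s interval) is taken from the text.

    import numpy as np

    def unmask_frame(face, t, total_dur=9.345, block=4, rng=None):
        """Mix a face image with block-wise white noise whose weight decreases
        linearly with time t (in seconds): fraction revealed y = t / total_dur."""
        rng = np.random.default_rng() if rng is None else rng
        y = np.clip(t / total_dur, 0.0, 1.0)               # proportion of signal revealed
        h, w = face.shape
        coarse = rng.random((h // block, w // block))      # white noise in coarse blocks
        noise = np.kron(coarse, np.ones((block, block)))   # upsample blocks to image size
        return y * face + (1.0 - y) * noise                # linear mix: noise weight = 1 - y

    # example: a 240 x 240 placeholder image sampled midway through the 9.345-s interval
    face = np.random.random((240, 240))
    frame = unmask_frame(face, t=4.67)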

Localiser scans: stimulus-selective regions of interest

Participants passively viewed unmasked images of indoor and outdoor scenes, and male and female faces, presented centrally for 300 ms every 500 ms in blocks of 16 s duration, and interleaved with fixation blocks of variable duration (mean duration 13.6 s).

Localiser scans: saccade- and pointing-selective regions of interest

In a blocked fMRI design, observers alternated 18-s blocks of delayed hand pointing or saccadic eye movements with blocks of visual fixation of variable duration (mean duration 13 s). Each block started with a written instruction (FIX, EYE, HAND) and contained four trials. Each trial began with observers maintaining central fixation while holding down a button on a response box with their right hand. On saccade or pointing trials, a peripheral target, indicating the location of the upcoming movement, appeared for 300 ms in one of four radial locations (π/4, 3π/4, 5π/4, 7π/4) at an eccentricity of 8° of visual angle. The targets were filled white circles of diameter 0.9°. After a variable delay (1.5, 2.5, 3.5, or 4.5 s), the fixation point turned red, and participants were either instructed to release the button and rotate their wrist (without moving the shoulder or arm) to point with their right hand in the direction of the target while keeping central fixation (pointing blocks), or move the eyes in the direction of the target while continuing to hold the button (saccade blocks), and then immediately return to the starting point. Two localiser scans were collected in each subject, each including eight blocks of pointing and saccadic eye movements. The visual parameters for the presentation of peripheral target stimuli were the same in the localiser and decision scans.
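For concreteness, the geometry of the four radial target locations can be worked out from the parameters above. The short Python sketch below converts the π/4, 3π/4, 5π/4 and 7π/4 directions at 8° eccentricity into horizontal/vertical components in degrees and, assuming the 60-cm viewing distance of the psychophysical setup and a simple per-axis tangent conversion, into centimetres on the screen plane; it is an illustrative calculation, not code from the study.

    import numpy as np

    ECC_DEG = 8.0                                 # target eccentricity (degrees of visual angle)
    ANGLES = np.array([1, 3, 5, 7]) * np.pi / 4   # the four radial directions
    VIEW_DIST_CM = 60.0                           # viewing distance in the psychophysical session

    # horizontal/vertical target components in degrees of visual angle
    xy_deg = ECC_DEG * np.column_stack((np.cos(ANGLES), np.sin(ANGLES)))

    # approximate conversion to centimetres on the screen (per-axis tangent rule)
    xy_cm = VIEW_DIST_CM * np.tan(np.deg2rad(xy_deg))

    for (xd, yd), (xc, yc) in zip(xy_deg, xy_cm):
        print(f"target at ({xd:+.2f}, {yd:+.2f}) deg -> ({xc:+.2f}, {yc:+.2f}) cm")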

Eye movement recording

In the training session before scanning, eye position was monitored using an infrared eye-tracking system, whereas pointing responses were monitored online through a video camera mounted on the computer screen and recorded by button releases. Online monitoring of pointing movements was used to ensure that subjects were performing forefinger/wrist pointing movements and not long-range arm reaching or other types of gross movements. Subjects were thoroughly trained to maintain fixation during the unmasking period and to either execute a saccadic eye or a pointing movement in the direction of the visual target after the go signal for execution. During fMRI, the eye position was monitored and recorded through an infrared eye-tracking system, whereas pointing responses were only recorded by button releases. Pointing accuracy (i.e. pointing to left or right targets) was monitored online through the video camera only during the training session.

Functional magnetic resonance imaging methods

Image acquisition

Functional T2*-weighted images were collected on a Philips Achieva 3T scanner using a gradient-echo planar imaging sequence to measure the BOLD contrast over the whole brain (repetition time, 1869 ms; echo time, 25 ms; 39 slices acquired in ascending interleaved order; voxel size, 3.59 × 3.59 × 3.59 mm; 64 × 64 matrix;


flip angle, 80°). Structural images were collected using a sagittal magnetisation-prepared rapid acquisition gradient echo T1-weighted sequence (repetition time, 8.14 ms; echo time, 3.7 ms; flip angle, 8°; voxel size, 1 × 1 × 1 mm).

Functional magnetic resonance imaging preprocessing

Differences in the acquisition time of each slice in a magnetic resonance frame were compensated by sinc interpolation so that all slices were aligned to the start of the frame. Functional data were realigned within and across scans to correct for head movement using six-parameter rigid body realignment. A whole-brain normalisation factor was uniformly applied to all frames within a scan in order to equate signal intensity across scans. Images were resampled into 3 mm isotropic voxels and warped into 711-2C space, a standardised atlas space (Talairach & Tournoux, 1988; Van Essen, 2005). Movement correction and atlas transformation were accomplished in one resampling step to minimise sampling noise.

Statistical analysis of localiser scans

Hemodynamic responses associated with localiser blocks were generated by convolving an idealised representation of the neural waveform for each condition with a gamma function, with distinct regressors for each condition. Pointing-/saccade-selective regions were identified from single-subject z maps of contrasts (Boynton et al., 1996) from the localiser scans, as the intersection of the two statistical parametric maps resulting from the one-tail contrasts between pointing (saccades) vs. saccades (pointing), and between pointing (saccades) vs. fixation, each thresholded at z > 2 (yielding a combined P-value of 0.00052 uncorrected). An in-house peak-finding routine was used to extract regions of interest (ROIs) from these maps. ROIs were 10-mm radius spheres centered on map peaks with z scores > 2; spheres within 16 mm of each other were consolidated into a single ROI. These ROIs were used for independent time course analysis during the decision experiment. This selection procedure for effector-selective regions was developed in our previous work on perceptual decision-making (Tosoni et al., 2008) and was designed to exclude voxels with negative or non-significant activation from each ROI. Face-selective ROIs were identified by contrasting single-subject z maps of face (female plus male) vs. place (indoor plus outdoor scenes) passive viewing blocks, thresholded at z > 5, and applying the same settings used for pointing-selective ROIs to the peak-finding routine. Although this procedure and the corresponding threshold were not the same as those used to define parietal effector-specific regions, they closely followed accepted standards used to define face- and place-selective regions in the visual cortex (Epstein, 2008; Sulpizio et al., 2013).

Statistical analysis of decision task

Hemodynamic responses were estimated without any shape assumption (Ollinger et al., 2001), on a voxel by voxel basis, according to the general linear model. Trials from the decision task were modeled by a set of 14 delta functions covering 14 consecutive time points, each 1.869 s, aligned with the onset of the male/female image. The model included terms on each scan for an intercept and linear trend. For each ROI, regional time courses for each condition of the decision task were estimated over the 14 magnetic resonance time points by averaging across all of the voxels in the ROI.
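As a rough illustration of the shape-free estimation just described, the sketch below builds a finite impulse response (FIR) design matrix with 14 delta-function regressors per condition (one per 1.869-s frame), adds an intercept and linear trend, and fits it by least squares. The trial onsets, the single condition and the random data are placeholders; this is a minimal sketch of the general approach under those assumptions, not the authors' analysis pipeline.

    import numpy as np

    TR = 1.869    # duration of one magnetic resonance frame (s)
    N_FIR = 14    # delta-function regressors per condition (time points 0-13)

    def fir_design(onsets_s, n_frames, tr=TR, n_fir=N_FIR):
        """FIR design matrix for one condition: column k is 1 at the k-th frame after
        each trial onset, so the GLM estimates the response at each post-onset time
        point without assuming a response shape."""
        X = np.zeros((n_frames, n_fir))
        onset_frames = np.round(np.asarray(onsets_s) / tr).astype(int)
        for k in range(n_fir):
            idx = onset_frames + k
            X[idx[idx < n_frames], k] = 1.0
        return X

    n_frames = int(388 / TR)                             # one scan of 388 s
    X_cond = fir_design([12.0, 40.0, 75.0], n_frames)    # hypothetical onsets, one condition
    # intercept and linear trend for the scan, as in the model described above
    X = np.column_stack([X_cond, np.ones(n_frames), np.linspace(-1, 1, n_frames)])
    y = np.random.randn(n_frames)                        # placeholder voxel/ROI time course
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # 14 time-point estimates + nuisance terms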
Individual time points of each estimated hemodynamic response, either on the regional data or at the single voxel, were entered into group analyses

conducted through random-effects ANOVAs. Before conducting the voxel-wise ANOVAs, the time course data were spatially smoothed by a Gaussian filter with a full-width-at-half-maximum of 9 mm. All voxel-wise ANOVAs were corrected for non-independence of time points by adjusting the degrees of freedom and for multiple comparisons using joint z score/cluster size thresholds (Forman et al., 1995) corresponding to z = 3.0 and a cluster size of 13 face-contiguous voxels. The z score/cluster size thresholds were determined using volume-based Monte Carlo simulations. An automated algorithm that searched for the local maxima and minima, and localised according to a stereotactic atlas (Talairach & Tournoux, 1988), identified the coordinates of responses in z maps. For display purposes, volumes were mapped to surface-based representations using the PALS atlas and CARET software (Van Essen, 2005).

Eye movement analysis

The eye position was recorded at 120 Hz through an infrared eye-tracking system (ISCAN ETL-400) and analysed through custom routines implemented in MATLAB. After blink removal and linear detrending, we estimated an event-related time course of eye position time-locked to the presentation of evidence and noise trials as a function of the selected motor response (presence/absence of button release indicating a pointing/saccadic response) and peripheral target location (left, right) (Tosoni et al., 2012). The eye position in the 500 ms interval before each trial onset was used to determine the relative change in eye position during 38 time bins (250 ms each) spanning the 9.5 s after stimulus onset (unmasking period). The same trial-based baseline was used to estimate the relative change of eye position during the 16 bins (4 s) following the go signal for action execution. Changes of eye position (in degrees of visual angle in the horizontal axis) were computed using a spatial transformation of calibration data collected during the training session in which subjects successively fixated nine fixation crosses on a 3 × 3 grid. The calibration grid covered 10° of visual angle corresponding to the horizontal projection of the four radial target locations in the training session (+5° right, −5° left). Compared with eye traces from the training session, eye movement data collected during the fMRI scanning showed elevated noise. In particular, due to the tilted head position, the shallow viewing angle (data were collected from a mirror positioned on the left side of the head coil) and instrumental noise, eye traces were contaminated by robust non-saccadic spiking activity. Furthermore, due to technical problems in the eye-tracker software, eye traces from two experimental subjects were not recorded. In order to discriminate saccadic from non-saccadic activity, we developed an automated algorithm that detected fixation breaks and classified them as saccades or spikes, based on the amplitude and duration of the signal deviation. Eye position traces were then visually inspected by a condition-blind experimenter who accepted or rejected each proposed automated classification. The number of detected saccades during the unmasking and action execution periods was then computed as a function of trial type.
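The amplitude/duration rule used to separate saccades from instrumental spikes is not specified numerically in the text, so the following Python sketch uses placeholder thresholds purely to illustrate the logic: fixation breaks are flagged in a horizontal eye-position trace, and very brief excursions are labelled as spikes rather than saccades.

    import numpy as np

    FS = 120.0    # eye-tracker sampling rate (Hz)

    def classify_breaks(x_deg, amp_thresh=1.0, min_dur_ms=20.0):
        """Flag fixation breaks in a horizontal eye-position trace (degrees) and label
        them as 'saccade' or 'spike'. The thresholds are illustrative placeholders:
        deviations larger than amp_thresh that last at least min_dur_ms are treated
        as saccades, shorter excursions as instrumental spikes."""
        dev = np.abs(x_deg - np.median(x_deg))      # deviation from the fixation position
        above = np.append(dev > amp_thresh, False)  # pad so trailing events are closed
        events, start = [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                dur_ms = (i - start) * 1000.0 / FS
                events.append({"onset": start, "offset": i,
                               "amplitude_deg": float(dev[start:i].max()),
                               "duration_ms": dur_ms,
                               "label": "saccade" if dur_ms >= min_dur_ms else "spike"})
                start = None
        return events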

Results

Psychophysics

Behavioral data collected outside the scanner showed that we were successful in stretching the decision over a relatively long interval compatible with the temporal resolution of fMRI and the duration of the unmasking delay, and in maintaining a relatively accurate level


of performance (see Fig. 1B). In fact, decision times for evidence trials were 8.8 s long on average (SEM 194 ms) and not significantly different from the expected value of 9 s (one-sample t-test, P = 0.1), whereas the decision accuracy was 86% on average (SEM 1.8%) and not significantly different from the expected value of 85% (one-sample t-test, P = 0.6). During the fMRI experiment, the task was run as a fixed delay choice paradigm in which the stimulus was still unmasked over 9 s, but subjects had to wait to respond until a go signal was indicated by the offset of the picture. Figure 1C shows the behavioral results during the scanning sessions. As expected, subjects were significantly more accurate and exhibited greater sensitivity (d′) on evidence than noise trials [Accuracy, T(14) = 16, P < 0.001; d′, T(14) = 16.5, P < 0.001].

Effector- and stimulus-selective regions

The main analysis was carried out on a specific set of pointing-, saccade- and face-selective ROIs that were localised in each observer by contrasting two sets of localiser scans: blocks of memory-guided pointing vs. saccadic eye movements, and passive viewing of face vs. place images [see Tosoni et al. (2008) and Materials and methods for details]. Brain regions showing a preference for the planning/execution of hand pointing vs. saccadic eye movements, here defined as pointing-selective regions, included a region in the medial PPC [parietal reach region (PRR)], a region along the central sulcus, including parts of the pre/postcentral gyri [sensorimotor cortex (SMC)], and a region at the intersection of the superior frontal sulcus with the precentral sulcus [frontal reach region (FRR)] (Fig. 2A). Brain regions

showing a preference for the planning/execution of saccadic eye vs. pointing movements, here defined as saccade-selective regions, included a region in the pIPS and a region in the frontal cortex located ventrally to the FRR [frontal eye fields (FEFs)] (Fig. 2A). Although the frequency map in Fig. 2 may suggest some overlap between the parietal PRR and the pIPS regions, due to intersubject variability in their anatomical location (see, e.g. magenta color in the lateral view), note that pointing- and saccade-selective regions were selected in each individual subject based on the same contrast (pointing vs. saccade), and thus the two regions do not overlap by definition at the individual level. Based on their anatomical position and functional responsiveness to pointing and saccadic eye movements, the PRR and pIPS regions probably correspond to the human homologues of monkey areas MIP/V6A (Snyder et al., 1997; Galletti et al., 1999) and LIP, respectively (Colby et al., 1996; Snyder et al., 1997). It is worth noting that both the saccade-selective pIPS and pointing-selective PRR showed considerable BOLD responses to the non-preferred response effector during action execution, thus indicating that effector selectivity is relative and not absolute. This is consistent with the relevant neurophysiological findings on the monkey saccadic LIP and reach-related MIP regions as well as with previous human studies on these regions showing that anatomical segregation is never absolute and that considerable responses to the non-preferred effector are also observed in these regions (Snyder et al., 1997; Sereno et al., 2001; Calton et al., 2002; Astafiev et al., 2003; Connolly et al., 2003; Dickinson et al., 2003; Tosoni et al., 2008; Galati et al., 2011). The face-selective region [fusiform face area (FFA)] was localised along the posterior fusiform gyrus (Fig. 2B) in a position consistent with previous descriptions of the FFA (Kanwisher et al., 1997; Levy et al., 2001).

Fig. 2. (A) Location of pointing- and saccade-selective regions in the posterior parietal and motor/premotor cortex. Conjunction of individual ROIs from the pointing/saccade localiser is superimposed on a dorsal, lateral and dorso-medial view of the PALS atlas (Van Essen, 2005). (B) Location of face- and place-selective regions in the ventral visual cortex. Conjunction of individual ROIs from the face/place localiser is superimposed on a medial and ventral view of the PALS atlas. PALS, population-average, landmark- and surface-based; PPA, parahippocampal place area. The color bar indicates ROI overlap between subjects. LH, left hemisphere; RH, right hemisphere.

Decision-related signals in the fusiform face area

A large body of evidence supports the idea that higher-order cortical regions involved in decision-making temporally integrate sensory signals provided by lower-level sensory areas (Smith & Ratcliff, 2004; Gold & Shadlen, 2007; Donner et al., 2009). We first examined the temporal dynamics of the BOLD response in the face-selective FFA region during evidence and noise trials as a function of the selected response. As decisions were formed between time point 0 and time point 5 (go signal), but signals for motor execution were not evident in the BOLD response until at least one frame after the go signal, time points 0–6 were selected for studying decision-related signals, and time points 7–13 for studying action execution signals. On these time bins, we conducted a regional ANOVA with Trial Type (evidence, noise), Response Effector (pointing, saccade) and Time as factors. For this and following ANOVAs, only correctly classified evidence trials were considered. Evidence trials evoked a significantly greater response than noise trials in the FFA (Fig. 3) with the first differentiation appearing at about 8 s (frame 4) following the onset of the picture, and similarly for pointing and saccadic responses. Accordingly, there was a significant interaction of Trial Type (evidence, noise) by Time (0–6) in the decision delay with post hoc tests indicating a significant difference between evidence and noise trials starting at time point 4 (F13,84 = 78.5, P < 0.001, time point 4: evidence > noise, P < 0.001). Notably, the modulation of sensory evidence continued during the response phase, thus suggesting the presence of motor-related signals in the FFA. We have previously reported motor-related effects in category-selective regions of the ventral occipito-temporal cortex (Astafiev et al., 2004; Jack et al., 2007), and similar modulations



Fig. 3. (A) The location of the FFA ROI is shown on a ventral view of the PALS atlas. The color bar indicates ROI overlap between subjects. (B) The graph shows the time course of the BOLD signal in the FFA ROI as a function of Trial Type and Response Effector in the 14 time points following the picture onset. Error bars represent within-subjects SEM calculated according to Cousineau (2005). LH, left hemisphere; RH, right hemisphere; MR, magnetic resonance.
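The error bars in Fig. 3 (and in the following figures) are within-subjects SEMs computed according to Cousineau (2005). A minimal Python sketch of that normalisation, assuming a subjects × conditions data matrix, is given below; the example numbers are made up.

    import numpy as np

    def cousineau_sem(data):
        """Within-subjects SEM (Cousineau, 2005). data: n_subjects x n_conditions.
        Each subject's scores are re-centred on the grand mean, removing between-
        subject variability, before the SEM is computed across subjects."""
        data = np.asarray(data, dtype=float)
        n_subj = data.shape[0]
        normalised = data - data.mean(axis=1, keepdims=True) + data.mean()
        return normalised.std(axis=0, ddof=1) / np.sqrt(n_subj)

    # example with made-up estimates (2 subjects x 3 time points)
    print(cousineau_sem([[0.1, 0.3, 0.5], [0.2, 0.5, 0.6]]))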

were observed but not reported in a similar perceptual decision-making paradigm (Tosoni et al., 2008).

Decision-related signals in pointing- and saccade-selective posterior parietal regions

The pointing-selective PRR showed strong selective signals for pointing > saccades during the execution phase [Fig. 4, Response Effector by Time (7–13): F6,84 = 12, P < 0.001]. During the decision period, signals grew toward the end of the delay period, and the effect of sensory evidence was specific for trials in which the preferred pointing response was selected [Trial Type by Response Effector by Time (0–6): F6,84 = 4.3, P < 0.001, evidence pointing > noise pointing (P = 0.002), > evidence saccade (P = 0.008) and > noise saccade (P = 0.003) at time point 5; evidence saccade = noise saccade (P = 0.7) at time point 5; evidence pointing = evidence saccade, noise pointing, noise saccade at time points 0–4].


The saccade-selective region in the pIPS also showed an effector-specific modulation during action execution with a preference for saccadic eye vs. hand pointing movements [Fig. 4, Response Effector by Time (7–13): F6,84 = 3.4, P = 0.004]. In this region, however, the effect of sensory evidence (evidence vs. noise) was observed earlier than in the PRR and was independent of whether an eye or a pointing movement was selected for execution [Trial Type by Time (0–6): F6,84 = 14.8, P < 0.001, evidence > noise (P < 0.005) at time points 4–6]. Crucially, decision signals in the pIPS were also larger than in the PRR and progressively increased during the decision delay, thus more closely resembling a mechanism for evidence accumulation. Therefore, both effector-selective regions in the parietal cortex showed statistically significant effects of the amount of sensory evidence during decision formation. However, whereas in the pointing-selective PRR the effect of sensory evidence was specific for the stimulus category associated with the preferred pointing response, the saccade-selective pIPS region showed earlier modulations of sensory evidence that were independent of the selected response, i.e. occurred for both the preferred and non-preferred response. The functional dissociation between decision-related signals in these two effector-selective regions was statistically confirmed by a significant interaction between Region (PRR, pIPS), Trial Type (evidence, noise), Response Effector (pointing, saccade) and Time (0–6) (F6,84 = 5.3, P < 0.001). These results indicate that, even when a perceptual decision is explicitly linked to a specific motor response from the beginning of the experiment, cortical regions showing a preference for one effector during the motor response can exhibit sensory accumulation signals during perceptual decision-making that are effector-independent.

Decision-related signals in pointing- and saccade-selective motor/premotor regions

As indicated above, effector-selective regions in the motor/premotor cortex included primary areas in the SMC and dorsal premotor cortex (FEF, FRR). In these regions, we found decision signals that were consistent with those observed in the corresponding effector-specific regions in the PPC (Fig. 5). Specifically, we found that the SMC encoded signals for perceptual decision-making that were selective for trials in which the preferred pointing response was selected, whereas the FEF showed decision signals that were independent of the selected action [SMC: Trial Type by Response Effector by Time (0–6), F6,84 = 2.5, P = 0.03, evidence pointing > noise pointing (P < 0.01) at time point 5; FEF: main effect of Trial Type, F6,84 = 8.8, P = 0.02]. No significant effect of sensory evidence was found in the FRR. Finally, all pointing-selective regions in both the frontal and parietal cortex showed robust pointing selectivity during the final part of the decision delay (time point 6) and during action execution (time points 7–13) [Response Effector by Time (0–6): PRR, F6,84 = 3.3, P = 0.006; FRR, F6,78 = 3.7, P = 0.002; SMC, F6,84 = 5.6, P = 0.01; pointing > saccade (P < 0.001) at time point 6; Response Effector by Time (7–13): FRR, F6,78 = 18.6, P < 0.001; SMC, F6,84 = 30.6, P < 0.001].

Spatially-selective signals in pointing- and saccade-selective parietal regions

Fig. 4. (A) The location of the PRR and pIPS ROIs is shown on a dorso-medial view of the PALS atlas. The color bar indicates ROI overlap between subjects. (B) Data are presented as in Fig. 3. LH, left hemisphere; RH, right hemisphere; MR, magnetic resonance.

We next tested whether decision signals in the PRR and pIPS regions were spatially selective by analysing their BOLD response to evidence and noise trials as a function of the selected response



Fig. 5. (A) The location of the FRR, SMC and FEF ROIs is shown on a lateral view of the PALS atlas. The color bar indicates ROI overlap between subjects. (B) Data are presented as in Fig. 3. LH, left hemisphere; RH, right hemisphere; MR, magnetic resonance.

and location of the movement target (contralateral, ipsilateral to the region). Both regions showed spatially selective modulations (Fig. 6). However, whereas in the pointing-selective

PRR, spatial modulations were most evident toward the end of the delay period and were specific for evidence trials associated with the preferred pointing response, in the saccadic pIPS region, spatial signals emerged earlier in the decision delay and were independent of both sensory evidence and response effector. Statistically, this was shown by a significant interaction of Target Location (contralateral, ipsilateral) by Trial Type (evidence, noise) by Time (0–6) in the PRR [F6,84 = 4.3, P < 0.001, evidence pointing contralateral > all other conditions at time point 5 (P < 0.001)], and by a main effect of Target Location [Target Location by Time (0–6) (F1,14 = 8.3, P = 0.01)] with no significant interaction between Target Location and Response Effector (P = 0.5) or Trial Type (P = 0.7) in the pIPS region. Notably, although decision signals in the PRR averaged across target locations were weak compared with baseline, the PRR response to contralateral pointing evidence was significantly greater than baseline (time points 4–6; one-sample t-test against 0, P = 0.05). Finally, it is worth noting that both the effector-specific and effector-independent spatial signals shown in the PRR and pIPS, respectively, were consistent with their action-dependent (PRR) and action-independent (pIPS) modulations for perceptual decision reported in the previous analysis.

Eye movement recordings

Fig. 6. Time course of the BOLD signal in the PRR and pIPS ROIs as a function of Trial Type, Response Effector and Target Location. MR, magnetic resonance; sacc, saccade (legends refer to all four graphs).

The analysis of eye movement data during the training session showed that, at both the single subject and group level (Fig. 7A and B), subjects maintained accurate fixation during the stimulus unmasking period while they executed saccadic eye movements after the go signal for action execution during trials in which no button release was recorded. During stimulus unmasking, the mean change



Fig. 7. Single-subject [(A) training session; (C and D) magnetic resonance imaging session] and group-averaged [(B) training session] time course of eye position time-locked to the presentation of evidence and noise trials as a function of selected motor response and target location during the unmasking and action execution periods. Mean time course is plotted along with SE in both B and D.

in eye position was 0.07° (−0.07° for evidence and noise trials with left targets, 0.002° for evidence trials with right targets and 0.1° for noise trials with right targets) and saccades were present in only 7% of the trials, with no significant differences between evidence and noise trials (P = 0.5). These small values indicate that subjects accurately maintained fixation during the unmasking period. The eye movement data collected during the fMRI scanning, despite limitations in data quality discussed in Materials and methods, confirmed these observations. Only 7% of the trials contained an eye movement during the stimulus unmasking period with no significant differences between evidence and noise trials (P = 0.7). During action execution, only 5% of the trials in which a button release was recorded contained a concomitant saccadic eye movement. The event-related time course of eye position for the fMRI subject with the best available calibration data is shown in Fig. 7C and D. Averaged across time bins and experimental conditions, the mean change in eye position during the unmasking period was 0.6° (note a baseline shift in the time course that may be explained by a

drift of head position during the long-lasting scanning session), whereas saccade execution peaked on average at 8° and -10° for right and left target trials in which no button release was recorded (saccade) and 0.09° and 0.4° for right and left pointing trials. Notably, to verify that the effector-independent pattern of evidence modulations observed in the saccade-selective pIPS region was not explained by eye movements, we conducted a new set of analyses in which pointing trials with a concomitant eye movement were treated as separate regressors and excluded from the estimate of the pIPS average time course. The results confirmed a strong positive modulation of sensory evidence (i.e. greater BOLD response for evidence than noise trials, P = 0.007) that was independent of the selected response (no significant interaction of trial type by response effector) in the pIPS region. We also verified that this activity pattern was not explained by a change in the tendency to perform eye movements during pointing trials over the course of the experiment (e.g. a decrease in the frequency of eye movements induced by learning) by testing differences


between the first vs. second temporal half of the decision experiment (i) in the percentage of trials with a concomitant pointing and eye movement, and (ii) in the pIPS BOLD response to decision modulations. As suggested by the lack of significant differences in the percentage of trials with concomitant eye and pointing movements (P = 0.26), the effector-independent pattern of activity observed in the pIPS region showed no modulations by the temporal subdivision of the data (we found no interaction between sensory evidence and time of the experiment). Overall, these results indicate that our findings cannot be explained by eye movements.

Whole-brain analysis of decision-related signals

To test whether signals for perceptual decision-making extend to other regions of the brain, we conducted a voxel-wise ANOVA with Trial Type (evidence, noise), Response Effector (pointing, saccade) and Time (time points 0–6) over the whole brain. The interaction of Trial Type by Time localised several regions in the visual occipital and fronto-parietal cortex that showed a differential response to evidence vs. noise trials independently of the selected response (Fig. 8). In particular, consistent with the pattern found in the FFA ROI, several regions of the fusiform and lateral occipital cortex displayed a stronger response for evidence than noise trials that was evident by 8 s after stimulus onset. Similarly, in regions of the medial intraparietal sulcus, corresponding to the saccade-selective pIPS ROI, the difference between evidence and noise trials increased gradually over the decision delay and was not specific for the effector (hand or eye). In contrast, several regions of the ‘default mode network’ (posterior cingulate, ventro-medial prefrontal cortex, and angular gyrus), typically deactivated during goal-driven tasks (Shulman et al., 1997; Raichle et al., 2001), showed stronger deactivation for evidence than noise trials. These included regions in the ventro-medial prefrontal cortex that have been previously associated with decision computations (Kable & Glimcher, 2007). Following Tosoni et al. (2008) and Ho et al. (2009), we note that such negative modulations of the BOLD signal are inconsistent with a process of active accumulation of sensory evidence. The ANOVA interaction between Trial Type, Response Effector and Time isolated three main regions in the medial fronto-parietal cortex (Fig. 9). The highest peaks of activation were found in two clusters localised in the precuneus that showed a positive modulation for evidence vs. noise trials associated with a pointing response, and a weak (anterior cluster) or even negative (posterior cluster) modulation for evidence vs. noise trials when a saccadic response was selected. The location and pattern of modulation during pointing matched well the profile and location of the PRR ROI. The third cluster showed a robust selectivity for pointing responses during action execution, similar to the profile of activity found in the SMC region. However, it showed a negative trend of activation during the decision delay, with no modulations for evidence vs. noise trials associated with a pointing response. This region may correspond to the supplementary motor area, and its profile is consistent with a region strictly involved in pointing execution.

Discussion

Pivotal neurophysiological studies on behaving monkeys have shown that perceptual decisions are encoded within the same sensorimotor circuits that are responsible for the planning and execution of the actions used to report the decision (Gold & Shadlen, 2007).

Contrary to the predictions of traditional psychological theories assuming that decisions arise through a serial process involving perception, cognition and action as independent hierarchical processes (Marr, 1982; Donders, 1969; Posner, 1978), these findings have supported parallel models such as the ‘continuous flow’, in which decision and action are closely intertwined (Coles et al., 1985), and the so-called “intentional” or “action-based” framework for decision-making, a theoretical proposal that posits a strong interdependence between mechanisms for decision-making and the commitment to an action plan (Shadlen et al., 2008; Cisek & Kalaska, 2010). Consistent with this proposal, using fMRI during a fixed delay paradigm in which sensory evidence favoring a particular motor response was manipulated parametrically, we have recently shown that even more complex arbitrary decisions in humans do not necessarily involve a general decision-making module but may rather depend on specific sensorimotor mechanisms that accumulate sensory information and plan motor actions (Tosoni et al., 2008). Importantly, however, because of the sluggish nature of the BOLD signal, the use of a fixed delay paradigm in which the sensory stimulus was briefly presented at the beginning of the trial did not allow a clear separation of decision from motor planning signals (even though effectively separating decision from action execution). More specifically, because the decision delay was clearly longer than required by the decision process, it is possible that the observation of a higher BOLD response in high-evidence than in low-evidence trials in response-related regions reflected maintenance of an action plan in working memory rather than accumulation of sensory evidence.

Here we addressed this question by maximising the temporal separation between the neural signals associated with decision formation, i.e. stimulus encoding and evidence accumulation, and the signals associated with postdecision processing, such as working memory maintenance of the selected action plan until action execution. This was done in the context of a task requiring a fixed, predetermined association between complex visual categories (female, male) and arbitrary motor responses (eye, hand movements), providing the optimal basis for an unbiased comparison between the predictions of the intentional/action-based model of decision (i.e. the steps toward decision formation affect the same regions associated with motor planning) vs. the predictions of a more traditional model of decision (i.e. the motor system is recruited only after the decision has been committed in order to convert the decision outcome into action). The results showed that significant modulations by sensory evidence during evidence accumulation and before decision formation were exclusively observed within the same cortical regions responsible for planning and executing the actions used to report the decision. These regions specifically corresponded to the regions of the posterior parietal and frontal cortex that were selected for their preference for either pointing or saccadic motor responses. This result indicates that, in both simple perceptual decisions and more complex decisions that are arbitrary and involve high-level visual categories, decision-relevant accumulation processes can be found in brain areas involved in response selection.
The fact that brain areas involved in response selection encode decision signals before a decision has been committed and do so long before a response is requested argues against the idea that they only report an already-made choice. Also, at least in the present task, it suggests that brain areas involved in response selection might play a role in the perceptual decision itself, which would be consistent with an ‘intentional’ or ‘action-based’ framework of perceptual decision-making (Shadlen et al., 2008; Cisek & Kalaska, 2010). Notably, we did not find



Fig. 8. Multiple-comparison-corrected map of the interaction of Trial Type by Time (0–6) is superimposed over different views of the PALS atlas. The color bar indicates the equivalent z score for the P-value from the ANOVA. The graphs show the time course of the BOLD signal as a function of Trial Type and Response Effector in regions showing the highest activation peak in the ANOVA. AG, angular gyrus; LatOccipital, lateral occipital; Lh, left hemisphere; MR, magnetic resonance; pIPS, posterior intraparietal sulcus; postCing, posterior cingulate; Rh, right hemisphere; VMPFC, ventro-medial prefrontal cortex.

decision signals compatible with a process of active evidence accumulation in any other brain region. Although this may depend on some aspects of our paradigm (including the small number of

evidence levels), this is a negative result and, as such, it cannot be taken as a demonstration that decisions are exclusively formed in the parieto-frontal cortex.



Fig. 9. Multiple-comparison-corrected z map of the interaction of Trial Type by Response Effector by Time is superimposed over a dorso-medial view of the PALS atlas. Data are presented as in Fig. 8. SMA, supplementary motor area; MR, magnetic resonance.

Stationary vs. non-stationary evidence in the face categorisation paradigm

One assumption is that our paradigm taps into the same mechanisms of sensory evidence accumulation investigated in previous studies (Gold & Shadlen, 2007). This is important for the interpretation of the delay activity as related to sensory accumulation, and to link our results to a large body of literature that has modeled data of two-choice decision tasks (e.g. the random dot direction task) using sequential sampling models such as the drift diffusion model (Smith & Ratcliff, 2004; Ratcliff & McKoon, 2008). In our paradigm, the available information was non-stationary as the variance of its distribution decreased with time. Under these conditions, participants may have ignored sensory information provided early during the delay and waited until sufficient sensory information was available before committing to a choice. However, at least in the saccade-selective pIPS region, the activity time course showed a gradual ‘ramping up’ that started early after stimulus onset and progressively increased during the decision delay, thus suggesting accumulation of sensory evidence already in the first part of the delay. Our interpretation is that subjects implemented a non-stationary evidence accumulation process, which is analogous to that assumed in sequential sampling models such as the drift diffusion model. Accordingly, a recent computational study (Tsetsos et al., 2011) tested the fit of different sequential sampling models to experimental data in which decision evidence changed dynamically during the trial, and indicated that models such as the drift diffusion model or the leaky competing accumulator model (Usher & McClelland, 2001) can be readily extended to decisions with non-stationary evidence. Therefore, sequential sampling models remain the most widely accepted framework of choice, independent of the nature of the accumulation process, i.e. stationary or non-stationary.

Decision signals in effector-selective regions of the posterior parietal cortex

One of the main findings of this study is that, even when decision and action planning are temporally separated, decision-related

signals (evidence > noise) largely overlap with regions involved in motor preparation in the PPC. Three findings suggest that delay activity in these regions was related to computations for perceptual decision-making. First, the activity was stronger on evidence than noise trials and increased during the course of the delay as sensory evidence was gradually revealed. Second, the separation between noise and evidence trials occurred earlier than the first data point in which response-specific activity appeared (action execution, frame 7). Finally, the separation between noise and evidence trials occurred earlier in the delay than when the threshold for decision was reached, as determined psychophysically on a reaction time version of the task. The other important result of our study is that decision-related activity was effector-specific in the PRR, as it occurred only for decisions associated with the preferred pointing response, whereas it was effector-independent in the pIPS region, as it occurred in an eye movement region during decisions that called for a pointing response. The finding of an effector-independent modulation within a region that was specifically selected for its preference for one effector over the other is apparently puzzling and in contrast to the idea that sensory evidence is selectively accumulated for the preferred motor response. However, it is important to note that the activation pattern in the pIPS during decision formation, although effector-independent, cannot be simply accounted for as a ‘general’ or abstract decision signal. Indeed, sensory accumulation signals in the pIPS exhibited spatial selectivity for contralateral response targets, thus suggesting that the pIPS was not accumulating evidence for an abstract choice, but in favor of a concrete spatially-directed action plan. Perhaps this action plan, at least during the early phases of decision formation, was tied to a certain response location but not (yet) to a specific effector. It is not likely that the dissociation between the effector-specific decision signals in the PRR and the effector-independent decision signals in the pIPS reflects differences in sensory selectivity to the two categories. As shown in our previous work, these higher-order parietal regions do not respond to face stimuli per se but only in the context of a behavioral task in which the stimulus is associated with a motor response (Tosoni et al., 2008). It is also unlikely that pIPS


effector selectivity was weaker than that of the PRR. Both regions were selected using the same criteria, i.e. by subtracting activity obtained during spatially-directed saccades vs. pointing movements. Therefore, any potential delay activity for pointing in the pIPS during the localiser scans would have been subtracted away, and the pIPS voxels selected from the localiser scans were those in which the difference between saccadic and pointing activity was maximised. In fact, strong saccade selectivity in the pIPS is evident during the response period. A second possibility is that pointing-related delay activity in the pIPS underlies an eye-centered spatial transformation during hand pointing movements. Saccadic eye movements to visual targets are organised in eye-centered coordinates, whereas arm-reaching responses require both eye- and body-centered frames of reference. Against this interpretation, however, stands the strong similarity of decision-related activity in the pIPS for both saccadic and pointing trials, and correspondingly the difference between PRR and pIPS time courses during pointing trials. In fact, if pIPS activity for the non-preferred stimulus category were to represent an eye-centered transformation of an arm response, then the signal time course should be more similar to the time course in the PRR than to the time course for the preferred stimulus category. Instead, the signal time courses in the pIPS were similar for saccade- and pointing-associated stimulus categories, thus suggesting an effector-independent response. In our view, the functional dissociation between the response-specific and response-independent patterns of decision-related modulations observed in the PRR and pIPS stems from the different functional properties of these areas. Whereas the PRR primarily encodes signals related to the planning and execution of pointing movements, and only incorporates behavioral variables that potentially instruct such movements, the pIPS plays multiple and independent roles in saccade-related motor activity, spatial and object selection. This interpretation is consistent with many studies in non-human primates. The pointing-selective PRR has been mainly associated with the intention to make spatially-directed pointing movements (Andersen & Cui, 2009), whereas neurons in the saccade-selective LIP not only encode a large set of behavioral variables relevant to guide spatial behavior, but also more abstract information not linked to movements per se (Gottlieb & Snyder, 2010). For example, LIP neurons can code information such as sensory evidence, expected reward or stimulus color not only when these variables are relevant to the execution of an eye movement, but also when sensory stimuli are used to indicate a categorical decision that is not directly linked to a motor response (Shadlen & Newsome, 1996; Platt & Glimcher, 1999; Toth & Assad, 2002; Stoet & Snyder, 2004; Sugrue et al., 2004; Fitzgerald et al., 2011). Our results are also consistent with recent work (Bennur & Gold, 2011) showing that decision-related signals in the LIP are evident regardless of whether the appropriate stimulus–response association is indicated before, during, or after decision formation. The likely human homologue of the monkey LIP also appears to code for perceptual categorical decisions that are independent of action planning (Sereno et al., 2001; Silver & Kastner, 2009).
In contrast, the human PRR shows decision signals that are more strongly linked to movement planning. This functional distinction was also present in our previous study (Tosoni et al., 2008), in which the PRR was mainly modulated by the strength of the sensory evidence in favor of a pointing response, whereas the pIPS showed a more complex pattern of activation, with a mixture of saccade-related decision signals and attentional/perceptual signals. Whether different neurons in the human pIPS code for decision vs. action signals cannot be resolved with fMRI at the current level of spatial resolution.
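As a purely illustrative aside, the time-course logic behind this dissociation can be sketched with a toy simulation. The Python snippet below is not the analysis used in the study: the accumulation rule, gain, leak and single-gamma HRF parameters are arbitrary assumptions, chosen only to show that an effector-specific (PRR-like) unit and an effector-independent (pIPS-like) unit predict different BOLD time courses on pointing- vs. saccade-associated trials.

```python
# Illustrative toy model, not the analysis reported in the paper.
# All parameters (gain, leak, HRF shape) are arbitrary assumptions.
import numpy as np
from scipy.stats import gamma

dt = 0.1                                  # time step (s)
t = np.arange(0, 20, dt)                  # trial time axis (s)
evidence = np.clip(t / 10.0, 0, 1)        # gradual unmasking: evidence ramps up over 10 s
hrf = gamma.pdf(np.arange(0, 20, dt), a=6, scale=1)   # single-gamma HRF (illustrative)

def accumulate(drive, gain=0.5, leak=0.05):
    """Leaky accumulation of a time-varying input drive."""
    x = np.zeros_like(drive)
    for i in range(1, len(drive)):
        x[i] = x[i - 1] + dt * (gain * drive[i] - leak * x[i - 1])
    return x

for category in ("pointing-associated", "saccade-associated"):
    # PRR-like unit: driven only when the evidence favours its preferred (pointing) response.
    prr_drive = evidence if category == "pointing-associated" else np.zeros_like(evidence)
    # pIPS-like unit: driven by the accumulating evidence regardless of the associated effector.
    pips_drive = evidence

    # Convolve the predicted neural activity with the HRF to get predicted BOLD time courses.
    prr_bold = np.convolve(accumulate(prr_drive), hrf)[: len(t)] * dt
    pips_bold = np.convolve(accumulate(pips_drive), hrf)[: len(t)] * dt
    print(f"{category:22s}  peak PRR-like: {prr_bold.max():.2f}  peak pIPS-like: {pips_bold.max():.2f}")
```

Under these assumptions the pIPS-like unit yields the same time course for both stimulus categories, whereas the PRR-like unit responds only on pointing-associated trials, which is the qualitative pattern discussed above.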

Decision signals in the frontal cortex

Decision modulations in regions of the motor and premotor cortex were consistent with those observed in the corresponding effector-specific regions in the PPC. In particular, whereas in the FEF we found signals for evidence accumulation that were independent of the selected response, the SMC region exhibited signals for evidence accumulation in favor of a pointing response. These findings support continuous flow models, according to which modulations of sensory evidence favoring a response continuously flow from sensory to motor regions (Coles et al., 1985). In addition, in all motor/premotor regions we observed greater BOLD responses for the preferred effector during the final part of the decision delay and during action execution, suggesting that they are more specifically involved in later stages of the decision process, such as action planning and execution. Finally, a whole-brain analysis of the main effect of sensory evidence identified significant decision modulations in the ventro-medial prefrontal cortex. However, decision signals in this region were negative as compared with baseline, and thus inconsistent with the expected functional pattern of a region showing active accumulation of sensory evidence (see also Tosoni et al., 2008; Ho et al., 2009).

Implications for the intentional model of decision-making

The intentional framework was originally developed to explain results obtained in particular behavioral contexts requiring fixed and over-trained visuomotor transformations. Subsequent studies have called its generality into question by showing that the use of more flexible stimulus–response mappings may reveal the existence of report-independent mechanisms of choice (Freedman & Assad, 2011). A recent study showed dissociable perceptual decision-related signals irrespective of when the stimulus–response association is formed (Bennur & Gold, 2011). These studies suggest that findings of response-related or response-independent signals for perceptual decision-making may depend on the specific task paradigms. In our study, however, the dissociation between regions encoding response-dependent and response-independent decision-related signals was observed within a single experimental paradigm. This suggests that the co-existence of these signals might not only depend on the specific task conditions used to study decision-making, but also reflect the specific functional properties of certain cortical areas.

Our study has another notable implication for this discussion when we consider the task structure in relation to the brain-wide pattern of activity recorded. Even though in our task abstract visual categories were associated in a predetermined manner with specific motor responses, its temporal structure was such that observers could have adopted a decision strategy that does not involve the motor system. In principle, in fact, the decision could have been completed in ‘high-level’ decision areas (e.g. prefrontal) before involving sensorimotor areas. However, this is not what our data show. Motor-specific activation was in fact present before the stimulus was fully unveiled, and thus before a threshold for the decision was reached. Furthermore, the build-up of decision signals in sensorimotor areas reflected the dynamics of the decision itself.
These results clearly speak against a serial model of decision-making in which decisions are first computed in the abstract and only subsequently passed to the motor system to be converted into action; they are instead more compatible with continuous flow models in which the partial results of decision computations are immediately transferred to sensorimotor areas (Coles et al., 1985). Notably, however, we did not find any evidence of decision modulations in prefrontal areas, so the question remains of where (else) the decision is made. A stronger interpretation of the data is that sensorimotor areas play a role in the decision itself rather than only providing a (continuous) readout of decisions computed elsewhere. Accordingly, several theories, such as the intentional framework of decision-making (Shadlen et al., 2008; Cisek & Kalaska, 2010) and the continuous preparation view (Pezzulo & Ognibene, 2012), suggest that, in ecological settings, decisions are closely tied to the mechanisms for action specification, because ultimately most of them boil down to the selection of an action (Pezzulo & Castelfranchi, 2009; Redish, 2013). The sensorimotor system is ideally placed to compute (or at least significantly contribute to) decisions because it can access information, such as action costs, that is key to the choice, and it might cross-talk with other (e.g. frontal) areas to form a ‘distributed consensus’ about the choice (Cisek, 2012).

We conclude that our findings on human perceptual decision-making are consistent, brain-wide, with the idea that the neural regions involved in response selection might be more involved in the decision process itself than traditionally thought, as, at least in the current task, these areas show the most reliable decision signals. Our findings also significantly extend previous results by showing that decision signals can be dissociated from action signals even within the same neural regions involved in response selection.
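To make the serial vs. continuous-flow contrast above concrete, the following minimal sketch (Python) uses invented numbers for the unmasking ramp, accumulation rate and decision bound; it is not a fit to our data. It only illustrates why motor-stage activity that precedes both the decision bound and full stimulus unveiling is a signature of continuous flow rather than of a strictly serial architecture.

```python
# Toy contrast between a serial and a continuous-flow architecture.
# Ramp, accumulation rate and threshold are invented for illustration only.
import numpy as np

dt = 0.1
t = np.arange(0, 12, dt)
evidence = np.clip(t / 10.0, 0, 1)          # gradual unmasking: fully unveiled at t = 10 s
decision = np.cumsum(evidence) * dt * 0.2   # toy accumulated decision variable
threshold = 0.7                             # arbitrary decision bound

crossed = decision >= threshold
t_threshold = t[np.argmax(crossed)]         # first time the bound is reached

# Serial model: the motor stage receives nothing until the bound is reached.
motor_serial = np.where(crossed, decision, 0.0)
# Continuous-flow model: the motor stage tracks the partial decision variable throughout.
motor_continuous = decision.copy()

onset = lambda x: t[np.argmax(x > 0.05)]    # crude activity-onset estimate
print(f"stimulus fully unveiled at 10.0 s; decision bound reached at {t_threshold:.1f} s")
print(f"continuous flow: motor stage active from {onset(motor_continuous):.1f} s (before the bound)")
print(f"serial model:    motor stage active from {onset(motor_serial):.1f} s (only after the bound)")
```

Under continuous flow the motor stage is already engaged several seconds before the bound is reached and before the stimulus is fully unveiled, mirroring the qualitative pattern described above for the effector-selective regions.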

Acknowledgements
We thank Avi Snyder and Sara Spadone for technical support on data analysis and software. This work was supported by EU FP6 200726 (Brain-Synch) to M.C., Italian Ministry of University and Research grant PRIN 2008PBT985_002 to G.C., and EU FP7-ICT-270108 (Goal-Leaders) to G.P.

Abbreviations
BOLD, blood-oxygenation-level-dependent; FEF, frontal eye field; FFA, fusiform face area; fMRI, functional magnetic resonance imaging; FRR, frontal reach region; LIP, lateral intraparietal area; pIPS, posterior intraparietal sulcus; PPC, posterior parietal cortex; PRR, parietal reach region; ROI, region of interest; SMC, sensorimotor cortex.

References
Andersen, R.A. & Cui, H. (2009) Intention, action planning, and decision making in parietal-frontal circuits. Neuron, 63, 568–583.
Astafiev, S.V., Shulman, G.L., Stanley, C.M., Snyder, A.Z., Van Essen, D.C. & Corbetta, M. (2003) Functional organization of human intraparietal and frontal cortex for attending, looking, and pointing. J. Neurosci., 23, 4689–4699.
Astafiev, S.V., Stanley, C.M., Shulman, G.L. & Corbetta, M. (2004) Extrastriate body area in human occipital cortex responds to the performance of motor actions. Nat. Neurosci., 7, 542–548.
Bennur, S. & Gold, J.I. (2011) Distinct representations of a perceptual decision and the associated oculomotor plan in the monkey lateral intraparietal area. J. Neurosci., 31, 913–921.
Boynton, G.M., Engel, S.A., Glover, G.H. & Heeger, D.J. (1996) Linear systems analysis of functional magnetic resonance imaging in human V1. J. Neurosci., 16, 4207–4221.
Calton, J.L., Dickinson, A.R. & Snyder, L.H. (2002) Non-spatial, motor-specific activation in posterior parietal cortex. Nat. Neurosci., 5, 580–588.
Cisek, P. (2012) Making decisions through a distributed consensus. Curr. Opin. Neurobiol., 22, 927–936.
Cisek, P. & Kalaska, J.F. (2010) Neural mechanisms for interacting with a world full of action choices. Annu. Rev. Neurosci., 33, 269–298.
Colby, C.L., Duhamel, J.R. & Goldberg, M.E. (1996) Visual, presaccadic, and cognitive activation of single neurons in monkey lateral intraparietal area. J. Neurophysiol., 76, 2841–2852.
Coles, M.G., Gratton, G., Bashore, T.R., Eriksen, C.W. & Donchin, E. (1985) A psychophysiological investigation of the continuous flow model of human information processing. J. Exp. Psychol. Human., 11, 529–553.
Connolly, J.D., Andersen, R.A. & Goodale, M.A. (2003) FMRI evidence for a ‘parietal reach region’ in the human brain. Exp. Brain Res., 153, 140–145.
Cousineau, D. (2005) Confidence intervals in within subject designs: a simpler solution to Loftus and Masson’s method. Tut. Quant. Methods Psychol., 1, 42–45.
Declaration of Helsinki (1964) Human Experimentation: code of ethics of the World Medical Association. Brit. Med. J., 2, 177–180.
Dickinson, A.R., Calton, J.L. & Snyder, L.H. (2003) Nonspatial saccade-specific activation in area LIP of monkey parietal cortex. J. Neurophysiol., 90, 2460–2464.
Donders, F.C. (1969) On the speed of mental processes. Acta Psychol. (Amst), 30, 412–431.
Donner, T.H., Siegel, M., Fries, P. & Engel, A.K. (2009) Buildup of choice-predictive activity in human motor cortex during perceptual decision making. Curr. Biol., 19, 1581–1585.
Epstein, R.A. (2008) Parahippocampal and retrosplenial contributions to human spatial navigation. Trends Cogn. Sci., 12, 388–396.
Erickson, D.T. & Kayser, A.S. (2013) The neural representation of sensorimotor transformations in a human perceptual decision making network. NeuroImage, 79, 340–350.
Filimon, F., Philiastides, M.G., Nelson, J.D., Kloosterman, N.A. & Heekeren, H.R. (2013) How embodied is perceptual decision making? Evidence for separate processing of perceptual and motor decisions. J. Neurosci., 33, 2121–2136.
Fitzgerald, J.K., Freedman, D.J. & Assad, J.A. (2011) Generalized associative representations in parietal cortex. Nat. Neurosci., 14, 1075–1079.
Forman, S.D., Cohen, J.D., Fitzgerald, M., Eddy, W.F., Mintun, M.A. & Noll, D.C. (1995) Improved assessment of significant activation in functional magnetic resonance imaging (fMRI): use of a cluster-size threshold. Magn. Reson. Med., 33, 636–647.
Freedman, D.J. & Assad, J.A. (2011) A proposed common neural mechanism for categorization and perceptual decisions. Nat. Neurosci., 14, 143–146.
Galati, G., Committeri, G., Pitzalis, S., Pelle, G., Patria, F., Fattori, P. & Galletti, C. (2011) Intentional signals during saccadic and reaching delays in the human posterior parietal cortex. Eur. J. Neurosci., 34, 1871–1885.
Galletti, C., Fattori, P., Kutz, D.F. & Gamberini, M. (1999) Brain location and visual topography of cortical area V6A in the macaque monkey. Eur. J. Neurosci., 11, 575–582.
Gold, J.I. & Shadlen, M.N. (2007) The neural basis of decision making. Annu. Rev. Neurosci., 30, 535–574.
Gottlieb, J. & Snyder, L.H. (2010) Spatial and non-spatial functions of the parietal cortex. Curr. Opin. Neurobiol., 20, 731–740.
Gould, I.C., Nobre, A.C., Wyart, V. & Rushworth, M.F. (2012) Effects of decision variables and intraparietal stimulation on sensorimotor oscillatory activity in the human brain. J. Neurosci., 32, 13805–13818.
Hebart, M.N., Donner, T.H. & Haynes, J.D. (2012) Human visual and parietal cortex encode visual choices independent of motor plans. NeuroImage, 63, 1393–1403.
Heekeren, H.R., Marrett, S., Ruff, D.A., Bandettini, P.A. & Ungerleider, L.G. (2006) Involvement of human left dorsolateral prefrontal cortex in perceptual decision making is independent of response modality. Proc. Natl. Acad. Sci. USA, 103, 10023–10028.
Ho, T.C., Brown, S. & Serences, J.T. (2009) Domain general mechanisms of perceptual decision making in human cortex. J. Neurosci., 29, 8675–8687.
Jack, A.I., Patel, G.H., Astafiev, S.V., Snyder, A.Z., Akbudak, E., Shulman, G.L. & Corbetta, M. (2007) Changing human visual field organization from early visual to extra-occipital cortex. PLoS ONE, 2, e452.
Kable, J.W. & Glimcher, P.W. (2007) The neural correlates of subjective value during intertemporal choice. Nat. Neurosci., 10, 1625–1633.
Kanwisher, N., McDermott, J. & Chun, M.M. (1997) The fusiform face area: a module in human extrastriate cortex specialized for face perception. J. Neurosci., 17, 4302–4311.
Kayser, A.S., Buchsbaum, B.R., Erickson, D.T. & D’Esposito, M. (2009) The functional anatomy of a perceptual decision in the human brain. J. Neurophysiol., 103, 1179–1194.
Levy, I., Hasson, U., Avidan, G., Hendler, T. & Malach, R. (2001) Center-periphery organization of human object areas. Nat. Neurosci., 4, 533–539.
Liu, T. & Pleskac, T.J. (2011) Neural correlates of evidence accumulation in a perceptual decision task. J. Neurophysiol., 106, 2383–2398.
Marr, D. (1982) Vision. W. H. Freeman, San Francisco.


O’Connell, R.G., Dockree, P.M. & Kelly, S.P. (2012) A supramodal accumulation-to-bound signal that determines perceptual decisions in humans. Nat. Neurosci., 15, 1729–1735.
Ollinger, J.M., Shulman, G.L. & Corbetta, M. (2001) Separating processes within a trial in event-related functional MRI. NeuroImage, 13, 210–217.
Pezzulo, G. & Castelfranchi, C. (2009) Thinking as the control of imagination: a conceptual framework for goal-directed systems. Psychol. Res., 73, 559–577.
Pezzulo, G. & Ognibene, D. (2012) Proactive action preparation: seeing action preparation as a continuous and proactive process. Motor Control, 16, 386–424.
Philiastides, M.G. & Sajda, P. (2007) EEG-informed fMRI reveals spatiotemporal characteristics of perceptual decision making. J. Neurosci., 27, 13082–13091.
Platt, M.L. & Glimcher, P.W. (1999) Neural correlates of decision variables in parietal cortex. Nature, 400, 233–238.
Ploran, E.J., Nelson, S.M., Velanova, K., Donaldson, D.I., Petersen, S.E. & Wheeler, M.E. (2007) Evidence accumulation and the moment of recognition: dissociating perceptual recognition processes using fMRI. J. Neurosci., 27, 11912–11924.
Posner, M.I. (1978) Chronometric Explorations of Mind. L. Erlbaum, Hillsdale, NJ.
Raichle, M.E., MacLeod, A.M., Snyder, A.Z., Powers, W.J., Gusnard, D.A. & Shulman, G.L. (2001) A default mode of brain function. Proc. Natl. Acad. Sci. USA, 98, 676–682.
Ratcliff, R. & McKoon, G. (2008) The diffusion decision model: theory and data for two-choice decision tasks. Neural Comput., 20, 873–922.
Redish, A.D. (2013) The Mind Within the Brain: How We Make Decisions and How Those Decisions Go Wrong. Oxford University Press, New York, NY.
Sereno, M.I., Pitzalis, S. & Martinez, A. (2001) Mapping of contralateral space in retinotopic coordinates by a parietal cortical area in humans. Science, 294, 1350–1354.
Shadlen, M.N. & Newsome, W.T. (1996) Motion perception: seeing and deciding. Proc. Natl. Acad. Sci. USA, 93, 628–633.
Shadlen, M.N., Kiani, R., Hanks, T.D. & Churchland, A.K. (2008) Neurobiology of decision making. An intentional framework. In Engel, C. & Singer, W. (Eds), Better Than Conscious? Decision Making, the Human Mind, and Implications for Institutions. MIT Press, Cambridge, pp. 71–101.

Shulman, G.L., Fiez, J.L., Corbetta, M., Buckner, R.L., Miezin, F.M., Raichle, M.E. & Petersen, S.E. (1997) Common blood flow changes across visual tasks: II. Decreases in cerebral cortex. J. Cogn. Neurosci., 9, 648–663.
Silver, M.A. & Kastner, S. (2009) Topographic maps in human frontal and parietal cortex. Trends Cogn. Sci., 13, 488–495.
Smith, P.L. & Ratcliff, R. (2004) Psychology and neurobiology of simple decisions. Trends Neurosci., 27, 161–168.
Snyder, L.H., Batista, A.P. & Andersen, R.A. (1997) Coding of intention in the posterior parietal cortex. Nature, 386, 167–170.
Stoet, G. & Snyder, L.H. (2004) Single neurons in posterior parietal cortex of monkeys encode cognitive set. Neuron, 42, 1003–1012.
Sugrue, L.P., Corrado, G.S. & Newsome, W.T. (2004) Matching behavior and the representation of value in the parietal cortex. Science, 304, 1782–1787.
Sulpizio, V., Committeri, G., Lambrey, S., Berthoz, A. & Galati, G. (2013) Selective role of lingual/parahippocampal gyrus and retrosplenial complex in spatial memory across viewpoint changes relative to the environmental reference frame. Behav. Brain Res., 242, 62–75.
Talairach, J. & Tournoux, P. (1988) Co-Planar Stereotaxic Atlas of the Human Brain. Thieme Medical Publisher Inc, New York.
Tosoni, A., Galati, G., Romani, G.L. & Corbetta, M. (2008) Sensory-motor mechanisms in human parietal cortex underlie arbitrary visual decisions. Nat. Neurosci., 11, 1446–1453.
Tosoni, A., Shulman, G.L., Pope, A.L.W., McAvoy, M.P. & Corbetta, M. (2012) Distinct representations for shifts of spatial attention and changes of reward contingencies in the human brain. Cortex, 49, 1733–1749.
Toth, L.J. & Assad, J.A. (2002) Dynamic coding of behaviourally relevant stimuli in parietal cortex. Nature, 415, 165–168.
Tsetsos, K., Usher, M. & McClelland, J.L. (2011) Testing multi-alternative decision models with non-stationary evidence. Front. Neurosci., 5, 63.
Usher, M. & McClelland, J.L. (2001) The time course of perceptual choice: the leaky, competing accumulator model. Psychol. Rev., 108, 550–592.
Van Essen, D.C. (2005) A population-average, landmark- and surface-based (PALS) atlas of human cerebral cortex. NeuroImage, 28, 635–662.
Wyart, V., de Gardelle, V., Scholl, J. & Summerfield, C. (2012) Rhythmic fluctuations in evidence accumulation during decision making in the human brain. Neuron, 76, 847–858.

