NECO_a_00675-Brosch

neco.cls

September 12, 2014

11:19

Communicated by Terrence Sejnowski

ARTICLE


Computing with a Canonical Neural Circuits Model with Pool Normalization and Modulating Feedback

Tobias Brosch
[email protected]


Heiko Neumann

[email protected] Institute of Neural Information Processing, University of Ulm, BW 89069, Germany


Evidence suggests that the brain uses an operational set of canonical computations like normalization, input filtering, and response gain enhancement via reentrant feedback. Here, we propose a three-stage columnar architecture of cascaded model neurons to describe a core circuit combining signal pathways of feedforward and feedback processing and the inhibitory pooling of neurons to normalize the activity. We present an analytical investigation of such a circuit by first reducing its detail through the lumping of initial feedforward response filtering and reentrant modulating signal amplification. The resulting excitatory-inhibitory pair of neurons is analyzed in a 2D phase-space. The inhibitory pool activation is treated as a separate mechanism exhibiting different effects. We analyze subtractive as well as divisive (shunting) interaction to implement center-surround mechanisms that include normalization effects in the characteristics of real neurons. Different variants of a core model architecture are derived and analyzed—in particular, individual excitatory neurons (without pool inhibition), the interaction with an inhibitory subtractive or divisive (i.e., shunting) pool, and the dynamics of recurrent self-excitation combined with divisive inhibition. The stability and existence properties of these model instances are characterized, which serve as guidelines to adjust these properties through proper model parameterization. The significance of the derived results is demonstrated by theoretical predictions of response behaviors in the case of multiple interacting hypercolumns in a single and in multiple feature dimensions. In numerical simulations, we confirm these predictions and provide some explanations for different neural computational properties. 
Among those, we consider orientation contrast-dependent response behavior, different forms of attentional modulation, contrast element grouping, and the dynamic adaptation of the silent surround in extraclassical receptive field configurations, using only slight variations of the same core reference model.

Neural Computation 26, 1–55 (2014) doi:10.1162/NECO_a_00675

© 2014 Massachusetts Institute of Technology


1 Introduction


Evidence suggests that the primate cortex employs a set of canonical operations that serve as structured components for a modular computational design. Such modularity occurs, for example, for input filtering, gain control, and activity normalization in the visual pathway (Carandini & Heeger, 1994, 2012; Kouh & Poggio, 2008). For example, filtering signals in the driving feedforward (FF) stream of processing defines a key mechanism for feature extraction at different levels in the cortical hierarchy (Hubel & Wiesel, 1972; Carandini, Heeger, & Movshon, 1999). Normalization effects have been observed in different sensory modalities, such as vision (Carandini & Heeger, 1994) and audition (Rabinowitz, Willmore, Schnupp, & King, 2011); normalization also underlies mechanisms of decision making (Louie, Grattan, & Glimcher, 2011) and behavioral control (Kiefer, 2007). It has been suggested that sensory-driven FF streams are combined with feedback (FB) streams of information to enable robust cognition and conscious perception (Grossberg, 1980; Hupé et al., 1998; Crick & Koch, 1998; Lamme & Roelfsema, 2000; Sillito, Cudeiro, & Jones, 2006; Larkum, 2013). Hierarchical levels of neural representation are built in different pathways that mutually interact to organize processing streams and send FB to generate reentrant signals at the earlier stages of cortical representations (Tononi, Sporns, & Edelman, 1992; Angelucci, 2002; Bullier, 2001; Rao & Ballard, 1999). Such generic principles provide a basic computational repertoire that is available at different stages of processing to be executed in a variety of different contexts. In this article, we propose that neural competitive interaction for activity normalization combines with facilitatory mechanisms for feature enhancement by modulatory FB. The combination of gain enhancement and (pool) normalization formalizes the idea of a biased competition mechanism originally proposed by Reynolds and Desimone (2003) and Carandini and Heeger (2012).
Such a circuit mechanism operates at stages as early as visual cortex or higher up in parietal cortex to selectively enhance feature locations and decision making (Louie, Grattan, & Glimcher, 2011; Kiefer, 2007). Based on previous work reported in Neumann and Sepp (1999), Bayerl and Neumann (2004), Bouecke, Tlapale, Kornprobst, and Neumann (2011), and Raudies, Mingolla, and Neumann (2011), we propose a model circuit that integrates these mechanisms in a generic columnar three-stage model. It captures the dynamics of mean firing rates of small ensembles of cells exhibiting normalization properties and modulating FB enhancement without modeling detailed response properties like the spike timing of individual cells (Deco, Jirsa, & McIntosh, 2011). This makes the model a suitable tool to explain multiple lines of neurophysiological evidence without necessitating the computational complexity of, say, detailed spiking neuron models (Hodgkin & Huxley, 1952; Larkum, Senn, & Lüscher, 2004). Here, we investigate the dynamics of such a circuit in a semianalytic fashion to provide



a detailed analysis of its computational properties. In contrast to more general analyses of E-I networks (Zhaoping, 2011), our analysis enables a fine-grained tuning of each model parameter to specific applications. The analysis is then used to explain multiple properties and phenomena of cortical computations demonstrated in various physiological experiments. For example, similarly oriented visual items forming smooth contours tend to be mutually supportive (Kapadia, Ito, Gilbert, & Westheimer, 1995; Field, Hayes, & Hess, 1993). The interaction, however, may also turn suppressive (Polat & Sagi, 1993). We provide explanations for response facilitation of weak and suppression of strong stimuli, contrast-dependent classical receptive field size, different forms of attentional modulation, top-down modulation of coherent textures, and orientation contrast-dependent response behaviors with only slight variations of the parameters. The rest of the article is organized as follows. We start in section 2 with a description of a generic three-stage architecture. In section 3, we present an in-depth analysis of model variants with and without pool inhibition in the stable regime and derive guidelines to interpret the impact of each model parameter. In section 4, we investigate parameterizations that result in inhibition-stabilized networks, instabilities, and oscillations. In order to validate the theoretical findings, section 5 describes selected simulation results that suggest explanations of experimental findings in the proposed unified framework. Finally, we discuss the relation to other models before we conclude with a summary of the major advances.

2 Simplified Neuron Model and Layered Architecture


2.1 Neuron Model. Our model architecture consists of areas that are represented by two-dimensional sheets of visuotopically arranged model columns implemented as excitatory-inhibitory (E-I) pairs of single-voltage-compartment (point-like) entities (Koch, 1999; Herz, Gollisch, Machens, & Jaeger, 2006). Model columns interact by lateral connectivity, feeding input signals as well as ascending and descending connections to and from other model areas (Churchland, Koch, & Sejnowski, 1990). The dynamics of the membrane potential v for such model cells are captured by

C_m dv/dt ≡ C_m v̇ = −α v + (E^ex − v) g^ex + (E^in − v) g^in,   (2.1)

assuming a zero-level resting state with constant leak conductance g^leak = α. The constant C_m denotes the membrane capacitance, and E^ex,in are the saturation levels for the respective feeding inputs. The input conductances g^ex and g^in are defined by integrating the synaptic inputs signaled by excitatory and inhibitory presynaptic neurons. Such a generic model has been used in several previous models (Grossberg, 1988; Carandini & Heeger,


1994; Somers et al., 1998; Carandini et al., 1999; McLaughlin, Shapley, Shelley, & Wielaard, 2000; Salinas, 2003). Simple algebraic manipulation of this equation demonstrates that the potential change is linear for low potential levels v, namely, C_m v̇ ≈ E^ex g^ex + E^in g^in, with feeding input signals scaled by constant gains of the respective saturation levels (Carandini et al., 1999). The generic model outlined above is extended with regard to two mechanisms to incorporate additional functional properties at the network level. The extensions take into account:


• Nonlinear response properties of cells that lead to saturation and pool normalization effects (Heeger, 1992; Carandini & Heeger, 1994)
• Nonlinear response amplifications of bottom-up feeding signals by top-down FB from higher-level processing stages that modulate activities at earlier stages (Larkum et al., 2004; Larkum, 2013; Cardin, Friston, & Zeki, 2011)
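Before turning to these extensions, the basic membrane equation 2.1 can be sketched numerically. The forward-Euler sketch below uses illustrative parameter values (not taken from the article) and checks the analytic equilibrium obtained by setting v̇ = 0:

```python
# Forward-Euler sketch of the membrane equation (2.1):
#   Cm * dv/dt = -alpha*v + (E_ex - v)*g_ex + (E_in - v)*g_in
# Parameter values are illustrative assumptions, not from the article.

def step_membrane(v, g_ex, g_in, alpha=1.0, E_ex=1.0, E_in=-0.5, Cm=1.0, dt=0.001):
    dv = (-alpha * v + (E_ex - v) * g_ex + (E_in - v) * g_in) / Cm
    return v + dt * dv

def equilibrate(g_ex, g_in, steps=20000):
    v = 0.0  # zero-level resting state
    for _ in range(steps):
        v = step_membrane(v, g_ex, g_in)
    return v

# Setting dv/dt = 0 gives v_bar = (E_ex*g_ex + E_in*g_in) / (alpha + g_ex + g_in).
v_num = equilibrate(g_ex=2.0, g_in=1.0)
v_ana = (1.0 * 2.0 + (-0.5) * 1.0) / (1.0 + 2.0 + 1.0)
assert abs(v_num - v_ana) < 1e-6
```

The check also illustrates the shunting property: the total conductance α + g^ex + g^in appears in the denominator of the equilibrium potential.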


Such computational mechanisms are incorporated to account for findings reported in different studies (Nassi, Gómez-Laberge, Kreiman, & Born, 2014). In the following paragraphs we briefly outline these major findings, which lead to the proposed canonical circuit model.

2.2 Pool Normalization and Convergence of FF and FB Signals


2.2.1 Pool Response Normalization. Simple cells in area V1 tend to saturate their activation as a consequence of increasing stimulus contrast and surround activities (Heeger, 1992; Carandini & Heeger, 1994; Carandini et al., 1999; Carandini, Heeger, & Movshon, 1997; see Smith, 2006, for a summary). Other findings suggest that normalization occurs not only in early visual sensory processing but also in auditory processing (Rabinowitz et al., 2011), attention mechanisms (Reynolds & Heeger, 2009; Treue, 2001), and decision making (Louie et al., 2011; Louie & Glimcher, 2012). These observations lead to the recent proposal (Carandini & Heeger, 2012) that such normalization mechanisms define a canonical principle in cortical computation captured by the steady-state mechanism

r_i = [β · g^ex(Σ_k s_k · Λ^+_ik) + act^bias] / [α + g^in(Σ_k y_k · Λ^−_ik)],   (2.2)

where s and y denote afferent inputs and neuronal pool activations, respectively. The functions g^ex/in(·) denote nonlinear gains for their input signals and Λ^{+/−} the input weights for excitatory and inhibitory net signal integration over a spatial or temporal neighborhood, respectively. The constants β and α define the scaling of activity (which can be biased by act^bias) and the rate of activity decay to a resting state. Because of the stabilizing and


self-adapting properties of normalization (Sperling, 1970; Grossberg, 1973, 1980, 1988; Lyu & Simoncelli, 2009), we include a stage of divisive normalization in our model that (at steady state) captures equation 2.2.
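As a minimal illustration of the normalization principle in equation 2.2, the sketch below uses linear gains and a uniform pool (both simplifying assumptions): relative responses across units are preserved while absolute responses saturate with overall input strength.

```python
# Steady-state divisive normalization in the spirit of equation 2.2,
# with linear gains g_ex/in and uniform pool weights (simplifying assumptions).

def normalize(s, alpha=1.0, beta=1.0, act_bias=0.0):
    pool = sum(s)  # inhibitory pool integrates all inputs
    return [(beta * si + act_bias) / (alpha + pool) for si in s]

weak = normalize([0.1, 0.2, 0.1])
strong = normalize([1.0, 2.0, 1.0])   # same pattern at 10x contrast
assert abs(strong[1] / strong[0] - 2.0) < 1e-12   # pattern (ratios) preserved
assert strong[1] < 10 * weak[1]                   # absolute response saturates
```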


2.2.2 Convergence of FF and FB Signals by Modulation. One principle of brain organization is that cortical areas are mostly bidirectionally connected: FF signals drive, while FB signals tend to enhance the gain of the driving FF input (Sherman & Guillery, 1998; Sandell & Schiller, 1982; Bullier, McCourt, & Henry, 1988; Salin & Bullier, 1995; Hupé et al., 1998; Bullier, 2001).1 Here, we employ a multiplicative enhancement of the driving FF signal act^driveFF by FB signals act^FB biased by a tonic level (Eckhorn, Reitboeck, Arndt, & Dicke, 1990; Gove, Grossberg, & Mingolla, 1995) (in contrast to threshold models like Abbott & Chance, 2005),

act^driveFF → act^driveFF · (1 + λ · act^modFB),   (2.3)


with λ a scalar constant amplifying the FB impact. Such a mechanism links signals that are integrated at the apical dendrites of an excitatory (pyramidal) cell by modulating the postsynaptic potentials integrated via the basal dendrites (Eckhorn et al., 1990; Eckhorn, 1999; Larkum et al., 2004; Larkum, 2013; Brosch & Neumann, 2014).
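The asymmetry between driving and modulating signals in equation 2.3 can be made concrete in a few lines (the value of λ below is an illustrative assumption):

```python
# Multiplicative feedback modulation (equation 2.3): FB can only amplify
# an existing feedforward drive; it cannot create activity on its own.

def modulate(drive_ff, fb, lam=2.0):
    return drive_ff * (1.0 + lam * fb)

assert modulate(0.0, fb=1.0) == 0.0   # FB alone yields no response
assert modulate(0.5, fb=0.0) == 0.5   # FF alone passes unchanged
assert modulate(0.5, fb=1.0) == 1.5   # combined FF and FB: enhancement
```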


2.3 Core Model of Interacting Columns in Space-Feature Domain. The previously outlined mechanisms are combined in a core computational unit that is arranged in two-dimensional (2D) lattices of interconnected columns. In addition to excitatory couplings, fast inhibitory-to-inhibitory (I-I) connections in a small spatial neighborhood (Shapley & Xing, 2013) are accounted for by the shunting dynamics of our model (see section 4.2 for an analysis of the consequences of explicit I-I coupling). The proposed model circuit is composed of three stages, or compartments, that incorporate the mechanisms outlined:

1. An initial stage of input filtering
2. A stage of activity modulation by top-down or lateral reentrant signals (see equation 2.3)
3. A stage of center-surround normalization by activities over a neighborhood defined in the space-feature domain to finally generate the net output response

These compartments can be roughly mapped onto cortical area subdivisions (as suggested in Self, Kooijmans, Supèr, Lamme, & Roelfsema, 2012). A sketch of the three processing stages is depicted in Figure 1.

1 We acknowledge that evidence exists demonstrating that FF and FB streams sometimes combine additively (Markov & Kennedy, 2013).


Figure 1: Three-stage hierarchy of the circuit model architecture. A model column consists of three processing stages. Filter responses of the first stage (1) generate the driving input (the two ellipses denote two subfields of an exemplary filter). The second stage (2) denotes the modulation of the driving bottom-up filter response by reentrant signals. The table at the bottom highlights the properties of the response modulation by FB signals. In the case of missing bottom-up FF input (upper row), the response will be zero. If a modulatory FB signal b is present when driving input a is also available, the output is enhanced by a fraction that represents the correlation a · b between the two signal streams. Finally, output stage (3) realizes a normalization by a pool of neurons.

An excitatory unit in our model is characterized by its location on a 2D grid and a feature selectivity, addressed by i and θ , respectively, while inhibitory units are fully characterized by their location, index i, because they integrate over all feature dimensions. The dynamics are given by (see Figure 2)


τ ṙ_iθ = −α r_iθ + (β − r_iθ) · g^ex_iθ − (δ + r_iθ) · I^in_iθ − (η + γ · r_iθ) · g_p(p_i),

τ_p ṗ_i = −α_p p_i + β_p · [ Σ_{k,θ} g_r(r_kθ) · Λ^−_{i,k,θ} + (I_c)_i ] − η_p · g_p(p_i),   (2.4)

with excitatory net input defined by

g^ex_iθ = [ I^ex_iθ + γ_lat · Σ_k g_r(r_k) · Λ^+_{i,k,θ} ] · (1 + λ · net^FB_iθ),   (2.5)

where g_{p,r}(·) denote gain functions that map the potentials r_iθ and p_i to output firing rates. The constants τ, τ_p > 0 define the membrane constants for an excitatory unit and the inhibitory pool, respectively. Λ^{+/−} denote the input weights that define the lateral interaction and the extent of the pool (e.g., with gaussian weights). An additional input I_c is considered for the pool inhibition to later account for unspecific raising or lowering of


Figure 2: Illustration of the reduced model architecture. The three-stage model of Figure 1 is condensed to an E-I circuit by lumping the filtering and modulation stage into one with the pool represented by the I-neuron. Feeding input lines are defined for excitatory (arrowheads) and inhibitory (roundheads) driving signals (transfer functions omitted). Spatial arrangements of model circuits (representing cortical columns) are depicted for completeness by laterally connected nodes. Each E-cell may incorporate self-excitation (resembling E-E connectivity) and each I-cell self-inhibition (resembling I-I connectivity), denoted by small-dashed connections. Each cell’s excitatory activation can be enhanced by modulatory FB signals (flatheads).


the inhibitory gain. The parameters α, α_p, β, δ, β_p, η, η_p, γ, λ ≥ 0 are constants that denote the activity decay, the excitatory and inhibitory saturation levels, the relative strength of the pool inhibition, and the FB strength. The summed inputs g^ex and g^in = I^in (equation 2.1) are modulated by reentrant FB signals net^FB_iθ ≥ 0, which are biased by a tonic activation level that generates the signal enhancement in case of combined FF and FB signals. The pool inhibition accounts for the nonlinear gain elicited by activations of cells in a space-feature neighborhood of a target unit. We mostly focus on the case η, η_p = 0 in which inhibition is defined by a shunt with zero reversal potential level (Koch, 1999). Note that even for η_p = 0, there is still moderate self-inhibition via the −α_p p_i term, which is sufficiently strong to ensure stability in the case studies considered here. In accordance with the proposals of Somers et al. (1998), Li (1999), and Schwabe, Obermayer, Angelucci, and Bressloff (2006), we employ different gain functions in which excitatory cell firing rates g_r(r) in equation 2.4 respond at lower membrane potentials than inhibitory cell firing rates g_p(p), which have a steeper gain once the firing threshold is exceeded. Cells that are enhanced via FB have a greater impact on the competitive interaction in the pool normalization stage. In other words, activities that were amplified by modulating FB have increased impact in lowering the responses of other cells in the pool, leading to enhancement and reduction of neural firing rates (i.e., biased competition, a mechanism proposed to explain target feature selection in attention; Reynolds & Desimone, 2003).


3 Analysis of a Reduced Columnar Model


In this section, we analyze the computational properties of the system defined in equation 2.4 operating in a stable regime and show that it governs a wide variety of phenomena. More precisely, we focus on an operational regime with only moderate self-excitation and self-inhibition, γ_SE ≤ α/β (see section A.4.2 for details on this condition) and η_p = 0 (only self-inhibition via the −α_p p_i term in equation 2.4). We demonstrate that this operational regime already explains a number of neurophysiological phenomena. The extension to operational regimes incorporating strong self-interaction is discussed in section 4. We now start with a general stability analysis to define the investigated regime.


3.1 General Stability Analysis. In this section, we conduct a brief stability analysis of the full dynamic range of the system in equation 2.4. In order to investigate the dynamic properties, we consider a version of equation 2.4 that is reduced to a single location with I^in = 0 and g^ex = I + γ_SE · g_r(r) (the FB input 1 + λ · net^FB effectively scales I and γ_SE and is thus omitted here for better readability):


τ ṙ = f(r, p) = −α r + (β − r) · (I + γ_SE · g_r(r)) − (η + γ · r) · g_p(p),
τ_p ṗ = g(r, p) = −α_p p + β_p · g_r(r) + I_c − η_p · g_p(p).   (3.1)
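The reduced system 3.1 can be integrated directly. The sketch below assumes α_p = τ = τ_p = 1, η = η_p = 0, and piecewise-linear gains of the kind introduced in section 3.4; all numeric values are illustrative. At equilibrium the pool tracks the E-cell, p̄ = β_p r̄ + I_c.

```python
# Euler integration of the reduced E-I system (3.1) for a single location,
# with alpha_p = tau = tau_p = 1 and eta = eta_p = 0 (illustrative values).

def g_r(r):
    return min(max(r, 0.0), 1.0)          # identity on [0, 1]

def g_p(p, p0=0.2, pm=0.3):
    return min(max((p - p0) / (pm - p0), 0.0), 1.0)

def simulate(I, alpha=1.0, beta=1.0, beta_p=2.0, gamma=0.2,
             gamma_se=0.0, Ic=0.0, dt=0.01, T=200.0):
    r = p = 0.0
    for _ in range(int(T / dt)):
        drdt = -alpha * r + (beta - r) * (I + gamma_se * g_r(r)) - gamma * r * g_p(p)
        dpdt = -p + beta_p * g_r(r) + Ic
        r += dt * drdt
        p += dt * dpdt
    return r, p

r_bar, p_bar = simulate(I=0.4)
assert abs(p_bar - 2.0 * r_bar) < 1e-6    # pool tracks E-cell: p_bar = beta_p*r_bar
assert simulate(I=0.8)[0] > r_bar         # response increases with input
```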

For local stability analysis, we study the eigenvalues of

A = ( ∂f/∂r (r̄, p̄)   ∂f/∂p (r̄, p̄)
      ∂g/∂r (r̄, p̄)   ∂g/∂p (r̄, p̄) ),   (3.2)

with r̄, p̄ the potentials at equilibrium, that is, f(r̄, p̄) = 0 = g(r̄, p̄). The eigenvalues are given by λ_{1/2} = 1/2 · (T ± √(T² − 4D)) with trace T and determinant D. Stability critically depends on the sign of ∂f/∂r = −α − I − γ_SE g_r(r) + γ_SE (β − r) g_r′(r) − γ g_p(p) at equilibrium. For ∂f/∂r (r̄, p̄) < 0, both eigenvalues have a negative real part, that is, the system is stable. If ∂f/∂r (r̄, p̄) > 0 such that the trace T is positive, the system becomes unstable or oscillates. The critical parameters that determine whether ∂f/∂r (r̄, p̄) is less or greater than zero are the strength of self-excitation γ_SE and the steepness of the transfer function of the E-neuron g_r′(r̄) at equilibrium. Thus, for strong self-excitation and a steep transfer function of the E-neuron, the system tends to become unstable, while strong self-inhibition stabilizes the system (Latham, Richmond, Nelson, & Nirenberg, 2000; Latham, Richmond, Nirenberg, & Nelson, 2000). A brief analysis and discussion of the impact of I-I interaction and increased self-interaction are given in section 4. The influence of
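The trace/determinant criterion can be checked numerically with a finite-difference Jacobian. The sketch below evaluates the reduced system at the high-domain equilibrium derived in section 3.4 (divisive pool, γ_SE = 0; parameter values are illustrative):

```python
# Trace/determinant stability check for the reduced system (3.1) via a
# finite-difference Jacobian (illustrative parameters, gamma_se = 0).

def f(r, p, I=0.4, alpha=1.0, beta=1.0, gamma=0.2):
    gp = min(max((p - 0.2) / 0.1, 0.0), 1.0)    # piecewise-linear g_p
    return -alpha * r + (beta - r) * I - gamma * r * gp

def g(r, p, beta_p=2.0, Ic=0.0):
    return -p + beta_p * max(r, 0.0) + Ic

def jacobian(r, p, h=1e-6):
    return [[(f(r + h, p) - f(r - h, p)) / (2*h), (f(r, p + h) - f(r, p - h)) / (2*h)],
            [(g(r + h, p) - g(r - h, p)) / (2*h), (g(r, p + h) - g(r, p - h)) / (2*h)]]

# High-domain equilibrium (saturated pool): r = beta*I/(gamma + alpha + I) = 0.25
r_eq, p_eq = 0.25, 0.5
assert abs(f(r_eq, p_eq)) < 1e-12 and abs(g(r_eq, p_eq)) < 1e-12
J = jacobian(r_eq, p_eq)
T = J[0][0] + J[1][1]
D = J[0][0] * J[1][1] - J[0][1] * J[1][0]
assert T < 0 and D > 0    # both eigenvalues have negative real part: stable
```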


Table 1: Summary of the Dependencies of Self-Excitation (SE) Strength and Stability Properties of the System in Equation 3.1.

SE-Strength γ_SE | Condition on ∂f/∂r | Stability
Moderate | ∂f/∂r (r̄, p̄) < 0 | stable
Large | ∂f/∂r (r̄, p̄) > 0 | unstable or oscillatory

η_p > 0 (the self-inhibition strength) can be absorbed in the constants β_p and I_c. We assume that any inhibitory input is absorbed into the excitatory driving input (I^in = 0) and set I^ex ≡ I. Taken together, the model denoted in equation 2.4 reduces to (indices omitted)

dr/dt = ṙ = −α r + (β − r) · [I + γ_SE · g_r(r)] · (1 + λ · net^FB) − (η + γ · r) · g_p(p),

dp/dt = ṗ = −p + β_p · g_r(r) + I_c.   (3.3)

We investigate different settings with and without pool normalization. To ease the analysis, we consider only cases in which either η or γ is zero,


Table 2: Columnar Model Configurations.

DP | SP | SE | Equations
−  | −  | −  | ṙ = −αr + (β − r) · I · (1 + λ · net^FB)   (I)
+  | −  | ±  | ṙ = −αr + (β − r) · (I + γ_SE g_r(r)) · (1 + λ · net^FB) − γ · r · g_p(p),
   |    |    | ṗ = −p + β_p · g_r(r) + I_c   (II)
−  | +  | −  | ṙ = −αr + (β − r) · I · (1 + λ · net^FB) − η · g_p(p),
   |    |    | ṗ = −p + β_p · g_r(r) + I_c   (IIa)

Note: We analyze different conditions in the configuration of the columnar model in the stable regime. We differentiate between two settings: with and without pool normalization. In the case of pool normalization, we focus our investigation on divisive pool inhibition (DP), equation II, with (γ_SE > 0) or without (γ_SE = 0) self-excitation (SE). In addition, we analyze subtractive pool (SP) inhibition without self-excitation, equation IIa. All parameters are positive.


that is, pool inhibition that operates exclusively divisively or subtractively. All instances are summarized in Table 2. In the case that the additional drive to the pool can be neglected, I_c = 0, the maximum equilibrium pool potential is given by β_p β. We assume that g_p(·) saturates without additional input I_c such that β_p β − p_m > 0 and that I = 0 implies r̄ = 0, that is, the neurons are not endogenously active. We focus our investigation on this case, in which we will prove that such systems are always stable, have a unique local solution for all initial values (r_0, p_0) ∈ [0, ∞)² (see section D.1), and are monotonically increasing functions of their input I. The dynamics with increased self-interaction that result, for example, in inhibition-stabilized neural networks are studied in section 4. To explain, for example, contrast-dependent integration size, we investigate settings with and without pool normalization, which can be seen as special cases regarding the activity of surrounding neurons.

3.3 Columnar Circuit Model Without Pool Inhibition. In the case where pool activity depends on the activity of the surrounding neurons, the response properties for an inactive surround are captured by a single excitatory neuron without pool inhibition (see equation I in Table 2). We arrive at the equilibrium point ṙ = 0 = −α r̄ + (β − r̄) · I · (1 + λ · net^FB) for r and consider the steady-state solution that reads

r̄ = β · I · (1 + λ · net^FB) / (α + I · (1 + λ · net^FB)) = β · I* / (α + I*)   (3.4)

(Grossberg, 1988), with I* = I · (1 + λ · net^FB). This is a nonlinear equation since net^FB itself depends on r. The output r̄ is bounded, that is, lim_{I|net^FB → ∞} r̄ = β, with all other elements held constant.
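A small numeric sketch of equation 3.4 (treating net^FB as an externally given constant and using illustrative parameter values) exhibits the three properties used below: zero input gives zero output, the response is bounded by β, and FB enhances a given drive.

```python
# Steady-state response (3.4) without pool inhibition:
#   r_bar = beta * I_star / (alpha + I_star),  I_star = I * (1 + lam * net_fb)
# net_fb is treated as an external constant here (illustrative simplification).

def r_bar(I, net_fb=0.0, alpha=0.5, beta=1.0, lam=2.0):
    I_star = I * (1.0 + lam * net_fb)
    return beta * I_star / (alpha + I_star)

assert r_bar(0.0) == 0.0                     # no input, no response
assert r_bar(1e6) > 0.999                    # bounded: r_bar -> beta
assert r_bar(0.5, net_fb=1.0) > r_bar(0.5)   # FB enhances the response
```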



The response at equilibrium r̄ has the characteristics of an S-potential measured for retinal cones (Naka & Rushton, 1966), in which the parameter α controls the balance between response differentiation and degree of saturation. A value of approximately half the maximum input has been suggested, α ≈ I_max/2. The parameter λ increases the relative importance of the FB signal in relation to the driving input signal (see section A.1 in the appendix). The response gain can be generalized by incorporating a nonlinear transfer function f for the input I, with f(I) = I^q, q ∈ ℕ (Kouh & Poggio, 2008).

3.4 Columnar E-I Circuit Model with Pool Inhibition. We now consider the recurrent interaction of the excitatory target neuron with the inhibitory pool neuron (see equations II and IIa in Table 2).

3.4.1 Approximation of Gain Functions. In order to simplify the analysis, we employ piecewise linear gain functions2

g_r(r) = { 0, for r < r_0;  (r − r_0)/(r_m − r_0) · r_m, for r_0 ≤ r ≤ r_m;  r_m, for r_m < r },

g_p(p) = { 0, for p < p_0;  (p − p_0)/(p_m − p_0), for p_0 ≤ p ≤ p_m;  1, for p_m < p }.   (3.5)

Un c

We capture the main property that inhibitory cells respond after exceeding a higher threshold level (compared to excitatory cells) and then have a steeper gain (McCormick, Connors, Lighthall, & Prince, 1985; Somers et al., 1998). For simplicity, we assume that r_0 = 0 and r_m = β, that is, g_r(r) is the identity function on [r_0, r_m]. The ordering of the gain functions' inflection points fulfills the relation 0 = r_0 < p_0/β_p < p_m/β_p < r_m = β (the ratios have been derived based on the observation that the linear relation p̄ = β_p · r̄ + I_c holds for the fixed point (ṙ, ṗ) = (0, 0)). The inhibitory gain function g_p(p) specifies three domains depending on the input strength I: I ∈ [0, θ_low] (low domain; membrane potential below firing threshold, that is, an inactive pool), I ∈ (θ_low, θ_high) (middle domain; increasing membrane potential increases pool firing rate, that is, an active pool), and I ∈ [θ_high, ∞) (high domain; increasing membrane potential does not increase pool firing rate anymore, that is, a saturated pool). To improve readability, we set net^FB = 0. Note, however, that the results also apply

2 Note that these results also transfer to smooth sigmoidal functions by convolution of the piecewise linear transfer function with a Friedrichs mollifier (Friedrichs, 1944), for example.
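The piecewise-linear gains of equation 3.5 can be written down directly; the threshold values below (r_0 = 0, r_m = β = 1, p_0 = 0.2, p_m = 0.3) are the illustrative settings used in Figure 3.

```python
# Piecewise-linear gain functions of equation 3.5 with r0 = 0, rm = beta = 1
# and p0 = 0.2, pm = 0.3 (illustrative values, cf. Figure 3).

def g_r(r, r0=0.0, rm=1.0):
    if r < r0:
        return 0.0
    if r > rm:
        return rm
    return (r - r0) / (rm - r0) * rm

def g_p(p, p0=0.2, pm=0.3):
    if p < p0:
        return 0.0
    if p > pm:
        return 1.0
    return (p - p0) / (pm - p0)

assert g_r(0.5) == 0.5                  # identity on [r0, rm]
assert g_p(0.1) == 0.0                  # I-cell silent below threshold
assert abs(g_p(0.25) - 0.5) < 1e-12     # steeper gain above threshold
assert g_p(0.9) == 1.0                  # saturated pool
```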



for net^FB > 0 when [I, γ_SE] ∈ (ℝ⁺)² is replaced by [I*, γ*_SE] = [I, γ_SE] · (1 + λ · net^FB). The two domain boundary values are found by solving p̄ = p_{0/m} for I:

θ_low = max{ 0, (p_0 − I_c)(β_p α − γ_SE(β_p β − p_0 + I_c)) / [β_p(β_p β − p_0 + I_c)] },   (3.6)

θ_high = max{ 0, (p_m − I_c)(β_p(α + γ) − γ_SE(β_p β − p_m + I_c)) / [β_p(β_p β − p_m + I_c)] },  divisive pool,

θ_high = max{ 0, (η β_p + (p_m − I_c) α) / (β_p β − p_m + I_c) },  subtractive pool,   (3.7)

under the condition that (in case of self-excitation, γ_SE > 0)

0 ≤ β_p (θ_low + α − β γ_SE) + 2 γ_SE (p_0 − I_c),   (3.8)

0 ≤ β_p (θ_high + α − β γ_SE + γ) + 2 γ_SE (p_m − I_c);   (3.9)

otherwise, set θ_{low/high} to 0 (note that these conditions always hold for γ_SE = 0).

3.4.2 Analysis of Steady-State Potentials r̄. To analyze the properties of each parameter and the response properties, we determine the potential of the excitatory cell at equilibrium for each domain. For the low domain (I ∈ [0, θ_low]), we obtain

r̄ = βI / (α + I),  subtractive or divisive pool with γ_SE = 0,

r̄ = (1/2) · (√Δ_low + β γ_SE − α − I) / γ_SE,  divisive pool, γ_SE > 0.   (3.10)

For the middle domain (I ∈ (θlow , θhigh )), we obtain

r̄ = [βI(p_m − p_0) + η(p_0 − I_c)] / [(α + I)(p_m − p_0) + η β_p],  subtractive pool,

r̄ = (1/2) · [−(α + I − β γ_SE)(p_m − p_0) + γ(p_0 − I_c) + √Δ_mid] / [γ β_p + γ_SE(p_m − p_0)],  divisive pool.   (3.11)


For the high domain (I ∈ [θ_high, ∞)), we obtain

r̄ = (βI − η) / (α + I),  subtractive pool,

r̄ = βI / (γ + α + I),  divisive pool, γ_SE = 0,

r̄ = (1/2) · (√Δ_high + β γ_SE − α − I − γ) / γ_SE,  divisive pool, γ_SE > 0,   (3.12)

with

Δ_low = (α + I − β γ_SE)² + 4 β γ_SE I > 0,   (3.13)

Δ_mid = [−(α + I − β γ_SE)(p_m − p_0) + γ(p_0 − I_c)]² + 4(p_m − p_0) β [(p_m − p_0) γ_SE + β_p γ] I > 0,   (3.14)

Δ_high = (α + I + γ − β γ_SE)² + 4 β γ_SE I > 0.   (3.15)


For better readability, we briefly summarize the main results here, while details like the input sensitivity can be found in the appendix, sections A.2, A.3, and A.4. To summarize, the equilibrium is a steady and monotonically increasing function of the excitatory input I. Furthermore, it is shown that the equilibria are stable3 and lim_{I→∞} r̄ = β. An analysis of the radical expressions of the eigenvalues reveals the parameter regimes where no oscillations or damped oscillations occur. Due to the complexity of the equations for the radical expression (especially for divisive pool inhibition), it should be analyzed when the majority of the parameters have already been adjusted to a given task.

3.4.3 Hysteresis. When γ_SE = 0 (no self-excitation), the equilibrium potential r̄ for I = 0 is always 0. In the case of divisive pool inhibition and γ_SE > 0, for zero input I = 0 the equilibrium r̄ is guaranteed to be zero only when β γ_SE − α ≤ 0. When the system starts from a nonzero potential r_0 > 0 and β γ_SE − α − γ > 0, the potential at equilibrium r̄ for I = 0 is strictly positive; once excited, the neurons stay tonically active. Note that net^FB can dynamically change the effective self-excitation strength: FB can dynamically make a neuron become tonically active (see sections A.4.6 and A.4.2 for more details).

3 The resulting eigenvalues in the mid-domain in the case of divisive pool inhibition and self-excitation cannot be easily treated analytically. For that reason, we conducted a Monte Carlo analysis, testing about 10 billion parameter sets for variables smaller than 100. Each one turned out to be stable. For the low and high domains, respectively, analytical proofs are provided (see section A.4.7).
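The closed-form equilibria can be cross-checked against direct integration of the reduced system. The sketch below verifies the high-domain solution of equation 3.12 (divisive pool, γ_SE = 0) with illustrative parameters chosen so that the pool saturates:

```python
# Cross-check: closed-form high-domain equilibrium (3.12, divisive pool,
# gamma_se = 0) against Euler integration of the reduced dynamics (3.3).
# Illustrative parameters; I is large enough that the pool saturates.

def g_p(p, p0=0.2, pm=0.3):
    return min(max((p - p0) / (pm - p0), 0.0), 1.0)

def simulate(I, alpha=1.0, beta=1.0, beta_p=2.0, gamma=0.2, dt=0.01, T=300.0):
    r = p = 0.0
    for _ in range(int(T / dt)):
        r += dt * (-alpha * r + (beta - r) * I - gamma * r * g_p(p))
        p += dt * (-p + beta_p * max(r, 0.0))
    return r

I = 0.4
r_closed = 1.0 * I / (0.2 + 1.0 + I)    # beta*I/(gamma + alpha + I) = 0.25
assert abs(simulate(I) - r_closed) < 1e-4
```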



Figure 3: Impact of model parameters on nullclines in phase space (subtractive/shunting pool inhibition depicted left/right; the left plot corresponds to a case without self-excitation). The nullcline of the pool potential ṗ = 0 is pushed to the top for increasing values of I_c (at the intersection with the p-axis). Increasing values of β_p increase the slope of the nullcline ṗ = 0. The nullcline of the neuron potential ṙ = 0 is pushed to the right for increasing values of β, I, and γ_SE, while increasing values of α and γ (or η) decrease the neuron potential. Note that though the overall type of the parameters' impact is identical (e.g., decreases the potential), the exact kind of interaction (e.g., divisive) can vary (see the text for details). Parameters have been set to α = 1, β = 1, β_p = 2, γ_SE = 0.5, γ = η = 0.2, p_0 = 0.2, p_m = 0.3, I_E = 0.4.


3.4.4 Parameters and Input Signals for Columnar Circuit Model with Pool Inhibition. The overall impact of the various parameters on the nullclines in phase space and on the neuron potential at equilibrium is summarized in Figures 3 and 4. In the following, we briefly describe the impact each parameter or input signal has on the output response (we assume an (r-p)-phase plot). Differences between subtractive and divisive pool inhibition and the impact of self-excitation are highlighted. The saturation type of the equilibrium potentials constitutes one of the main differences between subtractive and divisive pool inhibition. For subtractive inhibition, the saturation type is described by

(bI + c)/(I + a) → b for I → ∞,   (3.16)

which also accounts for the case of the inactive and saturation regions in the divisive case (low/high domain). In case of divisive inhibition with γSE > 0


Figure 4: Impact of model parameters on the equilibrium potential r̄. The thresholds determining the three solution domains (inactive pool ≙ low domain; active pool ≙ mid-domain; saturated pool ≙ high domain) are indicators of the response behavior. The impact of parameters on θlow/high is depicted by arrows (see text in section 3 for details).

(and in the mid-domain in case of divisive inhibition with γSE = 0), it is described by

√((I + a)² + bI) − (I + a) → b/2 for I → ∞,   (3.17)

√((I + a)² + bI) − (I + a) → 0 for a → ∞,   (3.18)


with constants a, b, and c set according to the corresponding polynomial coefficients of I in equations 3.10 to 3.12.⁴ We group the description of the different parameters according to their role in the E-I system. In the following, we describe the influence of different settings for the constants α, β, and γ in the excitatory cell of the E-I system:

• Passive decay rate of activation r, α: Increasing α decreases the equilibrium potential. In the case of subtractive pool inhibition (see equation IIa in Table 2) and in the low and high domains for divisive pool interaction without self-excitation (see equation II in Table 2 with γSE = 0), α acts divisively (for small inputs I). In the case of variant equation II in Table 2 with γSE > 0, and in the mid-domain with γSE = 0, the decrease type follows equation 3.18. Increasing values of α push the

⁴ The derivation of these limits is described in section A.4.8.


nullcline r˙ = 0 to the left and increase the domain boundary values θlow/high that separate the three response domains.

• Saturation level for potential r, β: For subtractive pool inhibition, β scales the neuron potential at equilibrium multiplicatively. For divisive pool inhibition, the impact of β for small inputs I is described by multiplication with the linear term of the root (see equation 3.17, β = 2b). Increasing values of β push the nullcline r˙ = 0 to the right and decrease θlow/high.

• Parameter γ (η in the case of subtractive pool inhibition) scales the inhibitory pool activation (see equation II in Table 2). For strong inputs I (i.e., when the equilibrium potentials are in the high domain), the nature of pool inhibition is reflected in the following way: in the case of subtractive pool inhibition, η discounts the potential linearly; for divisive pool inhibition without self-excitation, γ divides the pool potential; and for divisive pool inhibition with self-excitation, it increases parameter a in equation 3.18. Increasing values of γ (or η in the case of subtractive pool inhibition) push the nullcline r˙ = 0 to the left and increase θhigh (see equation 3.7).

orr ect

In all, the three parameters lead to shifts in the r-nullcline and influence the domain boundary values θlow/high . We now describe the influence of I and Ic :

Un c

• The excitatory input I pushes the nullcline r˙ = 0 to the right and increases the equilibrium potential r̄. Depending on the type of pool inhibition, the saturation type is described by equation 3.16 (for subtractive or divisive pool inhibition) in the case without self-excitation (γSE = 0). For a system with divisive inhibition that incorporates self-excitation (γSE > 0), the saturation type is described by equation 3.17.

• Input Ic to the inhibitory pool neuron pushes the nullcline p˙ = 0 and its intersection with the p-axis up. Such input leads to a decrease of the domain boundary values θlow/high of the inhibitory gain function.

Finally, we summarize the influence that the parameter βp, the inflexion points p0 and pm, and the strength of self-excitation γSE have on the output potential:

• Parameter βp increases the steepness of the nullcline p˙ = 0 (thus lowering the activity of the E-neuron) and decreases θlow/high.

• The inflexion points p0/m of the sigmoidal gp increase the domain boundary values θlow/high of the linearized gain function. Thus, they determine the region of increasing inhibition by increased pool activity.

• Parameter γSE controls the strength of self-excitation. An increase in γSE increases the equilibrium potential r̄. For low excitatory input I, the FB sensitivity is approximately equal to the self-excitation


strength sensitivity multiplied by λ · γSE (see section A.4.6). Similar to β, increasing values of γSE push the nullcline r˙ = 0 to the right and decrease the domain boundaries θlow/high .


This concludes our description of the model parameters; their well-characterized effects ease adapting the model to replicate different neurophysiological findings.
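The saturation limits in equations 3.16 to 3.18 can be checked numerically. A minimal sketch (with arbitrary illustrative constants a, b, and c, not values derived from equations 3.10 to 3.12):

```python
import math

# Saturation limits of the equilibrium potential (equations 3.16 to 3.18).
# The constants below are illustrative placeholders only.
a, b, c = 2.0, 3.0, 1.0

def subtractive_type(I):
    """Equation 3.16: (b*I + c) / (I + a) -> b for I -> infinity."""
    return (b * I + c) / (I + a)

def divisive_type(I, a=a):
    """Equations 3.17/3.18: sqrt((I + a)**2 + b*I) - (I + a)."""
    return math.sqrt((I + a) ** 2 + b * I) - (I + a)

lim_sub = subtractive_type(1e6)    # approaches b = 3
lim_div = divisive_type(1e6)       # approaches b / 2 = 1.5
lim_a = divisive_type(1.0, a=1e4)  # approaches 0 as a grows
```

The three evaluations confirm the limits b, b/2, and 0 claimed in equations 3.16 to 3.18.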


4 Analysis of Extended Network Dynamics

This article mainly focuses on the stable operational regime of the canonical neural circuit model. A brief look at the regimes of oscillatory states in which real networks often operate is given in the following section.


4.1 Inhibition-Stabilized Network. Physiological investigations have revealed that a significant number of neurons in cortical area V1 exhibit a behavior that has been dubbed inhibition stabilization at the network level (ISN) (Tsodyks & Gilbert, 2004; Ozeki, Finn, Schaffer, Miller, & Ferster, 2009). The main observed effect is that increased external excitatory input to inhibitory cells paradoxically causes a decrease in their firing rates in the new steady state. Previous studies of network models have revealed how to control ISNs (Jadi & Sejnowski, 2014a, 2014b). These recent investigations used network mechanisms that do not incorporate conductances and also do not consider any modulating FB. Our proposed canonical columnar circuit model considers conductance-based membrane properties and normalizing gain control. Such shunting-type mechanisms can also operate in an ISN regime. The phase space of the E-I pair of units is shown in Figure 5. The sigmoidal transfer function of the excitatory component in combination with strong self-excitation leads to a domain of positive incline of the nullcline of the E-neuron. Increased inhibitory surround input shifts the nullcline of the I-neuron upward in the phase space, with the intersection of the two nullclines shifting to the left along the r-axis and dropping along the p-axis. This directly reflects an overall decrease of the output activation of the E-neuron. We analytically investigate the shape of the nullcline of the E-neuron, which can be derived from the system defined in equation II in Table 2. We get

p = f(r) = gp⁻¹( [−αr + (β − r) · (I + γSE gr(r)) · (1 + λ · netFB)] / (γ · r) ).   (4.1)

To obtain a region of positive incline, the term γSE · gr(r) must be strong enough to dominate the negative impact of the summation terms in r. Rearranging terms and ignoring constant summation terms yields the proper


condition,


Figure 5: Strong self-excitation and a sigmoidal excitatory transfer function form an inhibition-stabilized network (ISN) (see Ozeki et al., 2009, Figure 6); that is, when additional surround input is provided, the intersection of the E- and I-nullclines is shifted down and to the left. Here, additional surround input rsurr is added to the pool, p˙ = −p + βp · (gr(r) + rsurr), for rsurr = 0 (center) and rsurr = 0.2 (center+surround). The transfer functions were g(r) = 2/(1 + exp(−8r³)) − 1 and gp(p) = p with parameters set to α = 1, β = 1, Iex = 0.3, γ = 1, γSE = 2.5, and βp = 1.

γSE · g′r(r) > (α + γSE · gr(r)) / (β − r) + I/R,   (4.2)


with R ≡ 1 + λ · netFB. Since R > 1 is strictly positive and β ≥ r, this condition is achieved for low activation levels r and for FB signals strong enough to almost eliminate the fraction on the right-hand side of the equation above. Jadi and Sejnowski (2014a, 2014b) demonstrate that a broad ISN regime (as a function of the excitatory response) exists for the additive network interactions they investigated. In the shunting-type network studied here, the ISN regime is restricted to a somewhat narrower band of excitatory output activation. This range has been analytically determined and is shown to be a function of (lateral) activity integration, modulatory FB gain, and the input strength. To summarize, FB can not only lead to tonically active neurons (see section 3.4) but can also dynamically transform the system into an ISN. The demonstration that FB can dynamically shift the network state into an ISN regime offers multiple interpretations of the influence that activated FB templates have on the representations generated by FF input. This is in line with recent experimental observations that FB signals realize predictive coding principles (Bastos et al., 2012; Rauss, Schwartz, & Pourtois, 2011); a reduction in response amplitude indicates that the predictions generated by FB signals match the driving FF signals (Markov & Kennedy, 2013).
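The paradoxical ISN effect sketched in Figure 5 can be reproduced with a direct numerical integration of the E-I pair. The sketch below uses the parameters and transfer functions given in the Figure 5 caption; setting λ = 0 (no FB) is our simplifying assumption:

```python
import math

# E-I pair with strong self-excitation (parameters from the Figure 5 caption).
ALPHA, BETA, I_EX, GAMMA, G_SE, BETA_P = 1.0, 1.0, 0.3, 1.0, 2.5, 1.0

def g(r):
    """Sigmoidal excitatory transfer function from Figure 5."""
    return 2.0 / (1.0 + math.exp(-8.0 * r ** 3)) - 1.0

def steady_state(r_surr, dt=0.01, steps=10_000):
    """Euler-integrate the pair to its fixed point for a given surround drive."""
    r, p = 0.4, 0.3
    for _ in range(steps):
        dr = -ALPHA * r + (BETA - r) * (I_EX + G_SE * g(r)) - GAMMA * r * p
        dp = -p + BETA_P * (g(r) + r_surr)
        r, p = r + dt * dr, p + dt * dp
    return r, p

r0, p0 = steady_state(r_surr=0.0)   # center stimulus only
r1, p1 = steady_state(r_surr=0.2)   # center plus surround
```

Despite the extra excitatory drive to the pool, its steady-state activity p̄ drops together with r̄, which is the ISN signature.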


4.2 Pool Normalization and I-I Connectivity. It has been argued that I-I connectivity is required to guarantee stability in a high-coupling regime (Latham, Richmond, Nelson et al., 2000). The mean firing rate of the model circuits in turn reflects the balance of excitation and inhibition experienced by the local network (Jadi & Sejnowski, 2014b). The combination of sub- and superlinearities of the transfer functions was shown to be a critical component determining the operating regime of the dynamic circuit model (Jadi & Sejnowski, 2014a, 2014b; Ahmadian, Rubin, & Miller, 2013). The properties of the transfer functions were identified by Grossberg (1973) to determine the global network response and decision characteristics. As shown in section 3.1, such stability properties can also be identified for the shunting network dynamics used in the canonical neural circuit model here. Consider again the eigenvalues λ1/2 = 1/2 · (T ± √(T² − 4D)) of the system defined in equation 3.1, with T = tr(A) and D = det(A) and A from equation 3.2. In general, strong self-excitation leads to instabilities, while pool normalization and I-I connectivity/self-inhibition of the I-neuron counteract this behavior and thus stabilize the system. More precisely, the following observations can be made:


• The amount of self-excitation that potentially leads to instability of the system scales with the sensitivity of the excitatory transfer function, that is, g′r(r̄) (because ∂f(r, p)/∂r = −α − I − γSE gr(r) + γSE (β − r) g′r(r) − γ gp(p)).

• Subtractive and divisive pool inhibition have been investigated. The steady-state analysis shows that divisive inhibition (controlled by γ) is more effective than subtractive inhibition (controlled by η). While η affects only the determinant D, γ occurs in both the determinant D and the trace T.

• There are two forms of self-inhibition: the leakage term −αp · p, which can also be interpreted as inhibition via a linear transfer function, and direct self-inhibition via −ηp · gp(p). The effect is a linear decrease of the trace T (which primarily affects the sign of the real part of the eigenvalues, that is, the stability of the system). While the first leads to a constant subtractive term, the impact of the second term is scaled by the sensitivity of the transfer function gp at equilibrium, that is, g′p(p̄) (because ∂g(r, p)/∂p = −αp − ηp g′p(p) < 0). Thus, leakage results in a state-independent stabilization, while direct self-inhibition is state dependent and is strongest for the most sensitive part of pool interaction, that is, for equilibrium states (r̄, p̄) in which g′p(p̄) is large.
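These trace/determinant arguments can be probed numerically. The sketch below evaluates the eigenvalues of an illustrative 2×2 Jacobian of the E-I pair; the partial derivatives follow the expressions in the bullets above, but all numeric operating-point values are hypothetical and chosen only to place the system near the stability edge:

```python
import numpy as np

# Illustrative linearization of the E-I pair around an assumed equilibrium (r, p).
alpha, I, g_se, gamma, beta, beta_p, alpha_p = 1.0, 0.3, 2.5, 1.0, 1.0, 1.0, 1.0
r, p = 0.5, 0.46      # hypothetical equilibrium
gr, dgr = 0.46, 3.2   # g_r(r) and its slope g_r'(r) (steep sigmoid)
gp, dgp = 0.46, 1.0   # g_p(p) and its slope g_p'(p)

def jacobian(eta_p):
    a11 = -alpha - I - g_se * gr + g_se * (beta - r) * dgr - gamma * gp
    a12 = -gamma * r * dgp        # shunting pool inhibition
    a21 = beta_p * dgr
    a22 = -alpha_p - eta_p * dgp  # leakage plus direct self-inhibition
    return np.array([[a11, a12], [a21, a22]])

max_re_without = np.linalg.eigvals(jacobian(eta_p=0.0)).real.max()
max_re_with = np.linalg.eigvals(jacobian(eta_p=0.2)).real.max()
```

Only ηp changes between the two evaluations; the sign flip of the largest real part (unstable spiral to stable spiral) illustrates the stabilizing, trace-lowering effect of direct self-inhibition described above.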

To summarize, the interaction of I-I connectivity, pool normalization, self-excitation, and the shape of the transfer function creates a variety of operational regimes. These range from stable, monotonically increasing functions of the input I (which we primarily analyze in this article) through


inhibition-stabilized neural networks to oscillating and unstable systems or mixtures of the different types. Note that netFB effectively scales I and γSE and can thus dynamically shift the system into different operational regimes.


5 The Canonical Neural Circuit Model Accounts for Experimental Findings



We now demonstrate that the proposed canonical neural circuit model accounts for several findings in visual neuroscience. We generally employ the model variant with pool inhibition of the shunting type (see equation II in Table 2). The parameters are set to α = 1, β = 1, βp = 1, γ = 1, and I0 = 0 in all experiments. In the first experiment, we use the linearly approximated sigmoidal functions used in the previous section with p0 = 0.2 and pm = 0.3. In the remaining subsections, we utilize a smooth sigmoidal function gp(x) to obtain more gradual response behaviors that can be shifted by an offset parameter o and a scaling parameter s (for o = 0, s = 1, it fits the piecewise linear transfer function with p0 = 0.2 and pm = 0.3):

σ(x) = 2/(1 + exp(−4x²)) − 1 for x > 0, and σ(x) = 0 for x ≤ 0,   (5.1)

gp(x) = σ((x − o) · s).   (5.2)


All presented results are obtained by varying the parameters o, s, γSE, and γ and the size of the integration kernels of self-excitation and pool inhibition. We demonstrate the facilitation of weak and suppression of strong neural input, contrast-dependent integration size (see section 5.1), different forms of attentional modulation (see section 5.2), noise reduction and contour enhancement (see section 5.3), and orientation contrast-dependent response behavior (see section 5.4) using only minor parameter changes (see Table 3).

5.1 Facilitation/Suppression of Weak/Strong Stimuli and Contrast-Dependent Classical Receptive Field Size. Noise is a constant source of ambiguity. Strong input to a neuron is an indicator of the presence of a stimulus; weak input, on the other hand, tends to be noisy. Linking neighboring neurons helps to effectively reduce the impact of noise, increasing the signal-to-noise ratio. However, this integrative process reduces the effective resolution, such that it should be disabled for strong inputs, as reported in experiments (Hunter & Born, 2011). We adopt the approach of Ahmadian et al. (2013) and start our investigation by focusing on the facilitation (suppression) of weak (strong) inputs at a single location. After that, we generalize to the 2D case. When we assume that self-excitation corresponds


Table 3: Simulation Parameters.

Experiment | o | s | γSE ≙ γlat | γ
Facilitation/suppression of weak/strong neural input (section 5.1) | (0)ᵃ | (1)ᵃ | 0.9 | 1
Contrast-dependent integration size (section 5.1) | 0.175 | 7 | 0.2 | 1
Different forms of attentional modulation (section 5.2) | 0.6 | 1.5 | 0.05 | 20
Noise reduction and contour enhancement (section 5.3) | 0 | 1 | 0 | 1
Orientation contrast-dependent responses (variant 1) (section 5.4) | 0 | 10 | 0.2 | 2
Orientation contrast-dependent responses (variant 2) (section 5.4) | 0 | 10 | 0.2 | 10

Note: Overview of the four parameters that are adjusted to account for the experimental findings (in addition to the size of the integration kernels; here γSE is denoted as γlat). ᵃ Piecewise linear transfer functions were applied in the first experiment (corresponding parameters for the smooth transfer function in brackets).


to input provided not only from the neuron itself but also from its neighbors, and the same holds true for the excitation of the inhibitory pool neuron, we can differentiate between two extreme cases:

1. Silent surround. Only the neuron under investigation is excited; the neighboring neurons have no activity. Thus, the corresponding self-excitation and pool inhibition (mainly driven by the surround) are almost zero. This corresponds to equation I in Table 2.

2. Active surround. All neurons are homogeneously excited. Input from the neighboring neurons is thus identical to that of the target neuron. This instance corresponds to equation II in Table 2.
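The two extreme cases can be compared in a small numerical sketch. We use single-unit forms suggested by the text (equation I without pool or self-excitation; equation II with homogeneous self-excitation and shunting pool inhibition), the piecewise-linear pool gain with p0 = 0.2, pm = 0.3, a linear gr, and the parameters α = β = βp = 1, γSE = 0.9, γ = 1. The exact single-unit equations are our reading of Table 2, which is not reproduced here:

```python
def g_p(p):
    """Piecewise-linear approximation of the sigmoidal pool gain (p0=0.2, pm=0.3)."""
    if p <= 0.2:
        return 0.0
    if p >= 0.3:
        return 1.0
    return (p - 0.2) / 0.1

def steady_silent(I, dt=0.005, steps=20_000):
    """Equation I (our reading): silent surround, no pool, no self-excitation."""
    r = 0.0
    for _ in range(steps):
        r += dt * (-r + (1.0 - r) * I)
    return r

def steady_active(I, g_se=0.9, gamma=1.0, dt=0.005, steps=20_000):
    """Equation II (our reading): homogeneous surround drives SE and the pool."""
    r = p = 0.0
    for _ in range(steps):
        dr = -r + (1.0 - r) * (I + g_se * r) - gamma * r * g_p(p)
        dp = -p + r
        r, p = r + dt * dr, p + dt * dp
    return r

weak_gain = steady_active(0.05) / steady_silent(0.05)   # > 1: facilitation
strong_gain = steady_active(1.0) / steady_silent(1.0)   # < 1: suppression
```

For weak input the homogeneous variant responds more strongly (the pool stays below p0, so only self-excitation acts), while for strong input the active pool suppresses the response, in line with Figure 6.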


Comparing the response curves for both conditions (γSE = 0.9/γSE = 1) shows that the variant with pool inhibition and self-excitation (homogeneous pool interaction) facilitates weak stimuli, while strong stimuli lead to suppression (see Figure 6). We now demonstrate how the facilitation of weak and suppression of strong stimuli transfers to a 2D arrangement of neurons. Cell responses in cat and monkey visual cortex area 18/V1/LGN/MT were reported to depend not only on the classical receptive field (CRF) but also on a surround region that reaches well beyond it (Levick, Cleland, & Dubin, 1972; Felisberti & Derrington, 1999; Przybyszewski, Gaska, Foote, & Pollen, 2000; Jones, Grieve, Wang, & Sillito, 2001; Bonin, Mante, & Carandini, 2005; Harrison, Stephan, Rees, & Friston, 2007; Hunter & Born, 2011). This surround region has been coined the extraclassical receptive field (ECRF). While the CRF can elicit a cell response, the cell is unresponsive to stimulation of the ECRF alone; however, the ECRF can enhance or suppress the response of the CRF when both are stimulated (Carandini, Heeger, & Movshon, 1997). The size of both


Figure 6: Impact of pool inhibition and self-excitation. We compare the case of self-excitation and pool inhibition (solid gray curve) with the case where pool interaction and self-excitation are extinguished (dashed black curve). Note that the variant with self-excitation and inhibitory pool increases responses for weak stimuli (fac. region, gray) while responses to strong stimuli are suppressed (supp. region, black).


central and extended receptive fields was shown not to be fixed but to vary with stimulus contrast (Chen, Song, & Li, 2012). Contrast-dependent modulation has been shown previously in ISN models (Jadi & Sejnowski, 2014a, 2014b; Ahmadian et al., 2013). Here, we demonstrate that context-sensitive modulation also occurs in nonoscillating regimes. We employ variant II in Table 2 extended to lateral interaction in space (position index i):

r˙i = −α ri + (β − ri) · ( Ii + γlat Σk gr(rk) · Λ+ik ) · (1 + λ · netFB) − γ · ri · gp(pi),

p˙i = −pi + βp · ( Σk gr(rk) · Λ−ik + Ic ).   (5.3)

Integration of activities in the receptive field is realized with Gaussian kernels with weights Λ±, with standard deviations σ+lat and σ−p, respectively. The coefficients in each kernel sum to one. We focus on the two extreme cases outlined above:

1. No surround activity. Only the neuron under investigation is excited. Pool activity can be neglected, and the neuron behaves like equation I in Table 2 because the excitation of the pools reduces to Σk gr(rk) · Λ±ik ≈ 0 when the center weight is much smaller than 1, which is the case for large σ−p, σ+lat.


2. Homogeneous activation of all neurons (ri = rj, ∀i, j). In this scenario, Σk gr(rk) · Λ±ik reduces to gr(ri) (because ri = rj, ∀i, j, and the kernel weights sum to one). Consequently, the neuron under investigation behaves exactly as in equation II in Table 2.
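The homogeneous-reduction argument can be verified directly: a ring network driven by spatially uniform input must settle to the same equilibrium as the single unit of equation II. A sketch with a linear gr, the piecewise-linear gp from above, and illustrative parameters (the kernel construction and parameter values are our assumptions):

```python
import numpy as np

# Homogeneous ring network vs. the single-unit circuit of equation II.
# Linear g_r, piecewise-linear g_p (p0 = 0.2, pm = 0.3); parameters illustrative.
N, G_LAT, GAMMA, I_IN = 32, 0.9, 1.0, 0.4

def g_p(p):
    """Piecewise-linear pool gain."""
    return np.clip((p - 0.2) / 0.1, 0.0, 1.0)

def ring_weights(sigma):
    """Circulant Gaussian weight matrix on a ring; each row sums to one."""
    d = np.minimum(np.arange(N), N - np.arange(N)).astype(float)
    w = np.exp(-0.5 * (d / sigma) ** 2)
    w /= w.sum()
    return np.stack([np.roll(w, i) for i in range(N)])

def run_ring(dt=0.005, steps=20_000):
    W_exc, W_inh = ring_weights(1.0), ring_weights(5.0)
    r, p = np.zeros(N), np.zeros(N)
    for _ in range(steps):
        dr = -r + (1.0 - r) * (I_IN + G_LAT * (W_exc @ r)) - GAMMA * r * g_p(p)
        dp = -p + W_inh @ r
        r, p = r + dt * dr, p + dt * dp
    return r

def run_single(dt=0.005, steps=20_000):
    r = p = 0.0
    for _ in range(steps):
        dr = -r + (1.0 - r) * (I_IN + G_LAT * r) - GAMMA * r * float(g_p(p))
        dp = -p + r
        r, p = r + dt * dr, p + dt * dp
    return r

r_net, r_one = run_ring(), run_single()
# Under homogeneous drive the ring collapses exactly onto the single-unit circuit.
```

Because the normalized kernels preserve homogeneity, every network neuron converges to the single-unit fixed point, confirming case 2.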


Applying the results shown in Figure 6, we predict that an increased stimulus size will increase the neuronal response for weak inputs (e.g., low-contrast stimuli), while an increased stimulus size will decrease the neuronal response for strong inputs (e.g., high-contrast stimuli); this should be reflected in the measured CRF size. To confirm this theoretical observation, we conducted a 2D numerical experiment. Simulated neurons are excited with a circular input pattern (resembling preprocessed input, for example, from moving dot patterns or static Gabor gratings) of varying excitation strength and radius (see Figure 7, top). As in Jones et al. (2001), we chose the point with maximum response and subsequent activation decrease as an indicator of the CRF size (similarly, the point where the response magnitude saturates marks the size of the ECRF). Figure 7 shows that the numerical experiment confirms our theoretical prediction of contrast-dependent CRF size (σ+lat = 1, σ−p = 5; remaining parameters are given in Table 3). In addition, it demonstrates (1) that an increase in contrast for a fixed stimulus size increases the firing rate (vertical cut through Figure 7, bottom) and (2) that an increase of the stimulus beyond the CRF size (marked by a black circle in Figure 7) decreases the firing rate. This is similar to the model predictions of the oscillating ISN model reported in Jadi and Sejnowski (2014a), in which condition 1 leads to an increased frequency of oscillations and a condition similar to our condition 2 leads to slower oscillations as the contrast of the surround stimulus increases (note that Jadi & Sejnowski, 2014a, showed that the modulation of oscillations is correlated with average firing rate).

5.2 Different Forms of Attentional Modulation. The same mechanisms can explain how attentional modulation can result in contrast or response gain amplification.

First, consider a case in which the stimulus is small (radius rs) and the attentional field (radius ra) is large. In this case, the effect of pool inhibition is almost negligible, and the attentional modulation signal netFB results in an effective increase of the excitatory input. This appears as a leftward shift of the input response function (see Figure 8, left; ra = 24, rs = 1). Second, consider the case in which the stimulus is large and the attentional field is small. Here, the relative decrease of inhibitory pool activity leads to an increase of the response gain of the model circuit (see Figure 8, right; ra = 3, rs = 24). Parameters are set to σ+lat = 0.5, σ−p = 7; remaining parameters are given in Table 3. To summarize, the proposed circuit model explains contrast-dependent CRF size as well as different forms of attentional modulation. Unlike the


Figure 7: Contrast-dependent CRF size. (Top) A 2D grid of model neurons is excited by an artificial input pattern. The activity at equilibrium r̄ is shown in the second row, and the activity of the neuron marked by a gray circle is plotted below (argmax ≙ CRF size). (Bottom) Plots of multiple response curves for different input strengths (i.e., contrasts; the black circle marks the CRF size) demonstrate that the CRF size decreases for rising input levels.


algorithmic model description in Reynolds and Heeger (2009), our model also captures the dynamics of this mechanism.

5.3 Noise Reduction and Contour Enhancement by Modulatory FB. Coherent structures can be grouped by the visual system to form extended boundaries (Lowe & Binford, 1983; Grossberg & Mingolla, 1985). Initial grouping was shown to be performed by hierarchical FF processing (Bosking, Zhang, Schofield, & Fitzpatrick, 1997; Kapadia, Westheimer, & Gilbert, 2000; Geisler, Perry, Super, & Gallogly, 2001), which can be refined by modulating signals from higher stages via FB (Li, 2001; Neumann & Mingolla, 2001; Weidenbacher & Neumann, 2009). Enhancement of those activities results in a higher impact in subsequent competition stages, consistent with the biased competition hypothesis (Reynolds & Desimone, 2003; Reynolds & Heeger, 2009). We use the neuron model instance in equation II in Table 2 with γSE = 0. A 1D example is shown in Figure 9 (left). In order to


Figure 8: Different forms of attentional modulation (caused by different extents of the receptive field, the attention field and the stimulus). (Left) Small extent of the stimulus relative to the receptive and attentional field results in negligible inhibitory pool activity shifting the response function to the left, that is, input gain. (Right) Large stimulus size and small attentional field size relative to the receptive field size result in an increase of lateral excitation and diminished inhibitory pool activity. This results in an increase of response gain (Reynolds & Heeger, 2009, Figure 2). Field sizes are not drawn to scale; see section 5.2 for details.
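The input-gain case (Figure 8, left) has a simple closed-form reading: when pool inhibition is negligible, modulatory FB multiplies the effective input, so the response curve with FB equals the no-FB curve evaluated at R · I, with R = 1 + λ · netFB, which is exactly a leftward shift on a logarithmic contrast axis. A sketch under that simplifying assumption (single unit, no pool, α = β = 1):

```python
import numpy as np

ALPHA = BETA = 1.0

def response(I, R, dt=0.01, steps=10_000):
    """Steady state of r' = -alpha*r + (beta - r)*I*R (pool inhibition neglected)."""
    r = 0.0
    for _ in range(steps):
        r += dt * (-ALPHA * r + (BETA - r) * I * R)
    return r

inputs = np.logspace(-2, 1, 7)
with_fb = np.array([response(I, R=2.0) for I in inputs])        # FB gain R = 2
shifted = np.array([response(2.0 * I, R=1.0) for I in inputs])  # no FB, input doubled
```

The two response curves coincide: with the pool inactive, FB acts as a pure input (contrast) gain, matching the interpretation of Figure 8 (left).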

Figure 9: Contour enhancement. (Left) 1D example. Decreased pool inhibition at neuron n0 (compared to n−1 , n−2 ) results in increased modulation (gray part of the bar), while increased pool inhibition at neuron n1 (compared to n2 ) results in decreased activity. (Center) The network is probed with like-oriented bars corrupted by gaussian noise (σ = 0.15) and artificial FB (top left/right panel). The results with FB show a significantly higher activity compared to those with disabled FB (bottom right versus left image panel). (Right) Significant FB enhancement visualized by the r- and z-measures.

selectively study the gain enhancement, we provide an artificial FB signal resembling an undistorted version of the continuous input contour. Strong (weak) pool normalization for neurons located on (off) the line results in


contrast enhancement. FB also increases the response of neurons with receptive fields located on the line and results in increased local suppression of background noise. To visualize this behavior, we probe our network architecture with an input of like-oriented items forming a dashed contour (see Figure 9, center; parameters given in Table 3). To measure the relative enhancement of the contour representation, we use the r- and z-measures defined by r = S̄cont/S̄all and z = (S̄cont − S̄all)/σall (Li, 1999; Hansen & Neumann, 2008). Here, S̄cont denotes the mean activity of all neurons located under the bar, S̄all is the mean activity of all neurons, and σall denotes the standard deviation of all activities. The r- and z-measures for varying FB strengths verify that increased FB strength and its interplay with pool normalization result in an increased facilitation of neurons representing the contour (see Figure 9, right).
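The two measures are straightforward to compute; a minimal sketch with a synthetic activity map (the activity values are invented for illustration):

```python
import numpy as np

def r_z_measures(S, contour_mask):
    """r = mean(contour) / mean(all); z = (mean(contour) - mean(all)) / std(all)."""
    s_cont = S[contour_mask].mean()
    s_all = S.mean()
    return s_cont / s_all, (s_cont - s_all) / S.std()

# Synthetic 1D activity: an enhanced contour embedded in weak background noise.
rng = np.random.default_rng(0)
activity = 0.1 * rng.random(100)
mask = np.zeros(100, dtype=bool)
mask[40:60] = True
activity[mask] += 0.8   # contour neurons respond more strongly

r_measure, z_measure = r_z_measures(activity, mask)
# Enhancement of the contour yields r > 1 and z > 0.
```

Larger r and z indicate a stronger relative enhancement of the contour representation, as plotted in Figure 9 (right).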


5.4 Orientation Contrast-Dependent Response Behavior. Neighboring neurons sensitive to different feature properties, like orientation and spatial position, were reported to have a significant impact on the neuronal response (Knierim & Van Essen, 1992; Jones, Wang, & Sillito, 2002; Hunter & Born, 2011). For example, many V1 cells have a suppressive surround with a sensitivity that is maximally tuned to the same orientation as the cell's excitatory response. The suppressive influence is reduced when the orientation of a surrounding stimulus deviates from the preferential orientation of the target neuron in the center (Jones et al., 2002; Knierim & Van Essen, 1992; Naito, Sadakane, Okamoto, & Sato, 2007; Yu & Levi, 2000). This effect can be explained by the full interaction of spatial and feature-specific responses introduced in the base equation 2.4. Strong inhibition for stimuli of similarly oriented gratings is explained by strong lateral connectivity of neurons with similar orientation tuning. In a numerical experiment, we reproduce the responses as measured in physiological experiments with drifting gratings (Jones et al., 2002; see Figure 10). To evaluate the responses to different orientation contrasts, a circular annulus grating of varying orientation surrounding the central circular patch is presented (with equal frequency and drifting speed). In the simulation, we provide synthesized input resembling the output of motion-sensitive cells. Neural interaction is computed by weighted sums using two different kernel types to account for spatial and featural center-surround normalization. In the spatial domain we use 2D Gaussians, and in the circular orientation domain we use von Mises kernels (circular normal distributions).⁵ In the orientation domain, we define the target neuron's preferred orientation as μ = 0°. The input strength of a neuron with

⁵ The parameters for the spatial kernels in equation 2.4 were set to σ+lat = 4 for Λ+lat and σ−p = 10 for Λ−p.


Figure 10: Responses to orientation-contrast stimuli. Black lines: Response patterns of two cells recorded in monkeys. The patterns show orientation-alignment suppression, with additional general suppression in the bottom panel (data from Jones et al., 2002, Figure 1, B/D). Gray lines: Response patterns of two simulated neurons reproducing the measured patterns (black lines).

preferred orientation θ relative to the target neuron's preferred orientation is then given by

wθ = exp(κ · cos(2 · (θ − μ))) / (2π I0(κ)),   (5.4)


where I0(κ) denotes the modified Bessel function of order 0.⁶ The two von Mises kernels modeling the center and surround kernel shapes were specified by the parameters κcent and κsurr, respectively.⁷ The model parameters that differ from the other simulations are set to o = 0, s = 10, γlat ≙ γSE = 0.2 (see Table 3). The parameters κcent, κsurr, and γ are varied to fit the response properties of different cells reported in Jones et al. (2002). Figure 10 shows two examples, for κcent = 3, κsurr = 4, γ = 2 (top) and κcent = 2, κsurr = 0.5, γ = 10 (bottom). The CRF size was determined to be 2 in both examples. Figure 10 demonstrates that the proposed canonical neuron model introduced in the base equation 2.4 accounts for the recordings of Jones et al. (2002). In both examples, orientation-alignment suppression can be observed due to larger connection strengths of neurons tuned to similar orientations. In addition, the broadly tuned kernel for κsurr = 0.5 results in a composite effect of orientation-independent suppression and orientation-alignment suppression. To summarize, orientation contrast-dependent response behavior can be explained by the proposed canonical neural circuits model with suitable pooling kernels in the spatial and featural domains.

6 Discussion


6.1 Summary of Results. In this work, we propose and analyze a generic computational element for cortical circuit operations that implements an abstract model of a cortical column. Cortical cells modulate their responses as a function of changes in the space-feature neighborhood, showing adaptation and saturation effects. Such a response property has been explained by a normalization of individual filter responses (Carandini & Heeger, 2012; Tsui, Hunter, Born, & Pack, 2010). Such responses, representing FF streams along sensory pathways, are combined with FB signals originating at stages higher up in the cortical hierarchy that serve to enhance the gain of the sensory-driven representations (Bullier, 2001; Sherman & Guillery, 1998). We showed that the proposed columnar circuit model and its dynamics combine these computational elements and unify them in a canonical circuit mechanism. The main results reported in this article are threefold. As a first main contribution, we propose a generic architecture of processing stages that accounts for signal filtering and nonlinear gain control in

⁶ Large values of κ result in narrowly tuned kernels, while small values result in broadly tuned kernels. We simulate 12 orientations (Δθ = 15°), and the sum of kernel weights Σθ wθ is normalized to 1.

⁷ The activity of the currently presented orientation is set to one, convolved using the first (center) kernel, {wθcent}θ∈[−90°,90°), and normalized such that the maximal activity is 1. The second kernel defines the featural component of the weights for Λ−p in equation 2.4.


combined FF and FB pathways. The three-stage architecture directly maps onto a compartmental segregation of cortical areas: the filtering stage relates to the processing of driving input signals in cortical layer IV and subsequent lateral activity integration in layers II/III; modulatory FB is delivered by axonal projections of remotely located cells to layer I and cells in layers III and V/VI contacting layer I via their apical dendrites; and layer V/VI cells laterally integrate activities over a large neighborhood to form a closed (inhibitory) loop with excitatory input units. Based on the generic model, as a second main contribution, we identified different operational regimes that range from stable interactions in which outputs monotonically increase with the input to oscillating and unstable interactions. We presented an in-depth analysis of the stable operational regime in which we investigated different variants of the model (see section 3). We demonstrated that functions of sigmoidal shape combined with strong self-excitation lead to even richer network response characteristics, namely, to a paradoxical drop in overall response when the signal input is increased (the signature of inhibition-stabilized networks; Latham & Nirenberg, 2004; Ozeki et al., 2009), or even to instabilities and oscillations. A major result of our analysis is the insight that top-down FB can dynamically change the network's operational regime (see sections 3 and 4). These results have an impact on the development of neurotechnology and biomimetic systems derived from these investigations. They guide the proper parameter adjustment to resemble different neuron types. As a third major result, we demonstrate in section 5 that the model dynamics are rich and provide the functional variability in reproducing a wide range of observed phenomena in experiments without the need to design specific models that account for each phenomenon separately (see Table 3).
None of the demonstrated examples required an extension of the canonical columnar model to achieve the demonstrated properties.

6.2 Comparison with Other Nonlinear Models with Normalization Properties. Various models of center-surround inhibition in cortex have been proposed (Grossberg, 1973, 2013; Somers et al., 1998; Stemmler, Usher, & Niebur, 1995), among which several mechanisms have stressed the importance of divisive inhibition to explain observed response nonlinearities and normalization properties (Bonin et al., 2005; Carandini & Heeger, 1994; Carandini et al., 1997). The nonlinear scaling of (linear) filter responses by a normalizer is calculated from a spatial pool of neurons over multiple feature dimensions that accounts for high-contrast saturation, nonspecific suppression by otherwise ineffective stimuli, and temporal summation nonlinearities (Carandini et al., 1999). We show how such nonlinear effects are generated dynamically through shunting surround competition but incorporate modulatory FB and dynamic pool inhibition of different types. While other models also stressed the importance of FB (Schwabe et al., 2006; Piëch, Li, Reeke, & Gilbert, 2013), we argue that the normalization and modulating

NECO_a_00675-Brosch

neco.cls

September 12, 2014

11:19

30

T. Brosch and H. Neumann

FB combines to create a generic principle to implement a biased competition of activities. In addition, our model explains how modulatory FB can shift the system's nullclines into different operational regimes, thus defining various functional states. It has been suggested that cortical computation can be described in terms of a few canonical mechanisms. For example, Kouh and Poggio (2008) defined a nonlinear steady-state mechanism of weighted integration, nonlinear signal transformation, and divisive inhibition to account for various local neural circuits (Yu, Giese, & Poggio, 2002). Their proposed canonical mechanism reads

r̄ = (Σ_{k=1}^{n} s_k^p · w_k) / (α + (Σ_{k=1}^{n} s_k^q)^m),  (6.1)


where the neural activity r̄ depends on the weighted (w_k) activity s_k of the input neurons. While the models of Kouh and Poggio (2008) and Carandini and Heeger (2012) consider steady-state algebraic equations only, the components and parameters can be interpreted in the context of the investigation presented here. In particular, α denotes a passive decay parameter, the exponents p and q resemble nonlinear (polynomial) transfer functions of the driving input signals (corresponding to firing rates), and m regulates the normalization strength. Depending on the particular parameterizations, the energy computation mechanism for complex cell responses can realize a maximum pooling to achieve size and position invariance properties (Riesenhuber & Poggio, 1999, 2000; Serre, Wolf, & Poggio, 2005; Mutch & Lowe, 2008; Brosch & Neumann, 2012b), or a gain control by activity normalization. The response characterized in equation 6.1 is similar to the basic mechanism proposed by Carandini and Heeger (2012) to implement activity normalization as a canonical neural computation (see equation 2.2). Our investigation adds several new insights to the design of such generic cortical mechanisms. Unlike these models, we investigated the dynamic properties of such circuits by showing under which conditions the temporal trace converges to the steady-state solution discussed in Kouh and Poggio (2008) and Carandini and Heeger (2012). Additionally, we have shown that an excitatory sigmoidal transfer function in combination with strong self-excitation leads to an inhibition-stabilized network explaining the behavior found in some neurons (see section 4.1 and Tsodyks, Skaggs, Sejnowski, & McNaughton, 1997; Ozeki et al., 2009). This also explains the seemingly paradoxical behavior of some neurons in which increased input to the I-neuron leads to decreased inhibition and decreased activity of the E-neuron, which implies a predictive coding principle implemented by the FB signal.
(See Jadi & Sejnowski, 2014a, 2014b, for a detailed analysis of ISN networks of the nonshunting type). Closely connected to this are supralinear stabilized networks (SSN) in which surround suppression corresponds to sublinear


summation and facilitation by the near surround for weak inputs to supralinear summation (Ahmadian et al., 2013). Large-scale neural networks of interconnected neurons have been studied to investigate the emergence of simple and complex cell response selectivity as measured in cortical cells. Shapley and Xing (2013) used model cells that obey similar response dynamics as in our model depicted in equation 2.1. In their work, the main focus lies in the analysis of dynamic properties of recurrently connected cells that are organized in columnar units receiving bottom-up input from LGN cells. Unlike the strictly hierarchically organized Hubel and Wiesel model (Hubel & Wiesel, 1959; LGN → V1 simple → V1 complex cells), simple and complex cells in Shapley and Xing's model receive strong bottom-up inputs in an egalitarian fashion (Tao, Shelley, McLaughlin, & Shapley, 2004; Zhu, Shelley, & Shapley, 2009). The authors demonstrate that the selectivity to stimulus features (e.g., spatial frequency) develops through the nonlinear response suppression by inhibition (see also Somers et al., 1998; Dragoi & Sur, 2000). Also, Murphy and Miller (2009) investigated the temporal properties of shaping feature selectivities in local recurrent networks. These investigations may be considered complementary to ours: fast interactions in a pool shape the feature selectivities that we assumed as given in independent feature domains. In our theoretical investigations, we adopted the approach of Zhaoping (2011) but analyzed the impact of modulatory gain amplification of driving input signals via FB and different variants of output normalization realized by an inhibitory pool of neurons. It was shown that this interaction can be used in large-scale networks with overall stable response properties (Brosch & Neumann, 2012a; Endres, Neumann, Kolesnik, & Giese, 2011; Layher, Giese, & Neumann, 2014).
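The normalization mechanism of equation 6.1 discussed in this section can be probed numerically. The following sketch uses the symbols from the text (α for the decay, exponents p and q, normalization strength m); the uniform weights and parameter values are illustrative assumptions:

```python
def kouh_poggio(s, weights, alpha=0.1, p=2.0, q=2.0, m=1.0):
    """Steady-state response of equation 6.1: weighted polynomial input drive,
    divisively normalized by the pooled activity of all inputs."""
    numerator = sum(w * sk ** p for w, sk in zip(weights, s))
    denominator = alpha + sum(sk ** q for sk in s) ** m
    return numerator / denominator

w = [1.0, 1.0, 1.0, 1.0]
weak = kouh_poggio([0.1] * 4, w)
strong = kouh_poggio([1.0] * 4, w)
very_strong = kouh_poggio([10.0] * 4, w)
# the response grows with input strength but saturates (high-contrast
# saturation): the step from strong to very_strong adds far less response
```

With these exponents the mechanism reproduces the hallmark of divisive normalization: responses increase with contrast but level off well before the input does.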


6.3 Other Models That Employ FB, Gain Modulation, and Predictive Coding. Our proposed three-stage columnar cascade model further extends the surround inhibition and normalization circuit discussed above by incorporating modulatory FB (Eckhorn et al., 1990; Eckhorn, 1999). We recently demonstrated that this modulation mechanism appears as a special case of a columnar circuit that is fed by self-inhibiting FF input and in which FB targets excitatory cells, which in turn contact pairs of inhibitory-to-inhibitory cells to disinhibit the driving input response (Brosch & Neumann, 2014). The reproduction of gain enhancement of somatic cortical pyramidal cell responses by coincident input at distal dendritic tufts demonstrates the validity of this generic mechanism (Larkum, 2013). Several other models have been suggested that emphasize the role of FB in cortical processing for different tasks and levels of cognitive processing (Gove et al., 1995; Lamme, 1995; Roelfsema, Lamme, Spekreijse, & Bosch, 2002; Hamker, 2004; Corchs & Deco, 2002; Deco & Rolls, 2005; Schwabe et al., 2006; Piëch et al., 2013). For example, Hamker (2004) used multiplicative enhancement as in our proposal, while the approach by Deco and colleagues


assumes additive FB into pools of excitatory neurons, which in turn are inhibited by a mechanism of unspecific inhibition. All of these model approaches can be represented by the canonical circuit investigated in this work (Thielscher & Neumann, 2003; Hansen & Neumann, 2004; and Weidenbacher & Neumann, 2009, demonstrated boundary enhancement for textured regions, contour grouping, and junction extraction). The same generic framework has also been extended and applied in the motion domain to account for the mechanisms of motion integration, disambiguation (aperture problem), and segregation as well as selective motion feature attention (Barnes & Mingolla, 2013; Bayerl & Neumann, 2004, 2006; Berzhanskaya, Grossberg, & Mingolla, 2007; Raudies et al., 2011; Beck & Neumann, 2011). Similar to our study, Schwabe et al. (2006) investigated the role FB has in extraclassical receptive field inhibition. Recently, a model of cortical FB was proposed to investigate the influence context has on local feature representations in V1 (Piëch et al., 2013; Gilbert & Li, 2013). The model structure has several properties in common with our model, namely, the modulation of bottom-up filter responses and the lateral integration of like-oriented cells through layer 2/3 horizontal interactions. Unlike Gilbert's model, in the model proposed here, different inhibitory actions have been studied, including shunting inhibition that leads to response normalization. The functional role FB signals play remains controversial. Different theoretical frameworks have been proposed that receive support from different experimental findings (Markov & Kennedy, 2013).
One such framework proposes that FF sensory activations are amplified by matching FB signals such that subsequent competition between neurons yields a competitive advantage, and thus gain enhancement, for those cells whose gain has been increased through the action of modulating FB for matching feature selectivity (biased competition; Grossberg, 1980; Girard & Bullier, 1989; Desimone, 1998). Another framework interprets the role of FB as predictive, in which a template is activated that represents the expected input pattern given the evidence derived from current bottom-up input signals. The action of FF and FB signal interaction reduces the residual discrepancy between the different signal streams (in light of a Bayesian interpretation and Kalman filtering principle; Ullman, 1995; Rao & Ballard, 1999; Bastos et al., 2012). The difference between these model frameworks is in the combination of FF and FB signals, which turn out to be two variants of the same generic principle in case of additive signal convergence (Spratling & Johnson, 2004; Spratling, 2008). In conjunction with the stage of pool normalization (by divisive inhibition), our model column implements the biased competition property as suggested to be a core principle in attention selection mechanisms (Desimone & Duncan, 1995; Reynolds & Desimone, 2003; Reynolds, Chelazzi, & Desimone, 1999). Recently, Reynolds and Heeger (2009) proposed their "normalization model of attention" to summarize the conjunctive interaction of multiplicative gain enhancement via FB and subsequent activity normalization. Their model uses algebraic steady-state


equations to account for the important nonlinearities in the computations (as in Carandini et al., 1999). In a similar direction, Lee and Maunsell (2009) developed an attentional normalization model to account for the adjustment of sensory responses in the presence of multiple stimuli. FB in our model acts in a similar way as the attention field used in the Reynolds-Heeger normalization model (see also Ohshiro, Angelaki, & DeAngelis, 2011). As Reynolds and Heeger (2009) showed, a model like the one proposed here that incorporates divisive normalization (Heeger, 1992) exhibits different forms of attentional modulation depending on the stimulus selectivity and the conditions of the attentional FB in the model. These attention effects range from multiplicative response amplification (McAdams & Maunsell, 1999; Treue & Martínez-Trujillo, 1999) through changes in contrast gain (Li & Basso, 2008; Martínez-Trujillo & Treue, 2002; Reynolds, Pasternak, & Desimone, 2000) to attention-dependent sharpening of neuronal tuning (Spitzer, Desimone, & Moran, 1988; Martinez-Trujillo & Treue, 2004). Furthermore, these cover effects of response amplification and changes in contrast gain (Williford & Maunsell, 2006) and the reduction of activity when attending nonpreferred stimuli paired with a preferred stimulus inside the receptive field of a target cell (Moran & Desimone, 1985; Recanzone & Wurtz, 2000; Reynolds et al., 1999; Reynolds & Desimone, 2003). The model presented and analyzed here further justifies these findings and proposals: the inherent normalization property of the columnar model circuit can account for a wide variety of effects and embeds them in a plausible dynamic mechanism of cortical computation.
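A minimal sketch of such a Reynolds-Heeger-style interaction, multiplicative attentional gain followed by divisive pool normalization, illustrates the biased-competition effect described above. The gain values and pooling scheme are illustrative assumptions, not the circuit analyzed in this article:

```python
def normalized_response(drive, attention, sigma=1.0):
    """Multiplicative attentional gain on the stimulus drive, followed by
    divisive normalization over the pooled (gain-modulated) activity."""
    excitation = [d * a for d, a in zip(drive, attention)]
    pool = sum(excitation)
    return [e / (sigma + pool) for e in excitation]

drive = [4.0, 4.0]  # two equally effective stimuli inside the pool
neutral = normalized_response(drive, [1.0, 1.0])
attended = normalized_response(drive, [2.0, 1.0])  # attend stimulus 0
# attention boosts the attended response and, via the shared normalization
# pool, suppresses the unattended one (biased competition)
```

Because the attentional gain enters the normalization pool, the unattended stimulus is suppressed even though its own drive is unchanged.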


6.4 The Model Cortical Column Compared to Other E-I Systems. In our analysis of the computational circuit properties, we started our investigation by reducing the model column to a local excitatory-inhibitory (E-I) system. This is the first time that modulatory FB and an I-neuron integrating from a pool of neurons have been included in such an analysis. This is an important configuration because it was shown (Stimberg et al., 2009) that cortico-cortical recurrence establishes a cortical operating point close to a line of instability at which the system becomes particularly sensitive to modulatory effects of FB signals. The proposed model column relates to previous investigations of other E-I systems in which oscillatory and stability behavior has been studied locally (Wilson & Cowan, 1972; Ellias & Grossberg, 1975; Amit & Brunel, 1997; Terman, Kopell, & Bose, 1998; Zhaoping, 1998). Several variants of such networks of coupled elements have been studied to find theoretical justifications for the neurophysiologically plausible neural computations in groups of cells (van Vreeswijk, Abbott, & Ermentrout, 1994; Deco et al., 2011). Distinct network architectures with E-I compartments have been studied that consider spike pattern formation, synchronization, and other dynamic properties in pools of coupled spiking cells in a network (van Vreeswijk & Sompolinsky, 1996; Ermentrout & Kopell, 1998; Seriès & Tarroux, 1999; Latham, Richmond, Nelson et al., 2000), while some


investigate networks of mutually interacting units with rate-coded activations or coupled oscillators (Wilson & Cowan, 1972, 1973; van Vreeswijk & Sompolinsky, 1996; Terman et al., 1998; Latham & Nirenberg, 2004; Curtu & Ermentrout, 2004). Some of these network structures include couplings from excitatory-to-inhibitory (E-I) neurons as well as the reverse I-E couplings (Wilson & Cowan, 1972, 1973), while some investigated the role E-E and I-I interactions have in the formation of response patterns and the stabilization of the network dynamics (Wilson & Cowan, 1973; Terman et al., 1998; Latham, Richmond, Nelson et al., 2000; Li, 2001; Latham & Nirenberg, 2004; Vogels & Abbott, 2009). Our work is closely related to the study of Ellias and Grossberg (1975), which also investigated E-I elements with saturation properties. Unlike this previous work, we consider here modulatory FB that is generated at higher cortical stages to integrate contextual information and a normalization of neural activities generated by a separate mechanism that spatially integrates the activity over a pool of units and over different feature domains. In order to account for recurrent E-E interactions (resembling lateral long-range integration of like-oriented cells in layer 2/3 in cortex; Zhaoping, 1998; Piëch et al., 2013), we incorporated self-excitation in the reduced point-like model (as in Zhaoping, 2011). In accordance with previous models (Somers et al., 1998; Dragoi & Sur, 2000; Schwabe et al., 2006), we utilized monotonic transfer functions in which the excitatory activity responds at lower activation levels, but rises more slowly, than the inhibitory signal function. Inhibitory-inhibitory (I-I) interactions occur in cortex (via mutually coupled inhibitory subnetworks; Beierlein, Gibson, & Connors, 2003) and have been investigated in previous models (Wilson & Cowan, 1973; Zhaoping, 2011).
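The stabilizing role of I-I coupling can be illustrated on a linearized, rate-coded E-I pair. This is a generic sketch with threshold-linear units and illustrative weights, not the shunting model analyzed in this article:

```python
import cmath

def max_real_eig(wEE, wEI, wIE, wII, tauE=1.0, tauI=1.0):
    """Largest real part of the eigenvalues of a linearized E-I pair:
    tauE*dE/dt = -E + wEE*E - wEI*I,  tauI*dI/dt = -I + wIE*E - wII*I."""
    a, b = (wEE - 1.0) / tauE, -wEI / tauE
    c, d = wIE / tauI, -(1.0 + wII) / tauI
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return max(((tr + disc) / 2.0).real, ((tr - disc) / 2.0).real)

# strong E-E coupling destabilizes the pair ...
unstable = max_real_eig(wEE=3.0, wEI=3.0, wIE=3.0, wII=0.0)
# ... while adding I-I coupling (self-inhibition of the pool) restores stability
stabilized = max_real_eig(wEE=3.0, wEI=3.0, wIE=3.0, wII=3.0)
```

Raising wII pushes the trace of the Jacobian negative, counteracting the destabilization caused by the E-E term, in the spirit of the observation reported in the text.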
Some authors have explicitly excluded such interactions since they can lead to disinhibitory effects (Ellias & Grossberg, 1975). We have considered the computational consequences of incorporating such I-I couplings at the level of pool interactions by including a self-inhibition in the reduced point-like model (see section 4.2). In accordance with Latham, Richmond, Nelson et al. (2000) and Latham, Richmond, Nirenberg et al. (2000), we observed that I-I coupling counteracts the destabilization by increased E-E coupling. Furthermore, this stabilization has two components, one that is state independent and is caused by the leakage term and another that depends on the state of the pool and is strongest for pool activations p̄ in the sensitive part of the pool transfer function g_p, that is, for large g_p′(p̄). Furthermore, we observed how FB can dynamically shift the system into an inhibition-stabilized regime (see section 4).

6.5 Model Limitations and Possible Extensions. Combined experimental and theoretical investigations have demonstrated surprising effects in the dynamics of excitatory cells that interact with groups of inhibitory


cells (Tsodyks et al., 1997; Ozeki et al., 2009). It was shown that neural activity suppression can be achieved by a withdrawal of excitation, unlike the expected effect that is achieved by an increase of inhibition (dubbed inhibition-stabilized networks, or ISNs). We showed in section 4.1 that such behavior is also possible in the circuit model proposed here. It remains to further investigate how this regime of instability in the excitatory response can be used to drive a network of interconnected columnar units to the edge of instability to enable quick switching behavior. Related to such further investigations that we have not considered in this article are studies of bifurcation behavior to account for dynamic switches in visual behavior (Curtu & Ermentrout, 2004; Rankin, Tlapale, Veltz, Faugeras, & Kornprobst, 2013). There, neural field equations have been defined for excitatory and inhibitory cell ensembles that operate at different time scales (Veltz & Faugeras, 2010). A dynamic adaptation term is added as a third network component to change the effectiveness of the excitatory component. Such an adaptation could be integrated in the circuit that has been proposed here to investigate dynamic switching to spatially extended input stimuli. Here, we studied the case of a single intersection of the nullclines to yield a unique equilibrium (as in Terman et al., 1998). Further enhancing the deflection of the S-shaped r-nullcline toward supralinear response functions (Ahmadian et al., 2013) may eventually lead to multiple intersections with the nullcline of the p-unit. As a consequence, a bifurcation must occur (compare, e.g., Jadi & Sejnowski, 2014a, 2014b, for an analysis of networks composed of components without saturating conductance properties and response gain adjustment that operate in an ISN regime).
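The ISN signature described above can be reproduced in a minimal rate model. This is a sketch with threshold-linear units and illustrative weights (not the shunting circuit analyzed in this article): when recurrent excitation is strong (wEE > 1), raising the drive to the inhibitory unit lowers both steady-state activities.

```python
def steady_state(i_E, i_I, wEE=2.0, wEI=2.0, wIE=2.0, wII=0.0,
                 tauE=10.0, tauI=2.0, dt=0.05, steps=20000):
    """Euler-integrate a threshold-linear E-I pair to its fixed point."""
    E = I = 0.0
    for _ in range(steps):
        dE = (-E + max(0.0, wEE * E - wEI * I + i_E)) / tauE
        dI = (-I + max(0.0, wIE * E - wII * I + i_I)) / tauI
        E += dt * dE
        I += dt * dI
    return E, I

E1, I1 = steady_state(i_E=5.0, i_I=1.0)
E2, I2 = steady_state(i_E=5.0, i_I=2.0)
# paradoxical ISN effect: more input to the inhibitory unit yields LESS
# inhibitory (and less excitatory) activity at equilibrium
```

Solving the fixed-point equations by hand for these weights gives (E, I) = (1, 3) for i_I = 1 and (1/3, 8/3) for i_I = 2, so the suppression is indeed driven by a withdrawal of recurrent excitation.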
The different processing streams of FF stimulus processing, lateral response integration, and top-down FB reentrant signals are unequivocally related to different spatial spreads in the connection widths (Angelucci et al., 2002; Angelucci & Bullier, 2003). These different spreading widths of the connection kernels have been utilized in models like Schwabe et al. (2006) to explain certain long-range stimulus integration effects that occur in outer surround inhibition in cortex, something that remains to be investigated in the canonical neural circuits model.

7 Conclusion

This article investigates the computational properties of an abstract columnar circuit model and its dynamics. The proposed circuit combines filtering and the modulating enhancement of the potential of a columnar unit (E-I-pair), which is shown to dynamically shift the operational regime of a model column. By integrating over a spatial and featural neighborhood, the I-neuron is not only considered as a point-like entity but analyzed with respect to its impact in terms of pool normalization, which enabled us to


predict, for example, contrast-dependent CRF size or orientation contrast-dependent response behavior with the same model. The results described in this article enable the definition of neuron models with a wide variety of response properties and serve as a computational building block used in engineering large-scale systems of neural processing in vision. The analytical results guarantee existence, stability, and predictability of response behavior for the different model variants discussed in section 3. Additionally, regimes of inhibition-stabilized networks, instabilities, and oscillations are identified. The mathematical derivation of the impact of all model parameters enables researchers to finely tune the employed neuron models. The examples shown demonstrate the explanatory power of the model architecture, capable of explaining different neurophysiological findings ranging from contrast-dependent receptive field size adaptation over noise reduction and contour enhancement to orientation contrast-dependent response behavior. The stability properties of the employed model enable the definition of a wide variety of large networks of interacting model areas simulating different neuron types without losing control over the response properties (as demonstrated in Brosch & Neumann, 2012a). Most important, we have shown that FB in such models can dynamically shift the response behavior of model computational units to different operational regimes without requiring different parameters. To summarize, the research results presented here provide a framework to analyze previous experimental results using a unified computational model mechanism. Also, variations and extensions of the analyzed circuit model can be investigated, making use of the acquired knowledge from theoretical analysis and guiding further investigation of future extended models.

Appendix: Proofs and Mathematical Derivations

A.1 Notes on Excitatory Response Characteristics Without Pool Inhibition and Without Self-Excitation. This section presents the details regarding the response properties of the excitatory component of the E-I pair as introduced in section 3.3. The self-excitation is set to zero. The steady-state potential is given by

r̄ = β · I* / (α + I*),  (A.1)

with I* = I^q · (1 + λ · net^FB) (assuming a polynomial response transfer function of the driving input, f(I) = I^q). For the following theoretical considerations, we consider q = 1 to capture a simple Naka-Rushton shape.


We show that the response gain of the system is a monotone function in I and net^FB by investigating

∂r̄/∂I = (∂r̄/∂I*) · (∂I*/∂I) = (∂r̄/∂I*) · (1 + λ · net^FB)  (A.2)

and

∂r̄/∂net^FB = (∂r̄/∂I*) · (∂I*/∂net^FB) = (∂r̄/∂I*) · λ · I,  (A.3)

respectively, where

∂r̄/∂I* = β/(α + I*) · (1 − I*/(α + I*)) = αβ/(α + I*)².  (A.4)

Together with the positive factors ∂I*/∂I and ∂I*/∂net^FB, this demonstrates that the response gain is strictly positive. The gain slowly converges to zero for increasing values of I and/or net^FB, indicating the saturation properties of the system. In formal terms, ∂r̄/∂I* ≥ 0, reaching the zero level in the limit I* → ∞.
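A numerical sanity check of equations A.1 to A.4 (the functions below simply transcribe the formulas; the parameter values are arbitrary):

```python
def r_bar(I, netFB, alpha=1.0, beta=2.0, lam=0.5):
    """Equilibrium potential of equation A.1 with I* = I*(1 + lam*netFB), q = 1."""
    I_star = I * (1.0 + lam * netFB)
    return beta * I_star / (alpha + I_star)

def gain(I, netFB, alpha=1.0, beta=2.0, lam=0.5):
    """Closed-form input gain, combining equations A.2 and A.4."""
    I_star = I * (1.0 + lam * netFB)
    return alpha * beta / (alpha + I_star) ** 2 * (1.0 + lam * netFB)

eps = 1e-6
fd = (r_bar(2.0 + eps, 1.0) - r_bar(2.0, 1.0)) / eps  # finite-difference slope
# fd matches gain(2.0, 1.0); the gain is strictly positive and decays
# toward zero with increasing input (saturation)
```

The finite difference agrees with the analytic gain, and evaluating the gain at large I confirms the saturation described in the text.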

A.2 Notes on Subtractive Pool Inhibition Without Self-Excitation. This section presents additional notes and mathematical proofs omitted in section 3.4 regarding subtractive pool inhibition.

A.2.1 Input Sensitivity. The input sensitivity in case of subtractive pool inhibition and net^FB = 0 is given by

∂r̄/∂I = αβ/(α + I)²  for I ∈ [0, θ_low],
∂r̄/∂I = (p_m − p_0) · ((β_p·β − p_0 + I_c)·η + αβ·(p_m − p_0)) / ((p_m − p_0)·I + α·(p_m − p_0) + η·β_p)²  for I ∈ (θ_low, θ_high),
∂r̄/∂I = (αβ + η)/(α + I)²  for I ∈ [θ_high, ∞).  (A.5)

Note that, adopting the notation from equations A.2 and A.3, all of the previous results also apply for nonzero feedback net^FB.

A.2.2 Stability in the Low and High Domain. The linearization at the equilibrium of equation II in Table 2 in the saturation regions (i.e., the low and


high domain) is given by ṙ = (−α − I · (1 + λ · net^FB)) · (r − r̄), which has a stable solution.

A.2.3 Stability in the Mid-Domain. The eigenvalues of the linearization at the equilibrium of equation II in Table 2 in the mid-domain are given by

λ_{1,2} = (−(p_m − p_0)·(1 + α + I) ± Δ) / (2·(p_m − p_0)),  (A.6)

where Δ² = (p_m − p_0)²·(I + α − 1)² − 4·η·β_p·(p_m − p_0). One can show that ℜ(λ_{1,2}) < 0. We sketch the proof in an itemized fashion:

• If Δ is imaginary, we are done.
• Otherwise, assume that λ_1 > 0 and derive a contradiction by moving Δ to one side of the equation and squaring both sides.

This shows that the nonlinear system is stable in the mid-domain.
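The sign condition on the eigenvalues of equation A.6 can also be scanned numerically over a parameter grid; the sampled values below are arbitrary positive parameters with p_m > p_0:

```python
import cmath

def eigs_mid(I, alpha, eta, beta_p, pm, p0):
    """Eigenvalues of equation A.6 (subtractive pool inhibition, mid-domain)."""
    dp = pm - p0
    disc = cmath.sqrt(dp * dp * (I + alpha - 1.0) ** 2 - 4.0 * eta * beta_p * dp)
    lam1 = (-dp * (1.0 + alpha + I) + disc) / (2.0 * dp)
    lam2 = (-dp * (1.0 + alpha + I) - disc) / (2.0 * dp)
    return lam1, lam2

worst = max(
    lam.real
    for I in (0.0, 0.5, 2.0, 10.0)
    for alpha in (0.1, 1.0)
    for eta in (0.5, 2.0)
    for beta_p in (0.5, 2.0)
    for lam in eigs_mid(I, alpha, eta, beta_p, pm=1.0, p0=0.2)
)
# worst stays below zero: the mid-domain equilibrium is stable on this grid
```

The scan agrees with the itemized proof: whether the discriminant is real or imaginary, the real parts remain negative.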

A.3 Notes on Divisive Pool Without Self-Excitation. This section presents additional notes and mathematical proofs omitted in section 3.4 regarding divisive pool inhibition in case of γ_SE = 0, that is, no self-excitation.

A.3.1 Continuity. Caused by the piecewise definition of the linearized transfer function g_p(·), we consider a piecewise solution of the equilibrium r̄ in the three domains [0, θ_low], (θ_low, θ_high), and [θ_high, ∞). The qualitative change of the ODE from one domain to the other could result in an unexpected behavior of the potential at equilibrium. In this section, we prove that the equilibrium potential is a continuous function of I defined on [0, ∞). Continuity at θ_low:

βI/(α + I) ≟ (−(α + I)·(p_m − p_0) + γ·(p_0 − I_c) + Δ) / (2·γ·β_p)  (A.7)

⇔ 2·β_p·β·γ·I = −(α + I)²·(p_m − p_0) + γ·(p_0 − I_c)·(α + I) + Δ·(α + I)  (A.8)

⇔ Δ·(α + I) = (α + I)²·(p_m − p_0) − γ·(p_0 − I_c)·α + γ·(2·β_p·β − p_0 + I_c)·I,  (A.9)

where the right-hand side of equation A.9 is ≥ 0 for I ≥ θ_low. Taking the square of both sides and reduction of identical terms on both sides yields

0 = (β_p·β + I_c − p_0)·I² + α·(I_c − p_0)·I,  (A.10)

which is true for I = θ_low.


Continuity at θ_high:

βI/(γ + α + I) ≟ (−(α + I)·(p_m − p_0) + γ·(p_0 − I_c) + Δ) / (2·γ·β_p)  (A.11)

⇔ Δ·(γ + α + I) = 2·β_p·β·γ·I + ((α + I)·(p_m − p_0) − γ·(p_0 − I_c))·(γ + α + I).  (A.12)

Taking the square of both sides and reduction of identical terms on both sides yields

0 = (β_p·β − p_m + I_c)·I² − (γ + α)·(p_m − I_c)·I,  (A.13)

which is true for I = θ_high.

A.3.2 Input Sensitivity. The input sensitivity in case of divisive pool inhibition without self-excitation is given by

∂r̄/∂I = αβ/(α + I)²  for I ∈ [0, θ_low],
∂r̄/∂I = (1/2) · (p_m − p_0) · ((α + I)·(p_m − p_0) + (2·β_p·β − p_0 + I_c)·γ − Δ) / (γ·β_p·Δ)  for I ∈ (θ_low, θ_high),
∂r̄/∂I = β·(α + γ)/(α + I + γ)²  for I ∈ [θ_high, ∞).  (A.14)

It can be shown that ∂r̄/∂I > 0 for I ∈ R+ (see footnote 8).
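Both the continuity results of section A.3.1 and the mid-domain sensitivity of equation A.14 can be checked numerically. In the sketch below, the explicit mid-domain equilibrium and its square-root term Δ are reconstructed from equations A.7 to A.9 under the assumption of a linear pool drive in the mid-domain; all parameter values are illustrative:

```python
import math

# illustrative parameters, chosen so that 0 < theta_low < theta_high
alpha, beta, beta_p, gamma, p0, pm, Ic = 0.5, 1.0, 2.0, 1.0, 0.2, 1.0, 0.0

theta_low = alpha * (p0 - Ic) / (beta_p * beta + Ic - p0)             # root of (A.10)
theta_high = (gamma + alpha) * (pm - Ic) / (beta_p * beta - pm + Ic)  # root of (A.13)

def delta(I):
    """Square-root term of the mid-domain equilibrium, cf. (A.7)-(A.9)."""
    return math.sqrt(((alpha + I) * (pm - p0) + gamma * (Ic - p0)) ** 2
                     + 4.0 * gamma * beta_p * beta * I * (pm - p0))

def r_low(I):    # pool below threshold: no pool inhibition
    return beta * I / (alpha + I)

def r_mid(I):    # positive root of the quadratic behind (A.7)
    return (-(alpha + I) * (pm - p0) + gamma * (p0 - Ic) + delta(I)) / (2.0 * gamma * beta_p)

def r_high(I):   # saturated pool: constant divisive term gamma
    return beta * I / (gamma + alpha + I)

def gain_mid(I):
    """Mid-domain input sensitivity of equation A.14."""
    d = delta(I)
    return 0.5 * (pm - p0) * ((alpha + I) * (pm - p0)
                              + (2.0 * beta_p * beta - p0 + Ic) * gamma - d) / (gamma * beta_p * d)

eps = 1e-6
fd = (r_mid(1.0 + eps) - r_mid(1.0)) / eps  # finite-difference slope at I = 1
```

With these parameters the piecewise equilibrium matches at both domain boundaries, and the finite difference agrees with the closed-form mid-domain gain.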

A.3.3 Eigenvalues and Stability. The stability in the low domain is identical to section A.2.2. The linearization at the equilibrium of equation II with γ_SE = 0 in Table 2 in the high domain is given by ṙ = (−α − I · (1 + λ · net^FB) − γ) · (r − r̄), which has a stable solution. The eigenvalues in the mid-domain are given as

λ_{1,2} = (−(p_m − p_0)·I − (α + 2)·(p_m − p_0) + γ·(p_0 − I_c) − Δ ± Λ) / (4·(p_m − p_0)),  (A.15)

8 To show that ∂r̄/∂I > 0, we move Δ to one side and show that both sides are positive. Squaring both sides and reduction proves the proposition.


where Λ² = ((p_m − p_0)·I + (I_c − p_0)·γ)² + (2·(1 + α)·(1 + I) + α²)·(p_m − p_0)² + ((2·β·β_p·I + 2·(1 + α)·(I_c − p_0))·γ + (α + I − 6)·Δ)·(p_m − p_0) + γ·(I_c − p_0)·Δ. One can show that ℜ(λ_{1,2}) < 0. We sketch the proof in an itemized fashion:

• Show that the numerator without the Λ term is always negative: −(α + 2)·(p_m − p_0) + γ·(p_0 − I_c) − Δ < 0.
• Now we can assume Λ > 0 (otherwise, it is imaginary and we are done).
• Now we assume that the eigenvalues are positive (which should lead to a contradiction): −(p_m − p_0)·I − (α + 2)·(p_m − p_0) + γ·(p_0 − I_c) − Δ + Λ ≥ 0.
• Rewrite to (p_m − p_0)·I + (α + 2)·(p_m − p_0) − γ·(p_0 − I_c) + Δ ≤ Λ.
• Taking the square on both sides, expanding the Λ² term, and reducing identical terms on both sides leads to 0 ≤ −16·Δ·(p_m − p_0), which constitutes the contradiction.

This proves that the nonlinear system is always stable.

A.4 Notes on Divisive Pool Inhibition with Self-Excitation. This section presents additional notes and mathematical proofs omitted in section 3.4 regarding divisive pool inhibition in case of γ_SE > 0, that is, in case of divisive pool inhibition and self-excitation.


A.4.1 Existence of a Solution. Existence of a solution (note that this also applies to the other ODEs studied in equations I to IIa in Table 2): since the ODE in equation II in Table 2 is described by a continuous function on D = (−ε, ∞)², ε > 0, Peano's existence theorem guarantees a local solution of the ODE. In addition, Picard–Lindelöf's existence theorem can be applied by defining a special metric. Sketch of the proof of Lipschitz continuity:

• Let f : X → Y denote the function describing the ODE.
• Show Lipschitz continuity of f by applying the Manhattan metric on X and the following metric d on Y = [0, ∞)²: the distance d of two points (r1, p1) and (r2, p2) ∈ Y is given by

d((r1, p1), (r2, p2)) = |r1 − r2| + |r1² − r2²| + |r1·p1 − r2·p2| + |p1 − p2|.   (A.16)

This shows that f is Lipschitz continuous. Since f does not depend on t, it is also continuous in t, and thus Picard–Lindelöf's existence theorem applies, which even guarantees a unique local solution.
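Each absolute-difference term in d compares one coordinate of the embedding (r, p) ↦ (r, r², r·p, p); since that map is injective on Y (r and p are recovered directly), d is the pullback of the Manhattan metric and hence indeed a metric. A small numerical sanity check (the sampled points are arbitrary):

```python
import random

def d(a, b):
    """Metric of equation A.16 on Y = [0, inf)^2."""
    (r1, p1), (r2, p2) = a, b
    return (abs(r1 - r2) + abs(r1**2 - r2**2)
            + abs(r1 * p1 - r2 * p2) + abs(p1 - p2))

random.seed(0)
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(30)]
for x in pts:
    for y in pts:
        assert d(x, y) == d(y, x)                       # symmetry
        assert (d(x, y) == 0) == (x == y)               # d(x, y) = 0 iff x = y
        for z in pts:
            assert d(x, z) <= d(x, y) + d(y, z) + 1e-9  # triangle inequality
```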

A.4.2 Zero Output for Zero Input. We consider the ODE of equation II in Table 2. It is easy to see that the system stays at the state [r, p] = [0, 0] if I = 0; that is, r̄ = 0 for I = 0. However, it is not obvious what happens when the system starts from [r, p] = [r0, p0] with r0 > 0 and I = 0. In this section, we show that this depends on the parameters α, β, γSE, and γ (see the text for the impact of feedback).

• For γSE > α/β and low values of Ic, the potential at equilibrium starts at a positive value (if the system starts at a positive potential r > 0). In the following, we consider all three domains because, depending on the parameter set, the system could be in each of the three domains for I = 0 (e.g., Ic > p0 implies that the low domain is not relevant). Low domain: For γSE > α/β, α > 0,

r̄|I=0 = β − α/γSE   (A.17)

for βp·(βγSE − α) ≤ γSE·(p0 − Ic) ⇔ Ic ≤ Ic^low := p0 − βp·(βγSE − α)/γSE. Mid-domain: The solution in this case depends on the sign of fmid := (βγSE − α)(pm − p0) + γ(p0 − Ic). The solution reads

r̄|I=0 = ((βγSE − α)(pm − p0) + γ(p0 − Ic)) / (γβp + γSE(pm − p0)) if fmid > 0, and r̄|I=0 = 0 if fmid ≤ 0,   (A.18)

for Ic^low < Ic ≤ Ic^high := pm − βp·(βγSE − α − γ)/γSE. High domain: The solution in this case depends on the sign of fhigh := βγSE − α − γ. The solution reads

r̄|I=0 = (βγSE − α − γ)/γSE if fhigh > 0, and r̄|I=0 = 0 if fhigh ≤ 0,   (A.19)

for Ic^high ≤ Ic.

It can be shown that the equilibrium potential r̄|I=0 in the low domain at the threshold to the mid-domain is continuous and that fmid > 0 for Ic = Ic^low. It can further be shown that sgn(fmid) = sgn(fhigh) at Ic = Ic^high and that the equilibrium potential r̄|I=0 in the mid-domain at the threshold to the high domain is continuous. This shows that
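The appendix does not restate ODE II, so the following check assumes a minimal model consistent with equations A.17 to A.19: ṙ = −αr + (β − r)(I + γSE·r) − γ·g(p)·r with a quasi-static pool p = βp·r + Ic and piecewise-linear activation g(p) = min(max((p − p0)/(pm − p0), 0), 1); all parameter values below are arbitrary test choices. With γSE > α/β, I = 0, and a positive start, the simulation lands on the mid-domain value of equation A.18:

```python
# Check of the I = 0 equilibria (A.17-A.19). Assumed model (not restated in
# the appendix): dr/dt = -alpha*r + (beta - r)*(I + gamma_se*r) - gamma*g(p)*r
# with quasi-static pool p = beta_p*r + I_c and piecewise-linear activation g.
alpha, beta, gamma, gamma_se = 1.0, 2.0, 1.0, 1.0   # gamma_se > alpha/beta
beta_p, p0, pm, I_c, I = 1.0, 0.5, 1.5, 0.0, 0.0    # arbitrary test values

def g(p):  # pool activation: zero below p0, saturated above pm
    return min(max((p - p0) / (pm - p0), 0.0), 1.0)

r = 0.5                                   # start from a positive potential
for _ in range(20000):                    # forward Euler, dt = 1e-3
    p = beta_p * r + I_c
    r += 1e-3 * (-alpha * r + (beta - r) * (I + gamma_se * r) - gamma * g(p) * r)

# Mid-domain prediction of equation A.18:
f_mid = (beta * gamma_se - alpha) * (pm - p0) + gamma * (p0 - I_c)
r_pred = f_mid / (gamma * beta_p + gamma_se * (pm - p0))   # = 0.75 here
assert f_mid > 0 and abs(r - r_pred) < 1e-6
```

Starting from r = 0 instead leaves the system at r̄ = 0, illustrating that the zero state coexists with the positive equilibrium for γSE > α/β.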

βγSE − α − γ > 0 is a sufficient condition to guarantee a strictly positive potential at equilibrium for no input and arbitrary pool input Ic. For 0 < βγSE − α < γ, the solution depends on the pool input Ic.
• For γSE < α/β, the equilibrium potential is r̄|I=0 = 0.

A.4.3 Continuity. In the following, we assume pm − Ic > 0 (otherwise the low-/mid-domain are not relevant for any input I).
Continuity at θlow: A solution for p̄ = p0 in the low domain exists only if 2γSE(p0 − Ic) − βp(βγSE − α − I) ≥ 0 (this condition arises by solving p̄ = p0; taking the positive square root to one side, one sees that there is a solution only if this condition is met). A solution for p̄ = p0 in the mid-domain exists only if 2(p0 − Ic)(γβp + γSE(pm − p0)) − βp(βγSE − α − I)(pm − p0) − γβp(p0 − Ic) ≥ 0 (this condition also arises by solving p̄ = p0). Comparison of both conditions shows that the existence of the solution in the low domain implies the existence of the solution in the mid-domain. Comparison of both solutions at I = θlow shows that the solutions are identical at I = θlow.
Continuity at θhigh: For I ∈ (θlow, θhigh), it holds that r̄ > 0. Consequently, if θhigh exists, it is positive. This implies that βp(α + γ) − γSE(βpβ − pm + Ic) = γSE(pm − Ic) − βp(γSEβ − α − γ) > 0 (the numerator of θhigh). This in turn implies that 2γSE(pm − Ic) − βp(βγSE − α − I − γ) ≥ 0, which guarantees the existence of a solution for p̄ = pm in the high domain (this condition arises by solving p̄ = pm). Comparing the solutions in the mid- and high domain at I = θhigh shows that they are identical for I = θhigh.

A.4.4 Monotonicity. The equilibrium r̄ is a monotone function of I. For the low and high domain, it is straightforward to show ∂r̄/∂I > 0. For the mid-domain, this can be shown given βpβ − pm > 0 (which implies βpβ − p0 > 0). We sketch the proof in an itemized fashion:
• Look at the numerator, divide by pm − p0, and move the square root to one side.
• Using βpβ − p0 > 0, show that the other side is greater than zero.
• Square both sides, and look at the difference of squares.

This proves that increasing excitatory input I results in an increased potential at equilibrium r̄.
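Monotonicity can be illustrated numerically. The sketch assumes the same minimal piecewise model as used for equations A.17 to A.19 (ṙ = −αr + (β − r)(I + γSE·r) − γ·g(βp·r + Ic)·r, with g clipped to [0, 1]); the parameter values are arbitrary but satisfy βpβ − pm > 0:

```python
# Monotonicity check (section A.4.4): the simulated equilibrium increases
# with I. Assumed model consistent with A.17-A.19:
# dr/dt = -alpha*r + (beta - r)*(I + gamma_se*r) - gamma*g(beta_p*r + I_c)*r.
alpha, beta, gamma, gamma_se = 1.0, 2.0, 1.0, 1.0
beta_p, p0, pm, I_c = 1.0, 0.5, 1.5, 0.0
assert beta_p * beta - pm > 0            # premise used for the mid-domain proof

def equilibrium(I, r=0.5, dt=1e-3, steps=40000):
    for _ in range(steps):               # forward Euler to the fixed point
        g = min(max((beta_p * r + I_c - p0) / (pm - p0), 0.0), 1.0)
        r += dt * (-alpha * r + (beta - r) * (I + gamma_se * r) - gamma * g * r)
    return r

rs = [equilibrium(0.25 * k) for k in range(21)]      # I = 0, 0.25, ..., 5
assert all(r2 > r1 for r1, r2 in zip(rs, rs[1:]))    # strictly increasing in I
```

The equilibria sweep through all three operating domains as I grows, and the sequence is strictly increasing throughout.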

A.4.5 Input Sensitivity. The input sensitivity in case of divisive pool inhibition and self-excitation is given by

∂r̄/∂I = (1/2) · (I + α + βγSE − Δlow) / (Δlow · γSE),   I ∈ [0, θlow],

∂r̄/∂I = (1/2) · ((pm − p0)²·I + (pm − p0)·((2βpβ − p0 + Ic)·γ + (α + γSEβ)(pm − p0) − Δmid)) / (Δmid · (γβp + γSE(pm − p0))),   I ∈ (θlow, θhigh),

∂r̄/∂I = (1/2) · (I + α + βγSE + γ − Δhigh) / (Δhigh · γSE),   I ∈ [θhigh, ∞).   (A.20)

It can be shown that ∂r̄/∂I is strictly positive.⁹

A.4.6 Self-Excitation Strength Sensitivity and Feedback Sensitivity. Self-excitation strength sensitivity: The sensitivity ∂r̄/∂γSE describes the rate of change the equilibrium response r̄ undergoes for varying strengths of the self-excitation parameter γSE. Again, we investigate the different input levels for the operating ranges of the firing-rate functions. In particular, we get

∂r̄/∂γSE = (1/2) · (β(α − I)γSE + (I + α)·Δlow − (I + α)²) / (Δlow · γSE²),   I ∈ [0, θlow],   (A.21)

∂r̄/∂γSE = (pm − p0) / (2·Δmid·(γβp + γSE(pm − p0))²) · ((γβ(pm − p0)γSE + (p0 − Ic)γ² − (α + 3I)(pm − p0)γ + γ·Δmid) · (βpβ − p0 + Ic) − β(I − α)(pm − p0)²·γSE + (Ic − p0)(I − α)(pm − p0)γ − (I + α)²(pm − p0)² + (I + α)·Δmid·(pm − p0)),   I ∈ (θlow, θhigh),   (A.22)

∂r̄/∂γSE = (1/2) · (β(α − I + γ)γSE + (I + α + γ)·Δhigh − (I + α + γ)²) / (Δhigh · γSE²),   I ∈ [θhigh, ∞).   (A.23)

⁹ Show that the numerator of equation A.20 is positive by moving the square root (Δlow/Δmid/Δhigh) of the numerator to one side of the inequality. Squaring both sides (both sides are greater than zero) and reducing proves ∂r̄/∂I > 0.

It can be shown that ∂r̄/∂γSE ≥ 0.¹⁰ This is an important result in the context of the feedback sensitivity (see below).

Impact of feedback: All of the previous results can easily be applied to the case of nonzero feedback: just replace [I, γSE] ∈ (ℝ+)² by [I*, γSE*] = [I, γSE] · (1 + λ·netFB). In the case of the input sensitivity (see equation A.20), one needs to add the factor (1 + λ·netFB) (originating from the chain rule of calculus) to the input sensitivity ∂r̄/∂I. Note that for an initial nonzero potential r and γSE·(1 + λ·netFB) > α/β, zero input I = 0 yields a positive potential at equilibrium r̄ (see section A.4.2 for more details). This implies that large feedback can significantly change the initial response behavior: for Ic = 0, for example, the neuron may be tonically active depending on whether γSE·(1 + λ·netFB) is less than or greater than α/β.

Feedback sensitivity: The sensitivity regarding feedback can easily be derived from the input sensitivity ∂r̄/∂I and the sensitivity regarding the self-excitation strength ∂r̄/∂γSE. Using the chain rule, the feedback sensitivity is given by

∂r̄/∂netFB = λ · (I · ∂r̄/∂I(I*, γSE*) + γSE · ∂r̄/∂γSE(I*, γSE*)),   (A.24)

where [I*, γSE*] = [I, γSE] · (1 + λ·netFB). The right-hand side represents a weighted sum of the input sensitivity and the self-excitation strength sensitivity. For low excitatory input I, for example, the feedback sensitivity is approximately equal to the self-excitation strength sensitivity multiplied by λ·γSE. For a weak self-excitation strength, in turn, the feedback sensitivity is dominated by the input sensitivity ∂r̄/∂I. Equation A.24 also demonstrates that the feedback strength parameter λ directly scales the feedback sensitivity regardless of the input strength I or the self-excitation strength γSE. Consequently, one can directly increase the feedback's impact by adjusting the parameter λ.
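The chain rule of equation A.24 can be checked with finite differences. The sketch below uses the low-domain equilibrium, obtained by solving the low-domain quadratic −γSE·r² + (βγSE − α − I)·r + βI = 0 under the assumption of a linear firing rate gr(r) = r (the appendix does not restate gr); all parameter values are arbitrary:

```python
# Finite-difference check of the chain rule in equation A.24, using the
# low-domain equilibrium of A.25 (assuming g_r(r) = r and solving the
# quadratic -gamma_se*r^2 + (beta*gamma_se - alpha - I)*r + beta*I = 0).
from math import sqrt

alpha, beta = 1.0, 2.0                       # arbitrary test values

def r_bar(I, g_se):
    b = beta * g_se - alpha - I
    return (b + sqrt(b * b + 4.0 * beta * g_se * I)) / (2.0 * g_se)

I, g_se, lam, net_fb, h = 1.0, 0.4, 0.5, 0.2, 1e-6
I_star, g_star = I * (1 + lam * net_fb), g_se * (1 + lam * net_fb)

# Left-hand side: d r_bar / d net_fb by a central difference.
lhs = (r_bar(I * (1 + lam * (net_fb + h)), g_se * (1 + lam * (net_fb + h)))
       - r_bar(I * (1 + lam * (net_fb - h)), g_se * (1 + lam * (net_fb - h)))) / (2 * h)

# Right-hand side of A.24: lambda*(I * dr/dI + gamma_se * dr/dgamma_se) at (I*, g*).
dr_dI = (r_bar(I_star + h, g_star) - r_bar(I_star - h, g_star)) / (2 * h)
dr_dg = (r_bar(I_star, g_star + h) - r_bar(I_star, g_star - h)) / (2 * h)
rhs = lam * (I * dr_dI + g_se * dr_dg)

assert lhs > 0 and abs(lhs - rhs) < 1e-6
```

Both sensitivities on the right-hand side are nonnegative here, so the feedback sensitivity is positive, and it scales linearly with λ as stated above.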

A.4.7 Stability. For the mid-domain, the equation gets more complicated due to the incorporation of the self-excitation term and the pool inhibition that receives input from this term as well. Since the resulting equation for λ1,2 cannot be treated analytically, we employed a numerical Monte Carlo analysis instead, testing about 10 billion valid parameter sets. More precisely, a parameter set was considered valid if 0 < p0 < pm < βpβ, 0 < γSE < α/β, 0 ≤ Ic ≤ pm, and I ∈ [θlow, θhigh].

¹⁰ To prove that ∂r̄/∂γSE ≥ 0, show that the numerator of equations A.21 to A.23 is positive by moving the summand containing the square roots (Δlow/Δmid/Δhigh) to one side of the inequality. Squaring both sides (if the side without the square root is less than zero, we are done) and simplification proves the statement.

Each strict inequality is fulfilled

such that the difference is at least ε = 0.001 (e.g., ε ≤ p0, p0 + ε ≤ pm). The value of each parameter was smaller than 100. The eigenvalues in all cases had a negative real part; that is, the tested equilibrium in the mid-domain was stable. In the following, we prove that the equilibria in the low and high domains are stable:

• Stability in the low domain: The ODE simplifies to

ṙ = f(r) := −αr + (β − r) · (I + γSE·gr(r)) · (1 + λ·netFB).   (A.25)

Linearization at the equilibrium r̄ requires the determination of fr(r̄), which is given as

fr(r̄) = −√(I² + 2(α + βγSE)·I + (α − βγSE)²) < 0.   (A.26)

Consequently, the equilibrium is stable.

• Stability in the high domain: The ODE simplifies to

ṙ = f(r) := −αr + (β − r) · (I + γSE·gr(r)) · (1 + λ·netFB) − γ·r.   (A.27)

Linearization at the equilibrium r̄ requires the determination of fr(r̄), which is given as

fr(r̄) = −(1/2) · (4I² + 4(1 + 2βγSE + 2γ + 2α)·I + 4·(α − βγSE + γ)² + 4·(γ + βγSE + α) + 1)^(1/2) < 0,   (A.28, A.29)

implying a stable equilibrium.
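Equation A.26 can be verified numerically. The sketch assumes a linear firing rate gr(r) = r and netFB = 0 (neither is restated in this appendix), with arbitrary parameter values; for a quadratic right-hand side, the central difference reproduces the slope at the equilibrium exactly up to rounding:

```python
# Numerical check of equation A.26 for the low-domain ODE (A.25), assuming
# g_r(r) = r and net_fb = 0.
from math import sqrt

alpha, beta, gamma_se, I = 1.0, 2.0, 0.4, 1.0

def f(r):                                  # low-domain right-hand side
    return -alpha * r + (beta - r) * (I + gamma_se * r)

# Equilibrium: solve -gamma_se*r^2 + (beta*gamma_se - alpha - I)*r + beta*I = 0.
b = beta * gamma_se - alpha - I
r_eq = (b + sqrt(b * b + 4.0 * gamma_se * beta * I)) / (2.0 * gamma_se)
assert abs(f(r_eq)) < 1e-10

# Slope at the equilibrium vs. the closed form of equation A.26.
h = 1e-6
slope = (f(r_eq + h) - f(r_eq - h)) / (2 * h)
a26 = -sqrt(I**2 + 2 * (alpha + beta * gamma_se) * I + (alpha - beta * gamma_se)**2)
assert a26 < 0 and abs(slope - a26) < 1e-6
```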

This proves that the equilibrium is stable in the low and high domains and that it is highly likely to select a parameter set that is stable in the mid-domain, too.

A.4.8 Limit. The potential at equilibrium r̄ is bounded. We utilize the following formula:

√((I + a)² + bI) − (I + a) = b / (√((I + a)² + bI)/I + (I + a)/I) → b/(1 + 1) = b/2 as I → ∞,   (A.30)

and the left-hand side tends to 0 as a → ∞.

Using this relation and Δhigh = √((I + α + γ − βγSE)² + 4βγSE·I), limI→∞ r̄ in the high domain is given as

lim(I→∞) r̄ = lim(I→∞) (1/(2γSE)) · (Δhigh − (I + α + γ − βγSE)) = (1/(2γSE)) · (4βγSE/2) = β.   (A.31)
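The limit can be illustrated directly from the high-domain equilibrium r̄ = (Δhigh − (I + α + γ − βγSE))/(2γSE) that appears in equation A.31; the parameter values below are arbitrary:

```python
# Numerical illustration of the limit in equation A.31: the high-domain
# equilibrium approaches beta as the input I grows.
from math import sqrt

alpha, beta, gamma, gamma_se = 1.0, 2.0, 1.0, 1.0

def r_bar_high(I):
    a = alpha + gamma - beta * gamma_se
    delta_high = sqrt((I + a)**2 + 4 * beta * gamma_se * I)
    return (delta_high - (I + a)) / (2 * gamma_se)

assert r_bar_high(1e2) < r_bar_high(1e6) < beta   # monotone approach from below
assert abs(r_bar_high(1e6) - beta) < 1e-5
```

The gap to β shrinks roughly like 1/I, so the saturation level β is approached but never exceeded.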

ed

Acknowledgments

Pro

Using the same relation, one can show that the limit for the other domains is also β. This is important since this implies that the thresholds θlow/high always exist except for the cases in which the equilibrium starts at a strictly positive value and β p r¯ + Ic > p0/m for I = 0 (see section A.4.2 for more details).

References

orr ect

We thank P.R. Roelfsema for his helpful comments on an earlier version of this article. The constructive criticism of three anonymous reviewers helped to substantially improve the article. Their work is gratefully acknowledged. T.B. is supported by the Graduate School Mathematical Analysis of Evolution, Information and Complexity at Ulm University, and H.N. is supported in part by the Transregional Collaborative Research Centre SFB/TRR 62 Companion-Technology for Cognitive Technical Systems funded by the German Research Foundation (DFG).

Un c

Abbott, L. F., & Chance, F. S. (2005). Drivers and modulators from push–pull and balanced synaptic input. Progress in Brain Research, 149, 147–155. Ahmadian, Y., Rubin, D. B., & Miller, K. D. (2013). Analysis of the stabilized supralinear network. Neural Computation, 25(8), 1994–2037. Amit, D. J., & Brunel, N. (1997). Dynamics of a recurrent network of spiking neurons before and following learning. Network: Computational Neural Systems, 8(4), 373– 404. Angelucci, A., & Bullier, J. (2003). Reaching beyond the classical receptive field of V1 Neurons: Horizontal or feedback axons? Journal of Physiology, 97(2–3), 141–154. Angelucci, A., Levitt, J. B., Walton, E.J.S., Hup´e, J.-M., Bullier, J., & Lund, J. S. (2002). Circuits for local and global signal integration in primary visual cortex. Journal of Neuroscience, 22(19), 8633–8646. Barnes, T., & Mingolla, E. (2013). A neural model of visual figure–ground segregation from kinetic occlusion. Neural Networks, 37, 141–164. Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., & Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron, 76(4), 695–711. Bayerl, P., & Neumann, H. (2004). Disambiguating visual motion through contextual feedback modulation. Neural Computation, 16(10), 2041–2066.

NECO_a_00675-Brosch

neco.cls

September 12, 2014

11:19

Canonical Neural Circuit Method

47

Un c

orr ect

ed

Pro

of

Bayerl, P., & Neumann, H. (2006). A neural model of feature attention in motion perception. BioSystems, 89(1–3), 208–215. Beck, C., & Neumann, H. (2011). Combining feature selection and integration: A neural model for MT motion selectivity. PLoS ONE, 6(7), e21254. Beierlein, M., Gibson, J. R., & Connors, B. W. (2003). Two dynamically distinct inhibitory networks in layer 4 of the neocortex. Journal of Neurophysiology, 90(5), 2987–3000. Berzhanskaya, J., Grossberg, S., & Mingolla, E. (2007). Laminar cortical dynamics of visual form and motion interactions during coherent object motion perception. Spatial Vision, 20(4), 337–395. Bonin, V., Mante, V., & Carandini, M. (2005). The suppressive field of neurons in lateral geniculate nucleus. Journal of Neuroscience, 25(47), 10844–10856. Bosking, W. H., Zhang, Y., Schofield, B., & Fitzpatrick, D. (1997). Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex. Journal of Neuroscience, 17(6), 2112–2127. Bouecke, J. D., Tlapale, E., Kornprobst, P., & Neumann, H. (2011). Neural mechanisms of motion detection, integration, and segregation: From biology to artificial image processing systems. EURASIP Journal on Advances in Signal Processing, 2011, 781561. Brosch, T., & Neumann, H. (2012a). The Brain’s sequential parallelism: Perceptual decision–making and early sensory responses. In Proceedings of the Internation Conference on Neural Information Processing (pp. 41–50). New York: Springer-Verlag. Brosch, T., & Neumann, H. (2012b). The combination of HMAX and HOGs in an attention guided framework for object localization. In Proceedings of the International Conference on Pattern Recognition Applications and Methods (vol. 2, pp. 281–288). Piscataway, NJ: IEEE. Brosch, T., & Neumann, H. (2014). Interaction of feedforward and feedback streams in visual Cortex in a firing-rate model of columnar computations. Neural Networks, 54, 11–16. Bullier, J. (2001). 
Integrated model of visual processing. Brain Research Reviews, 36(2– 3), 96–107. Bullier, J., McCourt, M. E., & Henry, G. H. (1988). Physiological studies on the feedback connection to the striate cortex from cortical areas 18 and 19 of the cat. Experimental Brain Research, 70(1), 90–98. Carandini, M., & Heeger, D. J. (1994). Summation and division by neurons in primate visual cortex. Science, 264(5163), 1333–1336. Carandini, M., & Heeger, D. J. (2012). Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13, 51–62. Carandini, M., Heeger, D. J., & Movshon, J. A. (1997). Linearity and normalization in simple cells of the macaque primary visual cortex. Journal of Neuroscience, 17(21), 8621–8644. Carandini, M., Heeger, D. J., & Movshon, J. A. (1999). Linearity and gain control in V1 simple cells. In P. S. Ulinski, E. G. Jones, and A. Peters (Eds.), Cerebral cortex (pp . 401–443). New York: Kluwer Academic/Plenum. Cardin, V., Friston, K. J., & Zeki, S. (2011). Top-down modulations in the visual form pathway revealed with dynamic causal modeling. Cerebral Cortex, 21(3), 550– 562.

NECO_a_00675-Brosch

neco.cls

September 12, 2014

11:19

48

T. Brosch and H. Neumann

Un c

orr ect

ed

Pro

of

Chen, K., Song, X.-M., & Li, C.-Y. (2012). Contrast-dependent variations in the excitatory classical receptive field and suppressive nonclassical receptive field of cat primary visual cortex. Cerebral Cortex, 23, 283–292. Churchland, P. S., Koch, C., & Sejnowski, T. J. (1990). What is computational neuroscience? In E. L. Schwartz (Ed.), Computational neuroscience (pp. 46–55). Cambridge, MA: MIT Press. Corchs, S., & Deco, G. (2002). Large–scale neural model for visual attention: Integration of experimental single-cell and fMRI data. Cerebral Cortex, 12(4), 339–348. Crick, F., & Koch, C. (1998). Constraints on cortical and thalamic projections: The no-strong-loops hypothesis. Nature, 391, 245–250. Curtu, R., & Ermentrout, B. (2004). Pattern formation in a network of excitatory and inhibitory cells with adaptation. SIAM J. Applied Dynamical Systems, 3(3), 191– 231. Deco, G., Jirsa, V. K., & McIntosh, A. R. (2011). Emerging concepts for the dynamical organization of resting-state activity in the brain. Nature Reviews Neuroscience, 12, 43–56. Deco, G., & Rolls, E. T. (2005). Neurodynamics of biased competition and cooperation for attention: A model with spiking neurons. Journal of Neurophysiology, 94(1), 295–313. Desimone, R. (1998). Visual attention mediated by Biased Competition in Extrastriate visual cortex. Philos. Trans. R. Soc. Lond. B, 353(1373), 1245–1255. Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Reviews of Neuroscience, 18, 193–222. Dragoi, V., & Sur, M. (2000). Dynamic properties of recurrent inhibition in primary visual cortex contrast and orientation dependence of contextual effects. Journal of Neurophysiology, 83(2), 1019–1030. Eckhorn, R. (1999). Neural mechanisms of visual feature binding investigated with microelectrodes and models. Visual Cognition, 6(3), 231–265. Eckhorn, R., Reitboeck, H. J., Arndt, M., & Dicke, P. (1990). 
Feature linking via synchronization among distributed assemblies: Simulations of results from cat visual cortex. Neural Computation, 2(3), 293–307. Ellias, S. A., & Grossberg, S. (1975). Pattern formation, contrast control, and oscillations in the short term memory of shunting on–center off–surround networks. Biological Cybernetics, 20(2), 69–98. Endres, D., Neumann, H., Kolesnik, M., & Giese, M. A. (2011). Hooligan detection the effects of saliency and expert knowledge. In Proceedings of the International Conference on Imaging for Crime Detection and Prevention (pp. 1–6). Piscataway, NJ: IEEE. Ermentrout, G. B., & Kopell, N. (1998). Fine structure of neural spiking and synchronization in the presence of conduction delays. PNAS, 95(3), 1259–1264. Felisberti, F., & Derrington, A. M. (1999). Long–range interactions modulate the contrast gain in the lateral geniculate nucleus of cats. Visual Neuroscience, 16(5), 943–956. Field, D. J., Hayes, A., & Hess, R. F. (1993). Contour integration by the human visual system: Evidence for a local “association field.” Vision Research, 33(2), 173–193. Friedrichs, K. O. (1944). The identity of weak and strong extensions of differential operators. Transactions of the American Mathematical Society, 55, 132–151.

NECO_a_00675-Brosch

neco.cls

September 12, 2014

11:19

Canonical Neural Circuit Method

49

Un c

orr ect

ed

Pro

of

Geisler, W. S., Perry, J. S., Super, B. J., & Gallogly, D. P. (2001). Edge co-occurrence in natural images predicts contour grouping performance. Vision Research, 41(6), 711–724. Gilbert, C. D., & Li, W. (2013). Top-down influences on visual processing. Nature Reviews Neuroscience, 14, 350–363. Girard, P., & Bullier, J. (1989). Visual activity in area V2 during reversible inactivation of area 17 in the macaque monkey. Journal of Neurophysiology, 62(6), 1287–1302. Gove, A., Grossberg, S., & Mingolla, E. (1995). Brightness perception, illusory contours, and corticogeniculate feedback. Visual Neuroscience, 12(6), 1027–1052. Grossberg, S. (1973). Contour enhancement, short-term memory, and constancies in reverberating neural networks. Studies in Applied Mathematics, 52, 213–257. Grossberg, S. (1980). How does a brain build a cognitive code? Psychological Review, 87(1), 1–51. Grossberg, S. (1988). Nonlinear neural networks: Principles, mechanisms, and architectures. Neural Networks, 1, 17–61. Grossberg, S. (2013). Adaptive resonance theory: How a brain learns to consciously attend, learn, and recognize a changing world. Neural Networks, 37, 1–47. Grossberg, S., & Mingolla, E. (1985). Neural dynamics of perceptual grouping: Textures, boundaries, and emergent segmentations. Perception and Psychophysics, 38(2), 141–171. Hamker, F. H. (2004). A dynamic model of how feature cues guide spatial attention. Vision Research, 44(5), 501–521. Hansen, T., & Neumann, H. (2004). Neural mechanisms for the robust representation of junctions. Neural Computation, 16(5), 1013–1037. Hansen, T., & Neumann, H. (2008). A recurrent model of contour integration in primary visual cortex. Journal of Vision, 8(8), 1–25. Harrison, L. M., Stephan, K. E., Rees, G., & Fristona, K. J. (2007). Extra-classical receptive field effects measured in striate cortex with fMRI. NeuroImage, 34(3), 1199–1208. Heeger, D. J. (1992). Normalization of cell responses in cat striate cortex. 
Visual Neuroscience, 9(2), 191–197. Herz, A. V. M., Gollisch, T., Machens, C. K., & Jaeger, D. (2006). Modeling singleneuron dynamics and computations: A balance of detail and abstraction. Science, 314(5796), 80–85. Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117(4), 500–544. Hubel, D. H., & Wiesel, T. N. (1959). Receptive fields of single neurones in the cat’s striate cortex. Journal of Physiology, 148(3), 574–591. Hubel, D. H., & Wiesel, T. N. (1972). Laminar and columnar distribution of geniculocortical fibers in the macaque monkey. Journal Comparative Neurology, 146, 421– 450. Hunter, J. N., & Born, R. T. (2011). Stimulus-dependent modulation of suppressive influences in MT. Journal of Neuroscience, 31(2), 678–686. Hup´e, J. M., James, A. C., Payne, B. R., Lomber, S. G., Girard, P., & Bullier, J. (1998). Cortical feedback improves discrimination between figure and background by V1, V2 and V3 neurons. Nature, 394, 784–787.

NECO_a_00675-Brosch

neco.cls

September 12, 2014

11:19

50

T. Brosch and H. Neumann

Un c

orr ect

ed

Pro

of

Jadi, M. P., & Sejnowski, T. J. (2014a). Cortical oscillations arise from contextual interactions that regulate sparse coding. PNAS, 111(18), 6780–6785. Jadi, M. P., & Sejnowski, T. J. (2014b). Regulating cortical oscillations in an inhibitionstabilized network. Proceedings of the IEEE, 102(5), 830–842. Jones, H. E., Grieve, K. L., Wang, W., & Sillito, A. M. (2001). Surround suppression in primate V1. Journal of Neurophysiology, 86(4), 2011–2028. Jones, H. E., Wang, W., & Sillito, A. M. (2002). Spatial organization and magnitude of orientation contrast interactions in primate V1. Journal of Neurophysiology, 88(5), 1796–1808. Kapadia, M. K., Westheimer, G., & Gilbert, C. D. (2000). Spatial distribution of contextual interactions in primary visual cortex and in visual perception. Journal of Neurophysiology, 84(4), 2048–2062. Kapadia, M. K., Ito, M., Gilbert, C. D., & Westheimer, G. (1995). Improvement in visual sensitivity by changes in local context: Parallel studies in human observers and in V1 of alert monkeys. Neuron, 15(4), 843–856. Kiefer, M. (2007). Top-down modulation of unconscious “Automatic” processes: A gating framework. Advances in Cognitive Psychology, 3(1–2), 289–306. Knierim, J. J., & Van Essen, D. C. (1992). Neuronal responses to static texture patterns in area V1 of the alert macaque monkey. Journal of Neurophysiology, 67(4), 961– 980. Koch, C. (1999). Biophysics of computation. New York: Oxford University Press. Kouh, M., & Poggio, T. (2008). A canonical neural circuit for cortical nonlinear operations. Neural Computation, 20(6), 1427–1451. Lamme, V.A.F. (1995). The neurophysiology visual cortex figure-ground segregation primary. Journal of Neuroscience, 15(2), 1605–1615. Lamme, V.A.F., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23(11), 571–579. Larkum, M. (2013). A Cellular mechanism for cortical associations: An organizing principle for the cerebral cortex. 
Trends in Neurosciences, 36(3), 141–151. ¨ Larkum, M. E., Senn, W., & Luscher, H.-R. (2004). Top-down dendritic input increases the gain of layer 5 pyramidal neurons. Cerebral Cortex, 14(10), 1059–1070. Latham, P. E., & Nirenberg, S. (2004). Computing and stability in cortical networks. Neural Computation, 16(7), 1385–1412. Latham, P. E., Richmond, B. J., Nelson, P. G., & Nirenberg, S. (2000). Intrinsic dynamics in neuronal networks. I. Theory. Journal of Neurophysiology, 83(2), 808– 827. Latham, P. E., Richmond, B. J., Nirenberg, S., & Nelson, P. G. (2000). Intrinsic dynamics in neuronal networks. II: Experiment. Journal of Neurophysiology, 83(2), 828–835. Layher, G., Giese, M., & Neumann, H. (2014). Learning representations of animated motion sequences: A neural model. Topics in Cognitive Science, 6(1), 170– 182. Lee, J., & Maunsell, J.H.R. (2009). A normalization model of attentional modulation of single unit responses. PLoS ONE, 4(2), e4651. Levick, W. R., Cleland, B. G., & Dubin, M. W. (1972). Lateral geniculate neurons of cat: Retinal inputs and physiology. Investigative Ophthalmology and Visual Science, 11(5), 302–311.

NECO_a_00675-Brosch

neco.cls

September 12, 2014

11:19

Canonical Neural Circuit Method

51

Un c

orr ect

ed

Pro

of

Li, X., & Basso, M. A. (2008). Preparing to move increases the sensitivity of superior colliculus neurons. Journal of Neuroscience, 28(17), 4561–4577. Li, Z. (1999). Visual segmentation by contextual influences via intra-cortical interactions in the primary visual cortex. Network: Computation in Neural Systems, 10(2), 187–212. Li, Z. (2001). Computational design and nonlinear dynamics of a recurrent network model of the primary visual cortex. Neural Computation, 13(8), 1749–1780. Louie, K., & Glimcher, P. W. (2012). Efficient coding and the neural representation of value. Annals of the New York Academy of Sciences, 1251, 13–32. Louie, K., Grattan, L. E., & Glimcher, P. W. (2011). Reward value-based gain control: Divisive normalization in parietal cortex. Journal of Neuroscience, 31(29), 10627– 10639. Lowe, D. G., & Binford, T. O. (1983). Perceptual organization as a basis for visual recognition. In Proceedings of the Third National Conference on Artificial Intelligence. (pp. 255–260). Palo Alto, CA: Association for the Advancement of Artificial Intelligence. Lyu, S., & Simoncelli, E. P. (2009). Nonlinear extraction of independent components of natural images using radial gaussianization. Neural Computation, 21(6), 1485– 1519. Markov, N. T., & Kennedy, H. (2013). The Importance of being hierarchical. Current Opinion in Neurobiology, 23(2), 187–194. Mart´ınez-Trujillo, J. C., & Treue, S. (2002). Attentional modulation strength in cortical area MT depends on stimulus contrast. Neuron, 35(2), 365–370. Martinez-Trujillo, J. C., & Treue, S. (2004). Feature-based attention increases the selectivity of population responses in primate visual cortex. Current Biology, 14(9), 744–751. McAdams, C. J., & Maunsell, J.H.R. (1999). Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4. Journal of Neuroscience, 19(1), 431–441. McCormick, D. A., Connors, B. W., Lighthall, J. W., & Prince, D. A. (1985). 
Comparative electrophysiology of pyramidal and sparsely spiny stellate neurons of the neocortex. Journal of Neurophysiology, 54(4), 782–806. McLaughlin, D., Shapley, R., Shelley, M., & Wielaard, D. J. (2000). A neuronal network model of macaque primary visual cortex (V1), Orientation selectivity and dynamics in the input layer 4Cα. PNAS, 97(14), 8087–8092. Moran, J., & Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229(4715), 782–784. Murphy, B. K., & Miller, K. D. (2009). Balanced amplification: A new mechanism of selective amplification of neural activity patterns. Neuron, 61(4), 635–648. Mutch, J., & Lowe, D. G. (2008). Object class recognition and localization using sparse features with limited receptive fields. International Journal of Computer Vision, 80(1), 45–57. Naito, T., Sadakane, O., Okamoto, M., & Sato, H. (2007). Orientation tuning of surround suppression in lateral geniculate nucleus and primary visual cortex of cat. Neuroscience, 149(4), 962–975. Naka, K. I., & Rushton, W.A.H. (1966). S-potentials from luminosity units in the retina of fish (Cyprinidae). Journal of Physiology, 185(3), 587–599.

NECO_a_00675-Brosch

neco.cls

September 12, 2014

11:19

52

T. Brosch and H. Neumann

Un c

orr ect

ed

Pro

of

´ Nassi, J. J., Gomez-Laberge, C., Kreiman, G., & Born, R. T. (2014). Corticocortical feedback increases the spatial extent of normalization. Frontiers in Systems Neuroscience, 8(105), 1–13. Neumann, H., & Mingolla, E. (2001). Computational neural models of spatial integration in perceptual grouping. In T. Shipley & P. Kellman (Eds .), From fragments to objects: Grouping and segmentation in vision (pp. 353–400). Amsterdam: Elsevier. Neumann, H., & Sepp, W. (1999). Recurrent V1–V2 interaction in early visual boundary processing. Biological Cybernetics, 81, 425–444. Ohshiro, T., Angelaki, D. E., & DeAngelis, G. C. (2011). A normalization model of multisensory integration. Nature Neuroscience, 14(6), 775–782. Ozeki, H., Finn, I. M., Schaffer, E. S., Miller, K. D., & Ferster, D. (2009). Inhibitory stabilization of the cortical network underlies visual surround suppression. Neuron, 62(4), 578–592. Pi¨ech, V., Li, W., Reeke, G. N., & Gilbert, C. D. (2013). Network model of topdown influences on local gain and contextual interactions in visual cortex. PNAS, 110(43), E410817. Polat, U., & Sagi, D. (1993). Lateral interactions between spatial channels: Suppression and facilitation revealed by lateral masking experiments. Vision Research, 33(7), 993–999. Przybyszewski, A. W., Gaska, J. P., Foote, W., & Pollen, D. A. (2000). Striate cortex increases contrast gain of macaque LGN neurons. Visual Neuroscience, 17(4), 485– 494. Rabinowitz, N. C., Willmore, B.D.B., Schnupp, J.W.H., & King, A. J. (2011). Contrast gain control in auditory cortex. Neuron, 70(6), 1178–1191. Rankin, J., Tlapale, E., Veltz, R., Faugeras, O., & Kornprobst, P. (2013). Bifurcation analysis applied to a model of motion integration with a multistable stimulus. Journal of Computational Neuroscience, 34(1), 103–124. Rao, R.P.N., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87. 
Raudies, F., Mingolla, E., & Neumann, H. (2011). A model of motion transparency processing with local center-surround interactions and feedback. Neural Computation, 23, 2868–2914. Rauss, K., Schwartz, S., & Pourtoise, G. (2011). Top-down effects on early visual processing in humans: A predictive coding framework. Neuroscience and Biobehavioral Reviews, 35(5), 1237–1253. Recanzone, G. H., & Wurtz, R. H. (2000). Effects of attention ON MT AND MST neuronal activity during pursuit initiation. Journal of Neurophysiology, 83(2), 777– 790. Reynolds, J. H., Chelazzi, L., & Desimone, R. (1999). Competitive mechanisms subserve attention in macaque areas V2 and V4. Journal of Neuroscience, 19(5), 1736– 1753. Reynolds, J. H., & Desimone, R. (2003). Interacting roles of visual attention and visual saliency in V4. Neuron, 37(5), 853–863. Reynolds, J. H., & Heeger, D. J. (2009). The normalization model of attention. Neuron, 61, 168–185.
Reynolds, J. H., Pasternak, T., & Desimone, R. (2000). Attention increases sensitivity of V4 neurons. Neuron, 26(3), 703–714.
Riesenhuber, M., & Poggio, T. (1999). Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11), 1019–1025.
Riesenhuber, M., & Poggio, T. (2000). Models of object recognition. Nature Neuroscience, 3, 1199–1204.
Roelfsema, P. R., Lamme, V. A. F., Spekreijse, H., & Bosch, H. (2002). Figure-ground segregation in a recurrent network architecture. Journal of Cognitive Neuroscience, 14(4), 525–537.
Salin, P.-A., & Bullier, J. (1995). Corticocortical connections in the visual system: Structure and function. Physiological Reviews, 75(1), 107–154.
Salinas, E. (2003). Background synaptic activity as a switch between dynamical states in a network. Neural Computation, 15(7), 1439–1475.
Sandell, J. H., & Schiller, P. H. (1982). Effect of cooling area 18 on striate cortex cells in the squirrel monkey. Journal of Neurophysiology, 48(1), 38–48.
Schwabe, L., Obermayer, K., Angelucci, A., & Bressloff, P. C. (2006). The role of feedback in shaping the extra-classical receptive field of cortical neurons: A recurrent network model. Journal of Neuroscience, 26(36), 9117–9129.
Self, M. W., Kooijmans, R. N., Supèr, H., Lamme, V. A., & Roelfsema, P. R. (2012). Different glutamate receptors convey feedforward and recurrent processing in macaque V1. PNAS, 109(27), 11031–11036.
Seriès, P., & Tarroux, P. (1999). Synchrony and delay activity in cortical column models. Neurocomputing, 26–27, 505–510.
Serre, T., Wolf, L., & Poggio, T. (2005). Object recognition with features inspired by visual cortex. In Proceedings of the Conference on Computer Vision and Pattern Recognition (pp. 994–1000). Piscataway, NJ: IEEE.
Shapley, R. M., & Xing, D. (2013). Local circuit inhibition in the cerebral cortex as the source of gain control and untuned suppression. Neural Networks, 37, 172–181.
Sherman, S. M., & Guillery, R. W. (1998). On the actions that one nerve cell can have on another: Distinguishing "drivers" from "modulators." PNAS, 95(12), 7121–7126.
Sillito, A. M., Cudeiro, J., & Jones, H. E. (2006). Always returning: Feedback and sensory processing in visual cortex and thalamus. Trends in Neurosciences, 29(6), 307–316.
Smith, M. A. (2006). Surround suppression in the early visual system. Journal of Neuroscience, 26(14), 3624–3625.
Somers, D. C., Todorov, E. V., Siapas, A. G., Toth, L. J., Kim, D., & Sur, M. (1998). A local circuit approach to understanding integration of long-range inputs in primary visual cortex. Cerebral Cortex, 8(3), 204–217.
Sperling, G. (1970). Model of visual adaptation and contrast detection. Perception and Psychophysics, 8(3), 143–157.
Spitzer, H., Desimone, R., & Moran, J. (1988). Increased attention enhances both behavioral and neuronal performance. Science, 240(4850), 338–340.
Spratling, M. W. (2008). Reconciling predictive coding and biased competition models of cortical function. Frontiers in Computational Neuroscience, 2(4), 1–8.
Spratling, M. W., & Johnson, M. H. (2004). Neural coding strategies and mechanisms of competition. Cognitive Systems Research, 5, 93–117.
Stemmler, M., Usher, M., & Niebur, E. (1995). Lateral interactions in primary visual cortex: A model bridging physiology and psychophysics. Science, 269(5232), 1877–1880.
Stimberg, M., Wimmer, K., Martin, R., Schwabe, L., Mariño, J., Schummers, J., … Obermayer, K. (2009). The operating regime of local computations in primary visual cortex. Cerebral Cortex, 19(9), 2166–2180.
Tao, L., Shelley, M., McLaughlin, D., & Shapley, R. (2004). An egalitarian network model for the emergence of simple and complex cells in visual cortex. PNAS, 101(1), 366–371.
Terman, D., Kopell, N., & Bose, A. (1998). Dynamics of two mutually coupled slow inhibitory neurons. Physica D, 117(1–4), 241–275.
Thielscher, A., & Neumann, H. (2003). Neural mechanisms of cortico-cortical interaction in texture boundary detection: A modeling approach. Neuroscience, 122, 921–939.
Tononi, G., Sporns, O., & Edelman, G. M. (1992). Reentry and the problem of integrating multiple cortical areas: Simulation of dynamic integration in the visual system. Cerebral Cortex, 2(4), 310–335.
Treue, S. (2001). Neural correlates of attention in primate visual cortex. Trends in Neurosciences, 24(5), 295–300.
Treue, S., & Martínez-Trujillo, J. C. (1999). Feature-based attention influences motion processing gain in macaque visual cortex. Nature, 399, 575–579.
Tsodyks, M., & Gilbert, C. (2004). Neural networks and perceptual learning. Nature, 431, 775–781.
Tsodyks, M. V., Skaggs, W. E., Sejnowski, T. J., & McNaughton, B. L. (1997). Paradoxical effects of external modulation of inhibitory interneurons. Journal of Neuroscience, 17(11), 4382–4388.
Tsui, J. M. G., Hunter, J. N., Born, R. T., & Pack, C. C. (2010). The role of V1 surround suppression in MT motion integration. Journal of Neurophysiology, 103, 3123–3138.
Ullman, S. (1995). Sequence seeking and counter streams: A computational model for bidirectional information flow in the visual cortex. Cerebral Cortex, 5(1), 1–11.
van Vreeswijk, C., Abbott, L. F., & Ermentrout, G. B. (1994). When inhibition not excitation synchronizes neural firing. Journal of Computational Neuroscience, 1(4), 313–321.
van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293), 1724–1726.
Veltz, R., & Faugeras, O. (2010). Local global analysis of the stationary solutions of some neural field equations. SIAM Journal on Applied Dynamical Systems, 9(3), 954–998.
Vogels, T. P., & Abbott, L. F. (2009). Gating multiple signals through detailed balance of excitation and inhibition in spiking networks. Nature Neuroscience, 12(4), 483–491.
Weidenbacher, U., & Neumann, H. (2009). Extraction of surface-related features in a recurrent model of V1–V2 interactions. PLoS ONE, 4(6), e5909.
Williford, T., & Maunsell, J. H. R. (2006). Effects of spatial attention on contrast response functions in macaque area V4. Journal of Neurophysiology, 96(1), 40–54.
Wilson, H. R., & Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12(1), 1–24.
Wilson, H. R., & Cowan, J. D. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13(2), 55–80.
Yu, A. J., Giese, M. A., & Poggio, T. A. (2002). Biophysiologically plausible implementations of the maximum operation. Neural Computation, 14(12), 2857–2881.
Yu, C., & Levi, D. M. (2000). Surround modulation in human vision unmasked by masking experiments. Nature Neuroscience, 3(7), 724–728.
Zhaoping, L. (1998). A neural model of contour integration in the primary visual cortex. Neural Computation, 10(4), 903–940.
Zhaoping, L. (2011). Neural circuit models for computations in early visual cortex. Current Opinion in Neurobiology, 21(5), 808–815.
Zhu, W., Shelley, M., & Shapley, R. (2009). A neuronal network model of primary visual cortex explains spatial frequency selectivity. Journal of Computational Neuroscience, 26(2), 271–287.

Received May 7, 2013; accepted June 30, 2014.
