

Dynamics Analysis of a Population Decoding Model

Jiali Yu, Huajin Tang, and Haizhou Li

Abstract— Information processing in the nervous system involves the activity of large populations of neurons. It is difficult to extract information from these population codes because of the noise inherent in neuronal responses. We propose a divisive normalization model to read the population codes. The dynamics of the model are analyzed by continuous attractor theory. Under certain conditions, the model possesses continuous attractors. Moreover, the explicit expressions of the continuous attractors are provided. Simulations are employed to illustrate the theory.

Index Terms— Continuous attractor, divisive normalization model, multiple-peaked activity, population decoding.

I. INTRODUCTION

Information is encoded in the brain by populations or clusters of cells, rather than by single cells. This encoding strategy is known as population coding [1]–[3]. In population coding, each neuron has a distribution of responses over some set of inputs, and the responses of many neurons may be combined to determine some value about the inputs. Population coding is widely applied in the sensory and motor areas of the brain [4], [5]. For instance, hippocampal place cells respond to the location of a rat in an environment [6]–[8], and cells in the visual cortical areas are tuned to the direction of motion [9]–[11].

Extracting information from these population codes is difficult because of the noise inherent in neuronal activities [12]. If the activity of each neuron is plotted as a function of its preferred orientation, the resulting pattern resembles a noisy hill. Several methods have been proposed to extract the encoded variables from the observed noisy activity; one is the population vector estimator [13], another is the maximum likelihood (ML) estimator [1], [12].

Recent studies on population coding have revealed that continuous stimuli, such as orientation, direction of motion, and the spatial location of objects, are likely to be encoded as continuous attractors in neural systems [14], [15]. A continuous attractor is defined as a set of connected stable equilibrium

points [16]–[18]. Continuous attractors have been used to describe the encoding of continuous stimuli, such as eye position [19], head direction [20], direction of motion [21], [22], cognitive maps [23], and population coding [24]. The existence of continuous attractors depends on many factors, such as the values of the network parameters. This brief shows that, under certain conditions, the network can possess continuous attractors; moreover, the explicit expressions of the continuous attractors are calculated.

Divisive normalization models are now widely studied [25]. Recordings in the primary visual cortex show that this kind of model provides a good fit to the input of neurons in V1. Moreover, such models support many computations, such as winner-take-all [26], contour integration [27], and other neural processing [4], [14]. Deneve et al. [12] used a divisive normalization model to read population codes based on the ML method. We propose a simplified version of the model in [12] and analyze its dynamics. We aim to establish the conditions under which the network possesses continuous attractors and to derive the explicit expression of those attractors. The continuous attractor of our model is then used to read the population codes.

Behavior with respect to noisy inputs must generally be considered in the real world [28]. An estimator that reaches the lower bound dictated by the noise is often referred to as an optimal or ideal observer, because it performs as well as possible given the noise [12]. Deneve et al. pointed out that neural networks can implement an ideal estimator if the level of neuronal noise is independent of the firing rate; when the noise is more Poisson-like, the network is a close approximation to the optimal estimator. Periodic membrane oscillations due to rhythmic background activity are typical for various brain regions [29]. In our model, the firing rate profile is a circular normal (von Mises) function, and a periodic signal can be modeled as an inhomogeneous Poisson process by using the von Mises distribution [30]. Consequently, even when the neuronal noise is not Poisson-like, the neural network can implement an optimal estimator. Based on these points, several kinds of non-Poisson noise are considered in this brief.

This brief is organized as follows. Section II introduces the proposed population decoding model. The dynamics of the model are analyzed in Section III. Simulations illustrating the theory are given in Section IV. The model is applied in a robotic experiment in Section V. Finally, conclusions are given in Section VI.

II. MODEL

Deneve et al. [12] used a divisive normalization model to read population codes




$$
\begin{cases}
u_{ij}(t+1) = \sum_{kl} w_{ij,kl}\, o_{kl}(t)\\[6pt]
o_{ij}(t+1) = \dfrac{u_{ij}(t+1)^2}{S + \mu \sum_{kl} u_{kl}(t+1)^2}
\end{cases} \tag{1}
$$

where the two indices describe each neuron's position, and neuron $ij$ has a preferred orientation and spatial frequency; $w_{ij,kl}$ is the filtering weight, $o_{ij}(t)$ is the activity of unit $ij$ at time $t$, $S$ is a constant, and $\mu$ is the divisive normalization weight.

In order to study the dynamics of this kind of population decoding model, a new model is proposed:
$$\dot{x}(a,t) = -x(a,t) + \frac{\int_{-\infty}^{+\infty} w(a,b)\,x^2(b,t)\,db}{1+\nu\int_{-\infty}^{+\infty} x^2(b,t)\,db} \tag{2}$$
for $t \ge 0$, where $a \in \mathbb{R}$ indexes the individual neurons and $x(a,t)$ denotes the activity of neuron $a$ at time $t$. $w(a,b) > 0$ is the synaptic connection between neurons $a$ and $b$. A neighboring neuron $b$ can either drive neuron $a$ to fire through the synaptic connection $w(a,b)$ or decrease its gain through a divisive connection of strength $\nu$.

Model (2) has only one normalization parameter, $\nu$, which makes its dynamics more convenient to analyze theoretically than those of model (1). Moreover, the neurons form a 1-D array, which is easier to interpret. Finally, to reflect a relatively large number of neurons, the network is taken to have infinitely many neurons, so that sums over neurons can be replaced by integrals.

Definition 1: A set $\{x^*(a)\,|\,a \in \mathbb{R}\}$ is said to be an equilibrium of (2) if
$$-x^*(a) + \frac{\int_{-\infty}^{+\infty} w(a,b)\,x^{*2}(b)\,db}{1+\nu\int_{-\infty}^{+\infty} x^{*2}(b)\,db} = 0 \tag{3}$$
for all $a \in \mathbb{R}$. By Definition 1, an equilibrium is a curve in the $(x,a)$ plane; it is independent of time $t$.

Definition 2: An equilibrium $\{x^*(a)\,|\,a \in \mathbb{R}\}$ of (2) is said to be stable if, given any $\varepsilon > 0$, there exists a $\delta > 0$ such that $|x(a,0) - x^*(a)| \le \delta$, $a \in \mathbb{R}$, implies that


$$|x(a,t) - x^*(a)| \le \varepsilon, \qquad a \in \mathbb{R}$$

for all $t \ge 0$. An equilibrium is called unstable if it is not stable.

Definition 3: A set of equilibria $C$ is said to be a continuous attractor of (2) if it is a connected set and each point $x^* \in C$ is stable.

III. DYNAMICS ANALYSIS

This section studies the dynamics of network (2). The output of the network in response to an input is defined to be some equilibrium point. There are two ways to treat inputs in recurrent neural networks: one is to take external inputs as network inputs, the other is to take initial vectors as network inputs [31]. The latter is adopted here.

Fig. 1. Synaptic connection matrix. The neuron number (1–20) is specified on both axes. wmax = 1, σ = 2.
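For reference, a small Python snippet (our own reconstruction, not part of the brief) rebuilds the matrix of Fig. 1 from the Gaussian weight profile (5) defined below, with the caption's parameters $w_{\max} = 1$ and $\sigma = 2$:

```python
import numpy as np

# 20 neurons, w_max = 1, sigma = 2, as in Fig. 1 (our reconstruction).
idx = np.arange(1, 21, dtype=float)
W = np.exp(-(idx[:, None] - idx[None, :]) ** 2 / (2 * 2.0 ** 2))
# W[i, j] is the synaptic strength between neurons i+1 and j+1; the matrix
# is symmetric and peaks on the diagonal (center-surround excitation).
```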

The initial state of the network is assumed to be periodic:
$$x(a,0) = x(0)\cdot\exp\left(\frac{\cos(a-m)-1}{2\sigma^2}\right) \tag{4}$$
where $x(0)$ is a nonnegative constant and $m$ denotes the peak position of the population activity. Interestingly, the peak of the activity in this neural system can be localized at any value of $m$ within a range. This kind of system often arises in memory models; in models of working memory, $m$ corresponds to the quantity being stored. The function (4) is known as the circular normal or von Mises function and is the 1-D simplification of the profile in [12]. Since periodic membrane oscillations due to rhythmic background activity are typical for various brain regions, it is appropriate for the network input to take this kind of periodic form.

The synaptic connection between neurons $a$ and $b$ is
$$w(a,b) = w(a-b) = w_{\max}\exp\left(-\frac{(a-b)^2}{2\sigma^2}\right) \tag{5}$$
where $w_{\max}$ is a positive constant. Clearly, the weight $w(a,b)$ has a Gaussian shape with standard deviation $\sigma$. Fig. 1 shows the synaptic connection matrix; the gray-scale value of each matrix element represents the strength of the synapse between neurons $a$ and $b$. The connection matrix is symmetric, because the synaptic weight between neurons $a$ and $b$ depends only on the difference $|a-b|$. This kind of center-surround connectivity can be interpreted as excitation between nearby neurons. Compared with the corresponding synaptic connection in [12], the connection (5) is not periodic, which makes it easier to understand.

Lemma 1: Define $w = \sigma\sqrt{\pi}\,w_{\max}$. Each trajectory of network (2) can be represented as
$$x(a,t) = x_{\max}(t)\cdot\exp\left(\frac{\cos(a-m)-1}{2\sigma^2}\right), \qquad t \ge 0 \tag{6}$$
where $x_{\max}(t)$ is some differentiable function, which satisfies
$$\dot{x}_{\max}(t) \approx -x_{\max}(t) + \frac{w\,x_{\max}^2(t)}{1+\sigma\sqrt{2\pi}\,\nu\,x_{\max}^2(t)} \tag{7}$$
for all $t \ge 0$.



Proof: Substituting (6) into the right-hand side of (2), it follows that
$$
-x(a,t) + \frac{\int_{-\infty}^{+\infty} w(a,b)\,x^2(b,t)\,db}{1+\nu\int_{-\infty}^{+\infty} x^2(b,t)\,db}
= -x_{\max}(t)\exp\left(\frac{\cos(a-m)-1}{2\sigma^2}\right) + w_{\max}x_{\max}^2(t)\cdot\frac{\int_{-\infty}^{+\infty} \exp\left(\frac{-(a-b)^2}{2\sigma^2}\right)\exp\left(\frac{\cos(b-m)-1}{\sigma^2}\right)db}{1+\nu x_{\max}^2(t)\int_{-\infty}^{+\infty}\exp\left(\frac{\cos(b-m)-1}{\sigma^2}\right)db}.
$$
By Taylor expansion,
$$\exp\left(\frac{\cos(a-m)-1}{\sigma^2}\right) \approx \exp\left(-\frac{(a-m)^2}{2\sigma^2}\right).$$
Thus
$$
\begin{aligned}
\int_{-\infty}^{+\infty} \exp\left(\frac{-(a-b)^2}{2\sigma^2}\right)\exp\left(\frac{\cos(b-m)-1}{\sigma^2}\right)db
&\approx \int_{-\infty}^{+\infty}\exp\left(-\frac{(a-b)^2+(b-m)^2}{2\sigma^2}\right)db\\
&= \int_{-\infty}^{+\infty}\exp\left(-\frac{(a-m)^2}{4\sigma^2}\right)\exp\left(-\frac{(a+m-2b)^2}{4\sigma^2}\right)db\\
&= \sigma\exp\left(-\frac{(a-m)^2}{4\sigma^2}\right)\int_{-\infty}^{+\infty}\exp\left(-\left(\frac{a+m}{2\sigma}-\frac{b}{\sigma}\right)^2\right)d\!\left(\frac{b}{\sigma}\right)\\
&= \sigma\sqrt{\pi}\,\exp\left(-\frac{(a-m)^2}{4\sigma^2}\right).
\end{aligned}
$$
Moreover,
$$\int_{-\infty}^{+\infty}\exp\left(\frac{\cos(b-m)-1}{\sigma^2}\right)db \approx \int_{-\infty}^{+\infty}\exp\left(-\frac{(b-m)^2}{2\sigma^2}\right)db = \sigma\sqrt{2\pi}.$$
Then
$$
\begin{aligned}
-x(a,t) + \frac{\int_{-\infty}^{+\infty} w(a,b)\,x^2(b,t)\,db}{1+\nu\int_{-\infty}^{+\infty} x^2(b,t)\,db}
&\approx \left[-x_{\max}(t) + \frac{w\,x_{\max}^2(t)}{1+\sigma\sqrt{2\pi}\,\nu\,x_{\max}^2(t)}\right]\cdot\exp\left(-\frac{(a-m)^2}{4\sigma^2}\right)\\
&\approx \left[-x_{\max}(t) + \frac{w\,x_{\max}^2(t)}{1+\sigma\sqrt{2\pi}\,\nu\,x_{\max}^2(t)}\right]\cdot\exp\left(\frac{\cos(a-m)-1}{2\sigma^2}\right).
\end{aligned}
$$
Thus, $x_{\max}(t)$ must satisfy
$$\dot{x}_{\max}(t) \approx -x_{\max}(t) + \frac{w\,x_{\max}^2(t)}{1+\sigma\sqrt{2\pi}\,\nu\,x_{\max}^2(t)}.$$
The result now follows and the proof is complete.
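To make Lemma 1 concrete, the following minimal Python sketch (our illustration, not part of the original brief) integrates an Euler-discretized version of (2) over one period of the neuron array, using the Gaussian weights (5) and the von Mises initial state (4), and tracks the network's peak amplitude alongside the reduced amplitude equation (7). The grid, time step, and parameter values are illustrative assumptions; they match network (9) of Section IV.

```python
import numpy as np

# Illustrative parameters (those of network (9) in Section IV).
w_max, sigma, nu = 3.0, 2.0, 1.0
m = 0.0                               # peak position of the initial bump
a = np.linspace(-np.pi, np.pi, 201)   # one period of the 1-D neuron array
da = a[1] - a[0]
dt = 0.01

# Gaussian synaptic weights (5).
W = w_max * np.exp(-(a[:, None] - a[None, :]) ** 2 / (2 * sigma ** 2))

# von Mises initial state (4) with amplitude x(0) = 1.
x = np.exp((np.cos(a - m) - 1.0) / (2 * sigma ** 2))

xm = 1.0                              # amplitude variable of the reduced ODE (7)
w = sigma * np.sqrt(np.pi) * w_max

for _ in range(3000):
    # Euler step of (2); the integrals become Riemann sums on the grid.
    num = (W @ x ** 2) * da
    den = 1.0 + nu * np.sum(x ** 2) * da
    x = x + dt * (-x + num / den)
    # Euler step of the amplitude equation (7).
    xm = xm + dt * (-xm + w * xm ** 2
                    / (1.0 + sigma * np.sqrt(2 * np.pi) * nu * xm ** 2))

print("network peak:", x.max(), "  amplitude ODE:", xm)
```

Both printed values should settle near the nonzero attractor amplitude derived in Theorem 1 below (about 2.02 for these parameters); the truncation of the integrals to one period introduces only a small discrepancy.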

Lemma 2: $O = \{0 \,|\, a, m \in \mathbb{R}\}$ is a continuous attractor of network (2).

Proof: Clearly, $O$ is a connected set. For each $m \in \mathbb{R}$, it is easy to see that each element $\{0\,|\,a\in\mathbb{R}\} \in O$ is an equilibrium point of (2). Next, we prove that each equilibrium of $O$ is stable. The linearization of (7) at $x = 0$ is given by
$$\frac{d}{dt}\left[x_{\max}(t)-0\right] = -\left[x_{\max}(t)-0\right].$$
Then $x_{\max}(t) - 0 = (x_{\max}(0)-0)\cdot\exp(-t)$, and $0 < \exp(-t) \le 1$ for $t \ge 0$. Given any $\varepsilon > 0$, there exists a $\delta = \varepsilon$ for $a, m \in \mathbb{R}$; suppose $|x_{\max}(0)-0| \le \delta$, then $|x(a,0)-0| \le \delta$. It holds that
$$
\begin{aligned}
|x(a,t)-0| &= |x_{\max}(t)-0|\cdot\left|\exp\left(\frac{\cos(a-m)-1}{2\sigma^2}\right)\right|\\
&\le |x_{\max}(t)-0| = |x_{\max}(0)-0|\cdot|\exp(-t)|\\
&\le |x_{\max}(0)-0| \le \delta = \varepsilon.
\end{aligned}
$$
By Definition 2, $\{0\,|\,a\in\mathbb{R}\}$ is stable. By Definition 3, the set $O$ is a continuous attractor of (2). The proof is complete.

For extracting the optimal direction of movement from population codes, however, a zero continuous attractor is of no use: it is not bell-shaped, and the activities of all neurons are the same. The following theorem establishes sufficient conditions for network (2) to possess a nonzero continuous attractor.

Theorem 1: Define
$$k = 4\sqrt{2\pi}\,\sigma\nu, \qquad w = \sigma\sqrt{\pi}\,w_{\max}, \qquad r = \frac{2\left(w+\sqrt{w^2-k}\right)}{k}.$$
If $0 \le k < w^2$, then the set
$$C = \left\{ r\cdot\exp\left(\frac{\cos(a-m)-1}{2\sigma^2}\right) \,\middle|\, a, m \in \mathbb{R} \right\}$$
is a nonzero continuous attractor of (2).

Proof: Clearly, $C$ is a connected set.


Given each $m \in \mathbb{R}$, for $\{x^*(a,m)\,|\,a\in\mathbb{R}\} \in C$, $x^*(a,m)$ can be written as
$$x^*(a,m) = r\cdot\exp\left(\frac{\cos(a-m)-1}{2\sigma^2}\right).$$
Substituting $x^*(a,m)$ into the right-hand side of (2) gives
$$-r + \frac{w r^2}{1+\sigma\sqrt{2\pi}\,\nu r^2} = 0. \tag{8}$$
Then
$$-x^*(a,m) + \frac{\int_{-\infty}^{+\infty} w(a,b)\,x^{*2}(b,m)\,db}{1+\nu\int_{-\infty}^{+\infty} x^{*2}(b,m)\,db} = 0$$
for $a \in \mathbb{R}$. By Definition 1, $\{x^*(a,m)\,|\,a\in\mathbb{R}\}$ is an equilibrium.

In the second part, we prove that each equilibrium of $C$ is stable. By simple calculation, the linearization of (7) at $r$ is given by
$$\frac{d}{dt}\left[x_{\max}(t)-r\right] = \left(-1+\frac{2wr}{(1+\sigma\sqrt{2\pi}\,\nu r^2)^2}\right)\cdot\left[x_{\max}(t)-r\right].$$
It follows from (8) that
$$1+\sigma\sqrt{2\pi}\,\nu r^2 = wr$$
so
$$\frac{(1+\sigma\sqrt{2\pi}\,\nu r^2)^2}{2wr} = \frac{wr}{2} = \frac{w^2+w\sqrt{w^2-k}}{k} > 1.$$
It follows that
$$-1+\frac{2wr}{(1+\sigma\sqrt{2\pi}\,\nu r^2)^2} < 0.$$
Given any $\varepsilon > 0$, there exists a $\delta = \varepsilon$ for $a, m \in \mathbb{R}$; suppose $|x_{\max}(0)-r| \le \delta$, then
$$\left|x(a,0) - r\exp\left(\frac{\cos(a-m)-1}{2\sigma^2}\right)\right| \le \delta$$
i.e.,
$$|x(a,0)-x^*(a,m)| \le \delta.$$
Then it holds that
$$
\begin{aligned}
\left|x(a,t)-x^*(a,m)\right|
&= |x_{\max}(t)-r|\cdot\left|\exp\left(\frac{\cos(a-m)-1}{2\sigma^2}\right)\right|\\
&\le |x_{\max}(t)-r|\\
&= |x_{\max}(0)-r|\cdot\exp\left(\left(-1+\frac{2wr}{(1+\sigma\sqrt{2\pi}\,\nu r^2)^2}\right)t\right)\\
&\le |x_{\max}(0)-r| \le \delta = \varepsilon.
\end{aligned}
$$
By Definition 2, $\{x^*(a,m)\,|\,a\in\mathbb{R}\}$ is stable. Thus, by Definition 3, $C$ is a continuous attractor. The proof is complete.

Both Theorem 1 and Bayesian inference aim to estimate the encoded variables, but they are different methods: Theorem 1 is based on stability theory, while Bayesian inference is a statistical method closely related to discussions of subjective probability, often called "Bayesian probability." Moreover, Theorem 1 is a neural decoding method. In this brief, we want not only to understand information processing in the nervous system but also to resolve the estimation problem; for this purpose, Theorem 1 is the more suitable tool. In the next section, we focus only on the nonzero continuous attractor.
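As a numerical sanity check of Theorem 1 (our own sketch; the helper name and interface are hypothetical, not from the brief), the following Python function computes $k$, $w$, and $r$ from the network parameters, confirms the existence condition $0 \le k < w^2$, and verifies both the fixed-point equation (8) and the negativity of the linearization eigenvalue:

```python
import numpy as np

def attractor_amplitude(w_max: float, sigma: float, nu: float) -> float:
    """Amplitude r of the nonzero continuous attractor in Theorem 1.

    Hypothetical helper (ours, not from the brief); raises if the
    existence condition 0 <= k < w^2 fails.
    """
    k = 4.0 * np.sqrt(2.0 * np.pi) * sigma * nu
    w = sigma * np.sqrt(np.pi) * w_max
    if not 0.0 <= k < w ** 2:
        raise ValueError("condition 0 <= k < w^2 fails: no nonzero attractor")
    r = 2.0 * (w + np.sqrt(w ** 2 - k)) / k

    # r should satisfy the fixed-point equation (8) ...
    denom = 1.0 + sigma * np.sqrt(2.0 * np.pi) * nu * r ** 2
    assert abs(-r + w * r ** 2 / denom) < 1e-9
    # ... and the linearization of (7) at r should be contracting.
    assert -1.0 + 2.0 * w * r / denom ** 2 < 0.0
    return r

print(attractor_amplitude(3.0, 2.0, 1.0))  # about 2.02 for network (9) below
```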


IV. SIMULATIONS

In this section, simulations are carried out to further illustrate the theory established in the last section. Consider the following network:

$$\dot{x}(a,t) = -x(a,t) + \frac{\int_{-\infty}^{+\infty} 3\exp\left(\frac{-(a-b)^2}{8}\right)x^2(b,t)\,db}{1+\int_{-\infty}^{+\infty} x^2(b,t)\,db} \tag{9}$$
for $t \ge 0$. Clearly,
$$w(a,b) = 3\exp\left(\frac{-(a-b)^2}{8}\right)$$
and the parameters of the network are $w_{\max} = 3$, $\sigma = 2$, $\nu = 1$. By Theorem 1, the network possesses a nonzero continuous attractor
$$C = \left\{\frac{2\left(w+\sqrt{w^2-k}\right)}{k}\cdot\exp\left(\frac{\cos(a-m)-1}{8}\right)\,\middle|\, a, m \in \mathbb{R}\right\}$$
where
$$k = 8\sqrt{2\pi}, \qquad w = 6\sqrt{\pi}.$$
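Numerically, $k = 8\sqrt{2\pi} \approx 20.05$ and $w = 6\sqrt{\pi} \approx 10.63$, so $0 \le k < w^2 \approx 113.10$ holds and the attractor amplitude is $r = 2(w+\sqrt{w^2-k})/k \approx 2.02$ (this evaluation is ours; it matches the hypothetical `attractor_amplitude` sketch above).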

In Fig. 2, the four thin blue curves are the initial states for different values of $m$ and four different types of noise: 0-1 normal noise, Gaussian white noise, Weibull noise, and Rayleigh noise. After relaxation, the network settles onto continuous attractors (thick red curves). Fig. 3 shows the decoding process for a multiple-peaked input with 0-1 normal noise.


Fig. 2. Continuous attractors of model (9) for many values of m and for many different types of noise. The red curves are the continuous attractors, while the blue curves are the initial states with different types of noise: 0-1 normal noise, Gaussian white noise, Weibull noise, and Rayleigh noise.


Fig. 3. Result of reading 0-1 normal noisy population codes. The blue curve is the noisy input; after 14 iterations, the network converges to the continuous attractor (red curve). The preferred direction corresponding to the peak position of the continuous attractor is the estimate of the moving direction.

The network has 60 neurons with preferred directions uniformly distributed between 0° and 360°. The blue curve in Fig. 3 is the noisy input; after 14 iterations, the network converges to the continuous attractor (red curve). The preferred direction corresponding to the peak position of the continuous attractor is the estimate of the moving direction, as the readout sketch below illustrates.
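A minimal Python sketch of this peak readout (our illustration; the function name and interface are hypothetical) is:

```python
import numpy as np

def estimate_direction(x: np.ndarray) -> float:
    """Peak readout: preferred direction (degrees) of the most active cell.

    'x' is the activity vector after the network has relaxed onto the
    continuous attractor, e.g., 60 neurons covering 0-360 degrees.
    """
    n = len(x)
    prefs_deg = np.arange(n) * 360.0 / n   # uniformly spaced preferred directions
    return float(prefs_deg[int(np.argmax(x))])
```

A circular mean of the activity over the preferred directions would give a sub-bin estimate, but the peak cell suffices here because the converged profile is a smooth, single-peaked bump.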

V. APPLICATION IN ROBOTIC EXPERIMENT

In this section, we implement model (2) in a mobile robot to solve a spatial memory task. The robot is placed inside a simulated maze environment and commanded to search for a hidden goal (Fig. 4).

Fig. 4. Neuro-cognitive robotics platform. The maze has four cue walls with different shapes and colors; a hidden platform is placed randomly inside the maze; the camera and compass mounted on the robot head provide the vision and orientation inputs; and IR sensors are mounted on the body to detect the hidden goal platform. In the experiment, the robot can start its journey at any of the starting points marked in red in the maze.

The layout of the enclosure differs from the plus maze used in the experiment of [32]. A hidden goal platform is placed randomly in the maze. To provide cues for the robot, four visible walls with different shapes and colors are placed outside the maze. On the robot, a camera is mounted on the head to provide the visual input, a compass is mounted on the head to provide the orientation input, and IR sensors are mounted on the body to detect the hidden goal platform. In the experiment, the robot moves based on the visual and orientation inputs.

As in [32], a reward system drives the neural system depending on the goal-hunting results. After the robot has found a hidden goal in the maze, a positive reward is given to that direction, so the robot acquires knowledge of the goal location through the reward system. When the robot has not found the hidden goal, a negative reward is given to the system.

On each trial, the robot starts from any of the starting points marked in red. The robot explores the maze autonomously and chooses a moving direction until it encounters the hidden platform or until a time limit of 1000 s is reached. When the robot pauses, it looks to the left, front, and right sequentially. Model (2) is used to choose the moving direction. The model is composed of 60 neurons corresponding to directions 0°–359°. The visual and orientation input to the network is a three-peaked noisy hill. By regulating the parameters of the network, the activity of the network converges over time to a smooth hill. The preferred direction corresponding to the peak cell is the estimate of the moving direction, and the robot moves toward this direction. This scenario is repeated to see how the robot searches for the hidden goal in an unknown environment.

Fig. 5 shows the population decoding process. The blue curves are the visual and orientation inputs with different types of non-Poisson noise and different $m$ values; after some


Fig. 5. Decoding process of the moving direction in the spatial memory task. The blue curves are the network inputs with different types of noise and different m values, and the red curves are the continuous attractors of the network. The preferred directions corresponding to the peak cells are the moving directions. (a) 0-1 normal noise. (b) Gaussian white noise. (c) Rayleigh noise. (d) Weibull noise.

iterations, the network converges to the continuous attractors (red curves). The preferred directions corresponding to the peak positions of the continuous attractors are the estimates of the moving directions, respectively. Because the four types of noise considered here are not Poisson-like, the decoding results are meaningful in this application.

VI. CONCLUSION

In this brief, we proposed a model for population decoding. The decoding performance relies to a great extent on the specific neural dynamics. Under certain conditions, this model possesses continuous attractors. The parameters of the model can easily be chosen to satisfy these conditions, and the derived nonzero continuous attractor is then used to decode information properly. This model is a good prototype for theoretical study, and it simplifies the calculation. Moreover, it provides a solution for the spatial memory task in our robotic experiment. The analysis in this brief may contribute to a better understanding of population decoding in neural systems.

REFERENCES

[1] A. Pouget, P. Dayan, and R. Zemel, "Information processing with population codes," Nature Rev. Neurosci., vol. 1, no. 2, pp. 125–132, 2000.
[2] S. Amari and H. Nakahara, "Difficulty of singularity in population coding," Neural Comput., vol. 17, no. 4, pp. 839–858, 2005.
[3] H. S. Seung and H. Sompolinsky, "Simple models for reading neuronal population codes," Proc. Nat. Acad. Sci., vol. 90, no. 22, pp. 10749–10753, 1993.
[4] S. Wu, S. Amari, and H. Nakahara, "Population coding and decoding in a neural field: A computational study," Neural Comput., vol. 14, no. 5, pp. 999–1026, 2002.
[5] S. Amari, "Dynamics of pattern formation in lateral-inhibition type neural fields," Biol. Cybern., vol. 27, no. 2, pp. 77–87, 1977.


[6] J. O'Keefe and J. Dostrovsky, "The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely moving rat," Brain Res., vol. 34, no. 1, pp. 171–175, 1971.
[7] W. Bair, "Spike timing in the mammalian visual system," Current Opinion Neurobiol., vol. 9, no. 4, pp. 447–453, 1999.
[8] A. Borst and F. E. Theunissen, "Information theory and neural coding," Nature Neurosci., vol. 2, no. 11, pp. 947–957, 1999.
[9] J. Maunsell and D. C. Van Essen, "Functional properties of neurons in middle temporal visual area of the macaque monkey. I. Selectivity for stimulus direction, speed, and orientation," J. Neurophysiol., vol. 49, no. 5, pp. 1127–1147, 1983.
[10] W. Usrey and R. Reid, "Synchronous activity in the visual system," Annu. Rev. Physiol., vol. 61, pp. 435–456, Mar. 1999.
[11] R. Zemel, P. Dayan, and A. Pouget, "Probabilistic interpretation of population codes," Neural Comput., vol. 10, no. 2, pp. 403–430, 1998.
[12] S. Deneve, P. Latham, and A. Pouget, "Reading population codes: A neural implementation of ideal observers," Nature Neurosci., vol. 2, no. 8, pp. 740–745, 1999.
[13] A. Georgopoulos, J. Kalaska, and R. Caminiti, "On the relations between the direction of two dimensional arm movements and cell discharge in primate motor cortex," J. Neurosci., vol. 2, no. 11, pp. 1527–1537, 1982.
[14] S. Wu and S. Amari, "Computing with continuous attractors: Stability and online aspects," Neural Comput., vol. 17, no. 10, pp. 2215–2239, 2005.
[15] E. T. Rolls, "An attractor network in the hippocampus: Theory and neurophysiology," Learn. Memory, vol. 14, nos. 7–12, pp. 714–731, 2007.
[16] J. Yu, Z. Yi, and L. Zhang, "Representations of continuous attractors of recurrent neural networks," IEEE Trans. Neural Netw., vol. 20, no. 2, pp. 368–372, Feb. 2009.
[17] J. Yu, Z. Yi, and J. Zhou, "Continuous attractors of Lotka-Volterra recurrent neural networks with infinite neurons," IEEE Trans. Neural Netw., vol. 21, no. 10, pp. 1690–1695, Oct. 2010.
[18] L. Zou, H. Tang, K. C. Tan, and W. Zhang, "Analysis of continuous attractors for 2-D linear threshold neural networks," IEEE Trans. Neural Netw., vol. 20, no. 1, pp. 175–180, Jan. 2009.
[19] H. S. Seung, "Continuous attractors and oculomotor control," Neural Netw., vol. 11, nos. 7–8, pp. 1253–1258, 1998.
[20] K. C. Zhang, "Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory," J. Neurosci., vol. 16, no. 6, pp. 2112–2126, 1996.
[21] H. S. Seung and D. D. Lee, "The manifold ways of perception," Science, vol. 290, no. 5500, pp. 2268–2269, 2000.
[22] S. M. Stringer, E. T. Rolls, T. P. Trappenberg, and I. E. T. Araujo, "Self-organizing continuous attractor networks and motor function," Neural Netw., vol. 16, no. 2, pp. 161–182, 2003.
[23] A. Samsonovich and B. L. McNaughton, "Path integration and cognitive mapping in a continuous attractor neural network model," J. Neurosci., vol. 17, no. 15, pp. 5900–5920, 1997.
[24] A. Pouget, K. Zhang, S. Deneve, and P. E. Latham, "Statistically efficient estimation using population coding," Neural Comput., vol. 10, no. 2, pp. 373–401, 1998.
[25] E. Salinas, "Background synaptic activity as a switch between dynamical states in a network," Neural Comput., vol. 15, no. 7, pp. 1439–1475, 2003.
[26] D. K. Lee, L. Itti, C. Koch, and J. Braun, "Attention activates winner-take-all competition among visual filters," Nature Neurosci., vol. 2, no. 4, pp. 375–381, 1999.
[27] Z. Li, "A neural model of contour integration in the primary visual cortex," Neural Comput., vol. 10, no. 4, pp. 903–940, 1998.
[28] W. J. Ma, J. Beck, P. Latham, and A. Pouget, "Bayesian inference with probabilistic population codes," Nature Neurosci., vol. 9, no. 11, pp. 1432–1438, 2006.
[29] E. Y. Cheu, J. Yu, C. H. Tan, and H. Tang, "Synaptic conditions for auto-associative memory storage and pattern completion in Jensen et al.'s model of hippocampal area CA3," J. Comput. Neurosci., vol. 33, no. 3, pp. 435–447, 2012.
[30] S. Grün and S. Rotter, Analysis of Parallel Spike Trains. New York: Springer-Verlag, 2010, ch. 4, p. 74.
[31] Z. Yi, L. Zhang, J. Yu, and K. K. Tan, "Permitted and forbidden sets in discrete-time linear threshold recurrent neural networks," IEEE Trans. Neural Netw., vol. 20, no. 6, pp. 952–963, Jun. 2009.
[32] W. Huang, H. Tang, J. Yu, and C. H. Tan, "Neuro-cognitive robot for spatial navigation," in Proc. 18th Int. Conf. Neural Inf. Process., Shanghai, China, Nov. 2011, pp. 485–492.
