Biological Cybernetics

Biol. Cybernetics 25, 83-92 (1977)

© by Springer-Verlag 1977

Analysis and Simulation of Double-Layer Neural Networks with Mutually Inhibiting Interconnections

T. Tokura and I. Morishita

Faculty of Engineering, University of Tokyo, Tokyo, Japan

Abstract. Double-layer neural networks with mutually inhibiting interconnections are analyzed using a continuous-variable model of the neuron. The first layer consists of excitatory neurons while the second layer consists of inhibitory neurons. Both feedforward and feedback interconnections exist between the two layers. An autonomous system of nonlinear differential equations is introduced to describe the network dynamics, and the stability conditions for some classes of equilibria are investigated in detail. Several simulation results are also presented. It is shown that even those networks which are formed with rather powerless synapses are capable of carrying out input pattern sharpening, temporary information storage, and periodic signal generation.

I. Introduction

Single-layer neural networks with mutually inhibiting interconnections have been studied in a previous paper (Morishita and Yajima, 1972), and further work on this class of networks has been reported by Hadeler (1974). The present paper deals with double-layer networks with a similar structure of interconnections. As the term "double-layer" indicates, the network consists of two layers. The first layer consists of excitatory neurons C_1^(1), C_2^(1), ..., C_N^(1) and the second layer consists of inhibitory neurons C_1^(2), C_2^(2), ..., C_N^(2). The two layers run in parallel in a cortex, providing a one-to-one correspondence between the first-layer and second-layer neurons. Every first-layer neuron C_i^(1) transmits an excitatory signal to the corresponding neuron C_i^(2) and some of its neighbours in the second layer. On the other hand, every second-layer neuron C_i^(2) transmits an inhibitory signal to every first-layer neuron except the corresponding neuron C_i^(1) and some of its neighbours. When a first-layer neuron is firing, every first-layer neuron except the firing neuron itself receives some inhibitory signals. Thus, this is a type of mutually inhibiting network.

A mathematical model of this network is described and its behavior is investigated by means of analysis and computer simulation. It is shown that the double-layer network is capable of carrying out input pattern sharpening, temporary information storage and periodic signal generation. Thus, the information processing capabilities of the double-layer network are essentially the same as those of the single-layer network. However, the double-layer network is not simply a functional equivalent of the single-layer network. In fact, only the double-layer network has the following remarkable properties. First, even those networks which are formed with rather powerless synapses are capable of carrying out any of the three types of information processing. Second, a result of information processing is given as the firing of a group of neurons, not as the firing of a single neuron. Third, both excitatory and inhibitory signals can be used as the network output.

The cerebral cortex is usually described as consisting of six laminae, but little is known about how neurons are interconnected in each lamina. However, there is some neuroanatomical and neurophysiological evidence suggesting that basket-type inhibitory neurons in lamina 2 form a double-layer network with star-pyramidal excitatory neurons in the same lamina, and basket-type inhibitory neurons in lamina 4 form another double-layer network with pyramidal excitatory neurons in laminae 3 and 5 (Szentágothai, 1969). Combining this evidence with the theoretical results described above leads us to the idea that it might be the double-layer network that plays the most essential role in the information processing performed in the cerebral cortex. At least, this idea does not conflict with the anatomical fact that most neurons in our central nervous system are interconnected with a large number of relatively powerless synapses.

II. The Network Model

Figure 1 shows a schematic diagram of a double-layer network consisting of 2N neurons. The two layers run in parallel, providing a one-to-one correspondence between the first-layer and second-layer neurons. The axon branches of the first-layer neuron C_i^(1) terminate on the dendrites of the corresponding neuron C_i^(2) and its 2k neighbours C_{i-1}^(2), C_{i+1}^(2), ..., C_{i-k}^(2), C_{i+k}^(2), while the axon branches of the second-layer neuron C_i^(2) terminate on the dendrites of every first-layer neuron except the corresponding neuron C_i^(1) and its 2k neighbours C_{i-1}^(1), C_{i+1}^(1), ..., C_{i-k}^(1), C_{i+k}^(1). Any first-layer neuron C_j^(1) and any second-layer neuron C_j^(2) have the same structure of axon branches as C_i^(1) and C_i^(2), respectively. Of course, there are some irregularities in the network structure near the network boundary, but we shall pay little attention to neurons existing near the boundary because such neurons are only a small fraction of all the network members. The input signals u_1, u_2, ..., u_M are assumed to be excitatory. We use the letter y to denote membrane potentials, the letter z or u to denote nerve impulse frequencies, and the letter w or v to denote the powerfulness of synapses. Using the neuron model shown in Figure 2, we describe the network dynamics as

\tau_1 \frac{dy_i^{(1)}(t)}{dt} + y_i^{(1)}(t) = -w^{(1)} \sum_{j=1}^{i-k-1} z_j^{(2)}(t) - w^{(1)} \sum_{j=i+k+1}^{N} z_j^{(2)}(t) + \sum_{j=1}^{M} v_{ij} u_j(t),

\tau_2 \frac{dy_i^{(2)}(t)}{dt} + y_i^{(2)}(t) = w^{(2)} \sum_{j=i-k}^{i+k} z_j^{(1)}(t),

z_i^{(l)}(t) = \phi[y_i^{(l)}(t)], \qquad l = 1, 2,

where \phi[y] is a nonlinearity such that

\phi[y] = y \ \text{for} \ y \ge 0, \qquad \phi[y] = 0 \ \text{for} \ y < 0.
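As an illustration of these dynamics, the sketch below integrates the two-layer equations with a simple forward-Euler scheme. It is only a minimal example: the values of N, k, the time constants, the weights w^(1), w^(2), the step size, and the input pattern are assumptions chosen for illustration, and the external drive sum_j v_ij u_j(t) is collapsed into a single effective term x_i.

```python
import numpy as np

# Forward-Euler sketch of the double-layer dynamics (assumed parameters).
N, k = 30, 1                  # network size and neighbourhood half-width (assumptions)
tau1, tau2 = 1.0, 1.0         # membrane time constants (assumptions)
w1, w2 = 0.4, 0.5             # synaptic strengths w^(1), w^(2) (assumptions)
phi = lambda y: np.maximum(y, 0.0)   # threshold-linear nonlinearity phi[y]

def step(y1, y2, x, dt=0.05):
    """One Euler step for the first- and second-layer membrane potentials."""
    z1, z2 = phi(y1), phi(y2)
    ny1, ny2 = np.empty(N), np.empty(N)
    for i in range(N):
        lo, hi = max(0, i - k), min(N, i + k + 1)
        # inhibition from every second-layer neuron outside the 2k-neighbourhood
        inhib = z2.sum() - z2[lo:hi].sum()
        ny1[i] = y1[i] + dt * (-y1[i] - w1 * inhib + x[i]) / tau1
        # excitation from the 2k-neighbourhood of the first layer
        ny2[i] = y2[i] + dt * (-y2[i] + w2 * z1[lo:hi].sum()) / tau2
    return ny1, ny2

# Example: a broad, slightly peaked input pattern (assumed), relaxed over many steps.
x = 1.0 + 0.1 * np.exp(-((np.arange(N) - N / 2) ** 2) / 20.0)
y1, y2 = np.zeros(N), np.zeros(N)
for _ in range(5000):
    y1, y2 = step(y1, y2, x)
print(np.round(phi(y1), 3))   # first-layer firing rates after relaxation
```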

From the equilibrium equation y^(1)* can be determined. In addition to this, we can show that all the eigenvalues of A(y^(1)*, y^(2)*) have positive real parts if and only if all the eigenvalues of W^(1)W^(2)Φ[y^(1)*] + I_N have positive real parts (see Appendix A), where I_N denotes an N × N unit matrix. This means that the stability of an equilibrium solution can be determined from the eigenvalues of the N × N matrix W^(1)W^(2)Φ[y^(1)*] + I_N.

Consider the class of equilibrium solutions y*[i; n] in which the n adjacent first-layer neurons C_i^(1), C_{i+1}^(1), ..., C_{i+n-1}^(1) are firing and all the other first-layer neurons are not firing; we denote this class of equilibrium solutions by E_n. For such a solution Φ[y^(1)*] reduces to the diagonal matrix whose diagonal entries are 1 at the firing neurons and 0 elsewhere, so that only the firing neurons enter the eigenvalue problem. Define the n × n symmetric matrix B_n whose (l, m) entry is 1 for l = m and a min(|l - m|, K) for l ≠ m,

B_n =
\begin{pmatrix}
1 & a & 2a & \cdots & Ka & \cdots & Ka \\
a & 1 & a & \cdots & & & Ka \\
2a & a & 1 & \ddots & & & \vdots \\
\vdots & & \ddots & \ddots & \ddots & & \vdots \\
Ka & & & \ddots & 1 & a & 2a \\
\vdots & & & & a & 1 & a \\
Ka & \cdots & Ka & \cdots & 2a & a & 1
\end{pmatrix},    (20)

and let p and q denote the numbers of not-firing neurons on the two sides of the firing group, so that

p + q + n = N.    (21)

The eigenvalues of W^(1)W^(2)Φ[y^(1)*] + I_N can then be obtained from

|W^{(1)}W^{(2)}\Phi[y^{(1)*}] + I_N - \lambda I_N| = (1 - \lambda)^{p+q} \, |B_n - \lambda I_n| = 0.    (22)

Thus, the necessary and sufficient condition for y*[i; n] to be stable is that B_n be positive definite.
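A small numerical check of the factorization (22) is sketched below. It assumes, consistently with the form of B_n in (20), that the (i, j) entry of W^(1)W^(2) equals a·min(|i - j|, K) away from the boundary and that Φ[y^(1)*] is the diagonal 0/1 matrix marking the n firing neurons; these constructions and the parameter values are ours, not taken from the paper.

```python
import numpy as np

# Check (21)-(22): the spectrum of W1*W2*Phi + I_N consists of the eigenvalue 1
# with multiplicity p + q and the eigenvalues of B_n (assumed constructions).
N, K, a, n, i0 = 40, 3, 0.2, 5, 12          # i0: first index of the firing group
idx = np.arange(N)
W1W2 = a * np.minimum(np.abs(idx[:, None] - idx[None, :]), K)   # effective inhibition
Phi = np.diag(((idx >= i0) & (idx < i0 + n)).astype(float))     # firing indicator
M = W1W2 @ Phi + np.eye(N)

sub = slice(i0, i0 + n)
Bn = np.eye(n) + W1W2[sub, sub]             # this is B_n of (20)
eig_M = np.sort(np.linalg.eigvals(M).real)
eig_expected = np.sort(np.concatenate([np.ones(N - n), np.linalg.eigvalsh(Bn)]))
print(np.allclose(eig_M, eig_expected))     # True
```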

Let

C_n = \frac{1}{a}(B_n - I_n) =
\begin{pmatrix}
0 & 1 & 2 & \cdots & K & \cdots & K \\
1 & 0 & 1 & \cdots & & & K \\
2 & 1 & 0 & \ddots & & & \vdots \\
\vdots & & \ddots & \ddots & \ddots & & \vdots \\
K & & & \ddots & 0 & 1 & 2 \\
\vdots & & & & 1 & 0 & 1 \\
K & \cdots & K & \cdots & 2 & 1 & 0
\end{pmatrix},    (23)

and denote by λ_min the minimum eigenvalue of B_n and by μ_min the minimum eigenvalue of C_n. Since B_n and C_n are real and symmetric, it holds that

\lambda_{min} = a \mu_{min} + 1,    (24)

and (23) shows that μ_min depends on K and n, but not on i. We also note that μ_min is negative for any K and n. Since positive definiteness of B_n requires λ_min > 0, it follows that

a < a_{max}(K, n) = \frac{1}{|\mu_{min}(K, n)|}.    (25)

This relation gives the necessary and sufficient condition that B_n be positive definite and that every equilibrium solution belonging to the class E_n be asymptotically stable. The values of a_max(K, n) for K = 3, 5, 7 and n = 2, 3, ..., 25 have been calculated and the results are shown in Figure 3. When K = 3 and a = 0.2, for example, two, three, four or five adjacent neurons can fire together depending on x, but the firing of more than five adjacent neurons never occurs even when x has a flat pattern. Figure 3 suggests that

\lim_{n \to \infty} a_{max}(K, n) = \frac{1}{K^2}.    (26)

In fact, we can show that if

a < \frac{1}{K^2},    (27)

then B_n is positive definite for any K and n, that is, any number of adjacent neurons may fire together.

Fig. 3. Numerical values of a_max(K, n). If a > a_max(K, n), the firing of a group of n adjacent neurons is unstable
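As a numerical illustration of (23)-(27), the following sketch computes a_max(K, n) = 1/|μ_min(K, n)| from C_n. According to the text, for K = 3 the values should stay above 0.2 up to n = 5, fall below 0.2 from n = 6 on, and approach 1/K² as n grows; the sampled n and the helper name are our own choices.

```python
import numpy as np

# a_max(K, n) = 1/|mu_min(K, n)|, with mu_min the minimum eigenvalue of C_n (eqs. 23-25).
def a_max(K, n):
    idx = np.arange(n)
    Cn = np.minimum(np.abs(idx[:, None] - idx[None, :]), K).astype(float)
    return 1.0 / abs(np.linalg.eigvalsh(Cn).min())

for n in (2, 3, 4, 5, 6, 10, 25, 100):
    print(n, round(a_max(3, n), 4))       # compare with the K = 3 curve of Fig. 3
print(1 / 9)                              # the limit 1/K^2 of (26) for K = 3
```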

Now, consider such a class of equilibrium solutions that a certain number, say L, of neuron groups are firing and every group consists of n adjacent neurons. We denote this class of equilibrium solutions by E_n^L. If there are more than k not-firing neurons between the firing neuron groups, the stability of such an equilibrium solution can be determined from the eigenvalues of the (n × L) × (n × L) matrix

B_n^L =
\begin{pmatrix}
B_n & aU_n & \cdots & aU_n \\
aU_n & B_n & \cdots & aU_n \\
\vdots & \vdots & \ddots & \vdots \\
aU_n & aU_n & \cdots & B_n
\end{pmatrix},    (28)

where U_n denotes the n × n matrix all of whose entries are equal to K. We can show that the minimum eigenvalue of B_n^L does not depend on L. A proof is given in Appendix B. Thus, all the equilibrium solutions belonging to the class E_n^L are asymptotically stable if and only if the minimum eigenvalue of B_n^L is positive. It turns out that this condition is satisfied if and only if

a < \tilde{a}_{max}(K, n) = \frac{1}{|\tilde{\mu}_{min}(K, n)|},    (29)

where \tilde{\mu}_{min}(K, n) is the minimum eigenvalue of C_n - U_n.
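The L-independence of the minimum eigenvalue of B_n^L, which is used in (29) and proved in Appendix B, can be illustrated numerically. The block matrix below is assembled from (20) and (28); the values of K, n, a and L are arbitrary choices.

```python
import numpy as np

# Assemble B_n^L from (20) and (28) and show that its minimum eigenvalue
# does not depend on the number L of firing groups.
def B_block(K, n, a, L):
    idx = np.arange(n)
    Cn = np.minimum(np.abs(idx[:, None] - idx[None, :]), K).astype(float)
    Bn = np.eye(n) + a * Cn                      # diagonal blocks, eq. (20)
    Un = np.full((n, n), float(K))               # off-diagonal blocks are a*U_n
    J = np.ones((L, L)) - np.eye(L)
    return np.kron(np.eye(L), Bn) + np.kron(J, a * Un)

K, n, a = 3, 4, 0.1
print([round(np.linalg.eigvalsh(B_block(K, n, a, L)).min(), 6) for L in (2, 3, 5)])
# the three printed values coincide
```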

The values of \tilde{a}_{max}(K, n) for K = 3, 5, 7 and n = 1, 2, ..., 25 have been calculated and the results are shown in Figure 4. It should be noted that

\tilde{a}_{max}(K, 1) = \frac{1}{K}.    (30)

Thus, if

a > \frac{1}{K},    (31)

only a single group of neurons can fire.

Fig. 4. Numerical values of \tilde{a}_{max}(K, n), shown together with those of a_max(K, n). If a > \tilde{a}_{max}(K, n), the firing of two or more groups of n adjacent neurons is unstable
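Relations (29) and (30) can be checked in the same way: ã_max(K, n) is obtained from the minimum eigenvalue of C_n - U_n, and for n = 1 it reduces to 1/K. The sampled values of K and n below are arbitrary.

```python
import numpy as np

# a~_max(K, n) = 1/|mu~_min(K, n)|, where mu~_min is the minimum eigenvalue
# of C_n - U_n (eq. 29); for n = 1 this gives 1/K (eq. 30).
def a_tilde_max(K, n):
    idx = np.arange(n)
    Cn = np.minimum(np.abs(idx[:, None] - idx[None, :]), K).astype(float)
    Un = np.full((n, n), float(K))
    return 1.0 / abs(np.linalg.eigvalsh(Cn - Un).min())

for K in (3, 5, 7):
    assert abs(a_tilde_max(K, 1) - 1.0 / K) < 1e-12      # eq. (30)
    print(K, [round(a_tilde_max(K, n), 4) for n in (1, 2, 5, 10)])
```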

Assume that the network has an equilibrium solution y*[i; n] when the input has a flat pattern, i.e.,

x = c\,(1, 1, \ldots, 1)', \qquad c > 0.    (32)

From (19) it follows that the firing levels of the group satisfy

B_n\,(y_i^*, y_{i+1}^*, \ldots, y_{i+n-1}^*)' = c\,(1, 1, \ldots, 1)',    (33)

and that the membrane potentials η_{i-1} and η_{i+n} of the two neurons adjacent to the firing group are

\eta_{i-1} = c - (a, 2a, \ldots, Ka, \ldots, Ka)\,(y_i^*, y_{i+1}^*, \ldots, y_{i+n-1}^*)', \qquad
\eta_{i+n} = c - (Ka, \ldots, Ka, \ldots, 2a, a)\,(y_i^*, y_{i+1}^*, \ldots, y_{i+n-1}^*)'.    (34)

For y*[i; n] to be an equilibrium solution the firing neurons must satisfy y_j^* > 0 while the adjacent neurons remain below threshold, η_{i-1} = η_{i+n} ≤ 0; this requirement yields the condition (35) on a, K and n.
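Under the reconstruction of (32)-(34) above, the firing levels of a group of n adjacent neurons under a flat input follow from the linear system B_n y* = c·1, and the group is self-consistent only if y* is positive and the adjacent membrane potentials are non-positive. The sketch below evaluates this for K = 3, a = 0.2, the example discussed with Figure 3; reading (33)-(35) as these two checks is our interpretation.

```python
import numpy as np

# Flat input x = c*(1,...,1)': solve B_n y* = c*1 (eq. 33) and evaluate the
# membrane potential of the neurons adjacent to the firing group (eq. 34).
def flat_equilibrium(K, a, n, c=1.0):
    idx = np.arange(n)
    Bn = np.eye(n) + a * np.minimum(np.abs(idx[:, None] - idx[None, :]), K)
    y = np.linalg.solve(Bn, c * np.ones(n))
    # coefficients a*min(d, K) for distances d = 1, ..., n to the group
    eta_adjacent = c - (a * np.minimum(idx + 1, K)) @ y[::-1]
    return y, eta_adjacent

for n in (2, 5, 6, 10):
    y, eta = flat_equilibrium(3, 0.2, n)
    print(n, bool(y.min() > 0), round(float(eta), 3))
```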

VII. Conclusion

A mathematical model of double-layer neural networks with mutually inhibiting interconnections has been presented and its behavior has been made clear by means of analysis and computer simulation. One of the most important results is that even those networks which are formed with rather powerless synapses are capable of carrying out input pattern sharpening, temporary information storage, and periodic signal generation. The authors believe that this type of double-layer network is the basic functional unit of our nervous system.

Appendix A

The eigenvalues of A(y^(1)*, y^(2)*) can be obtained from

|A(y^{(1)*}, y^{(2)*}) - \lambda I_{2N}| = 0,    (A.1)

or

\begin{vmatrix}
(\tau_1 \lambda - 1) I_N & -W^{(1)} \\
W^{(2)} \Phi[y^{(1)*}] & (\tau_2 \lambda - 1) I_N
\end{vmatrix} = 0,    (A.2)

that is,

|(\tau_1 \lambda - 1)(\tau_2 \lambda - 1) I_N + W^{(1)} W^{(2)} \Phi[y^{(1)*}]| = 0.

Writing μ for an eigenvalue of W^(1)W^(2)Φ[y^(1)*] + I_N, each μ gives a pair of eigenvalues of A(y^(1)*, y^(2)*),

\lambda = \frac{1}{2\tau_1\tau_2}\left\{\tau_1 + \tau_2 \pm i\sqrt{4\tau_1\tau_2(\mu - 1) - (\tau_1 - \tau_2)^2}\right\}, \quad \text{if } (\tau_1 - \tau_2)^2 - 4\tau_1\tau_2(\mu - 1) < 0,

\lambda = \frac{1}{2\tau_1\tau_2}\left\{\tau_1 + \tau_2 \pm \sqrt{(\tau_1 - \tau_2)^2 - 4\tau_1\tau_2(\mu - 1)}\right\}, \quad \text{otherwise}.    (A.8)

The real part of λ is positive if and only if

\tau_1 + \tau_2 - \sqrt{(\tau_1 - \tau_2)^2 - 4\tau_1\tau_2(\mu - 1)} > 0,    (A.9)

or

\mu > 0.    (A.10)

It has been shown that all the eigenvalues of A(y^(1)*, y^(2)*) have positive real parts if and only if all the eigenvalues of W^(1)W^(2)Φ[y^(1)*] + I_N have positive real parts.
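The roots in (A.8) are those of the quadratic τ_1 τ_2 λ² - (τ_1 + τ_2) λ + μ = 0 that follows from (A.2). The short check below confirms numerically that both roots have positive real parts exactly when μ > 0; the sampled τ_1, τ_2 and μ are arbitrary.

```python
import numpy as np

# Check (A.8)-(A.10): Re(lambda) > 0 for both roots of
# tau1*tau2*lambda^2 - (tau1 + tau2)*lambda + mu = 0  iff  mu > 0.
rng = np.random.default_rng(0)
for _ in range(1000):
    tau1, tau2 = rng.uniform(0.1, 5.0, size=2)
    mu = rng.uniform(-3.0, 3.0)
    if abs(mu) < 1e-3:          # skip the degenerate boundary case mu ~ 0
        continue
    roots = np.roots([tau1 * tau2, -(tau1 + tau2), mu])
    assert (roots.real.min() > 0) == (mu > 0)
print("ok")
```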

Appendix B

Let us define C_n^L as

C_n^L = \frac{1}{a}(B_n^L - I_{nL}) =
\begin{pmatrix}
C_n & U_n & \cdots & U_n \\
U_n & C_n & \cdots & U_n \\
\vdots & \vdots & \ddots & \vdots \\
U_n & U_n & \cdots & C_n
\end{pmatrix},    (B.1)

where

U_n =
\begin{pmatrix}
K & \cdots & K \\
\vdots & \ddots & \vdots \\
K & \cdots & K
\end{pmatrix}.    (B.2)

The eigenvalues of C_n^L can be obtained from

\begin{vmatrix}
C_n - \lambda I_n & U_n & \cdots & U_n \\
U_n & C_n - \lambda I_n & \cdots & U_n \\
\vdots & \vdots & \ddots & \vdots \\
U_n & U_n & \cdots & C_n - \lambda I_n
\end{vmatrix}
= |C_n - U_n - \lambda I_n|^{L-1} \cdot |C_n + (L-1)U_n - \lambda I_n| = 0.    (B.3)

Thus, the eigenvalues of the n × n matrices C_n - U_n and C_n + (L-1)U_n determine those of the (Ln × Ln) matrix C_n^L. It is known that for real symmetric matrices the minimum eigenvalue of a sum is not smaller than the sum of the minimum eigenvalues, where λ_min(·) denotes the minimum eigenvalue of a Hermitian matrix. Since C_n + (L-1)U_n = (C_n - U_n) + L U_n and λ_min(L U_n) = 0, we obtain

\lambda_{min}(C_n - U_n) \le \lambda_{min}(C_n + (L-1)U_n).    (B.7)

Thus, the minimum eigenvalue of C_n^L equals that of (C_n - U_n). It has been shown that the minimum eigenvalue of C_n^L does not depend on L.
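The factorization (B.3) and the resulting L-independence of the minimum eigenvalue can be verified numerically by assembling C_n^L from (B.1) and (B.2); the values of K, n and L below are arbitrary.

```python
import numpy as np

# Verify (B.3): the spectrum of C_n^L is that of C_n - U_n (multiplicity L-1)
# together with that of C_n + (L-1)U_n, so its minimum does not depend on L.
K, n, L = 3, 4, 5
idx = np.arange(n)
Cn = np.minimum(np.abs(idx[:, None] - idx[None, :]), K).astype(float)
Un = np.full((n, n), float(K))
CnL = np.kron(np.eye(L), Cn) + np.kron(np.ones((L, L)) - np.eye(L), Un)

lhs = np.sort(np.linalg.eigvalsh(CnL))
rhs = np.sort(np.concatenate([np.tile(np.linalg.eigvalsh(Cn - Un), L - 1),
                              np.linalg.eigvalsh(Cn + (L - 1) * Un)]))
print(np.allclose(lhs, rhs))                               # True
print(lhs.min(), np.linalg.eigvalsh(Cn - Un).min())        # equal, for any L
```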

References
