Lecture Notes in Computer Science
Fig. 4. A working hypothesis for the underlying mechanism of the sVT. The transmitter released from an isolated presynaptic terminal (left-hand side) activates the K+ conductance on the membrane of the dissociated ganglion cell somata we recorded from (right-hand side).

In our preliminary experiments, it was found that the sVT frequency, but not the amplitude, decreased when a low-Ca2+ solution was superfused over the cell and increased briefly upon application of caffeine. This suggests that the sVT are elicited, at least in part, via an intracellular Ca2+-dependent process. Although we cannot rule out a contribution of cytoplasmic Ca2+ in the recorded cells to the effects of extracellular low Ca2+ and caffeine on the sVT, these experimental results are consistent with the present hypothesis (Fig. 4). Retinal ganglion cells in situ/in vivo receive the inhibitory transmitter GABA released from presynaptic amacrine cells, and the GABA(B) receptor, which is generally coupled with a K+ conductance, is known to be expressed in rat ganglion cells [10]. However, the sVT were not suppressed when a GABA(B) receptor antagonist (2-hydroxysaclofen or SCH 50911) was applied. Further studies remain to be conducted to reveal the underlying mechanisms of the sVT in retinal ganglion cell somata.
Acknowledgments. The authors are grateful to Dr. N. Akaike for his valuable suggestions on our dissociation protocol and for lending most of the equipment used in the present experiments, and to Dr. K. Hayashi for lending the microscope. This work was partly supported by the Japan Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Young Scientists (B), 17700398, 2005 to Y.H.

References
1. Akaike, N., Moorhouse, A.J.: Techniques: applications of the nerve-bouton preparation in neuropharmacology. Trends in Pharmacological Sciences 24(1), 44–47 (2003)
2. Armstrong, C.E., Roberts, W.M.: Electrical properties of frog saccular hair cells: distortion by enzymatic dissociation. Journal of Neuroscience 18(8), 2962–2973 (1998)
3. Armstrong, C.M., Gilly, W.F.: Access resistance and space clamp problems associated with whole-cell patch clamping. Methods in Enzymology 207, 100–122 (1992)
4. Barres, B.A., Silverstein, B.E., Corey, D.P., Chun, L.L.: Immunological, morphological, and electrophysiological variation among retinal ganglion cells purified by panning. Neuron 1(9), 791–803 (1988)
5. Coombs, J.S., Eccles, J.C., Fatt, P.: The specific ionic conductances and the ionic movements across the motoneuronal membrane that produce the inhibitory post-synaptic potential. Journal of Physiology 130(2), 326–374 (1955)
6. Guenther, E., Schmid, S., Grantyn, R., Zrenner, E.: In vitro identification of retinal ganglion cells in culture without the need of dye labeling. Journal of Neuroscience Methods 51(2), 177–181 (1994)
7. Hayashida, Y., Ishida, A.T.: Dopamine receptor activation can reduce voltage-gated Na+ current. Journal of Neurophysiology 92(5), 3134–3141 (2004)
8. Hayashida, Y., Motomura, T., Murayama, N.: Vibrodissociation of rat retinal ganglion cells attached with inhibitory synaptic boutons. Investigative Ophthalmology & Visual Science 47, E-Abstract 3763 (2006)
9. Hayashida, Y., Partida, G.J., Ishida, A.T.: Dissociation of retinal ganglion cells without enzymes. Journal of Neuroscience Methods 137(1), 25–35 (2004)
10. Koulen, P., Malitschek, B., Kuhn, R., Bettler, B., Wassle, H., Brandstatter, J.H.: Presynaptic and postsynaptic localization of GABA(B) receptors in neurons of the rat retina. European Journal of Neuroscience 10(4), 1446–1456 (1998)
11. Mitra, P., Slaughter, M.M.: Mechanism of generation of spontaneous miniature outward currents (SMOCs) in retinal amacrine cells. Journal of General Physiology 119(4), 355–372 (2002)
12. Motomura, T., Hayashida, Y., Murayama, N.: Mechanical dissociation of retinal neurons with vibration. IEEJ Transactions on Electronics, Information and Systems 127(10) (in press, 2007)
13. Tabata, T., Ishida, A.T.: A zinc-dependent Cl− current in neuronal somata. Journal of Neuroscience 19(13), 5195–5204 (1999)
14. von Gersdorff, H., Matthews, G.: Dynamics of synaptic vesicle fusion and membrane retrieval in synaptic terminals. Nature 367(6465), 735–739 (1994)
15. Vorobjev, V.S.: Vibrodissociation of sliced mammalian nervous tissue. Journal of Neuroscience Methods 38(2-3), 145–150 (1991)
Region-Based Encoding Method Using Multi-dimensional Gaussians for Networks of Spiking Neurons

Lakshmi Narayana Panuku and C. Chandra Sekhar
Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai 600 036, India
{panuku,chandra}@cs.iitm.ernet.in

Abstract. In this paper, we address the issues in representing continuous-valued variables by the firing times of neurons in a spiking neural network used for clustering multi-variate data. The existing range-based encoding method encodes each dimension separately; it makes use of neither the correlation among the different variables nor the knowledge of the distribution of the data. We propose a region-based encoding method that places multi-dimensional Gaussian receptive fields in the data-inhabited regions and captures the correlation among the variables. The effectiveness of the proposed encoding method in clustering complex 2-dimensional and 3-dimensional data sets is demonstrated.

1 Introduction

Artificial neural networks (ANNs) have been shown to have the ability to extract patterns from complex data [1, 2]. Based on the computational units used, ANN models can be classified into three generations [3]. The McCulloch-Pitts neurons, considered the first generation, can only give binary output. Computational units that output continuous values, such as sigmoidal units, are considered the second generation. Biologically, the output of a sigmoidal unit can be interpreted as the firing rate of a neuron. Under the assumption that, in biological neural networks, the continuously varying mean firing rate of a neuron (rate code) carries the information about the neuron's time-varying state of excitation, sigmoidal units can model the computations in biological systems.
Recently, the timing of action potentials, or spikes, has been recognized as a possible means of neural information coding, rather than the average firing rate of the neurons [4, 5, 6]. It has been shown that coding with the timing of spikes allows powerful neuronal information processing [7]. These results have generated considerable interest in third-generation, time-based neurons such as spiking neurons [3]. Various models of spiking neural networks (SNNs), such as the leaky integrate-and-fire model, the spike response model, and the liquid state machine [6, 8], and various learning methods for these models have been reported in the literature [9, 10, 11]. SNNs have been used in many applications such as signal coincidence detection [2], isolated word recognition [8], and implementation of temporal-RBF networks [12].

M. Ishikawa et al. (Eds.): ICONIP 2007, Part I, LNCS 4984, pp. 73–82, 2008. © Springer-Verlag Berlin Heidelberg 2008

In [13], a Hebbian-based learning mechanism is proposed for spiking neuron models with multi-delay connections, namely multi-delay SNNs (MDSNNs). This learning mechanism is observed to select the connections with matching delays. For clustering data, this approach is extended in [10] by considering not only the firing or non-firing of a neuron, but also its firing time. A coding scheme to convert an analog input variable into firing times of neurons is proposed. However, this method has limitations in both clustering capacity and precision [14]. To overcome these limitations, Bohte et al. [14] proposed a population-coding-based encoding method that encodes the values of input variables using multiple overlapping 1-dimensional (1-D) Gaussian receptive fields (GRFs). This method is shown to cluster a number of data sets at low expense in terms of neurons while enhancing clustering capacity and precision.
Using this encoding method and a multi-layer MDSNN with lateral connections in the hidden layer, complex data such as the interlocking cluster data can be clustered. However, this encoding method does not make use of the correlation present among the variables in multi-variate data. It uniformly places the GRFs along each dimension, leading to a large neuron count and increased computational cost. The boundaries given by an MDSNN with this encoding method are observed to be combinations of linear segments. To overcome these limitations, we propose a novel encoding method that places multi-dimensional GRFs in the data-inhabited regions and uses the correlation present in the data. We show that the proposed encoding method helps MDSNNs in clustering a number of 2-D and 3-D nonlinearly separable data sets, while keeping a low neuron count. Moreover, the cluster boundaries given by an MDSNN with this encoding method are observed to follow the shapes of the clusters.

This paper is organized as follows: the architecture of the MDSNN and the Hebbian-based learning rule for clustering are described in Section 2. The existing range-based encoding method and its limitations are discussed in Section 3. Section 4 presents the proposed region-based encoding method that uses multi-dimensional GRFs. The performance of the proposed encoding method for clustering different complex data sets is also given in that section.

2 Multi-delay Spiking Neural Networks

The MDSNN consists of a fully connected feedforward network of spiking neurons with connections implemented as multiple delayed synaptic terminals, as shown in Fig. 1(a). A connection from a pre-synaptic neuron i to a post-synaptic neuron j consists of a fixed number (m) of synaptic terminals. Each terminal serves as a subconnection that is associated with a different delay and weight.
The delay d_l of a synaptic terminal l is the difference between the firing time of the pre-synaptic neuron and the time when the post-synaptic potential (PSP) resulting from terminal l starts rising. The time-varying impact of a pre-synaptic spike on a post-synaptic neuron is described by a spike response function ε(·), also referred to as the PSP. The PSP is modeled by the α-function, as in [14]. A neuron j in the network generates a spike when the value of its internal state variable x_j, the "membrane potential", crosses a threshold ϑ. The internal state variable x_j(t) is defined as follows:

    x_j(t) = Σ_{i ∈ Γ_j} Σ_{l=1}^{m} w_ij^l ε(t − t_i − d_l),   (1)

where Γ_j is the set of neurons pre-synaptic to neuron j, w_ij^l is the weight of the l-th synaptic terminal between neurons i and j, and t_i is the firing time of the pre-synaptic neuron i. The time at which x_j(t) crosses the threshold ϑ with a positive slope is the firing time of neuron j, denoted by t_j.
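The membrane-potential computation of Eqn. (1) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the α-function form ε(t) = (t/τ)·exp(1 − t/τ) and the time constant τ = 3 ms are assumed for illustration (the paper models the PSP by the α-function as in [14] but does not give its parameters in this excerpt).

```python
import numpy as np

TAU = 3.0  # PSP time constant in ms (assumed value)

def alpha_psp(t):
    """Spike response function eps(t): zero before spike arrival, peak 1 at t = TAU."""
    return np.where(t > 0, (t / TAU) * np.exp(1.0 - t / TAU), 0.0)

def membrane_potential(t, firing_times, weights, delays):
    """x_j(t) = sum over i in Gamma_j, sum over l of w_ij^l * eps(t - t_i - d_l).

    firing_times: t_i for each pre-synaptic neuron i (np.nan = not firing)
    weights:      array of shape (num_pre, m), weight of each synaptic terminal
    delays:       array of shape (m,), delay d_l of each terminal
    """
    x = 0.0
    for t_i, w_i in zip(firing_times, weights):
        if np.isnan(t_i):  # non-firing input neurons contribute nothing
            continue
        x += np.sum(w_i * alpha_psp(t - t_i - delays))
    return x

def firing_time(firing_times, weights, delays, threshold, t_max=20.0, dt=0.1):
    """t_j: first time x_j(t) reaches the threshold with positive slope (grid search)."""
    for t in np.arange(0.0, t_max, dt):
        if membrane_potential(t, firing_times, weights, delays) >= threshold:
            return t
    return None  # neuron j does not fire within the window
```

A single pre-synaptic spike at t_i = 0 feeding ten delayed terminals with unit weights, for example, drives the post-synaptic potential above a low threshold within the first millisecond.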
Fig. 1. (a) Architecture of an MDSNN. (b) Learning function L(Δt). (c) Range-based encoding of an input variable with value a into firing times, T(a).

For clustering, the weights of the terminals of the connections between the input neurons and the winning neuron, i.e., the output neuron that fires first, are modified using a time-variant of Hebbian learning. The learning rule for the weight w_ij^l of the synaptic terminal with delay d_l is as follows:

    Δw_ij^l = η L(Δt_ij^l) = η [ (1 − b) exp( −(Δt_ij^l − c)^2 / β^2 ) + b ],   (2)

where Δt_ij^l denotes the time difference between the onset of the PSP at the l-th synaptic terminal, t_i^{PSP,l}, and the firing time of the winning neuron, t_j, i.e., Δt_ij^l = t_i^{PSP,l} − t_j = (t_i + d_l) − t_j. The parameter c determines the position of the peak, b determines the negative update given to a neuron for which Δt is significantly different from c, and β sets the width of the positive part of the learning function (Fig. 1(b)). The weight of a synaptic terminal is limited to the range 0 to w_max. The range of values for the delays d_l is set to 0–9 milliseconds with a resolution of 1 millisecond, i.e., m = 9. The parameters b and η are set to −0.2 and 0.01, respectively, while c and β are chosen empirically.

In [14], an MDSNN with fixed-threshold units is considered for the task of clustering. This model, with the above-mentioned learning rule (Eqn. 2), can cluster linearly separable data. However, when applied to nonlinearly separable data such as the single-ring data and the interlocking cluster data, all the data points are grouped into a single cluster. To overcome this limitation, we use a varying-threshold method [15], in which the threshold of a spiking neuron is initialized to a small positive value and is gradually increased (in steps of Δϑ) as the learning progresses, until it reaches a maximum value, ϑ_max. Moreover, when a multi-layer MDSNN is trained to cluster complex data, the layers are trained using a multi-stage learning method [15] in which the n-th layer is trained before starting the training of the (n+1)-th layer. With these two extensions, MDSNNs are able to cluster the single-ring data and the interlocking cluster data [15]. In our studies, we use the varying-threshold method and the multi-stage learning method. The values of Δϑ and ϑ_max
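The time-variant Hebbian update of Eqn. (2) can be sketched as follows. The values b = −0.2 and η = 0.01 follow the text; c, β, and w_max are assumed values for illustration, since the paper states that c and β are chosen empirically.

```python
import numpy as np

# b and eta follow the text; c, beta, and w_max are assumed for illustration.
ETA, B, C, BETA, W_MAX = 0.01, -0.2, -2.0, 1.6, 2.0

def learning_function(dt):
    """L(dt) = (1 - b) * exp(-(dt - c)^2 / beta^2) + b  (Fig. 1(b)).

    Peaks at 1.0 when dt = c; tends to the negative value b far from c.
    """
    return (1.0 - B) * np.exp(-((dt - C) ** 2) / BETA ** 2) + B

def update_weights(weights, t_pre, delays, t_winner):
    """Update the terminal weights feeding the winning neuron.

    dt_ij^l = (t_i + d_l) - t_j; updated weights are clipped to [0, w_max].
    weights: shape (num_pre, m); t_pre: shape (num_pre,); delays: shape (m,).
    """
    dt = (t_pre[:, None] + delays[None, :]) - t_winner
    new_w = weights + ETA * learning_function(dt)
    return np.clip(new_w, 0.0, W_MAX)
```

Terminals whose PSP onset nearly coincides with c milliseconds before the winner's firing time are strengthened, while the others receive the small negative update b, which is how the rule selects connections with matching delays.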
are determined empirically.

3 Range-Based Encoding

When the input variables are continuous-valued attributes, it is necessary to encode the value of each variable into firing times of neurons in the input layer of the MDSNN. Bohte et al. [14] proposed an encoding method that encodes the values of input variables by a population code obtained from neurons with graded and overlapping sensitivity profiles. As this method encodes each variable with 1-D GRFs uniformly placed to cover the whole range of values that the variable can take, it is called the range-based encoding method. In this method, the range of values for each input variable is determined. For the range [I_min ... I_max] of a variable, n (> 2) GRFs are used. The center of the i-th GRF is set to

    μ_i = I_min + ((2i − 3)/2) ((I_max − I_min)/(n − 2)).

One GRF is placed outside the range at each of the two ends. All the GRFs encoding an input variable have the same width, which is set to

    σ = (1/γ) ((I_max − I_min)/(n − 2)),

where γ controls the extent of overlap between the GRFs. For an input variable with value a, the activation value of the i-th GRF with center μ_i and width σ is given by

    f_i(a) = exp( −(a − μ_i)^2 / (2σ^2) ).   (3)

The firing time of the neuron associated with this GRF is inversely proportional to f_i(a). For an activation value f_i(a) close to 1.0, the firing time t = 0 milliseconds is assigned. When the activation value of the
GRF is small, the firing time t is high, indicating that the neuron fires later. In our experiments, the firing time of an input neuron is chosen to be in the range 0 to 9 milliseconds. While converting the activation values of the GRFs into firing times, a coding threshold is imposed on the activation value. A GRF that gives an activation value less than the coding threshold is marked as not-firing (NF), and the corresponding input neuron does not contribute to the membrane potential of the post-synaptic neuron. The range-based encoding method is illustrated in Fig. 1(c).

For multi-variate data, each variable is encoded separately, so the correlation present among the variables is not used. In this encoding method, 1-D GRFs are uniformly placed along each input dimension, without considering the distribution of the data. Hence, when the data is sparse, some GRFs, placed in regions where no data is present, are not used effectively. This results in a high neuron count and computational cost. The widths of the GRFs are derived without using any knowledge of the data distribution, except for the range of values that the variables take.

Taking one GRF along an input dimension and quantizing its activation value results in the formation of intervals within the range of values of that variable, such that one or more intervals are mapped onto a particular quantization level. When 2-D data is encoded by taking an array of 1-D GRFs along each input dimension, the input space is quantized into rectangular grids such that all the input patterns falling into a particular rectangular grid have the same vector of quantization levels, and hence the same encoded time vector. Additionally, one or more rectangular grids may have the same encoded time vector. For multi-variate data, the input space is divided into hypercuboids. To demonstrate this, the single-ring data (shown in Fig. 2(a)) is encoded by placing 5 GRFs along each dimension, dividing the input space into grids as shown in Fig. 2(b). A 10-2 MDSNN, having 10 neurons in the input layer and 2 neurons in the output layer, is trained to cluster this data. The space of data points as represented by the output layer neurons is shown in Fig. 2(c). The cluster boundary is observed to be a combination of linear segments defined by the rectangular grid boundaries formed by the encoding. The shape of this boundary is significantly different from the desired circle-shaped boundary between the two clusters in the single-ring data. Increasing the number of GRFs used for encoding each dimension may give a boundary that is a combination of smaller linear segments, at the expense of a high neuron count. However, this may not result in proper clustering of the data, as the choice of the number of GRFs is observed to be crucial in the range-based encoding method.

When the range-based encoding method is used along with the varying-threshold method and the multi-stage learning method [15] to cluster complex data sets such as the double-ring data and the spiral data, it is observed that proper subclusters are not formed by the neurons in the hidden layer. As each dimension is encoded separately, spatially disjoint subsets of data points that have similar encoding along a particular dimension, as shown by the marked regions in Fig. 3, are found to be represented by a single neuron in the hidden layer.
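The range-based encoding just described can be sketched as follows. The formulas for μ_i, σ, and f_i(a) follow Eqn. (3) and the surrounding text; the coding threshold of 0.1 and the rounding of firing times to integer milliseconds are assumptions for illustration.

```python
import numpy as np

def range_based_encode(a, i_min, i_max, n, gamma=1.5, coding_threshold=0.1, t_max=9):
    """Encode scalar a into n firing times using 1-D Gaussian receptive fields.

    Returns an array of firing times in [0, t_max] ms; NaN marks a
    not-firing (NF) input neuron, as in Fig. 1(c).
    """
    width = (i_max - i_min) / (n - 2)
    # Centers mu_i = I_min + ((2i - 3)/2) * (I_max - I_min)/(n - 2), i = 1..n;
    # one GRF falls outside the range at each end.
    centers = i_min + ((2 * np.arange(1, n + 1) - 3) / 2.0) * width
    sigma = width / gamma                      # shared width, overlap set by gamma
    f = np.exp(-((a - centers) ** 2) / (2.0 * sigma ** 2))  # Eqn. (3)
    # Firing time inversely related to activation: f close to 1 -> t = 0 ms.
    t = np.round((1.0 - f) * t_max)
    t[f < coding_threshold] = np.nan           # below the coding threshold: NF
    return t
```

Encoding a value that sits exactly on a GRF center yields firing time 0 for that neuron, while GRFs far from the value stay below the coding threshold and are marked NF, matching the pattern of the T(a) vector in Fig. 1(c).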
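Although the placement procedure of the proposed method is given in Section 4 (beyond this excerpt), its core building block, a multi-dimensional Gaussian receptive field whose covariance captures the correlation among the variables, can be sketched as follows. The means and covariances are assumed here to have been estimated from the data-inhabited regions; the coding threshold and firing-time mapping mirror the range-based scheme above.

```python
import numpy as np

def multidim_grf_activation(x, mean, cov):
    """Activation of one multi-dimensional GRF at input vector x.

    Uses the squared Mahalanobis distance, so off-diagonal covariance
    terms let the receptive field follow correlated directions in the data.
    """
    d = x - mean
    m2 = d @ np.linalg.inv(cov) @ d
    return float(np.exp(-0.5 * m2))

def encode_firing_times(x, grfs, coding_threshold=0.1, t_max=9):
    """Map one input vector to firing times, one per GRF (NaN = not firing)."""
    times = []
    for mean, cov in grfs:
        f = multidim_grf_activation(x, np.asarray(mean), np.asarray(cov))
        times.append(np.nan if f < coding_threshold else round((1 - f) * t_max))
    return times
```

With a strongly correlated covariance, points displaced along the correlation direction keep a high activation while points displaced against it do not, which is how region-based GRFs can follow elongated or ring-shaped clusters that axis-aligned 1-D GRFs quantize into rectangular grids.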