MXPA99011505A - Dynamic synapse for signal processing in neural networks - Google Patents

Dynamic synapse for signal processing in neural networks

Info

Publication number
MXPA99011505A
MXPA99011505A MXPA/A/1999/011505A MX9911505A
Authority
MX
Mexico
Prior art keywords
signal
processing
dynamic
signals
link
Prior art date
Application number
MXPA/A/1999/011505A
Other languages
Spanish (es)
Inventor
Liaw Jimshih
W Berger Theodore
Original Assignee
University Of Southern California
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University Of Southern California filed Critical University Of Southern California
Publication of MXPA99011505A publication Critical patent/MXPA99011505A/en


Abstract

An information processing system having signal processors that are interconnected by processing junctions that simulate and extend biological neural networks. As shown in the figure, each processing junction receives signals from one signal processor and generates a new signal to another signal processor. The response of each processing junction is determined by internal junction processes and changes continuously with temporal variation in the received signal. Different processing junctions connected to receive a common signal from a signal processor respond differently, producing different signals to downstream signal processors. This transforms a temporal pattern of a spike train into a spatio-temporal pattern of junction events, and provides exponential computational power to the signal processors. Each signal processing junction can receive a feedback signal from a downstream signal processor, so that an internal junction process can be adjusted to learn certain characteristics embedded in the received signals.

Description

DYNAMIC SYNAPSE FOR SIGNAL PROCESSING IN NEURAL NETWORKS

Field of the Invention

The present invention relates to the processing of information by signal processors connected by processing junctions, and more particularly, to models of neural networks that simulate and extend biological neural networks.

BACKGROUND OF THE INVENTION

A biological nervous system comprises a complex network of neurons that receive and process external stimuli to produce, exchange, and store information. A neuron, in its simplest form as a basic unit of a neural network, can be described as a cell body called the soma, having one or more dendrites as input terminals for receiving signals and one or more axons as output terminals for exporting signals. The soma of a neuron processes the signals received from its dendrites to produce at least one action signal for transmission to other neurons via the axons. Some neurons have only one axon, which branches repeatedly, allowing a single neuron to communicate with multiple other neurons.

A dendrite (or axon) of one neuron and an axon (or dendrite) of another neuron are connected by a biological structure called a synapse. Accordingly, a neural network comprises a plurality of neurons interconnected by synapses; signals are exchanged and processed within this network. Neurons also make anatomical and functional connections to different classes of effector cells, such as muscle, gland, or sensory cells, through other types of biological junctions called neuroeffector junctions. A neuron can release a certain neurotransmitter in response to an action signal to control a connected effector cell, so that the effector cell reacts accordingly in a desired manner, for example, by contraction of a muscle tissue.

The structure and operations of a biological neural network are extremely complex, involving many physical, biological, and chemical processes. Different simplified neuronal models have been developed based on certain aspects of biological nervous systems. See Bose and Liang, "Neural network fundamentals with graphs, algorithms, and applications", McGraw-Hill (1996). A brain, for example, is a complex system that can be modeled as a neural network processing information through the spatial and temporal patterns of neuronal activation.

The operation of a general neural network can be described as follows. An action potential originated by a presynaptic neuron generates synaptic potentials in a postsynaptic neuron. The soma membrane of the postsynaptic neuron integrates these synaptic potentials to produce a summed potential. The soma of the postsynaptic neuron generates another action potential if the summed potential exceeds a potential threshold. This action potential then propagates through one or more axons as presynaptic potentials for other connected neurons. The above process forms the basis for the processing, storage, and exchange of information in many models of neural networks.

Action potentials and synaptic potentials can form certain patterns or temporal sequences, such as spike trains. The time intervals between the potential spikes carry a significant part of the information in a neural network. Another significant part of the information in a neural network lies in the spatial patterns of neuronal activation, which are determined by the spatial distribution of neuronal activation in the network. It is desirable to simulate both temporal and spatial patterns in a neural network model.
See, for example, Deadwyler et al., "Hippocampal ensemble activity during spatial delayed-nonmatch-to-sample performance in rats", Journal of Neuroscience, Volume 16, pages 354-372 (1996); Thiels et al., "Excitatory stimulation during postsynaptic inhibition induces long-term depression in hippocampus in vivo", Journal of Neurophysiology, Volume 72, pages 3009-3016 (1994); and Thiels et al., "NMDA receptor-dependent LTD in different subfields of hippocampus in vivo and in vitro", Hippocampus, Volume 6, pages 43-51 (1996).

Many neural network models are based on the following two assumptions. First, the synaptic strength, that is, the efficacy of a synapse in generating a synaptic potential, is assumed to be static over the typical time scale for generating an action potential in the neurons; the efficacy of a synapse is essentially constant during a signal train. Certain models relax this assumption by allowing slow variation over a processing period spanning many signal trains. Second, each sending neuron is assumed to provide the same signal to all of the neurons with which it is synaptically connected. One aspect of the present invention provides an improved neural network model that removes these two assumptions.

SUMMARY OF THE INVENTION

The present invention is embodied in information processing systems and methods that are inspired by, and configured to extend, certain aspects of a biological neural network. The functions of the signal processors and of the processing junctions that connect them correspond to biological neurons and synapses, respectively. Each of the signal processors and processing junctions may comprise any one or a combination of an optical element, an electronic device, a biological unit, or a chemical material. The systems and processing methods can also be simulated through the use of one or more computer programs.

Each processing junction is configured to dynamically adjust its response strength according to the temporal pattern of an input train of signal spikes. Consequently, a processing junction changes its response to the input signal, and thereby simulates a "dynamic synapse". Different processing junctions in general respond differently to the same input signal, producing different junction output signals. This provides a specific way to transform a temporal pattern of a train of signal spikes into a spatio-temporal pattern of junction events. In addition, the network of signal processors and processing junctions can be trained to learn certain features embedded in the input signals.

One embodiment of a system for information processing includes a plurality of signal processors connected to communicate with one another and configured to produce at least one output signal in response to at least one input signal, and a plurality of processing junctions arranged to interconnect the signal processors. Each of the processing junctions receives and processes a prejunction signal from a first signal processor in the network, based on at least one internal junction process, to produce a junction signal that causes a postjunction signal to a second signal processor in the network. Each processing junction is configured in such a way that the junction signal has a dynamic dependence on the prejunction signal. At least one of the processing junctions may have another internal junction process that makes a contribution to the junction signal different from that of the first internal junction process.
Each of the processing junctions can be connected to receive an output signal from the second signal processor, and can be configured to adjust the internal junction process according to that output signal. These and other aspects and advantages of the present invention will become clearer in light of the following detailed description, the accompanying drawings, and the appended claims.

Brief Description of the Drawings

Figure 1 is a schematic illustration of a neural network formed by neurons and dynamic synapses.
Figure 2A is a diagram showing a feedback connection from a postsynaptic neuron to a dynamic synapse.
Figure 2B is a block diagram illustrating the signal processing of a dynamic synapse with multiple internal synaptic processes.
Figure 3A is a diagram showing a general temporal pattern sent by a neuron to a dynamic synapse.
Figure 3B is a diagram showing two facilitating processes of different time scales at a synapse.
Figure 3C is a diagram showing the responses of two dynamic inhibitory processes at a synapse as a function of time.
Figure 3D is a diagram illustrating the probability of release as a function of the temporal pattern of a spike train, due to the interaction of synaptic processes of different time scales.
Figure 3E is a diagram showing three dynamic synapses connected to a presynaptic neuron, transforming a temporal spike train into three different spike trains.
Figure 4A is a simplified neural network having two neurons and four dynamic synapses, based on the neural network of Figure 1.
Figures 4B-4D show simulated output traces of the four dynamic synapses as a function of time under different synapse responses in the simplified network of Figure 4A.
Figures 5A and 5B are diagrams respectively showing sample waveforms of the word "hot" spoken by two different persons.
Figure 5C shows the waveform of the cross-correlation between the waveforms for the word "hot" in Figures 5A and 5B.
Figure 6A is a schematic showing a neural network model with two layers of neurons for simulation.
Figures 6B, 6C, 6D, 6E, and 6F are diagrams respectively showing the cross-correlation functions of the output signals from the output neurons for the word "hot" in the neural network of Figure 6A after training.
Figures 7A-7L are diagrams showing the extraction of the invariant characteristics of other test words by using the neural network of Figure 6A.
Figures 8A and 8B respectively show the output signals from the four output neurons, before and after the training of each neuron, to respond preferentially to a particular word spoken by different people.
Figure 9A is a diagram showing an implementation of temporal signal processing using a neural network based on dynamic synapses.
Figure 9B is a diagram showing an implementation of spatial signal processing using a neural network based on dynamic synapses.
Figure 10 is a diagram showing an implementation of a neural network based on dynamic synapses for the processing of spatio-temporal information.

Detailed Description of the Preferred Embodiments

Certain aspects of the invention have been disclosed by Liaw and Berger in "Dynamic synapse: a new concept of neural representation and computation", Hippocampus, Volume 6, pages 591-600 (1996); "Computing with dynamic synapses: a case study of speech recognition", Proceedings of the International Conference on Neural Networks, Houston, Texas, June 1997; and "Robust speech recognition with dynamic synapses", Proceedings of the International Joint Conference on Neural Networks, Anchorage, Alaska, May 1998. The disclosures of the above references are incorporated herein by reference.

The following description uses the terms "neuron" and "signal processor", "synapse" and "processing junction", and "neural network" and "network of signal processors" in an approximately synonymous sense. The biological terms "dendrite" and "axon" are also used to represent, respectively, an input terminal and an output terminal of a signal processor (i.e., a "neuron").

Figure 1 schematically illustrates a neural network 100 based on dynamic synapses. The large circles (for example, 110, 120, etc.) represent neurons, and the small ovals (for example, 114, 124, etc.) represent dynamic synapses that interconnect different neurons. Effector cells and their neuroeffector junctions are not illustrated here for simplicity. Each dynamic synapse has the ability to continuously change the strength of its response to a received signal according to the temporal pattern and the variation in magnitude of that received signal. This is different from many conventional models of neural networks, in which the synapses are static, each providing an essentially constant weighting factor for changing the magnitude of a received signal.

Neurons 110 and 120 are connected to a neuron 130 via dynamic synapses 114 and 124, through axons 112 and 122, respectively. A signal emitted by neuron 110, for example, is received and processed by synapse 114 to produce a synaptic signal, which causes a postsynaptic signal to neuron 130 via a dendrite 130a. Neuron 130 processes the received postsynaptic signals to produce an action potential, and then sends the action potential downstream to other neurons, such as 140 and 150, by means of axon branches, such as 131a and 131b, and dynamic synapses, such as 132 and 134. Any two connected neurons in network 100 can exchange information. Accordingly, neuron 130 can be connected to an axon 152 to receive signals from neuron 150, for example, by means of a dynamic synapse 154.

The information is processed by the neurons and the dynamic synapses in the network 100 at multiple levels, including, but not limited to, the synaptic level, the neuronal level, and the network level. At the synaptic level, each dynamic synapse connected between two neurons (i.e., a presynaptic neuron and a postsynaptic neuron with respect to that synapse) also processes information based on a signal received from the presynaptic neuron, a feedback signal from the postsynaptic neuron, and one or more internal synaptic processes within the synapse.
The internal synaptic processes of each synapse respond to variations in the temporal pattern and/or in the magnitude of the presynaptic signal, to produce synaptic signals with temporal patterns and dynamically variable synaptic strengths. For example, the synaptic strength of a dynamic synapse can be continuously changed by the temporal pattern of a spike train of the input signal. In addition, different synapses are in general configured, by variations in their internal synaptic processes, to respond differently to the same presynaptic signal, thus producing different synaptic signals. This provides a specific way to transform a temporal pattern of a train of signal spikes into a spatio-temporal pattern of synaptic events. This pattern-transformation ability at the synaptic level, in turn, results in an exponential computing power at the neuronal level.

Another characteristic of dynamic synapses is their capacity for dynamic learning. Each synapse is connected to receive a feedback signal from its respective postsynaptic neuron, in such a way that the synaptic strength is dynamically adjusted, based on the output signals of the postsynaptic neuron, in order to adapt to certain characteristics embedded in the received presynaptic signals. This produces appropriate transformation functions for the different dynamic synapses, so that features can be learned to perform a desired task, such as recognizing a particular word spoken by different people with different accents. Figure 2A is a diagram illustrating this dynamic learning, wherein the dynamic synapse 210 receives a feedback signal 230 from a postsynaptic neuron 220 to learn a characteristic in a presynaptic signal 202. Dynamic learning is generally implemented by utilizing a group of neurons and dynamic synapses, or the entire network 100 of Figure 1.

The neurons of the network 100 of Figure 1 are also configured to process signals. A neuron can be connected to receive signals from two or more dynamic synapses, and/or to send an action potential toward two or more dynamic synapses. Referring to Figure 1, neuron 130 is an example of such a neuron. The neuron 110 receives signals only from a synapse 111, and sends signals to the synapse 114. The neuron 150 receives signals from two dynamic synapses 134 and 156, and sends signals to the axon 152. Several models of neurons can be used. See, for example, Chapter 2 of Bose and Liang, supra, and Anderson, "An introduction to neural networks", Chapter 2, MIT (1997), which are incorporated herein by reference.

A widely used simulation model for neurons is the integrative model, in which a neuron operates in two stages. First, the postsynaptic signals from the dendrites of the neuron are added together, with the individual synaptic contributions combining independently and summing algebraically, to produce a resulting activity level. In the second stage, the activity level is used as the input to a nonlinear function that relates the activity level (cell membrane potential) to the output value (average output firing rate), thus generating a final output activity; an action potential is then generated in accordance with it. The integrative model can be simplified to a two-state neuron, such as the "integrate and fire" model of McCulloch-Pitts, where a potential representing "high" is generated when the resulting activity level is higher than a critical threshold, and a potential representing "low" is generated otherwise. A minimal code sketch of this two-state model is given below.
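Purely as an illustration (not part of the original patent disclosure), such a two-state neuron might be sketched in Python as follows; the function name, the forward-Euler step, and the reset-to-zero behavior are assumptions of this sketch:

```python
import numpy as np

def integrate_and_fire(synaptic_inputs, threshold=0.1, tau=1.5, dt=0.1):
    """Minimal sketch of a two-state ("integrate and fire") neuron.

    synaptic_inputs: array of shape (num_synapses, num_steps) holding the
    postsynaptic potentials arriving at each time step.
    Returns a binary spike train (1 = "high"/action potential, 0 = "low").
    """
    num_steps = synaptic_inputs.shape[1]
    v = 0.0                            # membrane potential
    spikes = np.zeros(num_steps)
    for t in range(num_steps):
        # Stage 1: sum the synaptic contributions algebraically, with an
        # RC-like leak of the membrane potential (time constant tau).
        v += dt * (-v + synaptic_inputs[:, t].sum()) / tau
        # Stage 2: threshold nonlinearity produces the "high"/"low" output.
        if v > threshold:
            spikes[t] = 1.0
            v = 0.0                    # reset after firing (an assumption)
    return spikes
```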
A real biological synapse usually includes different types of molecules that respond differently to a presynaptic signal. The dynamics of a particular synapse is therefore a combination of the responses of all the different molecules. A dynamic synapse is accordingly configured to reflect the contributions of all the dynamic processes corresponding to the responses of the different types of molecules. A specific implementation of the dynamic synapse can be modeled by the following equations:

$$P_i(t) = \sum_m K_{i,m}(t) \cdot F_{i,m}(t) \qquad (1)$$

where $P_i(t)$ is the release potential (i.e., the synaptic potential) of the i-th dynamic synapse in response to a presynaptic signal, $K_{i,m}(t)$ is the magnitude of the m-th dynamic process in the i-th synapse, and $F_{i,m}(t)$ is the response function of the m-th dynamic process. The response $F_{i,m}(t)$ is a function of the presynaptic signal, $A_p(t)$, which is an action potential originating from a presynaptic neuron with which the dynamic synapse is connected.
The magnitude of $F_{i,m}(t)$ varies continuously with the temporal pattern of $A_p(t)$. In certain applications, $A_p(t)$ can be a train of spikes, and the m-th process can change the response $F_{i,m}(t)$ from one spike to another. $A_p(t)$ can also be an action potential generated by some other neuron; an example of this will be given later. In addition, $F_{i,m}(t)$ may also have contributions from other signals, such as the synaptic signal generated by the dynamic synapse itself, or synaptic signals produced by other synapses. Because one dynamic process can be different from another, $F_{i,m}(t)$ can have different waveforms and/or response time constants for different processes, and the corresponding magnitude $K_{i,m}(t)$ can also be different. A dynamic process m with $K_{i,m}(t) > 0$ is said to be excitatory, because it increases the potential of the postsynaptic signal. Conversely, a dynamic process m with $K_{i,m}(t) < 0$ is said to be inhibitory.

In general, the behavior of a dynamic synapse is not limited to the characteristics of a biological synapse. For example, a dynamic synapse may have different internal processes, and the dynamics of these internal processes can take different forms, such as the rate of rise, the decay, or other aspects of the waveforms. A dynamic synapse can also have a faster response time than a biological synapse by using, for example, high-speed VLSI technologies. In addition, different dynamic synapses in a neural network, or connected to a common neuron, may have different numbers of internal synaptic processes. The number of dynamic synapses associated with a neuron is determined by the connectivity of the network. In Figure 1, for example, neuron 130, as shown, is connected to receive signals from three dynamic synapses 114, 154, and 124.

The release of a synaptic signal, $R_i(t)$, from the above dynamic synapse can be modeled in different ways. For example, the integrative models for the neurons can be used directly, or modified, for the dynamic synapse. A simple model for the dynamic synapse is a two-state model similar to the neuron model proposed by McCulloch and Pitts:

$$R_i(t) = \begin{cases} 0 & \text{if } P_i(t) \le \Theta_i \\ f[P_i(t)] & \text{if } P_i(t) > \Theta_i \end{cases} \qquad (2)$$

where the value of $R_i(t)$ represents the occurrence of a synaptic event (i.e., the release of neurotransmitter) when $R_i(t)$ is a non-zero value, $f[P_i(t)]$, or the absence of a synaptic event when $R_i(t) = 0$, and $\Theta_i$ is a potential threshold for the i-th dynamic synapse. The synaptic signal $R_i(t)$ causes the generation of a postsynaptic signal, $S(t)$, in a respective postsynaptic neuron by the dynamic synapse. For convenience, $f[P_i(t)]$ can be set to 1, so that the synaptic signal $R_i(t)$ is a binary spike train of 0s and 1s. This provides a means of encoding information into a synaptic signal.

Figure 2B is a block diagram illustrating the signal processing of a dynamic synapse with multiple internal synaptic processes. The dynamic synapse receives an action potential 240 from a presynaptic neuron (not shown). Different internal synaptic processes 250, 260, and 270 are shown, which have different time-varying magnitudes 250a, 260a, and 270a, respectively. The synapse combines the synaptic processes 250a, 260a, and 270a to generate a composite synaptic potential 280, which corresponds to the operation of Equation (1). A threshold mechanism 290 of the synapse performs the operation of Equation (2) to produce a synaptic signal 292 of binary pulses.
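As an illustration only (not part of the patent text), Equations (1) and (2) can be sketched in a few lines of Python; the function names and the choice of f = 1 are assumptions of this sketch:

```python
import numpy as np

def release_potential(K, F):
    """Equation (1): P_i(t) = sum over m of K_im(t) * F_im(t).

    K, F: 1D arrays over the m internal processes of one synapse, holding
    the current magnitudes K_im(t) and response functions F_im(t).
    """
    return float(np.dot(K, F))

def synaptic_signal(P, theta=1.0):
    """Equation (2): a release event occurs only above the threshold.

    With f[P_i(t)] fixed at 1.0, as suggested in the text, the output
    R_i(t) is a binary spike train of 0s and 1s.
    """
    return 1.0 if P > theta else 0.0
```

For example, with K = [1.0, 0.5, -0.8] and F = [0.9, 1.2, 0.6], the release potential is 1.02, which exceeds a threshold of 1.0 and therefore produces a release event.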
The probability of release of a synaptic signal $R_i(t)$ is determined by the dynamic interaction of one or more internal synaptic processes and the temporal pattern of the spike train of the presynaptic signal. Figure 3A shows a presynaptic neuron 300 that sends out a temporal pattern 310 (i.e., a train of spikes of action potentials) to a dynamic synapse 320a. The spike intervals affect the interaction of the different synaptic processes.
Figure 3B is a diagram showing two facilitating processes of different time scales at a synapse. Figure 3C shows two dynamic inhibitory processes (i.e., fast GABA_A and slow GABA_B). Figure 3D shows that the probability of release is a function of the temporal pattern of a spike train, due to the interaction of the synaptic processes of different time scales. Figure 3E further shows three dynamic synapses 360, 362, and 364 connected to a presynaptic neuron 350, transforming a temporal spike train 352 into three different spike trains 360a, 362a, and 364a, to form a spatio-temporal pattern of discrete synaptic events of neurotransmitter release.

The ability to dynamically tune synaptic strength as a function of the temporal pattern of neuronal activation results in significant representational and processing power at the synaptic level. Consider a neuron that is capable of firing at a maximum rate of 100 Hz during a 100-millisecond time window. The temporal patterns that can be encoded in this 10-bit spike train range from [00...0] to [11...1], for a total of 2^10 patterns. Assuming that at most one release event can occur at a dynamic synapse per action potential, depending on the dynamics of the synaptic mechanisms, the number of temporal patterns that can be encoded by the release events at a dynamic synapse is 2^10. For a neuron with 100 dynamic synapses, the total number of temporal patterns that can be generated is (2^10)^100 = 2^1000. The number would be even higher if more than one release event were allowed per action potential. The above number represents the theoretical maximum of the coding capacity of neurons with dynamic synapses, and will be reduced by factors such as noise or a low probability of release.

Figure 4A shows an example of a simple neural network 400, having an excitatory neuron 410 and an inhibitory neuron 430, based on the system of Figure 1 and on the dynamic synapses of Equations (1) and (2). A total of four dynamic synapses 420a, 420b, 420c, and 420d are used to connect the neurons 410 and 430. The inhibitory neuron 430 sends a feedback modulation signal 432 to the four dynamic synapses. The release potential, $P_i(t)$, of the i-th dynamic synapse can be assumed to be a function of four processes: a rapid response, $F_0$, of the synapse to an action potential $A_p$ from the neuron 410; first and second facilitation components $F_1$ and $F_2$ within each dynamic synapse; and the feedback modulation Mod, which is assumed to be inhibitory. The parameter values for these factors, as an example, are selected to be consistent with the time constants of the facilitating and inhibiting processes that govern the dynamics of hippocampal synaptic transmission in a study using nonlinear analytical procedures. See Berger et al., "Nonlinear systems analysis of network properties of the hippocampal formation", in "Neurocomputing and learning: foundations of adaptive networks", edited by Moore and Gabriel, pages 283-352, MIT Press, Cambridge (1991), and "A biologically-based model of the functional properties of the hippocampus", Neural Networks, Volume 7, pages 1031-1064 (1994).

Figures 4B-4D show simulated output traces of the four dynamic synapses as a function of time under different responses of the synapses. In each figure, the upper trace is the spike train 412 generated by the neuron 410. The bar diagram on the right-hand side represents the relative strength, that is, $K_{i,m}$ in Equation (1), of the four synaptic processes for each of the dynamic synapses.
The numbers above the bars indicate the relative magnitudes with respect to the magnitudes of the different processes used for dynamic synapse 420a. For example, in Figure 4B, the number 1.25 in the bar chart for the response $F_1$ at synapse 420c (i.e., third row, second column) means that the magnitude of the contribution of the first facilitation component for synapse 420c is 25 percent greater than that for synapse 420a. The bars without numbers above them indicate that the magnitude is equal to that of dynamic synapse 420a. The frames enclosing the release events in Figures 4B and 4C indicate the spikes that disappear in the following figure under different response strengths of the synapses. For example, the rightmost spike in the response of synapse 420a in Figure 4B is not seen in the corresponding trace of Figure 4C. The boxes in Figure 4D, on the other hand, indicate spikes that do not exist in Figure 4C.

The specific functions used for the four synaptic processes in the simulation are as follows. The rapid response, $F_0$, to the action potential, $A_p$, is expressed as:

$$\tau_{F_0} \frac{dF_0}{dt} = -F_0(t) + k_{F_0} \cdot A_p \qquad (3)$$

where $\tau_{F_0}$ = 0.5 milliseconds is the time constant of $F_0$ for all dynamic synapses, and $k_{F_0}$ = 10.0 for synapse 420a, scaled proportionally according to the bar diagrams of Figures 4B-4D for the other synapses. The time dependence of $F_1$ is:

$$\tau_{f_1} \frac{dF_1}{dt} = -F_1(t) + k_{f_1} \cdot A_p \qquad (4)$$

where $\tau_{f_1}$ = 66.7 milliseconds is the decay time constant of the first facilitation component for all dynamic synapses, and $k_{f_1}$ = 0.16 for synapse 420a. The time dependence of $F_2$ is:

$$\tau_{f_2} \frac{dF_2}{dt} = -F_2(t) + k_{f_2} \cdot A_p \qquad (5)$$

where $\tau_{f_2}$ = 300 milliseconds is the decay time constant of the second facilitation component for all dynamic synapses, and $k_{f_2}$ = 80.0 for synapse 420a. The inhibitory feedback modulation is:

$$\tau_{Mod} \frac{d\,Mod}{dt} = -Mod(t) - k_{Mod} \cdot A_{inh} \qquad (6)$$

where $A_{inh}$ is the action potential generated by the neuron 430, $\tau_{Mod}$ = 10 milliseconds is the decay time constant of the feedback modulation for all dynamic synapses, and $k_{Mod}$ = 20.0 for synapse 420a. Equations (3)-(6) are specific examples of $F_{i,m}(t)$ in Equation (1). In accordance with the above, the release potential at each synapse is the sum of the four contributions, based on Equation (1):

$$P = F_0 + F_1 + F_2 + Mod \qquad (7)$$

A quantum Q (= 1.0) of neurotransmitter is released if P is greater than a threshold $T_R$ (= 1.0) and there is at least one quantum of neurotransmitter available for release at the synapse (i.e., the total amount of neurotransmitter, $N_{total}$, is greater than one quantum). The amount of neurotransmitter in the synaptic cleft, $N_R$, is an example of $R_i(t)$ in Equation (2). After the release of a quantum of neurotransmitter, $N_R$ decays exponentially with time from the initial quantity Q:

$$N_R = Q \exp\!\left(-\frac{t}{\tau_0}\right) \qquad (8)$$

where $\tau_0$ is a time constant, taken as 1.0 millisecond for the simulation. After release, the total amount of neurotransmitter is reduced by Q. There is a continuous process that replenishes the neurotransmitter within each synapse. This process can be simulated as follows:

$$\frac{dN_{total}}{dt} = \tau_{rp} \cdot (N_{max} - N_{total}) \qquad (9)$$

where $N_{max}$ is the maximum amount of neurotransmitter available and $\tau_{rp}$ is the neurotransmitter replenishment rate, which are 3.2 and 0.3 ms⁻¹ in the simulation, respectively. The synaptic signal, $N_R$, causes the generation of a postsynaptic signal, S, in a respective postsynaptic neuron.
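Before turning to the postsynaptic dynamics, a minimal forward-Euler sketch of Equations (3)-(9) is given below, purely as an illustration; the step size, the state layout, and the function name are assumptions of this sketch, while the parameter values are those quoted above for synapse 420a:

```python
def step_synapse(state, Ap, Ainh, dt=0.1):
    """One forward-Euler step (dt in ms) of Equations (3)-(9) for one synapse.

    state: dict with keys "F0", "F1", "F2", "Mod", "N_total".
    Ap:    presynaptic action potential (1 or 0) at this time step.
    Ainh:  inhibitory feedback spike (1 or 0) from neuron 430.
    Returns the quantum of neurotransmitter released (Q or 0.0).
    """
    # Time constants (ms) and gains for synapse 420a, Equations (3)-(6).
    tF0, kF0 = 0.5, 10.0        # fast response to Ap
    tf1, kf1 = 66.7, 0.16       # first facilitation component
    tf2, kf2 = 300.0, 80.0      # second facilitation component
    tMod, kMod = 10.0, 20.0     # inhibitory feedback modulation
    Q, T_R, Nmax, trp = 1.0, 1.0, 3.2, 0.3

    state["F0"]  += dt * (-state["F0"]  + kF0 * Ap) / tF0
    state["F1"]  += dt * (-state["F1"]  + kf1 * Ap) / tf1
    state["F2"]  += dt * (-state["F2"]  + kf2 * Ap) / tf2
    state["Mod"] += dt * (-state["Mod"] - kMod * Ainh) / tMod
    # Equation (9): continuous replenishment of the neurotransmitter pool.
    state["N_total"] += dt * trp * (Nmax - state["N_total"])

    # Equation (7): the release potential is the sum of the four processes.
    P = state["F0"] + state["F1"] + state["F2"] + state["Mod"]
    if P > T_R and state["N_total"] >= Q:
        state["N_total"] -= Q   # Equation (8) then governs the decay of N_R
        return Q
    return 0.0
```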
The rate of change in the amplitude of the postsynaptic signal S in response to a neurotransmitter release event is proportional to $N_R$:

$$\tau_s \frac{dS}{dt} = -S(t) + k_s \cdot N_R \qquad (10)$$

where $\tau_s$ is the time constant of the postsynaptic signal, taken as 0.5 milliseconds for the simulation, and $k_s$ is a constant, which is 0.5 for the simulation. In general, a postsynaptic signal can be excitatory ($k_s$ > 0) or inhibitory ($k_s$ < 0). The two neurons 410 and 430 are modeled as "integrate and fire" units, each having a membrane potential, V, which is the sum of all synaptic potentials, and producing an action potential, $A_p$, as a presynaptic signal for downstream neurons:

$$\tau_v \frac{dV}{dt} = -V(t) + \sum S \qquad (11)$$

where $\tau_v$ is the time constant of V, taken as 1.5 milliseconds for the simulation, and the sum is taken over all the synaptic signals received by the neuron. In the simulation, $A_p$ = 1 if V > $T_R$, which is 0.1 for the presynaptic neuron 410 and 0.02 for the postsynaptic neuron 430, provided that the neuron is not in its refractory period ($T_{ref}$ = 2.0 milliseconds), that is, the neuron has not fired within the last 2.0 milliseconds.

Referring again to Figures 4B-4D, the parameter values for synapse 420a remain constant in all the simulations, and are treated as a basis for comparison with the other dynamic synapses. In the first simulation, of Figure 4B, only one parameter is changed per terminal, by the amount indicated in the respective bar chart. For example, the contribution of the current action potential ($F_0$) to the release potential is increased by 25 percent for synapse 420b, while the other three parameters remain the same as for synapse 420a. The results are as expected: an increase in $F_0$, $F_1$, or $F_2$ leads to more release events, while increasing the magnitude of the feedback inhibition reduces the number of release events.

The transformation function becomes more sophisticated when more than one synaptic mechanism undergoes changes, as shown in Figure 4C. First, even though the parameters remain constant at synapse 420a, fewer release events occur there, due to an overall increase in output from the other three synapses 420b, 420c, and 420d, which causes increased activation of the postsynaptic neuron; this in turn exerts a greater inhibition on the dynamic synapses. This exemplifies the way in which synaptic dynamics can be influenced by the dynamics of the network. Second, the differences in the outputs from the dynamic synapses lie not merely in the number of release events, but also in their temporal patterns. For example, the second dynamic synapse (420b) responds more vigorously to the first half of the spike train and less to the second half, while the third terminal (420c) responds more to the second half. In other words, the transformations of the spike train through these two dynamic synapses are qualitatively different.

Next, the response of the dynamic synapses to different temporal patterns of action potentials was also investigated. This aspect was tested by moving the ninth action potential in the spike train to a point approximately 20 milliseconds after the third action potential (marked by the arrows in Figures 4C and 4D). As shown in Figure 4D, the output patterns of all the dynamic synapses are different from the previous ones. Some changes are common to all terminals, while others are specific to certain terminals only.
In addition, due to the interaction of dynamics at the synaptic and network levels, the removal of an action potential (the ninth in Figure 4C) leads to an immediate decrease in release events, and to an increase in release events at a later time.

The above discussion of the computational power of a neural system with dynamic synapses is based on purely theoretical grounds, and the actual computational capacity of a given neural system would certainly be limited by certain practical biological constraints. For example, the representational capacity of 2^1000 is based on the assumption that a dynamic synapse is sensitive to the occurrence or non-occurrence of a single action potential (i.e., each "bit") in a spike train. In many practical situations, noise can corrupt an input spike train and, therefore, can adversely affect the response of a neural network. It is important to determine whether dynamic synapses are capable of extracting statistically significant characteristics from noisy spike trains. This problem is particularly acute in biology, given that, in order to survive, an animal must extract regularities from an otherwise constantly changing environment. For example, a rat must be able to select from a number of possible routes to navigate to its nest or to a food store. These possible routes include some novel routes and one or more safely known routes, regardless of variations in a wide variety of conditions, such as lighting, time of day, a passing cloud, a swaying tree, winds, smells, sounds, etcetera. Therefore, hippocampal neurons must extract invariants from variable input signals.

One aspect of the invention is the dynamic learning capability of a neural network based on dynamic synapses. Referring again to the system 100 of Figure 1, each dynamic synapse is configured, according to a dynamic learning algorithm, to modify the coefficient, i.e., $K_{i,m}(t)$ in Equation (1), of each synaptic process, with the object of finding an appropriate transformation function for the synapse by correlating the synaptic dynamics with the activity of the respective postsynaptic neuron. This allows each dynamic synapse to learn and extract a certain characteristic of the input signal that contributes to the recognition of a class of patterns. In addition, the system 100 of Figure 1 creates a set of characteristics to identify a class of signals during a learning and extraction process, with a specific set of characteristics for each individual class of signals. One embodiment of the dynamic learning algorithm for the m-th process of the i-th dynamic synapse can be expressed as the following equation:

$$K_{i,m}(t + \Delta t) = K_{i,m}(t) + \alpha_m \cdot F_{i,m}(t) \cdot A_{pj}(t) - \beta_m \cdot \left[ F_{i,m}(t) - F^0_{i,m} \right] \qquad (12)$$

where $\Delta t$ is the time elapsed during a learning feedback, $\alpha_m$ is a learning rate for the m-th process, $A_{pj}$ (= 1 or 0) indicates the occurrence ($A_{pj}$ = 1) or non-occurrence ($A_{pj}$ = 0) of an action potential of the postsynaptic neuron j connected to the i-th dynamic synapse, $\beta_m$ is a decay constant for the m-th process, and $F^0_{i,m}$ is a constant for the m-th process of the i-th dynamic synapse. Equation (12) provides feedback from a postsynaptic neuron to the dynamic synapse, and allows the synapse to respond according to the correlation between the two. This feedback is illustrated by the dotted line 230 directed from the postsynaptic neuron 220 to the dynamic synapse 210 in Figure 2A.
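A one-line sketch of the learning rule of Equation (12), for illustration only (the function name and argument layout are assumptions of this sketch):

```python
def update_coefficient(K, F, Apj, alpha, beta, F0):
    """Equation (12): one learning-feedback update of the coefficient K_im.

    K:     current magnitude K_im(t) of the m-th process.
    F:     current response F_im(t) of that process.
    Apj:   1 if the postsynaptic neuron j fired, 0 otherwise.
    alpha: learning rate alpha_m (its sign is flipped in the
           anti-Hebbian phase described later in the text).
    beta:  decay constant beta_m, pulling F_im(t) back toward the
           resting value F0 (written F0_im in Equation (12)).
    """
    return K + alpha * F * Apj - beta * (F - F0)
```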
This learning algorithm enhances the response of a dynamic synapse to patterns that persist, by varying the synaptic dynamics according to the correlation between the activation level of the synaptic mechanisms and that of the postsynaptic neuron. For a given noisy input signal, only the subpatterns that are consistently present during a learning process survive and are detected by the dynamic synapses.

This provides a highly dynamic picture of information processing in the neural network. At any stage in an information processing chain, the dynamic synapses of a neuron extract a multitude of statistically significant temporal characteristics from an input spike train, and distribute these temporal characteristics to a set of postsynaptic neurons, where they are combined to generate a set of spike trains for further processing. From the perspective of pattern recognition, each dynamic synapse learns to create a "feature set" representing a particular component of the input signal. Because no assumptions are made with respect to the characteristics, each feature set is created on-line in a class-specific manner; that is, each class of input signals is described by its own optimal set of characteristics.

This dynamic learning algorithm is broadly and generally applicable to the recognition of patterns in spatio-temporal signals. The criteria for modifying the synaptic dynamics may vary according to the objectives of a particular signal processing task. In speech recognition, for example, it may be desirable in a learning procedure to increase the correlation between the output patterns of the neural network for different waveforms of the same word spoken by different people, since this reduces the variability of the speech signals. Therefore, during the presentation of the same word, the magnitudes of the excitatory synaptic processes are increased and the magnitudes of the inhibitory synaptic processes are decreased. Conversely, during the presentation of different words, the magnitudes of the excitatory synaptic processes are decreased and the magnitudes of the inhibitory synaptic processes are increased.

A speech waveform has been used as an example of a temporal pattern, in order to examine how well a neural network with dynamic synapses can extract invariants. Two well-known characteristics of a speech waveform are noise and variability. Sample waveforms of the word "hot", spoken by two different people, are shown in Figures 5A and 5B, respectively. Figure 5C shows the waveform of the cross-correlation between the waveforms of Figures 5A and 5B. The correlation indicates a high degree of variation between the waveforms of the word "hot" spoken by the two people. The task includes extracting the invariant features embedded in the waveforms that give rise to a constant perception of the word "hot" and of several other words of a standard "HVD" test (H-vowel-D; for example, had, heard, hid). The test words are care, hair, key, heat, kit, hit, kite, height, cot, hot, cut, and hut, spoken by two people in a typical research office, without special control of the surrounding noises (that is, nothing beyond lowering the volume of a radio). First, the people's speech is recorded and digitized, and then fed into a computer, which is programmed to simulate a neural network with dynamic synapses. The objective of the test is to recognize the words spoken by multiple people, using a neural network model with dynamic synapses.
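As a minimal sketch of how the similarity between two such waveforms can be quantified, a normalized cross-correlation is shown below; the normalization and the function name are assumptions of this sketch and are not specified in the patent text:

```python
import numpy as np

def normalized_xcorr(x, y):
    """Normalized cross-correlation of two equal-length waveforms.

    Returns values in roughly [-1, 1] for each lag; a flat, near-zero
    trace (as in Figure 5C) indicates highly variable raw waveforms,
    while a strong central peak indicates similar temporal patterns.
    """
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    return np.correlate(x, y, mode="full")
```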
In order to test the coding capacity of dynamic synapses, two constraints are imposed in the simulation. First, the neural network is kept small and simple. Second, no prior processing of the speech waveforms is allowed. Figure 6A is a schematic showing a neural network model 600 with two layers of neurons for the simulation. A first layer 610 of neurons has five input neurons 610a, 610b, 610c, 610d, and 610e, to receive the unprocessed noisy speech waveforms 602a and 602b from two different people. A second layer 620 of neurons 620a, 620b, 620c, 620d, 620e, and 622 forms an output layer to produce the output signals based on the input signals. Each input neuron of the first layer 610 is connected by six dynamic synapses to all the neurons of the second layer 620, so that there is a total of 30 dynamic synapses 630. The neuron 622 of the second layer 620 is an inhibitory interneuron, and is connected to produce an inhibitory signal to each dynamic synapse, as indicated by a feedback line 624. This inhibitory signal serves as the term $A_{inh}$ in Equation (6). Each of the dynamic synapses 630 is also connected to receive a feedback from the output of its respective output neuron in the second layer 620 (not shown). The dynamic synapses and neurons are simulated as described above, and the dynamic learning algorithm of Equation (12) is applied to each dynamic synapse.

The speech waveforms are sampled at 8 kHz. The digitized amplitudes are fed to all the input neurons and treated as excitatory postsynaptic potentials. Network 600 is trained to increase the cross-correlation of the output patterns for the same words, while reducing it for different words. During the learning, the presentation of the speech waveforms is grouped into blocks, where the waveforms of the same word spoken by different people are presented to the network 600 a total of four times. Network 600 is trained in accordance with the following Hebbian and anti-Hebbian rules. Within a presentation block, the Hebbian rule is applied: if a postsynaptic neuron of the second layer 620 fires after the arrival of an action potential, the contribution of the excitatory synaptic mechanisms is increased, while that of the inhibitory mechanisms is decreased. If the postsynaptic neuron does not fire, then the excitatory mechanisms are decreased while the inhibitory mechanisms are increased. The magnitude of the change is the product of a previously defined learning rate and the current activation level of the particular synaptic mechanism. In this way, responses to the temporal characteristics that are common across the waveforms are enhanced, while responses to idiosyncratic characteristics are discouraged.
When the presentation first changes to the next block of waveforms, belonging to a new word, the anti-Hebbian rule is applied by changing the signs of the learning rates $\alpha_m$ and $\beta_m$ in Equation (12). This enhances the differences between the response to the current word and the response to the previous, different word.

The results of training the neural network 600 are shown in Figures 6B, 6C, 6D, 6E, and 6F, which respectively correspond to the cross-correlation functions of the output signals from the neurons 620a, 620b, 620c, 620d, and 620e for the word "hot". For example, Figure 6B shows the cross-correlation of the two output patterns of the neuron 620a in response to the two "hot" waveforms spoken by the two different people. In comparison with the correlation of the raw waveforms of the word "hot" in Figure 5C, which shows almost no correlation at all, each of the output neurons 620a-620e generates temporal patterns that are highly correlated for the different input waveforms representing the same word spoken by different people. That is, given two radically different waveforms that nevertheless comprise representations of the same word, the network 600 generates temporal patterns that are substantially identical.

The extraction of the invariant characteristics of the other test words by using the neural network 600 is shown in Figures 7A-7L. A significant increase in the cross-correlation of the output patterns is obtained in all the test cases. The above training of a neural network by using the dynamic learning algorithm of Equation (12) can also enable a trained network to distinguish the waveforms of different words. As an example, the neural network 600 of Figure 6A produces poorly correlated output signals for different words after training.

A neural network based on dynamic synapses can also be trained in certain desired ways. For example, "supervised" learning can be implemented by training different neurons in a network to respond only to different features. Referring again to the simple network 600 of Figure 6A, the output signals from the neurons 620a ("N1"), 620b ("N2"), 620c ("N3"), and 620d ("N4") can be assigned different "target" words, for example, "hit", "height", "hot", and "hut", respectively. During the training, the Hebbian rule is applied to those dynamic synapses 630 whose target words are present in the input signals, while the anti-Hebbian rule is applied to all the other dynamic synapses 630 whose target words are absent from the input signals. Figures 8A and 8B show the output signals from the neurons 620a ("N1"), 620b ("N2"), 620c ("N3"), and 620d ("N4"), before and after the training of each neuron to respond preferentially to a particular word spoken by different people. Before training, the neurons respond identically to the same word. For example, a total of 20 spikes is produced by each of the neurons in response to the word "hit", and 37 spikes in response to the word "height", etc., as shown in Figure 8A. After the training of the neurons 620a, 620b, 620c, and 620d to respond preferentially to the words "hit", "height", "hot", and "hut", respectively, each trained neuron learns to fire more spikes for its target word than for other words. This is shown by the diagonal entries in Figure 8B. For example, the second neuron 620b is trained to respond to the word "height", and produces 34 spikes in the presence of the word "height", while producing fewer than 30 spikes for the other words.
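A schematic outline of this block-wise training schedule is sketched below, for illustration only; `network.present` is a hypothetical method that runs one waveform through the network and applies Equation (12) at every dynamic synapse with the given learning rates:

```python
def train_on_blocks(network, blocks, alpha, beta):
    """Sketch of the Hebbian / anti-Hebbian presentation schedule.

    blocks: list of (word, waveforms) pairs; each block holds the
    waveforms of one word spoken by different people, presented in
    sequence (four times in the simulation described above).
    """
    prev_word = None
    for word, waveforms in blocks:
        for waveform in waveforms:
            if prev_word is not None and word != prev_word:
                # Anti-Hebbian: the presentation has just changed to a
                # new word -- flip the signs of the learning rates to
                # enhance the difference from the previous response.
                network.present(waveform, -alpha, -beta)
            else:
                # Hebbian: same word -- reinforce the mechanisms that
                # are correlated with postsynaptic firing.
                network.present(waveform, alpha, beta)
            prev_word = word
    return network
```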
The above simulations of speech recognition are examples of temporal pattern recognition in more general temporal signal processing, where the input may be continuous, such as a speech waveform, or discrete, such as time-series data. Figure 9A shows an implementation of temporal signal processing using a neural network based on dynamic synapses. All the input neurons receive the same temporal signal. In response, each input neuron generates a sequence of action potentials (i.e., a spike train) that has temporal characteristics similar to those of the input signal. For a given presynaptic spike train, the dynamic synapses generate a set of spatio-temporal patterns, due to the variations in synaptic dynamics across the dynamic synapses of a neuron. The recognition of the temporal pattern is achieved based on these internally generated spatio-temporal signals.

A neural network based on dynamic synapses can also be configured to process spatial signals. Figure 9B shows an implementation of spatial signal processing using a neural network based on dynamic synapses. Different input neurons at different locations generally receive input signals of different magnitudes. Each input neuron generates a sequence of action potentials with a frequency proportional to the magnitude of its respective received input signal. A dynamic synapse connected to an input neuron produces a distinct temporal signal, determined by the particular dynamic processes incorporated in the synapse, in response to a presynaptic spike train. Accordingly, the combination of the dynamic synapses of the input neurons provides a spatio-temporal signal for subsequent pattern recognition procedures. In addition, it is contemplated that the techniques and configurations of Figures 9A and 9B may be combined to perform pattern recognition on one or more input signals having characteristics with both spatial and temporal variations.

The neural network models described above, based on dynamic synapses, can be implemented by means of devices having electronic components, optical components, and biochemical components. These components can produce dynamic processes different from the synaptic and neural processes in biological nervous systems. For example, a dynamic synapse or a neuron can be implemented through the use of RC circuits; this is indicated by Equations (3)-(11), which define typical responses of RC circuits. The time constants of these circuits can be set to values different from the typical time constants of biological nervous systems. In addition, electronic sensors, optical sensors, and biochemical sensors can be used individually or in combination to receive and process temporal and/or spatial input stimuli.

Although the present invention has been described in detail with reference to the preferred embodiments, various modifications and improvements can be made without departing from the spirit and scope of the invention. For example, Equations (3)-(11) used in the examples have RC-circuit responses. Other types of responses can also be used, such as a response in the form of the alpha function, $G(t) = \alpha^2 t\, e^{-\alpha t}$, where $\alpha$ is a constant that may be different for different synaptic processes. For another example, various connection configurations different from the examples shown in Figures 9A and 9B can be used to process spatio-temporal information. Figure 10 shows another embodiment of a neural network based on dynamic synapses.
In yet another example, the two-state model for the output signal of a dynamic synapse in Equation (2) can be modified to produce spikes of different magnitudes, depending on the values of the release potential. It is intended that these and other variations be encompassed by the following claims.

Claims (21)

1. A system for processing information, comprising: a plurality of signal processing elements connected to communicate with one another and configured to produce at least one output signal in response to at least one input signal; and a plurality of processing junctions arranged to interconnect said plurality of signal processing elements to form a network, wherein each of said processing junctions receives and processes a prejunction signal from a first signal processing element in said network, based on at least one internal junction process, to produce a junction signal that varies continuously with at least one parameter of said prejunction signal.

2. A system as in claim 1, wherein at least one of an amplitude and a temporal frequency of said junction signal varies with said at least one parameter of said prejunction signal.

3. A system as in claim 1, wherein said at least one parameter of said prejunction signal includes at least one of a magnitude or a frequency of said prejunction signal.

4. A system as in claim 1, wherein at least two junctions of said plurality of processing junctions that are connected to receive signals from a common signal processing element in said network produce different junction signals.

5. A system as in claim 1, wherein at least one junction of said plurality of processing junctions has another internal junction process that makes a contribution to said junction signal different from that of said at least one internal junction process.

6. A system as in claim 1, wherein each junction of said plurality of processing junctions is connected to receive an output signal from said second signal processing element and is configured to adjust said at least one internal junction process in accordance with said output signal.

7. A system as in claim 6, wherein said network of said plurality of signal processing elements and said plurality of processing junctions is operable to respond to a specific aspect of said at least one input signal.

8. A system as in claim 6, wherein said network of said plurality of signal processing elements and said plurality of processing junctions is configured so that a first signal processing element is operable to produce a first output signal to indicate a first aspect of said at least one input signal and a second signal processing element is operable to produce a second output signal to indicate a second aspect of said at least one input signal.

9. A system as in claim 1, wherein said network of said plurality of signal processing elements and said plurality of processing junctions is configured to indicate, in said at least one output signal, a spatial aspect that is embedded in said at least one input signal.

10. A system as in claim 1, wherein said network of said plurality of signal processing elements and said plurality of processing junctions is configured to indicate, in said at least one output signal, a temporal aspect that is embedded in said at least one input signal.

11. A system for processing information, comprising a signal processor and a processing junction connected to communicate with each other to process an input signal received by said processing junction, wherein said processing junction has at least one internal junction process that responds to said input signal to produce a junction signal that changes continuously according to a temporal change in said input signal, and said signal processor is operable to produce an output signal in response to said junction signal.

12. A system as in claim 11, wherein said processing junction is operable to adjust said junction signal according to a variation of magnitude in said input signal.

13. A system as in claim 11, wherein said processing junction is operable to adjust said junction signal according to a temporal variation in said input signal.

14. A system as in claim 11, wherein said processing junction is configured to have another internal junction process that responds to said input signal to produce another junction signal that also has a dependence on said characteristics of said input signal, said processing junction being operable to combine said junction signal and said other junction signal to generate a total junction signal.

15. A system as in claim 11, wherein said processing junction is configured to release said junction signal only when a magnitude of said junction signal is greater than a predetermined junction threshold.

16. A system as in claim 11, wherein said processing junction is operable to cause said junction signal to be either excitatory or inhibitory to said signal processor.

17. A system as in claim 11, wherein said signal processor is configured to release said output signal only when a magnitude of said junction signal is greater than a predetermined processor threshold.

18. A system as in claim 11, further comprising a feedback loop arranged to connect said processing junction to said signal processor so that at least a portion of said output signal is fed back to said processing junction, wherein said processing junction is operable to adjust said junction signal in accordance with said output signal.

19. A system as in claim 18, wherein said processing junction is operable to extract a specific aspect of said input signal.

20. A system as in claim 19, wherein said processing junction is configured to increase a parameter of said junction signal when said specific aspect is present in said input signal and to reduce said parameter of said junction signal when said specific aspect is absent from said input signal.

21. A system as in claim 11, wherein at least one of said signal processor and said processing junction includes at least one element selected from an electronic device, an optical device, a biological element, or a chemical material.
MXPA/A/1999/011505A 1997-06-11 1999-12-10 Dynamic synapse for signal processing in neural networks MXPA99011505A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US60/049,754 1997-06-11

Publications (1)

Publication Number Publication Date
MXPA99011505A true MXPA99011505A (en) 2001-09-07


Similar Documents

Publication Publication Date Title
KR100567465B1 (en) dynamic synapse for signal processing in neural networks
US20030208451A1 (en) Artificial neural systems with dynamic synapses
US7174325B1 (en) Neural processor
Abeles Synfire chains
Delorme Early cortical orientation selectivity: how fast inhibition decodes the order of spike latencies
Senn Beyond spike timing: the role of nonlinear plasticity and unreliable synapses
Liaw et al. Dynamic synapse: Harnessing the computing power of synaptic dynamics
Susswein et al. Mechanisms underlying fictive feeding in Aplysia: coupling between a large neuron with plateau potentials activity and a spiking neuron
Spencer et al. Compensation for traveling wave delay through selection of dendritic delays using spike-timing-dependent plasticity in a model of the auditory brainstem
Liaw et al. Robust speech recognition with dynamic synapses
Liaw et al. Computing with dynamic synapses: A case study of speech recognition
MXPA99011505A (en) Dynamic synapse for signal processing in neural networks
Medvedev et al. Modeling complex tone perception: grouping harmonics with combination-sensitive neurons
CN117275568A (en) Primary auditory cortex neuron cell release rate curve simulation method and device
Micheli-Tzanakou Nervous System
Cortez et al. Mathematical-Computational Modeling in Behavior’s Study of Repetitive Discharge Neuronal Circuits
Galiautdinov Biological Neural Circuits as Applications in Business and Engineering as a New Approach in Artificial Intelligence
Lysetskiy et al. Temporal-to-spatial dynamic mapping, flexible recognition, and temporal correlations in an olfactory cortex model
Brooking An artificial neural network based on biological principles
Guimarães et al. Stochastic model in neural network-an application in pacemaker GABAergic neurons
Doiron Electrosensory dynamics: Dendrites and delays
Mino et al. Stochastic resonance can induce oscillation in a recurrent Hodgkin-Huxley neuron model with added Gaussian noise
Ezrachi et al. Right-left discrimination in a biologically oriented model of the cockroach escape system
Northmore et al. Temporal processing in artificial dendritic tree neuromorphs
London et al. Synaptic information efficacy: Bridging the cleft between biophysics and function