Review:
In recent years there has been significant interest in adapting techniques from statistical physics, in particular mean field theory, to provide deterministic heuristic algorithms for obtaining approximate solutions to optimization problems. Although ...
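The review's opening suggests the flavor of these mean-field heuristics. As a hedged illustration (the toy Ising-style cost, annealing schedule, and all constants below are our own choices, not taken from the review), a minimal mean-field annealing sketch in Python replaces discrete spins by continuous averages v_i = tanh((1/T) sum_j J_ij v_j) and slowly lowers the temperature T:

import numpy as np

rng = np.random.default_rng(0)
n = 8
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                     # symmetric couplings
np.fill_diagonal(J, 0.0)

v = rng.uniform(-0.1, 0.1, size=n)    # mean-field averages of +/-1 spins
for T in np.geomspace(4.0, 0.05, 60): # slowly lower the temperature
    for _ in range(20):               # relax to self-consistency at this T
        v = np.tanh(J @ v / T)

s = np.sign(v)                        # round to a discrete configuration
print("approximate minimum energy:", -0.5 * s @ J @ s)

At high temperature the averages stay near zero; as T falls they saturate toward plus or minus one and can be rounded to an approximate discrete solution.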
Object recognition and sensitive periods: A computational analysis of visual imprinting
Using neural and behavioral constraints from a relatively simple biological visual system, we evaluate the mechanism and behavioral implications of a model of invariant object recognition. Evidence from a variety of methods suggests that a localized ...
Computing stereo disparity and motion with known binocular cell properties
Many models for stereo disparity computation have been proposed, but few can be said to be truly biological. There is also a rich literature devoted to physiological studies of stereopsis. Cells sensitive to binocular disparity have been found in the ...
Integration and differentiation in dynamic recurrent neural networks
Dynamic neural networks with recurrent connections were trained by backpropagation to generate the differential or the leaky integral of a nonrepeating frequency-modulated sinusoidal signal. The trained networks performed these operations on arbitrary ...
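As a sketch of the kind of task described (the time constants, sampling step, and signal parameters are illustrative assumptions, not the paper's), the following snippet builds a nonrepeating frequency-modulated input together with its leaky-integral and derivative targets, i.e., the sort of input/target pairs such a recurrent network would be trained on:

import numpy as np

dt, tau = 0.01, 0.2                            # sampling step and leak time constant
t = np.arange(0.0, 10.0, dt)
freq = 1.0 + 0.5 * np.sin(0.3 * t)             # slowly drifting frequency (nonrepeating)
x = np.sin(2 * np.pi * dt * np.cumsum(freq))   # frequency-modulated input signal

y_int = np.zeros_like(x)                       # leaky-integral target: y' = -y/tau + x
for k in range(len(x) - 1):
    y_int[k + 1] = y_int[k] + dt * (-y_int[k] / tau + x[k])

y_diff = np.gradient(x, dt)                    # derivative target (finite differences)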
A convergence result for learning in recurrent neural networks
We give a rigorous analysis of the convergence properties of a backpropagation algorithm for recurrent networks containing either output or hidden layer recurrence. The conditions permit data generated by stochastic processes with considerable ...
Topology learning solved by extended objects: A neural network model
It is shown that local, extended objects in a metric topological space shape the receptive fields of competitive neurons into local filters. Self-organized topology learning is then solved with the help of Hebbian learning together with extended objects ...
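A loose, hedged reading of this idea in code (the segment-shaped "extended objects", the unit count, and the edge threshold are our own simplifications, not the paper's model): units co-activated by the same extended object are linked by a Hebbian rule, and the accumulated links trace the topology of the underlying space.

import numpy as np

rng = np.random.default_rng(6)
units = rng.uniform(0.0, 1.0, size=(30, 2))   # receptive-field centers of competitive units
C = np.zeros((30, 30))                        # Hebbian connection strengths

for _ in range(3000):
    center = rng.uniform(0.0, 1.0, size=2)
    direction = rng.normal(size=2)
    direction /= np.linalg.norm(direction)
    # extended object: a short line segment sampled at five points
    pts = center + np.linspace(-0.08, 0.08, 5)[:, None] * direction
    # units activated by the object: the nearest unit to each sampled point
    active = np.unique(np.argmin(((pts[:, None, :] - units) ** 2).sum(-1), axis=1))
    for a in active:                          # co-activation strengthens the link
        for b in active:
            if a != b:
                C[a, b] += 1.0

edges = np.argwhere(C > 30)                   # strong links trace the square's topology
print(len(edges), "directed edges among", len(units), "units")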
Dynamics of discrete-time, continuous-state Hopfield networks
The dynamics of discrete-time, continuous-state Hopfield networks is driven by an energy function. In this paper, we use this tool to prove, under mild hypotheses, that any trajectory converges to a fixed point under sequential iteration, and to a cycle ...
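A minimal numerical sketch of the sequential case (the symmetric weights, zero diagonal, and tanh activation are illustrative assumptions on our part): with these choices each coordinate update exactly minimizes the energy in that coordinate, so the trajectory settles to a fixed point.

import numpy as np

rng = np.random.default_rng(1)
n = 10
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                  # symmetric weights
np.fill_diagonal(W, 0.0)           # zero self-connections

x = rng.uniform(-1.0, 1.0, size=n)
for sweep in range(500):
    x_old = x.copy()
    for i in range(n):             # sequential (one unit at a time) iteration
        x[i] = np.tanh(W[i] @ x)
    if np.max(np.abs(x - x_old)) < 1e-12:
        break
print("settled after", sweep + 1, "sweeps; last change:", np.max(np.abs(x - x_old)))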
Alopex: A correlation-based learning algorithm for feedforward and recurrent neural networks
We present a learning algorithm for neural networks, called Alopex. Instead of the error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm does not make any assumptions ...
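Only the correlation principle is visible in this abstract, so the following sketch should be read as a generic correlation-based perturbation rule in the spirit of Alopex, not the paper's exact update; the step size, temperature, and acceptance probability are illustrative. Each weight keeps stepping in its previous direction with high probability when that step correlated with a decrease in the global error:

import numpy as np

def loss(w):                                   # toy objective standing in for network error
    return np.sum((w - 3.0) ** 2)

rng = np.random.default_rng(2)
w = rng.normal(size=5)
delta, T = 0.01, 0.001                         # step size and "temperature" (illustrative)

dw = delta * rng.choice([-1.0, 1.0], size=w.shape)
E_prev = loss(w)
w = w + dw
for step in range(5000):
    E = loss(w)
    dE = E - E_prev
    # keep each weight's previous step direction with high probability
    # when that step was correlated with a decrease in the error
    arg = np.clip(dw * dE / (delta * T), -50.0, 50.0)
    p_same = 1.0 / (1.0 + np.exp(arg))
    dw = np.where(rng.random(w.shape) < p_same, dw, -dw)
    E_prev = E
    w = w + dw
print("final loss:", loss(w))

Because the rule needs only the scalar change in error, it applies unchanged to feedforward and recurrent architectures.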
Duality between learning machines: A bridge between supervised and unsupervised learning
We exhibit a duality between two perceptrons that allows us to compare the theoretical analysis of supervised and unsupervised learning tasks. The first perceptron has one output and is asked to learn a classification of p patterns. The second (dual) ...
Finding the embedding dimension and variable dependencies in time series
We present a general method, the δ-test, for establishing functional dependencies from a sequence of measurements. The approach is based on calculating conditional probabilities from vector component distances. Imposing the requirement of continuity ...
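A simplified numerical reading of the continuity idea (the toy two-lag series, the thresholds, and the pair-sampling scheme are our own choices, not the paper's procedure): if d lagged values suffice to determine the next value, then pairs of delay vectors that are close in all d components should almost always yield close outputs.

import numpy as np

rng = np.random.default_rng(3)

x = np.zeros(4000)                             # toy series with a two-lag dependence
x[0] = x[1] = 0.1
for t in range(2, len(x)):
    x[t] = 1.0 - 1.4 * x[t - 1] ** 2 + 0.3 * x[t - 2]

def continuity_score(x, d, delta=0.02, eps=0.05, n_pairs=200_000):
    # estimate P(|y_i - y_j| < eps  given  max-norm delay-vector distance < delta)
    # with delay vectors v_t = (x[t-d], ..., x[t-1]) and outputs y_t = x[t]
    V = np.column_stack([x[k:len(x) - d + k] for k in range(d)])
    y = x[d:]
    i = rng.integers(0, len(y), n_pairs)
    j = rng.integers(0, len(y), n_pairs)
    close = np.max(np.abs(V[i] - V[j]), axis=1) < delta
    if not close.any():
        return float("nan")
    return np.mean(np.abs(y[i] - y[j])[close] < eps)

for d in (1, 2, 3):
    print("d =", d, " score =", continuity_score(x, d))

For this series the score should rise sharply between d = 1 and d = 2, reflecting the two-lag dependence built into the recursion.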
Comparison of some neural network and scattered data approximations: The inverse manipulator kinematics example
This paper compares five methods for approximating the inverse kinematics of a manipulator arm from a set of joint-angle/Cartesian-coordinate training pairs. The first method is a standard feedforward neural ...
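For concreteness, here is a hedged sketch of the setup (the two-link planar arm, its link lengths, and the Gaussian radial-basis interpolant are illustrative stand-ins, not the paper's manipulator or any of its five methods): forward kinematics generates scattered Cartesian/joint-angle pairs, and one scattered-data method then fits the inverse map.

import numpy as np

rng = np.random.default_rng(4)
L1, L2 = 1.0, 0.7                              # link lengths (illustrative)

def forward(theta):                            # two-link planar arm forward kinematics
    t1, t2 = theta[:, 0], theta[:, 1]
    return np.column_stack([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                            L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

# scattered training pairs; t2 > 0 keeps a single-valued ("elbow-up") inverse
theta = np.column_stack([rng.uniform(-1.5, 1.5, 400),
                         rng.uniform(0.2, np.pi - 0.2, 400)])
X = forward(theta)

sigma = 0.3                                    # Gaussian RBF width (illustrative)
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
coef = np.linalg.solve(K + 1e-6 * np.eye(len(X)), theta)

def inverse(p):                                # approximate inverse kinematics
    k = np.exp(-((p[None, :] - X) ** 2).sum(-1) / (2 * sigma ** 2))
    return k @ coef

q = inverse(np.array([1.2, 0.5]))
print("angles:", q, "reach:", forward(q[None, :])[0])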
Functionally equivalent feedforward neural networks
For a feedforward perceptron-type architecture with a single hidden layer but with a quite general activation function, we characterize the relation between pairs of weight vectors that determine networks with the same input-output function.
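For odd activations such as tanh, the familiar symmetries are hidden-unit permutations and simultaneous sign flips of a unit's incoming and outgoing weights; this sketch (the network sizes and the tanh activation are our assumptions) checks numerically that both transformations leave the input-output function unchanged.

import numpy as np

rng = np.random.default_rng(5)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # 3 inputs, 4 tanh hidden units
w2, b2 = rng.normal(size=4), rng.normal()              # single linear output

def net(x, W1, b1, w2, b2):
    return np.tanh(x @ W1.T + b1) @ w2 + b2

perm = rng.permutation(4)          # symmetry 1: permute hidden units
s = np.ones(4)
s[0] = -1.0                        # symmetry 2: sign-flip one unit (tanh is odd)

W1b, b1b, w2b = (s[:, None] * W1)[perm], (s * b1)[perm], (s * w2)[perm]

x = rng.normal(size=(10, 3))
print(np.max(np.abs(net(x, W1, b1, w2, b2) - net(x, W1b, b1b, w2b, b2))))  # ~1e-16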