The goal of this thesis is to improve the mathematical foundations of multilayer neural networks. Chapter 1 presents an overview of feed-forward neural networks and the backpropagation training algorithm. Chapter 2 analyzes the architecture of nonlinear feed-forward neural networks; this analysis determines theoretical values for the weights and targets of single-layer and multilayer neural networks, and also examines the role of the hidden layer in two-layer networks. The third chapter develops a neural network training algorithm based on Newton's method rather than gradient descent. This algorithm, called connectionist nonlinear over-relaxation, behaves better near the solution than the delta rule or backpropagation. The fourth chapter develops a neural network for image-processing applications by drawing analogies with an optical system: a second-order neural network performs shift-, rotation-, and scale-invariant processing in a manner similar to the optical wedge-ring detector system. The fifth chapter analyzes the temporal behavior of the backpropagation learning algorithm and compares this behavior to temporal effects in holographic memories and to psychological experiments on list memorization. Overall, the thesis analyzes and attempts to optimize feed-forward neural network training algorithms and architectures.
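The contrast Chapter 3 draws between gradient descent and Newton's method can be illustrated on a toy one-parameter problem. The sketch below is purely illustrative and is not the thesis's connectionist nonlinear over-relaxation algorithm: it fits a single sigmoid unit to a target output by minimizing a squared error, once with a fixed-step gradient (delta-rule-style) update and once with a Newton update that divides by the second derivative. Near the solution the Newton iteration converges in a handful of steps, while the first-order update needs many more. The target value 0.8, the learning rate, and the step counts are arbitrary choices for the example.

```python
import math

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

def loss_derivs(w, t=0.8):
    """First and second derivatives of f(w) = (sigmoid(w) - t)^2."""
    s = sigmoid(w)
    sp = s * (1.0 - s)            # sigmoid'(w)
    spp = sp * (1.0 - 2.0 * s)    # sigmoid''(w)
    f1 = 2.0 * (s - t) * sp
    f2 = 2.0 * (spp * (s - t) + sp * sp)
    return f1, f2

def gradient_descent(w, lr=5.0, steps=100):
    """First-order (delta-rule-style) updates: w <- w - lr * f'(w)."""
    for _ in range(steps):
        f1, _ = loss_derivs(w)
        w -= lr * f1
    return w

def newton(w, steps=5):
    """Newton updates: w <- w - f'(w) / f''(w); quadratic convergence near w*."""
    for _ in range(steps):
        f1, f2 = loss_derivs(w)
        w -= f1 / f2
    return w

# Exact minimizer: sigmoid(w*) = 0.8, so w* = log(0.8 / 0.2) = log(4)
w_star = math.log(4.0)
print("Newton (5 steps):", newton(1.0))
print("Gradient descent (100 steps):", gradient_descent(1.0))
print("Exact:", w_star)
```

Five Newton steps from a starting point near the minimizer already match the exact answer to many digits, whereas the gradient iteration takes on the order of a hundred steps to get comparably close; this is the "better behavior near the solution" the abstract refers to, paid for by the cost (and, in many variables, the conditioning) of the second-derivative information.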