Beyond exploding and vanishing gradients: analysing RNN training using attractors and smoothness

António H. Ribeiro, Koen Tiels, Luis A. Aguirre, Thomas Schön
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:2370-2380, 2020.

Abstract

The exploding and vanishing gradient problem has been the major conceptual principle behind most architecture and training improvements in recurrent neural networks (RNNs) during the last decade. In this paper, we argue that this principle, while powerful, might need some refinement to explain recent developments. We refine the concept of exploding gradients by reformulating the problem in terms of the cost function smoothness, which gives insight into higher-order derivatives and the existence of regions with many close local minima. We also clarify the distinction between vanishing gradients and the need for the RNN to learn attractors to fully use its expressive power. Through the lens of these refinements, we shed new light on recent developments in the RNN field, namely stable RNN and unitary (or orthogonal) RNNs.
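To make the exploding and vanishing gradient problem the abstract refers to concrete, here is a minimal numerical sketch (not taken from the paper; the linear-RNN simplification, dimensions, and random matrices are illustrative assumptions). For a linear recurrence h_t = W h_{t-1}, the Jacobian dh_T/dh_0 equals W raised to the power T, so its norm shrinks or grows roughly like (spectral radius of W)^T:

    # Illustrative sketch (assumptions: linear RNN, arbitrary sizes and seed).
    import numpy as np

    rng = np.random.default_rng(0)
    n, T = 32, 100  # hidden-state dimension and sequence length (arbitrary choices)

    for target_radius in (0.9, 1.1):
        W = rng.standard_normal((n, n))
        # Rescale W so its spectral radius equals target_radius.
        W *= target_radius / np.max(np.abs(np.linalg.eigvals(W)))

        # For h_t = W h_{t-1}, the Jacobian dh_T/dh_0 is W**T, so its norm
        # behaves like target_radius**T: it vanishes for radius < 1 and
        # explodes for radius > 1.
        J = np.linalg.matrix_power(W, T)
        print(f"spectral radius {target_radius:.1f}: "
              f"||dh_T/dh_0||_2 = {np.linalg.norm(J, 2):.2e}")

Running this prints a vanishingly small norm for the contractive case (radius 0.9) and an enormous one for the expansive case (radius 1.1), which is the geometric behaviour the paper revisits through cost-function smoothness and attractors.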

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-ribeiro20a,
  title     = {Beyond exploding and vanishing gradients: analysing RNN training using attractors and smoothness},
  author    = {Ribeiro, Ant\'onio H. and Tiels, Koen and Aguirre, Luis A. and Sch\"on, Thomas},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {2370--2380},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/ribeiro20a/ribeiro20a.pdf},
  url       = {https://proceedings.mlr.press/v108/ribeiro20a.html}
}
APA
Ribeiro, A.H., Tiels, K., Aguirre, L.A. & Schön, T. (2020). Beyond exploding and vanishing gradients: analysing RNN training using attractors and smoothness. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:2370-2380. Available from https://proceedings.mlr.press/v108/ribeiro20a.html.
