Robust training of neural networks using scale invariant architectures
International Conference on Machine Learning, 2022 · proceedings.mlr.press
Abstract
In contrast to SGD, adaptive gradient methods like Adam allow robust training of modern deep networks, especially large language models. However, the use of adaptivity not only comes at the cost of extra memory but also raises the fundamental question: can non-adaptive methods like SGD enjoy similar benefits? In this paper, we provide an affirmative answer to this question by proposing to achieve both robust and memory-efficient training via the following general recipe: (1) modify the architecture and make it scale invariant, (2) train with SGD and weight decay, and optionally (3) clip the global gradient norm proportional to the weight norm multiplied by √(2λη), where η is the learning rate and λ is the weight decay coefficient. We show that this general approach is robust to rescaling of parameters and loss by proving that its convergence only depends logarithmically on the scale of initialization and loss, whereas standard SGD might not even converge for many initializations. Following our recipe, we design a scale invariant version of BERT, called SIBERT, which, when trained simply by vanilla SGD, achieves performance comparable to BERT trained by adaptive methods like Adam on downstream tasks.
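As a rough illustration of the recipe, here is a minimal PyTorch-style sketch of one training step combining steps (2) and (3): SGD with weight decay, preceded by clipping the global gradient norm to √(2λη) times the global weight norm. The function name `clipped_sgd_step`, the decision to clip over all parameters jointly, and the way weight decay is folded into the update are illustrative assumptions, not the paper's exact SIBERT training code.

```python
import torch

def clipped_sgd_step(params, lr, weight_decay):
    """One sketched training step: clip the global gradient norm to
    sqrt(2 * weight_decay * lr) * ||w||, then apply SGD with weight decay.
    (Hypothetical helper; details differ from the paper's implementation.)"""
    params = [p for p in params if p.grad is not None]
    with torch.no_grad():
        # Global gradient and weight norms over all parameters.
        grad_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params))
        weight_norm = torch.sqrt(sum((p ** 2).sum() for p in params))
        # Step (3): clipping threshold proportional to the weight norm.
        threshold = (2 * weight_decay * lr) ** 0.5 * weight_norm
        if grad_norm > threshold:
            for p in params:
                p.grad.mul_(threshold / grad_norm)
        # Step (2): SGD update with weight decay.
        for p in params:
            p.add_(p.grad + weight_decay * p, alpha=-lr)
```

In this sketch the clipping threshold shrinks together with the weight norm under decay, which is what makes the update insensitive to a global rescaling of the (scale-invariant) parameters at initialization.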