Sequence length is a domain: Length-based overfitting in transformer models
Transformer-based sequence-to-sequence architectures, while achieving state-of-the-art results on a large number of NLP tasks, can still suffer from overfitting during training. In practice, this is usually countered either by applying regularization methods (e.g. dropout, L2 regularization) or by providing huge amounts of training data. Additionally, Transformer and other architectures are known to struggle when generating very long sequences. For example, in machine translation, neural systems perform worse on very long sequences than the preceding phrase-based translation approaches (Koehn and Knowles, 2017). We present results which suggest that the issue may also lie in the mismatch between the length distributions of the training and validation data, combined with the aforementioned tendency of neural networks to overfit to the training data. We demonstrate on a simple string editing task and a machine translation task that the Transformer model's performance drops significantly when it faces sequences whose length diverges from the length distribution of the training data. Additionally, we show that the observed drop in performance is caused by the hypothesis length corresponding to the lengths seen by the model during training, rather than by the length of the input sequence.
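The core experimental idea is to evaluate a model on sequences whose lengths fall outside the range seen during training. Below is a minimal sketch of how such a length-mismatched split could be constructed; the function name `length_mismatch_split`, the cutoff of 50 source tokens, the bucket width, and the whitespace tokenization are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not from the paper): build a length-mismatched split to
# probe length-based overfitting. Examples whose source length is at or below
# `train_max_len` go to training; longer ones are grouped into evaluation
# buckets so per-bucket scores can be compared against the training range.
from collections import defaultdict
from typing import Dict, List, Tuple

Pair = Tuple[str, str]  # (source sentence, target sentence)

def length_mismatch_split(
    pairs: List[Pair],
    train_max_len: int = 50,   # hypothetical cutoff, in source tokens
    bucket_width: int = 10,    # width of each out-of-range evaluation bucket
) -> Tuple[List[Pair], Dict[str, List[Pair]]]:
    train: List[Pair] = []
    eval_buckets: Dict[str, List[Pair]] = defaultdict(list)
    for src, tgt in pairs:
        n = len(src.split())  # crude whitespace tokenization for illustration
        if n <= train_max_len:
            train.append((src, tgt))
        else:
            # Lower edge of the bucket this example falls into, e.g. 51-60, 61-70, ...
            lo = train_max_len + ((n - train_max_len - 1) // bucket_width) * bucket_width + 1
            eval_buckets[f"{lo}-{lo + bucket_width - 1}"].append((src, tgt))
    return train, dict(eval_buckets)

if __name__ == "__main__":
    corpus = [("a b c", "x y z"), ("w " * 60, "v " * 60), ("w " * 75, "v " * 75)]
    train, buckets = length_mismatch_split(corpus)
    print(len(train), {k: len(v) for k, v in buckets.items()})
```

Per-bucket scores (e.g. BLEU for translation or exact-match accuracy for a string editing task) can then be compared against scores on held-out data from within the training length range to quantify the drop described in the abstract.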