Training-free Neural Architecture Search for RNNs and Transformers

Aaron Serianni, Jugal Kalita


Abstract
Neural architecture search (NAS) has allowed for the automatic creation of new and effective neural network architectures, offering an alternative to the laborious process of manually designing complex architectures. However, traditional NAS algorithms are slow and require immense amounts of computing power. Recent research has investigated training-free NAS metrics for image classification architectures, drastically speeding up search algorithms. In this paper, we investigate training-free NAS metrics for recurrent neural network (RNN) and BERT-based transformer architectures, targeted towards language modeling tasks. First, we develop a new training-free metric, named hidden covariance, that predicts the trained performance of an RNN architecture and significantly outperforms existing training-free metrics. We experimentally evaluate the effectiveness of the hidden covariance metric on the NAS-Bench-NLP benchmark. Second, we find that the current search space paradigm for transformer architectures is not optimized for training-free neural architecture search. Instead, a simple qualitative analysis can effectively shrink the search space to the best performing architectures. This conclusion is based on our investigation of existing training-free metrics and new metrics developed from recent transformer pruning literature, evaluated on our own benchmark of trained BERT architectures. Ultimately, our analysis shows that the architecture search space and the training-free metric must be developed together in order to achieve effective results. Our source code is available at https://github.com/aaronserianni/training-free-nas.
Anthology ID:
2023.acl-long.142
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2522–2540
URL:
https://aclanthology.org/2023.acl-long.142
DOI:
10.18653/v1/2023.acl-long.142
Cite (ACL):
Aaron Serianni and Jugal Kalita. 2023. Training-free Neural Architecture Search for RNNs and Transformers. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2522–2540, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Training-free Neural Architecture Search for RNNs and Transformers (Serianni & Kalita, ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.142.pdf
Video:
https://aclanthology.org/2023.acl-long.142.mp4