MockingBERT: A Method for Retroactively Adding Resilience to NLP Models

Jan Jezabek, Akash Singh


Abstract
Protecting NLP models against misspellings, whether accidental or adversarial, has been the object of research interest for the past few years. Existing remediations have typically either compromised accuracy or required full model re-training with each new class of attacks. We propose a novel method for retroactively adding misspelling resilience to transformer-based NLP models. This robustness can be achieved without re-training the original NLP model and with only a minimal loss of language understanding performance on inputs without misspellings. Additionally, we propose a new, efficient approximate method of generating adversarial misspellings, which significantly reduces the cost of evaluating a model's resilience to adversarial attacks.
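For readers who want a concrete picture of the character-level perturbations such robustness evaluations involve, the following minimal Python sketch generates random misspellings by applying single-character edits (swap, drop, insert, substitute). It is a generic illustration only, not the approximate adversarial method proposed in the paper; the function names, edit operations, and parameters are our assumptions.

    # Illustrative sketch only: a generic random-misspelling generator.
    # This is NOT the paper's approximate adversarial method.
    import random

    def perturb_word(word: str, rng: random.Random) -> str:
        """Apply one random character-level edit (swap, drop, insert,
        or substitute) to a word, mimicking a typo."""
        if len(word) < 2:
            return word
        op = rng.choice(["swap", "drop", "insert", "substitute"])
        i = rng.randrange(len(word) - 1)
        if op == "swap":    # transpose two adjacent characters
            return word[:i] + word[i + 1] + word[i] + word[i + 2:]
        if op == "drop":    # delete one character
            return word[:i] + word[i + 1:]
        c = rng.choice("abcdefghijklmnopqrstuvwxyz")
        if op == "insert":  # insert a random character
            return word[:i] + c + word[i:]
        return word[:i] + c + word[i + 1:]  # substitute one character

    def misspell(sentence: str, rate: float = 0.2, seed: int = 0) -> str:
        """Perturb roughly `rate` of the words in a sentence."""
        rng = random.Random(seed)
        return " ".join(
            perturb_word(w, rng) if rng.random() < rate else w
            for w in sentence.split()
        )

    print(misspell("the movie was surprisingly good", rate=0.5))

Unlike this uniform random sampler, an adversarial generator searches for the edits most damaging to the target model, which is why making that search efficient (as the abstract claims) matters for evaluation cost.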
Anthology ID:
2022.coling-1.411
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
4640–4650
URL:
https://aclanthology.org/2022.coling-1.411
Cite (ACL):
Jan Jezabek and Akash Singh. 2022. MockingBERT: A Method for Retroactively Adding Resilience to NLP Models. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4640–4650, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
MockingBERT: A Method for Retroactively Adding Resilience to NLP Models (Jezabek & Singh, COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.411.pdf
Code:
akash13singh/resilient_nlp
Data:
BookCorpus, IMDb Movie Reviews, SST, SST-2, SST-5