Linguistic Rule Induction Improves Adversarial and OOD Robustness in Large Language Models

Shuoran Jiang, Qingcai Chen, Yang Xiang, Youcheng Pan, Yukang Lin


Abstract
Ensuring robustness is especially important when AI is deployed in responsible or safety-critical environments. ChatGPT performs strongly in both adversarial and out-of-distribution (OOD) robustness, while other popular large language models (LLMs), such as LLaMA-2, ERNIE, and ChatGLM, do not perform satisfactorily in this regard. It is therefore valuable to study which capabilities underlie ChatGPT's robustness and how to transfer them to other LLMs. This paper experimentally finds that linguistic rule induction is the foundation for identifying cause-effect relationships in LLMs, and that accurately processing cause-effect relationships improves an LLM's adversarial and OOD robustness. Furthermore, we explore a low-cost way to align LLMs with linguistic rules. Specifically, we construct a linguistic rule instruction dataset to fine-tune LLMs. To further enable LLMs to reason step by step with linguistic rules, we construct task-relevant LingR-based chains of thought. Experiments show that LingR-induced LLaMA-13B achieves results comparable to or better than those of GPT-3.5 and GPT-4 on various adversarial and OOD robustness evaluations.
Anthology ID:
2024.lrec-main.924
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Note:
Pages:
10565–10577
URL:
https://aclanthology.org/2024.lrec-main.924
Cite (ACL):
Shuoran Jiang, Qingcai Chen, Yang Xiang, Youcheng Pan, and Yukang Lin. 2024. Linguistic Rule Induction Improves Adversarial and OOD Robustness in Large Language Models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 10565–10577, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Linguistic Rule Induction Improves Adversarial and OOD Robustness in Large Language Models (Jiang et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.924.pdf