You Know What I'm Saying: Jailbreak Attack via Implicit Reference

T. Wu, L. Mei, R. Yuan, L. Li, W. Xue, Y. Guo. arXiv preprint arXiv:2410.03857, 2024.
While recent advancements in large language model (LLM) alignment have enabled the effective identification of malicious objectives involving scene nesting and keyword rewriting, our study reveals that these methods remain inadequate at detecting malicious objectives expressed through context within nested harmless objectives. This study identifies a previously overlooked vulnerability, which we term Attack via Implicit Reference (AIR). AIR decomposes a malicious objective into permissible objectives and links them through implicit references within the context. This method employs multiple related harmless objectives to generate malicious content without triggering refusal responses, thereby effectively bypassing existing detection techniques.

Our experiments demonstrate AIR's effectiveness across state-of-the-art LLMs, achieving an attack success rate (ASR) exceeding 90% on most models, including GPT-4o, Claude-3.5-Sonnet, and Qwen-2-72B. Notably, we observe an inverse scaling phenomenon, where larger models are more vulnerable to this attack method. These findings underscore the urgent need for defense mechanisms capable of understanding and preventing contextual attacks. Furthermore, we introduce a cross-model attack strategy that leverages less secure models to generate malicious contexts, thereby further increasing the ASR when targeting other models.

Our code and jailbreak artifacts can be found at https://github.com/Lucas-TY/llm_Implicit_reference.
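The abstract's headline metric is the attack success rate (ASR): the fraction of attack attempts on which the target model produced the disallowed content rather than refusing. As a quick illustration (this helper and its data are hypothetical, not code from the paper's repository), ASR can be computed as:

```python
def attack_success_rate(outcomes):
    """Fraction of attack attempts that bypassed the model's refusal.

    `outcomes` is a list of booleans, one per attempted prompt:
    True if the jailbreak attempt succeeded, False if the model refused.
    """
    if not outcomes:
        return 0.0
    return sum(outcomes) / len(outcomes)

# Illustrative run: 9 of 10 hypothetical attempts succeed -> ASR = 0.9
print(attack_success_rate([True] * 9 + [False]))  # 0.9
```

In practice, judging whether an attempt "succeeded" typically requires a human or LLM-based judge on the model's output; the boolean list above stands in for those judgments.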