Apr 5, 2022 · This paper proposes an approach to correct adversarial samples for text classification tasks. Our proposed approach combines grammar correction and spelling ...
There are few defenses to strengthen model predictions against adversarial attacks; popular among them are adversarial training and spelling correction.
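The spelling-correction defense mentioned above can be sketched as a preprocessing step that normalizes perturbed tokens before they reach the classifier. A minimal illustration, assuming a small hypothetical vocabulary (a real defense would use the model's full training vocabulary and a stronger corrector):

```python
import difflib

# Hypothetical toy vocabulary; stands in for the classifier's training vocabulary.
VOCAB = ["the", "movie", "was", "terrible", "great", "acting", "plot"]

def correct_token(token, vocab=VOCAB, cutoff=0.75):
    """Map a (possibly adversarially perturbed) token to its closest vocabulary word."""
    matches = difflib.get_close_matches(token.lower(), vocab, n=1, cutoff=cutoff)
    return matches[0] if matches else token

def correct_text(text):
    """Spell-correct every token before the text is passed to the classifier."""
    return " ".join(correct_token(t) for t in text.split())

# A character-level perturbation ("terrib1e") is undone before classification.
print(correct_text("the movie was terrib1e"))  # → "the movie was terrible"
```

The idea is that character-level adversarial edits rarely produce valid words, so snapping each token back to the nearest in-vocabulary word removes the perturbation while leaving clean inputs unchanged.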
The experimental results show that the proposed approach can effectively counter adversarial ...
This approach ensures that the model's outputs remain stable and accurate, even when the input text is subjected to small adversarial perturbations. By doing so ...
Mar 17, 2024 · This study compresses the generative pre-trained transformer (GPT) by 65%, saving time and memory without causing performance loss.
Aug 6, 2024 · This RQ aims to investigate the impact of the fine-tuning process on the adversarial robustness of text classification models on HF, ...
Nov 30, 2024 · The experimental results show that this method has a high success rate and strong concealment, effectively reducing the number of attack queries ...
Jan 15, 2024 · In this paper, we propose an effective framework for enhancing the robustness of DL models against adversarial attacks.
In this article, we proposed a method to protect against gradient-based adversarial attacks. Our method works by iteratively compressing the image with JPEG ...
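The iterative JPEG-compression defense described above can be sketched as an input-purification step: repeatedly re-encoding the image at lossy quality discards the high-frequency noise that gradient-based attacks rely on. A minimal sketch using Pillow (the round count and quality setting here are illustrative assumptions, not the article's exact parameters):

```python
import io

from PIL import Image

def jpeg_purify(img, rounds=3, quality=75):
    """Iteratively JPEG-compress an image to strip high-frequency adversarial noise."""
    for _ in range(rounds):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)  # lossy re-encode
        buf.seek(0)
        img = Image.open(buf).convert("RGB")  # decode back to pixels
    return img

# Hypothetical usage: purify an input before it reaches the image classifier.
clean = jpeg_purify(Image.new("RGB", (32, 32), color=(200, 30, 30)))
```

Because the attack's perturbation is crafted against the exact pixel values, the quantization introduced by each compression round tends to destroy it while preserving the image's semantic content.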
To defend against such attacks, numerous techniques have been proposed to improve the robustness of language models, especially for text classification models.