- Article, August 2024
Generating Adversarial Texts by the Universal Tail Word Addition Attack
Abstract: Deep neural networks (DNNs) are vulnerable to adversarial examples, which can mislead models without affecting normal human judgment. In the image field, such adversarial examples involve small perturbations that humans rarely notice. However, ...
- Research article, January 2024
WordIllusion: An adversarial text generation algorithm based on human cognitive system
Cognitive Systems Research (COGSR), Volume 83, Issue C. https://doi.org/10.1016/j.cogsys.2023.101179
Abstract: Although natural language processing technology has shown strong performance in many tasks, it is very vulnerable to adversarial examples, i.e., sentences with small perturbations that can fool AI models. Current adversarial texts for English are ...
- Article, July 2022
Repairing Adversarial Texts Through Perturbation
Theoretical Aspects of Software Engineering, Pages 29–48. https://doi.org/10.1007/978-3-031-10363-6_3
Abstract: It is known that neural networks are subject to attacks through adversarial perturbations. Worse yet, such attacks are impossible to eliminate, i.e., the adversarial perturbation is still possible after applying mitigation methods such as ...
- Research article, September 2021
Adversarial Text Generation for Personality Privacy Protection
DSIT 2021: 2021 4th International Conference on Data Science and Information Technology, Pages 159–165. https://doi.org/10.1145/3478905.3478937
Abstract: Protecting the user's personality privacy can effectively interfere with or deceive the attacker's personality analysis, avoid the attacker's exploitation of personality vulnerabilities, and reduce the success rate of social engineering attacks. However, the ...
- Article, October 2020
Adversarial Text Generation via Probability Determined Word Saliency
Machine Learning for Cyber Security, Pages 562–571. https://doi.org/10.1007/978-3-030-62460-6_50
Abstract: Deep learning (DL) technology has been widely deployed in many fields and has achieved great success, but it is not absolutely safe and reliable. It has been shown that research on adversarial attacks can reveal the vulnerability of deep neural ...
- Article, September 2020
Generating Adversarial Texts for Recurrent Neural Networks
Artificial Neural Networks and Machine Learning – ICANN 2020, Pages 39–51. https://doi.org/10.1007/978-3-030-61609-0_4
Abstract: Adversarial examples have received increasing attention recently due to their significant value in evaluating and improving the robustness of deep neural networks. Existing adversarial attack algorithms have achieved good results for most images. ...