Repairing Adversarial Texts Through Perturbation
Abstract
Published In
Publisher: Springer-Verlag, Berlin, Heidelberg