Composite Adversarial Attacks
DOI:
https://doi.org/10.1609/aaai.v35i10.17075
Keywords:
Adversarial Learning & Robustness, Adversarial Attacks & Robustness
Abstract
Adversarial attack is a technique for deceiving Machine Learning (ML) models, and it provides a way to evaluate adversarial robustness. In practice, attack algorithms are selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching the best combination of attack algorithms and their hyper-parameters from a candidate pool of 32 base attackers. We design a search space in which an attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successors. The multi-objective NSGA-II genetic algorithm is adopted to find the strongest attack policy with minimum complexity. Experimental results show that CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (6x faster than AutoAttack), and achieves a new state of the art on l-inf, l2 and unrestricted adversarial attacks.
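To illustrate the chaining idea described in the abstract, the sketch below shows a composite attack policy as an ordered list of (attacker, hyper-parameters) pairs, where each attacker is warm-started from the previous attacker's output. This is a minimal, hypothetical illustration only, not the authors' implementation; the attacker functions, their names, and the toy gradient are placeholder assumptions.

```python
# Illustrative sketch only (not the authors' code): a composite attack policy is an
# ordered sequence of base attackers; each one is initialized from the previous
# attacker's output. All function names and parameters here are hypothetical.
import numpy as np

def fgsm_like(x, grad_fn, epsilon):
    """Single-step sign-gradient perturbation (stand-in for one base attacker)."""
    return np.clip(x + epsilon * np.sign(grad_fn(x)), 0.0, 1.0)

def pgd_like(x, grad_fn, epsilon, steps, step_size):
    """Iterative projected perturbation (stand-in for another base attacker)."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + step_size * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project back to the l_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # keep pixels in valid range
    return x_adv

def run_composite_attack(x, grad_fn, policy):
    """Chain attackers: the output of one attacker initializes the next."""
    x_adv = x
    for attacker, params in policy:
        x_adv = attacker(x_adv, grad_fn, **params)
    return x_adv

# Example policy: two chained base attackers with (searched) hyper-parameters.
policy = [
    (fgsm_like, {"epsilon": 8 / 255}),
    (pgd_like, {"epsilon": 8 / 255, "steps": 10, "step_size": 2 / 255}),
]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random((3, 32, 32))                        # toy input in [0, 1]
    toy_grad = lambda z: rng.standard_normal(z.shape)  # placeholder for a model's loss gradient
    x_adv = run_composite_attack(x, toy_grad, policy)
    print(float(np.abs(x_adv - x).max()))
```

In CAA, the ordering of attackers and their hyper-parameters in such a sequence are not hand-tuned but searched automatically; the paper uses NSGA-II to trade off attack strength against policy complexity.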
Published
2021-05-18
How to Cite
Mao, X., Chen, Y., Wang, S., Su, H., He, Y., & Xue, H. (2021). Composite Adversarial Attacks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8884-8892. https://doi.org/10.1609/aaai.v35i10.17075
Issue
Section
AAAI Technical Track on Machine Learning III