Research Article
DOI: 10.1145/3584376.3584606

Target adversarial sample generation for malicious traffic classification model

Published: 19 April 2023

Abstract

Machine learning techniques are vulnerable to adversarial samples. Numerous adversarial attack algorithms against malicious traffic classification already exist, but the vast majority are designed for one specific classification model, transfer poorly to other models, and are untargeted: an untargeted adversarial sample only needs to make the classification model err, whereas a targeted attack has a clear direction and can mislead the model into classifying the sample as a category designated by the attacker. In real scenarios a targeted attack therefore often poses the greater threat and has more practical significance. Based on the Generative Adversarial Network (GAN), this paper proposes a method of generating targeted black-box adversarial samples with norm-limited perturbations. A shadow classifier that fits multiple network traffic classification models serves as the discriminator and is used to train the generator. The generator produces a specific perturbation that does not alter the attack characteristics and adds it to a given type of original malicious traffic, yielding adversarial samples that can attack multiple malicious traffic classifiers at once and be misclassified by them into the attacker-specified category. The resulting adversarial samples are strongly targeted and transfer well.
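The pipeline outlined in the abstract, training a perturbation generator against a shadow classifier so that perturbed malicious flows are pushed toward an attacker-chosen class, can be illustrated with a short PyTorch sketch. This is a minimal illustration under assumed details, not the paper's actual implementation: the feature dimension, network architectures, epsilon bound, target class, and loss weights below are hypothetical, and the paper's step of fitting the shadow classifier to several black-box traffic models is replaced here by an untrained stand-in.

```python
import torch
import torch.nn as nn

FEATURE_DIM = 64    # assumed length of a flow feature vector
TARGET_CLASS = 0    # attacker-chosen label, e.g. "benign"
EPSILON = 0.1       # assumed L-infinity bound on the perturbation

class Generator(nn.Module):
    """Maps a malicious flow's features to a norm-limited perturbation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 128), nn.ReLU(),
            nn.Linear(128, FEATURE_DIM), nn.Tanh(),   # output in [-1, 1]
        )

    def forward(self, x):
        return EPSILON * self.net(x)   # keep the perturbation inside the L-inf ball

class ShadowClassifier(nn.Module):
    """Stand-in for the substitute model fitted to the black-box targets."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def generator_step(gen, shadow, x_malicious, opt):
    """One update: push shadow predictions on perturbed traffic toward TARGET_CLASS."""
    delta = gen(x_malicious)
    x_adv = torch.clamp(x_malicious + delta, 0.0, 1.0)   # stay in a valid feature range
    logits = shadow(x_adv)
    target = torch.full((x_malicious.size(0),), TARGET_CLASS, dtype=torch.long)
    # Targeted misclassification loss plus a penalty keeping the perturbation small.
    loss = nn.functional.cross_entropy(logits, target) + 0.1 * delta.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

gen, shadow = Generator(), ShadowClassifier()
for p in shadow.parameters():          # the shadow model stays fixed while the generator trains
    p.requires_grad_(False)
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
x = torch.rand(32, FEATURE_DIM)        # placeholder batch of malicious-flow features
for _ in range(100):
    generator_step(gen, shadow, x, opt)
```

Scaling a Tanh output by epsilon is one common way to enforce a norm limit on the perturbation; the paper's exact constraint, and its mechanism for leaving the attack characteristics intact, may differ from this sketch.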



Published In

RICAI '22: Proceedings of the 2022 4th International Conference on Robotics, Intelligent Control and Artificial Intelligence
December 2022
1396 pages
ISBN: 9781450398343
DOI: 10.1145/3584376

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adversarial sample
  2. black box attack
  3. target attack
  4. traffic classification
  5. transferability

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

RICAI 2022

Acceptance Rates

Overall Acceptance Rate 140 of 294 submissions, 48%
