Adversarial attack and defense in AI-powered intrusion detection

Tuyen T. Nguyen, Uyen H. Tran, Hoa N. Nguyen
Author affiliations

All authors: VNU University of Engineering and Technology, 144 Xuan Thuy, Cau Giay Ward, Ha Noi, Viet Nam

DOI:

https://doi.org/10.15625/1813-9663/22884

Keywords:

Adversarial machine learning, intrusion detection systems, adversarial attack, adversarial defense.

Abstract

The increasing sophistication of cyberattacks, which caused an estimated $9.22 trillion in global damages in 2024, underscores the critical importance of robust Intrusion Detection Systems (IDS). AI-driven IDS frameworks such as APELID achieve impressive detection accuracy by leveraging novel machine learning techniques. However, these systems remain vulnerable to adversarial machine learning (AML) attacks, which craft deceptive inputs to bypass detection mechanisms. In this paper, we propose APELID+, an enhanced IDS framework that integrates adversarial training and feature squeezing to counter AML threats effectively. We systematically evaluate APELID’s vulnerabilities using comprehensive adversarial attack strategies, including both white-box (FGSM, JSMA, PGD, DeepFool, CW) and black-box (ZOO, HSJA) attacks. Experimental results on the CSE-CIC-IDS2018 dataset reveal a significant reduction in APELID’s accuracy, from 99.7% to as low as 1.14% under FGSM attacks. The enhanced APELID+ maintains robust performance, with 98.73% accuracy under combined adversarial conditions, surpassing state-of-the-art methods such as Apollon and RAIDS.
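To illustrate the attack and defense families the abstract names, the sketch below shows FGSM (the attack that degrades APELID most) and bit-depth feature squeezing on a toy logistic classifier. This is a minimal, hedged illustration, not the APELID+ implementation: the weights `w`, bias `b`, feature vector `x`, and the binary classifier are invented for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic classifier.

    For binary cross-entropy with prediction p = sigmoid(w.x + b),
    the gradient of the loss w.r.t. the input is (p - y) * w,
    so the adversarial example is x + eps * sign((p - y) * w).
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

def squeeze_bit_depth(x, bits):
    """Feature squeezing: quantize each feature to 2**bits levels,
    collapsing small adversarial perturbations back to a coarse grid."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

# Toy "IDS" feature vector labeled as an attack (y = 1).
w = np.array([2.0, -1.0, 0.5])
b = -0.5
x = np.array([1.0, 0.2, 0.8])

x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.1)
# The perturbation lowers the classifier's attack score ...
assert sigmoid(w @ x_adv + b) < sigmoid(w @ x + b)
# ... while squeezing snaps the perturbed features back to a coarse grid.
x_squeezed = squeeze_bit_depth(x_adv, bits=1)
```

In the paper's setting the gradient comes from the trained detection model rather than a closed-form logistic layer, and squeezing is combined with adversarial training, but the mechanics are the same: the attacker nudges each feature in the sign of the loss gradient, and the defender limits the resolution at which those nudges survive.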


Published

11-11-2025

How to Cite

[1]T. T. Nguyen, U. H. Tran, and H. N. Nguyen, “Adversarial attack and defense in AI-powered intrusion detection”, J. Comput. Sci. Cybern., Nov. 2025.

Section

Articles
