Abstract
Deep learning (DL) has gained popularity in network intrusion detection, due to its strong capability of recognizing subtle differences between normal and malicious network activities. Although a variety of methods have been designed to leverage DL models for security protection, whether these systems are vulnerable to adversarial examples (AEs) is unknown. In this article, we design a novel adversarial attack against DL-based network intrusion detection systems (NIDSs) in the Internet-of-Things environment, with only black-box access to the DL model in such NIDSs. We introduce two techniques: 1) model extraction is adopted to replicate the black-box model with a small amount of training data and 2) a saliency map is then used to disclose the impact of each packet attribute on the detection results and to identify the most critical features. This enables us to efficiently generate AEs using conventional methods. With these techniques, we successfully compromise one state-of-the-art NIDS, Kitsune: the adversary only needs to modify less than 0.005% of bytes in the malicious packets to achieve an average 94.31% attack success rate.
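The three-stage pipeline the abstract describes — extract a surrogate of the black-box detector, read feature importance off a saliency map, then perturb only the most critical features — can be sketched as follows. This is a minimal toy illustration, not the paper's actual method or the Kitsune model: the black-box detector, the logistic-regression surrogate, and the four-feature inputs are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box NIDS: flags a packet when a weighted sum of its
# features exceeds a threshold (a toy stand-in for the real DL model).
true_w = np.array([3.0, 0.1, -2.0, 0.05])
def black_box(X):
    return (X @ true_w > 0).astype(float)

# Stage 1: model extraction -- fit a logistic-regression surrogate on a
# small set of query/response pairs collected from the black box.
X = rng.normal(size=(500, 4))
y = black_box(X)
w = np.zeros(4)
for _ in range(2000):                        # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(X)

# Stage 2: saliency map -- for this surrogate the gradient of the score
# w.r.t. each input feature is proportional to |w|, so the most critical
# features are those with the largest |w|.
saliency = np.abs(w)
critical = np.argsort(saliency)[::-1]

# Stage 3: perturb only the single most critical feature of a malicious
# sample until the black box no longer flags it (a minimal modification).
x_adv = np.array([1.0, 0.0, 0.0, 0.0])       # initially flagged
step = -np.sign(w[critical[0]]) * 0.1
while black_box(x_adv[None])[0] == 1.0:
    x_adv[critical[0]] += step

print("most critical feature:", critical[0])
print("evades detection:", black_box(x_adv[None])[0] == 0.0)
```

The key economy mirrors the paper's claim: because the saliency map concentrates the perturbation on the few features that matter most, only a tiny fraction of the input needs to change for the sample to evade detection.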
| Field | Value |
|---|---|
| Original language | English |
| Article number | 9311132 |
| Pages (from-to) | 10327-10335 |
| Number of pages | 9 |
| Journal | IEEE Internet of Things Journal |
| Volume | 8 |
| Issue number | 13 |
| DOIs | |
| Publication status | Published - 1 Jul 2021 |
Keywords
- Adversarial examples (AEs)
- Deep learning (DL)
- Internet of Things (IoT)
- Network intrusion detection
Fingerprint
Research topics of 'Adversarial Attacks against Network Intrusion Detection in IoT Systems'.