TY - JOUR
T1 - Laser Guard: Efficiently Detecting Laser-Based Physical Adversarial Attacks in Autonomous Driving
T2 - IEEE Access
AU - Chi, Lijun
AU - Msahli, Mounira
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - The fast development of deep learning (DL) enables even resource-constrained devices to tackle complex artificial intelligence (AI) tasks, especially those related to environment perception in autonomous driving systems (ADS). However, AI models deployed in the real world are exposed to the threat of adversarial examples (AEs). One specific type of physical attack uses laser beams or spots planted in images, rather than crafted pixel-level perturbations, to manipulate the victim deep neural network's (DNN) predictions. These attacks easily mislead traffic sign recognition and object detection in ADS. Laser-based adversarial attacks are cognitively stealthy but visually conspicuous, invalidating previous defenses designed for digital attacks. This study considers two state-of-the-art (SOTA) laser-based attacks and establishes a benchmark comprising thousands of AEs. Such AEs have distinct pattern features, significant occupation, high contrast, and low variance. Based on these observations, a lightweight detection framework, Laser Guard, is proposed. Specifically, preprocessing methods are used to approximate the laser-perturbed areas, followed by a statistics-based strategy to determine abnormalities in the given samples. The framework can be applied in a plug-and-play manner with DNNs in intelligent vehicles. Extensive experimental results show that it effectively filters out about 70-75% of laser-based street-sign AEs and extends well to other objects, filtering out 80% of them. Detection latency is marginal: the average detection time is approximately 24 ms for laser spots and around 57 ms for laser beams.
AB - The fast development of deep learning (DL) enables even resource-constrained devices to tackle complex artificial intelligence (AI) tasks, especially those related to environment perception in autonomous driving systems (ADS). However, AI models deployed in the real world are exposed to the threat of adversarial examples (AEs). One specific type of physical attack uses laser beams or spots planted in images, rather than crafted pixel-level perturbations, to manipulate the victim deep neural network's (DNN) predictions. These attacks easily mislead traffic sign recognition and object detection in ADS. Laser-based adversarial attacks are cognitively stealthy but visually conspicuous, invalidating previous defenses designed for digital attacks. This study considers two state-of-the-art (SOTA) laser-based attacks and establishes a benchmark comprising thousands of AEs. Such AEs have distinct pattern features, significant occupation, high contrast, and low variance. Based on these observations, a lightweight detection framework, Laser Guard, is proposed. Specifically, preprocessing methods are used to approximate the laser-perturbed areas, followed by a statistics-based strategy to determine abnormalities in the given samples. The framework can be applied in a plug-and-play manner with DNNs in intelligent vehicles. Extensive experimental results show that it effectively filters out about 70-75% of laser-based street-sign AEs and extends well to other objects, filtering out 80% of them. Detection latency is marginal: the average detection time is approximately 24 ms for laser spots and around 57 ms for laser beams.
KW - Deep learning
KW - adversarial attacks
KW - detection-based defense
KW - laser-based attacks
KW - preprocessing
UR - https://www.scopus.com/pages/publications/85217929860
U2 - 10.1109/ACCESS.2025.3540653
DO - 10.1109/ACCESS.2025.3540653
M3 - Article
AN - SCOPUS:85217929860
SN - 2169-3536
VL - 13
SP - 35219
EP - 35229
JO - IEEE Access
JF - IEEE Access
ER -