On the Formal Evaluation of the Robustness of Neural Networks and Its Pivotal Relevance for AI-Based Safety-Critical Domains

Research output: Contribution to journal › Article › peer-review

Abstract

Neural networks play a crucial role in critical tasks, where erroneous outputs can have severe consequences. Traditionally, the validation of neural networks has focused on evaluating their performance across a large set of input points to ensure desired outputs. However, due to the virtually infinite cardinality of the input space, it is impractical to exhaustively check all possible inputs. Networks exhibiting strong performance on extensive input samples may still fail to generalize correctly in novel scenarios and remain vulnerable to adversarial attacks. This paper presents the general pipeline of neural network robustness and provides an overview of the different domains that work together to achieve robustness guarantees. These domains include evaluating robustness against adversarial attacks, evaluating robustness formally, and applying defense techniques to enhance robustness when the model is compromised.
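To make the idea of a formal robustness guarantee concrete, the following is a minimal sketch of interval bound propagation, one standard technique for certifying that a network's prediction cannot change for any input inside an L-infinity ball. The toy two-layer ReLU network, its weights, and the perturbation radius are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Propagate box bounds [lo, hi] through an affine layer y = W x + b."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# Hypothetical toy network: 2 inputs -> 3 hidden units (ReLU) -> 2 logits.
W1 = np.array([[1.0, -0.5], [0.3, 0.8], [-0.7, 0.2]])
b1 = np.array([0.1, -0.2, 0.0])
W2 = np.array([[0.6, -0.4, 0.5], [-0.3, 0.7, -0.2]])
b2 = np.array([0.0, 0.1])

x = np.array([0.5, -0.1])   # nominal input point
eps = 0.05                  # assumed L-infinity perturbation radius

# Propagate the input box through both layers; ReLU is monotone,
# so applying it elementwise to the bounds keeps them sound.
lo1, hi1 = interval_bounds(W1, b1, x - eps, x + eps)
lo1, hi1 = np.maximum(lo1, 0), np.maximum(hi1, 0)
lo2, hi2 = interval_bounds(W2, b2, lo1, hi1)

# Certified robust if the predicted logit's lower bound exceeds
# the other logit's upper bound for every input in the ball.
pred = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0) + b2))
other = 1 - pred
robust = bool(lo2[pred] > hi2[other])
print(pred, robust)  # → 0 True
```

Unlike testing on sampled inputs, a positive certificate here covers the entire perturbation set at once, which is the distinction the abstract draws between empirical evaluation and formal evaluation.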

Original language: English
Article number: 100018
Journal: International Journal of Network Dynamics and Intelligence
Volume: 2
Issue number: 4
DOIs
Publication status: Published - 1 Jan 2023

Keywords

  • adversarial attacks
  • defense techniques
  • formal robustness guarantees
  • neural network verification

