TY - GEN
T1 - Probabilistic Verification of Neural Networks with Sampling-Based Probability Box Propagation
AU - Chwiałkowski, Marcel
AU - Goubault, Eric
AU - Putot, Sylvie
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
PY - 2026/1/1
Y1 - 2026/1/1
N2 - In probabilistic neural network verification, a well-chosen representation of input uncertainty ensures that theoretical analyses accurately reflect real input perturbations. A recent approach, introduced in [10] and based on probability boxes (p-boxes) [9], unifies set-based and probabilistic information about the inputs. The method yields guaranteed probabilistic bounds on property satisfaction for feedforward ReLU networks. However, it suffers from conservatism because it relies on set-based propagation methods. In this work, we investigate how to sample from p-boxes without loss of information. Building on this, we develop a sampling-based approach for propagating p-boxes through feedforward ReLU networks. We prove that, with dense enough coverings of the input p-boxes, the propagated samples accurately represent the output uncertainty, and we provide error bounds. Additionally, we show how to create coverings for arbitrary p-boxes with various distributions. On the ACAS Xu benchmark, we demonstrate that our approach is applicable in practice, both as a standalone verifier and as a way to partially assess the conservatism of the set-based approach of [10].
AB - In probabilistic neural network verification, a well-chosen representation of input uncertainty ensures that theoretical analyses accurately reflect real input perturbations. A recent approach, introduced in [10] and based on probability boxes (p-boxes) [9], unifies set-based and probabilistic information about the inputs. The method yields guaranteed probabilistic bounds on property satisfaction for feedforward ReLU networks. However, it suffers from conservatism because it relies on set-based propagation methods. In this work, we investigate how to sample from p-boxes without loss of information. Building on this, we develop a sampling-based approach for propagating p-boxes through feedforward ReLU networks. We prove that, with dense enough coverings of the input p-boxes, the propagated samples accurately represent the output uncertainty, and we provide error bounds. Additionally, we show how to create coverings for arbitrary p-boxes with various distributions. On the ACAS Xu benchmark, we demonstrate that our approach is applicable in practice, both as a standalone verifier and as a way to partially assess the conservatism of the set-based approach of [10].
KW - Neural network verification
KW - Probability boxes
UR - https://www.scopus.com/pages/publications/105021385594
U2 - 10.1007/978-3-031-99991-8_6
DO - 10.1007/978-3-031-99991-8_6
M3 - Conference contribution
AN - SCOPUS:105021385594
SN - 9783031999901
T3 - Lecture Notes in Computer Science
SP - 115
EP - 135
BT - AI Verification - 2nd International Symposium, SAIV 2025, Proceedings
A2 - Giacobbe, Mirco
A2 - Lukina, Anna
PB - Springer Science and Business Media Deutschland GmbH
T2 - 2nd International Symposium on AI Verification, SAIV 2025
Y2 - 21 July 2025 through 22 July 2025
ER -