2023 (English) In: ICASSP 2023: 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2023. Conference paper, Published paper (Refereed)
Abstract [en]
State-of-the-art machine learning techniques come with limited formal correctness guarantees, if any at all, as demonstrated by adversarial examples in the deep learning domain. To address this challenge, we propose a scalable robustness verification framework for Deep Neural Networks (DNNs). The framework relies on Linear Programming (LP) engines and builds on decades of advances in the field to analyze convex approximations of the original network. The key insight is the on-demand, incremental refinement of these convex approximations. This refinement can be parallelized, making the framework even more scalable. We have implemented a prototype tool and used it to verify the robustness of a large number of DNNs for epileptic seizure detection, comparing the results with those obtained by two state-of-the-art DNN verification tools. We show that our framework is consistently more precise than the over-approximation-based tool ERAN and more scalable than the SMT-based tool Reluplex.
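To illustrate the kind of convex over-approximation the abstract refers to, the sketch below verifies robustness of a tiny ReLU network via interval bound propagation, a simple box-shaped convex relaxation. This is an illustrative stand-in, not the paper's method: the paper refines tighter LP-based relaxations on demand, whereas this sketch uses fixed interval bounds. All function and variable names here are hypothetical.

```python
import numpy as np

def interval_bounds(W, b, l, u):
    """Propagate box bounds [l, u] through an affine layer y = W x + b."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    lo = Wp @ l + Wn @ u + b  # worst case: low inputs on positive weights
    hi = Wp @ u + Wn @ l + b
    return lo, hi

def verify_robust(weights, biases, x, eps, target):
    """Over-approximate a ReLU net on the eps-box around x.

    Returns True only if the target logit provably dominates every
    other logit on the whole box (sound but incomplete: a False
    answer does not prove the existence of an adversarial example).
    """
    l, u = x - eps, x + eps
    for W, b in zip(weights[:-1], biases[:-1]):
        l, u = interval_bounds(W, b, l, u)
        l, u = np.maximum(l, 0), np.maximum(u, 0)  # ReLU is monotone
    l, u = interval_bounds(weights[-1], biases[-1], l, u)
    return all(l[target] > u[j] for j in range(len(l)) if j != target)
```

When such a coarse approximation fails to prove robustness, a framework like the one described above would refine the relaxation (e.g., with per-neuron LP constraints) rather than immediately report a potential violation.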
Place, publisher, year, edition, pages
IEEE, 2023
Series
International Conference on Acoustics, Speech, and Signal Processing (ICASSP), ISSN 1520-6149, E-ISSN 2379-190X
Keywords
DNNs, verification, approximation, refinement, linear programming, robustness
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-207758 (URN)
10.1109/ICASSP49357.2023.10097028 (DOI)
001630046900428 ()
2-s2.0-86000388130 (Scopus ID)
978-1-7281-6327-7 (ISBN)
978-1-7281-6328-4 (ISBN)
Conference
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Note
Funding Agencies|Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; European Union (EU) Interreg Program
Available from: 2024-09-20 Created: 2024-09-20 Last updated: 2026-02-05