VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees
Department of Electrical and Information Technology, Lund University, Lund, Sweden.
Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-0440-4753
Department of Electrical and Information Technology, Lund University, Lund, Sweden.
2024 (English). In: International Conference on Machine Learning, JMLR - Journal of Machine Learning Research, 2024, Vol. 235. Conference paper, Published paper (Refereed)
Abstract [en]

Machine learning techniques often lack formal correctness guarantees, as evidenced by the widespread adversarial examples that plague most deep-learning applications. This lack of formal guarantees has motivated several research efforts aimed at verifying Deep Neural Networks (DNNs), with a particular focus on safety-critical applications. However, formal verification techniques still face major scalability and precision challenges. The over-approximation introduced during formal verification to tackle the scalability challenge often results in inconclusive analysis. To address this challenge, we propose a novel framework to generate Verification-Friendly Neural Networks (VNNs). We present a post-training optimization framework that balances preserving prediction performance with verification-friendliness. Our framework yields VNNs that are comparable to the original DNNs in terms of prediction performance, while being amenable to formal verification techniques. This essentially enables us to establish robustness for more VNNs than their DNN counterparts, in a time-efficient manner.
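The over-approximation mentioned in the abstract can be illustrated with interval bound propagation (IBP), a standard sound-but-incomplete technique used by many DNN verifiers. The sketch below is not the paper's VNN framework; it is a minimal, self-contained example (the two-layer ReLU network, the input point x, and the radius eps are all hypothetical) showing how loose interval bounds can leave a robustness query inconclusive even when the network may in fact be robust.

import numpy as np

# Hypothetical two-layer ReLU network with random weights (illustration only).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

def affine_bounds(lo, hi, W, b):
    # Soundly propagate the box [lo, hi] through y = W x + b.
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

# Robustness query: is class 0 predicted for every input within an
# L-infinity ball of radius eps around x?
x, eps = np.array([0.5, -0.2]), 0.1
lo, hi = x - eps, x + eps

lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
lo, hi = affine_bounds(lo, hi, W2, b2)

# Sound but incomplete: robustness is verified only if the worst-case margin
# of class 0 over class 1 is positive; otherwise the analysis is inconclusive.
margin = lo[0] - hi[1]
print("verified robust" if margin > 0 else "inconclusive")

Tighter output bounds shrink the gap between lo and hi, so networks that are easier to bound (the verification-friendliness targeted by the paper) turn more of these inconclusive cases into verified ones.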

Place, publisher, year, edition, pages
JMLR - Journal of Machine Learning Research, 2024. Vol. 235
Series
Proceedings of Machine Learning Research, ISSN 2640-3498
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-207756
ISI: 001347135502038
OAI: oai:DiVA.org:liu-207756
DiVA, id: diva2:1899771
Conference
41st International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024
Note

Funding agencies: Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; European Union (EU) Interreg Program

Available from: 2024-09-20 Created: 2024-09-20 Last updated: 2025-03-20

Open Access in DiVA

No full text in DiVA

Authority records

Rezine, Ahmed
