A Self-supervised Approach for Adversarial Robustness
Inception Institute of Artificial Intelligence, UAE; Data61-CSIRO, Australia; Australian National University, Australia.
Inception Institute of Artificial Intelligence, UAE.
Inception Institute of Artificial Intelligence, UAE.
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Inception Institute of Artificial Intelligence, UAE.
2020 (English). In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2020, p. 259-268. Conference paper, Published paper (Refereed).
Abstract [en]

Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems, e.g., for classification, segmentation and object detection. The vulnerability of DNNs to such attacks can prove a major roadblock to their real-world deployment. The transferability of adversarial examples demands generalizable defenses that can provide cross-task protection. Adversarial training, which enhances robustness by modifying the target model's parameters, lacks such generalizability. On the other hand, different input-processing-based defenses fall short in the face of continuously evolving attacks. In this paper, we take the first step to combine the benefits of both approaches and propose a self-supervised adversarial training mechanism in the input space. By design, our defense is a generalizable approach and provides significant robustness against unseen adversarial attacks (e.g., by reducing the success rate of the translation-invariant ensemble attack from 82.6% to 31.9% in comparison to the previous state of the art). It can be deployed as a plug-and-play solution to protect a variety of vision systems, as we demonstrate for the case of classification, segmentation and detection.
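Read as an algorithm, the abstract suggests a two-part recipe: (1) craft adversarial examples without labels by maximizing feature distortion (the self-supervised signal), and (2) train a purifier network in the input space to undo them, so the defense sits in front of any downstream task model. Below is a minimal, hypothetical PyTorch-style sketch of that recipe; the names (self_supervised_attack, purifier, feat) and the exact losses are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def self_supervised_attack(x, feat, eps=8/255, steps=10, alpha=2/255):
    # Craft a perturbation WITHOUT labels by maximizing the feature-space
    # distance between the perturbed and the clean input (self-supervision).
    clean_feat = feat(x).detach()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(feat(x_adv), clean_feat)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend: distort features
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto L_inf ball
            x_adv = x_adv.clamp(0, 1)                  # keep a valid image
    return x_adv.detach()

def purifier_step(purifier, feat, x, optimizer):
    # One training step of the input-space defense: the purifier learns to
    # map self-supervised adversarial inputs back to their clean versions,
    # in both pixel and feature space. `feat` is assumed frozen here.
    x_adv = self_supervised_attack(x, feat)
    x_pur = purifier(x_adv)
    loss = F.l1_loss(x_pur, x) + F.mse_loss(feat(x_pur), feat(x).detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At test time the purifier is plug-and-play in front of any vision model:
#   logits = classifier(purifier(x))   # same idea for detectors/segmenters

Because the attack objective uses no class labels and the purifier operates purely on inputs, the trained defense is model- and task-agnostic, which is what allows the cross-task deployment the abstract describes.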

Place, publisher, year, edition, pages
IEEE, 2020. p. 259-268
Series
Computer Society Conference on Computer Vision and Pattern Recognition, ISSN 2575-7075
Keywords [en]
Perturbation methods; Task analysis; Distortion; Training; Robustness; Feature extraction; Neural networks
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-168130
DOI: 10.1109/CVPR42600.2020.00034
ISI: 000620679500027
ISBN: 978-1-7281-7168-5 (electronic)
OAI: oai:DiVA.org:liu-168130
DiVA id: diva2:1458576
Conference
Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13-19 June 2020
Available from: 2020-08-17. Created: 2020-08-17. Last updated: 2021-03-15.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Khan, Fahad Shahbaz
