Stylized Adversarial Defense
Mohamed bin Zayed Univ Artificial Intelligence, U Arab Emirates; Australian Natl Univ, Australia.
Mohamed bin Zayed Univ Artificial Intelligence, U Arab Emirates; Australian Natl Univ, Australia.
Monash Univ, Australia.
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Mohamed bin Zayed Univ Artificial Intelligence, U Arab Emirates.
2023 (English). In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 45, no. 5, p. 6403-6414. Article in journal (Refereed). Published.
Abstract [en]

Deep Convolutional Neural Networks (CNNs) can easily be fooled by subtle, imperceptible changes to the input images. To address this vulnerability, adversarial training creates perturbation patterns and includes them in the training set to robustify the model. In contrast to existing adversarial training methods that only use class-boundary information (e.g., via a cross-entropy loss), we propose to exploit additional information from the feature space to craft stronger adversaries that are in turn used to learn a robust model. Specifically, we use the style and content information of a target sample from another class, alongside its class-boundary information, to create adversarial perturbations. We apply our proposed multi-task objective in a deeply supervised manner, extracting multi-scale feature knowledge to create maximally separating adversaries. Subsequently, we propose a max-margin adversarial training approach that minimizes the distance between the source image and its adversary and maximizes the distance between the adversary and the target image. Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses, generalizes well to naturally occurring corruptions and data distributional shifts, and retains the model's accuracy on clean examples.
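
To make the two objectives in the abstract concrete, here is a minimal PyTorch sketch, not the authors' implementation: it assumes a classifier exposing a hypothetical `model.features(x)` hook returning a single feature map, collapses the paper's deeply supervised multi-scale supervision to one scale, and omits the loss weighting coefficients.

```python
import torch.nn.functional as F

def gram(feat):
    # Gram matrix over spatial positions: a standard proxy for feature "style".
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylized_attack_loss(model, x_adv, y_target, x_target):
    # Attack side: align the adversary with the target sample's style and
    # content, plus the usual class-boundary (cross-entropy) term.
    # `model.features` and the equal weighting are assumptions of this sketch.
    feat_adv = model.features(x_adv)
    feat_tgt = model.features(x_target)
    content = F.mse_loss(feat_adv, feat_tgt)
    style = F.mse_loss(gram(feat_adv), gram(feat_tgt))
    boundary = F.cross_entropy(model(x_adv), y_target)
    return boundary + content + style  # weighting coefficients omitted

def max_margin_defense_loss(model, x_src, x_adv, x_target, margin=1.0):
    # Defense side: pull the adversary back toward its source image and
    # push it away from the target image, up to a margin.
    f_src = model.features(x_src).flatten(1)
    f_adv = model.features(x_adv).flatten(1)
    f_tgt = model.features(x_target).flatten(1)
    d_src = F.pairwise_distance(f_adv, f_src)
    d_tgt = F.pairwise_distance(f_adv, f_tgt)
    return F.relu(d_src - d_tgt + margin).mean()
```

In the paper these terms are applied at multiple network depths; restricting the sketch to one feature map keeps the margin structure visible without the deep-supervision bookkeeping.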

Place, publisher, year, edition, pages
IEEE Computer Society, 2023. Vol. 45, no. 5, p. 6403-6414
Keywords [en]
Training; Perturbation methods; Robustness; Multitasking; Predictive models; Computational modeling; Visualization; Adversarial training; style transfer; max-margin learning; adversarial attacks; multi-task objective
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:liu:diva-193967
DOI: 10.1109/TPAMI.2022.3207917
ISI: 000964792800068
PubMedID: 36121953
OAI: oai:DiVA.org:liu-193967
DiVA, id: diva2:1758313
Available from: 2023-05-22. Created: 2023-05-22. Last updated: 2025-02-07.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
PubMed

