On Generating Transferable Targeted Perturbations
Australian National University, Australia; Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates.
Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates.
Monash University, Australia.
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates.
2021 (English). In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), IEEE, 2021, p. 7688-7697. Conference paper, Published paper (Refereed).
Abstract [en]

While the untargeted black-box transferability of adversarial perturbations has been extensively studied, changing an unseen model's decisions to a specific targeted class remains a challenging feat. In this paper, we propose a new generative approach for highly transferable targeted perturbations (TTP). We note that existing methods are less suitable for this task due to their reliance on class-boundary information, which changes from one model to another and thus reduces transferability. In contrast, our approach matches the perturbed-image distribution with that of the target class, leading to high targeted transferability rates. To this end, we propose a new objective function that not only aligns the global distributions of source and target images but also matches the local neighbourhood structure between the two domains. Based on the proposed objective, we train a generator function that can adaptively synthesize perturbations specific to a given input. Our generative approach is independent of source- or target-domain labels, and it consistently performs well against state-of-the-art methods across a wide range of attack settings. As an example, we achieve 32.63% target transferability from (an adversarially weak) VGG19(BN) to (a strong) WideResNet on the ImageNet validation set, which is 4x higher than the previous best generative attack and 16x better than the instance-specific iterative attack. Code is available at: https://github.com/Muzammal-Naseer/TTP.
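The core idea, training a generator so that the distribution of perturbed source images matches that of a target class, can be sketched in a few lines of PyTorch. The fragment below is a simplified illustration, not the authors' implementation (see the linked repository for the official code): the TinyGenerator architecture, the perturb and match_loss helpers, and the symmetric-KL-plus-similarity formulation of the global and local alignment terms are all assumptions made for readability.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    # Toy stand-in for the paper's generator network (hypothetical architecture).
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Input-conditioned perturbation added to the clean image.
        return x + self.body(x)

def perturb(generator, x, eps=16 / 255):
    # Project the generator output into an L-inf ball of radius eps
    # around the clean image, then back into the valid pixel range.
    adv = generator(x)
    adv = torch.min(torch.max(adv, x - eps), x + eps)
    return adv.clamp(0.0, 1.0)

def match_loss(model, adv, x_tgt):
    # Global alignment: symmetric KL divergence between the source
    # model's output distributions on perturbed source images and on
    # real target-class images (a simplified stand-in for the paper's
    # global distribution-matching term).
    log_p_adv = F.log_softmax(model(adv), dim=1)
    log_p_tgt = F.log_softmax(model(x_tgt), dim=1)
    kl = (F.kl_div(log_p_adv, log_p_tgt.exp(), reduction="batchmean")
          + F.kl_div(log_p_tgt, log_p_adv.exp(), reduction="batchmean"))
    # Local alignment: match the pairwise similarity structure within
    # the two batches (a stand-in for the neighbourhood-matching term).
    f_adv = F.normalize(log_p_adv.exp(), dim=1)
    f_tgt = F.normalize(log_p_tgt.exp(), dim=1)
    sim = F.mse_loss(f_adv @ f_adv.t(), f_tgt @ f_tgt.t())
    return kl + sim

A training loop would then freeze the source classifier and optimise only the generator, for example:

gen = TinyGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
model.eval()  # frozen source classifier; assumed defined elsewhere
for x_src, x_tgt in loader:  # paired batches of source and target-class images
    loss = match_loss(model, perturb(gen, x_src), x_tgt)
    opt.zero_grad()
    loss.backward()
    opt.step()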

Place, publisher, year, edition, pages
IEEE, 2021, p. 7688-7697.
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:liu:diva-187617
DOI: 10.1109/ICCV48922.2021.00761
ISI: 000797698907090
ISBN: 9781665428125 (electronic)
ISBN: 9781665428132 (print)
OAI: oai:DiVA.org:liu-187617
DiVA, id: diva2:1691110
Conference
18th IEEE/CVF International Conference on Computer Vision (ICCV), held online, October 11-17, 2021
Available from: 2022-08-29 Created: 2022-08-29 Last updated: 2025-02-07

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

