Cross-Domain Transferability of Adversarial Perturbations
Australian National University, Australia; Inception Institute of Artificial Intelligence, United Arab Emirates.
Australian National University, Australia; Inception Institute of Artificial Intelligence, United Arab Emirates.
Inception Institute of Artificial Intelligence, United Arab Emirates.
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Inception Institute of Artificial Intelligence, United Arab Emirates.
2019 (English) In: Advances in Neural Information Processing Systems 32 (NeurIPS 2019), Neural Information Processing Systems (NIPS), 2019, Vol. 32. Conference paper, Published paper (Refereed)
Abstract [en]

Adversarial examples reveal the blind spots of deep neural networks (DNNs) and represent a major concern for security-critical applications. The transferability of adversarial examples makes real-world attacks possible in black-box settings, where the attacker is forbidden to access the internal parameters of the model. The underlying assumption in most adversary generation methods, whether learning an instance-specific or an instance-agnostic perturbation, is the direct or indirect reliance on the original domain-specific data distribution. In this work, for the first time, we demonstrate the existence of domain-invariant adversaries, thereby showing common adversarial space among different datasets and models. To this end, we propose a framework capable of launching highly transferable attacks that crafts adversarial patterns to mislead networks trained on entirely different domains. For instance, an adversarial function learned on Paintings, Cartoons or Medical images can successfully perturb ImageNet samples to fool the classifier, with success rates as high as ~99% (ℓ∞ ≤ 10). The core of our proposed adversarial function is a generative network that is trained using a relativistic supervisory signal that enables domain-invariant perturbations. Our approach sets the new state-of-the-art for fooling rates, both under the white-box and black-box scenarios. Furthermore, despite being an instance-agnostic perturbation function, our attack outperforms the conventionally much stronger instance-specific attack methods.
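
The attack the abstract describes can be pictured concretely. Below is a minimal sketch in PyTorch of a generative, instance-agnostic attack of this kind: a small generator maps any input image to a perturbed image confined to an ℓ∞ ball, and it is trained against a fixed surrogate classifier with a relativistic-style loss that scores the adversarial logits relative to the clean logits. The generator architecture, the names (PerturbationGenerator, relativistic_loss, train_step), and the exact form of the loss are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

EPS = 10.0 / 255.0  # l_inf budget of 10 on a 0-255 pixel scale, for images in [0, 1]

class PerturbationGenerator(nn.Module):
    # Toy fully convolutional generator: clean image in, adversarial image out.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        adv = self.net(x)
        # Project the output into the l_inf ball around x, then into valid pixel range.
        adv = torch.min(torch.max(adv, x - EPS), x + EPS)
        return adv.clamp(0.0, 1.0)

def relativistic_loss(surrogate, x, x_adv, y):
    # One plausible reading of a "relativistic supervisory signal": classify the
    # adversarial logits *relative to* the clean logits, and push that relative
    # prediction away from the clean label by maximizing its cross-entropy.
    clean_logits = surrogate(x).detach()
    adv_logits = surrogate(x_adv)
    return -F.cross_entropy(adv_logits - clean_logits, y)

def train_step(gen, surrogate, opt, x):
    # No source-domain labels needed: use the fixed surrogate's own prediction
    # on the clean image as the class to move away from.
    with torch.no_grad():
        y = surrogate(x).argmax(dim=1)
    loss = relativistic_loss(surrogate, x, gen(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

Here surrogate would be any fixed, pretrained classifier in eval mode (e.g. a torchvision ResNet) and opt an optimizer over gen.parameters() only. Because the loss needs nothing but the surrogate's own prediction on the clean image, x can come from any unlabeled source domain (paintings, cartoons, medical scans), which is what makes the cross-domain transfer the abstract reports possible.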

Place, publisher, year, edition, pages
Neural Information Processing Systems (NIPS), 2019. Vol. 32
Series
Advances in Neural Information Processing Systems, ISSN 1049-5258
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-167712
ISI: 000535866904053
OAI: oai:DiVA.org:liu-167712
DiVA, id: diva2:1454554
Conference
33rd Conference on Neural Information Processing Systems (NeurIPS)
Available from: 2020-07-17 Created: 2020-07-17 Last updated: 2020-07-17

Open Access in DiVA

Full text is not available in DiVA
