Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks
University of California, Davis, CA 95616, USA.
Visa Research, CA 94306, USA.
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-6382-2752
University of California, Davis, CA 95616, USA.
2023 (English) In: ACM Transactions on Interactive Intelligent Systems, ISSN 2160-6455, E-ISSN 2160-6463, Vol. 13, no. 4, article id 20. Article in journal (Refereed) Published
Abstract [en]

Adversarial attacks on a convolutional neural network (CNN), which inject human-imperceptible perturbations into an input image, can fool a high-performance CNN into making incorrect predictions. The success of adversarial attacks raises serious concerns about the robustness of CNNs and prevents them from being used in safety-critical applications, such as medical diagnosis and autonomous driving. Our work introduces a visual analytics approach to understanding adversarial attacks by answering two questions: (1) Which neurons are more vulnerable to attacks? and (2) Which image features do these vulnerable neurons capture during the prediction? For the first question, we introduce multiple perturbation-based measures to break down the attacking magnitude into individual CNN neurons and rank the neurons by their vulnerability levels. For the second, we identify image features (e.g., cat ears) that highly stimulate a user-selected neuron to augment and validate the neuron's responsibility. Furthermore, we support interactive exploration of a large number of neurons through hierarchical clustering based on the neurons' roles in the prediction. To this end, a visual analytics system is designed to incorporate visual reasoning for interpreting adversarial attacks. We validate the effectiveness of our system through multiple case studies as well as feedback from domain experts.
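The perturbation-based neuron ranking described in the abstract can be made concrete with a short sketch. The Python/PyTorch code below is a minimal illustration and not the paper's actual measures: it assumes a pretrained VGG16, a one-step FGSM attack, and a placeholder vulnerability score (mean absolute activation change per channel between the clean and the perturbed image); the chosen layer and epsilon are arbitrary for the example.

import torch
import torchvision.models as models

# Pretrained CNN to probe; the paper is not tied to this model, VGG16 is an assumption.
model = models.vgg16(weights="IMAGENET1K_V1").eval()

def fgsm(image, label, eps=4 / 255):
    # One-step FGSM: move the image along the sign of the loss gradient.
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).detach()

def channel_vulnerability(layer, clean, adv):
    # Record the layer's activations for both inputs via a forward hook,
    # then score each channel ("neuron") by its mean |activation change|.
    acts = []
    handle = layer.register_forward_hook(lambda m, i, o: acts.append(o.detach()))
    with torch.no_grad():
        model(clean)
        model(adv)
    handle.remove()
    clean_a, adv_a = acts
    return (adv_a - clean_a).abs().mean(dim=(0, 2, 3))  # one score per channel

# Usage: rank the channels of the last conv layer by vulnerability.
x = torch.rand(1, 3, 224, 224)                  # placeholder input image
y = model(x).argmax(dim=1)
scores = channel_vulnerability(model.features[28], x, fgsm(x, y))
print(scores.argsort(descending=True)[:10])     # 10 most "vulnerable" channels

Sorting the per-channel scores yields the kind of "most vulnerable neurons" ordering the abstract describes; the paper's measures decompose the attack magnitude more carefully than this simple activation difference.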

Place, publisher, year, edition, pages
Association for Computing Machinery, 2023. Vol. 13, no. 4, article id 20
Keywords [en]
Convolutional neural networks; adversarial attack; explainable machine learning
Identifiers
URN: urn:nbn:se:liu:diva-201028
DOI: 10.1145/3587470
ISI: 001153515100001
OAI: oai:DiVA.org:liu-201028
DiVA, id: diva2:1840204
Note

Funding agencies: National Institutes of Health [1R01CA270454-01, 1R01CA273058-01]; Knut and Alice Wallenberg Foundation [KAW 2019.0024]

Available from: 2024-02-22 Created: 2024-02-22 Last updated: 2025-02-09

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text (DOI: 10.1145/3587470)

Search in DiVA

By author/editor
Fujiwara, Takanori
In the same journal
ACM Transactions on Interactive Intelligent Systems
