Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks
University of California, Davis, CA 95616, USA.
Visa Research, CA 94306, USA.
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-6382-2752
University of California, Davis, CA 95616, USA.
2023 (English) In: ACM Transactions on Interactive Intelligent Systems, ISSN 2160-6455, E-ISSN 2160-6463, Vol. 13, no. 4, article id 20. Article in journal (Refereed) Published
Abstract [en]

Adversarial attacks on a convolutional neural network (CNN), which inject human-imperceptible perturbations into an input image, can fool a high-performance CNN into making incorrect predictions. The success of adversarial attacks raises serious concerns about the robustness of CNNs and prevents them from being used in safety-critical applications such as medical diagnosis and autonomous driving. Our work introduces a visual analytics approach to understanding adversarial attacks by answering two questions: (1) Which neurons are more vulnerable to attacks? and (2) Which image features do these vulnerable neurons capture during the prediction? For the first question, we introduce multiple perturbation-based measures to break down the attacking magnitude into individual CNN neurons and rank the neurons by their vulnerability levels. For the second, we identify image features (e.g., cat ears) that highly stimulate a user-selected neuron to augment and validate the neuron's responsibility. Furthermore, we support interactive exploration of a large number of neurons through hierarchical clustering based on the neurons' roles in the prediction. To this end, a visual analytics system is designed to incorporate visual reasoning for interpreting adversarial attacks. We validate the effectiveness of our system through multiple case studies as well as feedback from domain experts.
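The abstract names the technique but not its formulas, so the following is a minimal sketch of what a perturbation-based, per-neuron vulnerability measure could look like: compare each convolutional channel's activations on a clean image and on its adversarially perturbed counterpart, then rank channels by the size of that shift. The attack choice (one-step FGSM), the probed layer, and the helper names fgsm_attack and neuron_vulnerability are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, x, y, eps=0.01):
    # One-step FGSM; for small eps the perturbation is near-imperceptible.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def neuron_vulnerability(model, layer, x_clean, x_adv):
    # Mean absolute activation change per channel of `layer`:
    # one vulnerability score per convolutional "neuron".
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(a=o.detach()))
    with torch.no_grad():
        model(x_clean)
        a_clean = acts["a"]
        model(x_adv)
        a_adv = acts["a"]
    handle.remove()
    return (a_adv - a_clean).abs().mean(dim=(0, 2, 3))  # shape: [channels]

model = models.resnet18(weights=None).eval()  # pretrained weights in practice
x = torch.rand(1, 3, 224, 224)                # stand-in for a real input image
y = model(x).argmax(dim=1)                    # attack the model's own prediction
x_adv = fgsm_attack(model, x, y)

scores = neuron_vulnerability(model, model.layer4[1].conv2, x, x_adv)
print("most vulnerable channels:", scores.topk(5).indices.tolist())

Ranking these scores addresses the abstract's first question (which neurons are most vulnerable); the paper goes further by linking each such neuron to the image features that stimulate it and by clustering neurons according to their roles in the prediction.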

Place, publisher, year, edition, pages
Association for Computing Machinery, 2023. Vol. 13, no. 4, article id 20
Keywords [en]
Convolutional neural networks; adversarial attack; explainable machine learning
National subject category
Robotics and automation
Identifiers
URN: urn:nbn:se:liu:diva-201028
DOI: 10.1145/3587470
ISI: 001153515100001
OAI: oai:DiVA.org:liu-201028
DiVA, id: diva2:1840204
Note

Funding agencies: National Institutes of Health [1R01CA270454-01, 1R01CA273058-01]; Knut and Alice Wallenberg Foundation [KAW 2019.0024]

Available from: 2024-02-22 Created: 2024-02-22 Last updated: 2025-02-09

Open Access in DiVA

No full text available in DiVA
