Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks
2023 (English). In: ACM Transactions on Interactive Intelligent Systems, ISSN 2160-6455, E-ISSN 2160-6463, Vol. 13, no. 4, article id 20. Article in journal (Refereed). Published.
Abstract [en]
Adversarial attacks on a convolutional neural network (CNN), which inject human-imperceptible perturbations into an input image, can fool a high-performance CNN into making incorrect predictions. The success of adversarial attacks raises serious concerns about the robustness of CNNs and prevents them from being used in safety-critical applications, such as medical diagnosis and autonomous driving. Our work introduces a visual analytics approach to understanding adversarial attacks by answering two questions: (1) Which neurons are more vulnerable to attacks? and (2) Which image features do these vulnerable neurons capture during the prediction? For the first question, we introduce multiple perturbation-based measures that break down the attacking magnitude into contributions from individual CNN neurons and rank the neurons by their vulnerability levels. For the second, we identify image features (e.g., cat ears) that highly stimulate a user-selected neuron in order to augment and validate the neuron's responsibility. Furthermore, we support interactive exploration of a large number of neurons through hierarchical clustering based on the neurons' roles in the prediction. To this end, a visual analytics system is designed to incorporate visual reasoning for interpreting adversarial attacks. We validate the effectiveness of our system through multiple case studies as well as feedback from domain experts.
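The abstract's first question suggests a simple per-neuron comparison: measure each neuron's activation on a clean image against its activation on the adversarially perturbed image, then rank neurons by the size of that change. Below is a minimal PyTorch sketch of one such perturbation-based score; the model, the layer choice, and the mean-absolute-difference measure are illustrative assumptions, not the paper's actual formulation.

# Hypothetical sketch: rank CNN neurons (channels) by a perturbation-based
# vulnerability score, defined here as the mean absolute activation change
# between a clean input and its adversarial counterpart.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations = {}

def hook(name):
    def fn(module, inputs, output):
        activations[name] = output.detach()
    return fn

# Capture the output of one convolutional stage of interest (assumption: layer3).
layer_name = "layer3"
model.layer3.register_forward_hook(hook(layer_name))

def neuron_vulnerability(clean: torch.Tensor, adversarial: torch.Tensor) -> torch.Tensor:
    """Score each channel by how much the attack shifts its activation."""
    with torch.no_grad():
        model(clean)
        clean_act = activations[layer_name]   # shape (N, C, H, W)
        model(adversarial)
        adv_act = activations[layer_name]     # shape (N, C, H, W)
    # Average |delta activation| over batch and spatial dims -> one score per channel.
    return (adv_act - clean_act).abs().mean(dim=(0, 2, 3))

# Usage: x_adv would normally come from a real attack (e.g., FGSM or PGD);
# a random perturbation stands in here purely for illustration.
x_clean = torch.randn(4, 3, 224, 224)
x_adv = x_clean + 0.01 * torch.randn_like(x_clean)
scores = neuron_vulnerability(x_clean, x_adv)
ranked = torch.argsort(scores, descending=True)  # most vulnerable channels first
print(ranked[:10])

Ranking channels rather than individual spatial units keeps the analysis tractable for interactive exploration, which is in the spirit of the hierarchical clustering the abstract describes.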
Place, publisher, year, edition, pages
Association for Computing Machinery, 2023. Vol. 13, no. 4, article id 20
Keywords [en]
Convolutional neural networks; adversarial attack; explainable machine learning
National Category
Robotics
Identifiers
URN: urn:nbn:se:liu:diva-201028
DOI: 10.1145/3587470
ISI: 001153515100001
OAI: oai:DiVA.org:liu-201028
DiVA, id: diva2:1840204
Note
Funding Agencies: National Institutes of Health [1R01CA270454-01, 1R01CA273058-01]; Knut and Alice Wallenberg Foundation [KAW 2019.0024]