Search for publications in DiVA
1 - 3 of 3
  • 1.
    Xu, Yichu
    Wuhan Univ, Peoples R China; Hubei Luojia Lab, Peoples R China.
    Xu, Yonghao
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Jiao, Hongzan
    Wuhan Univ, Peoples R China.
    Gao, Zhi
    Wuhan Univ, Peoples R China.
    Zhang, Lefei
    Wuhan Univ, Peoples R China; Hubei Luojia Lab, Peoples R China.
    S³ANet: Spatial–Spectral Self-Attention Learning Network for Defending Against Adversarial Attacks in Hyperspectral Image Classification
    2024. In: IEEE Transactions on Geoscience and Remote Sensing, ISSN 0196-2892, E-ISSN 1558-0644, Vol. 62, article id 5512913. Article in journal (Refereed)
    Abstract [en]

    Deep neural networks have demonstrated impressive capabilities in hyperspectral image (HSI) classification tasks. However, they are highly vulnerable to adversarial attacks, raising significant security concerns, especially in the remote sensing community. Even subtle adversarial perturbations that are imperceptible to human observers can mislead deep learning (DL) models and result in incorrect predictions. Ensuring the robustness of DL models has therefore become a critical focus in security-related remote sensing tasks. Considerable progress has been made in defending against adversarial attacks in HSI classification. Nevertheless, existing methods concentrate primarily on spatial relationships between pixels while overlooking the valuable spectral information present in HSIs. Moreover, these methods are usually limited to a single scale and cannot meet the precise classification demands of ground objects at various scales. To address these limitations, we propose a spatial-spectral self-attention learning network (S³ANet) for defending against adversarial attacks in HSI classification. S³ANet incorporates a pyramid spatial attention learning module to capture spatial dependencies at multiple scales and employs a global spectral transformer to establish correlations between pixels along the spectral dimension. By fusing spatial and spectral information, this defense addresses adversarial attacks from a comprehensive perspective. Extensive experiments on four benchmark HSI datasets demonstrate that the proposed S³ANet achieves competitive performance compared with state-of-the-art methods under adversarial attacks. The code is available online at https://github.com/YichuXu/S3ANet.
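
    The abstract names two architectural ingredients: pyramid spatial attention over multiple scales and a global spectral transformer over the band axis. As a rough illustration of how such a spatial-spectral fusion block could be wired up, here is a minimal PyTorch sketch; the class name, layer sizes, and scale choices are assumptions of ours, since the paper's actual architecture is not spelled out in the abstract.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialSpectralSketch(nn.Module):
        # Hypothetical fusion block in the spirit of the S3ANet abstract:
        # spatial self-attention at several pooled scales, plus a global
        # spectral transformer that attends across the band dimension.
        def __init__(self, bands: int, dim: int = 64, scales=(1, 2, 4)):
            super().__init__()
            self.scales = scales
            self.embed = nn.Conv2d(bands, dim, kernel_size=1)
            self.spatial_attn = nn.ModuleList(
                nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
                for _ in scales
            )
            self.band_embed = nn.Linear(1, dim)   # lift each band value to dim
            self.spectral_attn = nn.MultiheadAttention(dim, num_heads=4,
                                                       batch_first=True)
            self.fuse = nn.Conv2d(dim * (len(scales) + 1), dim, kernel_size=1)

        def forward(self, x):                      # x: (B, bands, H, W)
            B, bands, H, W = x.shape
            feats = self.embed(x)                  # (B, dim, H, W)
            outs = []
            # Spatial branch: pixel self-attention at each pyramid scale.
            for s, attn in zip(self.scales, self.spatial_attn):
                f = F.avg_pool2d(feats, s) if s > 1 else feats
                h, w = f.shape[2], f.shape[3]
                tokens = f.flatten(2).transpose(1, 2)      # (B, h*w, dim)
                a, _ = attn(tokens, tokens, tokens)
                a = a.transpose(1, 2).reshape(B, -1, h, w)
                outs.append(F.interpolate(a, size=(H, W), mode="bilinear",
                                          align_corners=False))
            # Spectral branch: attention across the band axis, per pixel.
            spec = x.permute(0, 2, 3, 1).reshape(B * H * W, bands, 1)
            spec = self.band_embed(spec)                   # (BHW, bands, dim)
            s_out, _ = self.spectral_attn(spec, spec, spec)
            outs.append(s_out.mean(1).reshape(B, H, W, -1).permute(0, 3, 1, 2))
            return self.fuse(torch.cat(outs, dim=1))       # (B, dim, H, W)
    ```

    For instance, `SpatialSpectralSketch(bands=103)(torch.randn(2, 103, 32, 32))` returns a (2, 64, 32, 32) fused feature map that a classification head could consume; the point is only to show both branches feeding one fusion step, not the authors' design.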

  • 2.
    Bai, Tao
    Nanyang Technol Univ, Singapore.
    Cao, Yiming
    Singapore Management Univ, Singapore.
    Xu, Yonghao
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Wen, Bihan
    Nanyang Technol Univ, Singapore.
    Stealthy Adversarial Examples for Semantic Segmentation in Remote Sensing
    2024. In: IEEE Transactions on Geoscience and Remote Sensing, ISSN 0196-2892, E-ISSN 1558-0644, Vol. 62, article id 5614817. Article in journal (Refereed)
    Abstract [en]

    Deep learning methods have proven effective in remote sensing image analysis and interpretation, where semantic segmentation plays a vital role. These deep segmentation methods are susceptible to adversarial attacks, yet most existing attack methods manipulate the image globally, leading to noticeable perturbations and chaotic segmentation. In this work, we propose a novel stealthy attack for semantic segmentation (SASS), which substantially improves the effectiveness and stealthiness of existing attack methods on remote sensing images. SASS manipulates specific victim classes or objects of interest while preserving the original segmentation results for all other classes or objects. In practice, different inference mechanisms, such as overlapped inference, can be applied in segmentation, which may degrade the efficacy of SASS. To this end, we further introduce the masked SASS (MSASS), which generates augmented adversarial perturbations that affect only the victim areas. We evaluate the effectiveness of SASS and MSASS using four state-of-the-art semantic segmentation models on the Vaihingen and Zurich Summer datasets. Extensive experiments demonstrate that SASS and MSASS achieve superior attack performance on victim areas while maintaining high accuracy elsewhere (an accuracy drop of less than 2%). The detection success rates of adversarial examples for segmentation, as characterized by Xiao et al., drop significantly from 97.78% for the untargeted projected gradient descent (PGD) attack to 28.71% for our MSASS method on the Zurich Summer dataset. Our work contributes to the field of adversarial attacks on semantic segmentation for remote sensing images by improving stealthiness, flexibility, and robustness. We anticipate that our findings will inspire the development of defense methods that enhance the security and reliability of semantic segmentation models against such stealthy attacks.
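
    The core mechanism the abstract attributes to MSASS, perturbations confined to victim areas, can be illustrated with a masked variant of standard untargeted PGD. The sketch below is hypothetical: the function name, loss weighting, and hyperparameters are our assumptions, not the paper's method; it only shows the general idea of masking both the gradient step and the final perturbation to victim-class pixels.

    ```python
    import torch
    import torch.nn.functional as F

    def masked_pgd(model, image, target, victim_class,
                   eps=8 / 255, alpha=2 / 255, steps=10):
        # model: segmentation net in eval mode, outputs (B, C, H, W) logits.
        # image: (B, 3, H, W) in [0, 1]; target: (B, H, W) long labels.
        mask = (target == victim_class).unsqueeze(1).float()  # (B, 1, H, W)
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(steps):
            logits = model(image + delta)
            ce = F.cross_entropy(logits, target, reduction="none")  # (B, H, W)
            # Maximize the loss on victim pixels only, leaving others alone.
            loss = (ce * mask.squeeze(1)).sum() / mask.sum().clamp_min(1.0)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign() * mask  # step inside mask
                delta.clamp_(-eps, eps)                    # L-inf budget
                # keep the adversarial image in valid range, re-masked
                delta.copy_(((image + delta).clamp(0, 1) - image) * mask)
            delta.grad.zero_()
        return (image + delta).detach()
    ```

    Because the perturbation is zero outside the mask, predictions for non-victim pixels are disturbed only indirectly through the network's receptive field, which is the stealthiness property the paper measures.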

  • 3.
    Xu, Yonghao
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Institute of Advanced Research in Artificial Intelligence (IARAI), Vienna, Austria.
    Yu, Weikang
    Machine Learning Group, Helmholtz-Zentrum Dresden-Rossendorf, Helmholtz Institute Freiberg for Resource Technology, Freiberg, Germany.
    Ghamisi, Pedram
    Institute of Advanced Research in Artificial Intelligence (IARAI), Vienna, Austria.
    Kopp, Michael
    Institute of Advanced Research in Artificial Intelligence (IARAI), Vienna, Austria.
    Hochreiter, Sepp
    Institute of Advanced Research in Artificial Intelligence (IARAI), Vienna, Austria.
    Txt2Img-MHN: Remote Sensing Image Generation From Text Using Modern Hopfield Networks
    2023. In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 32, p. 5737-5750. Article in journal (Refereed)
    Abstract [en]

    The synthesis of high-resolution remote sensing images from text descriptions has great potential in many practical application scenarios. Although deep neural networks have achieved great success in many important remote sensing tasks, generating realistic remote sensing images from text descriptions remains very difficult. To address this challenge, we propose a novel text-to-image modern Hopfield network (Txt2Img-MHN). The main idea of Txt2Img-MHN is to perform hierarchical prototype learning on both text and image embeddings with modern Hopfield layers. Instead of directly learning concrete but highly diverse text-image joint feature representations for different semantics, Txt2Img-MHN learns the most representative prototypes from the text-image embeddings, realizing a coarse-to-fine learning strategy. The learned prototypes can then be utilized to represent more complex semantics in the text-to-image generation task. To better evaluate the realism and semantic consistency of the generated images, we further conduct zero-shot classification on real remote sensing data using a classification model trained on the synthesized images. Despite its simplicity, we find that the overall accuracy of this zero-shot classification may serve as a good metric for evaluating text-to-image generation ability. Extensive experiments on the benchmark remote sensing text-image dataset demonstrate that the proposed Txt2Img-MHN can generate more realistic remote sensing images than existing methods. Code and pre-trained models are available online (https://github.com/YonghaoXu/Txt2Img-MHN).

    Download full text (pdf)
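
    The prototype retrieval at the heart of Txt2Img-MHN builds on modern Hopfield layers, whose update replaces a query with a softmax-weighted mixture of stored patterns. Below is a minimal sketch of such a lookup; the class name, prototype count, and inverse temperature beta are illustrative choices of ours, not taken from the paper.

    ```python
    import torch
    import torch.nn as nn

    class PrototypeHopfieldLookup(nn.Module):
        # Minimal modern-Hopfield-style retrieval: each query embedding is
        # mapped to a softmax-weighted mixture of learned prototype patterns,
        # following the one-step update rule of Ramsauer et al.'s modern
        # Hopfield networks: xi_new = softmax(beta * xi @ P^T) @ P.
        def __init__(self, dim: int, num_prototypes: int = 256,
                     beta: float = 8.0):
            super().__init__()
            self.prototypes = nn.Parameter(
                torch.randn(num_prototypes, dim) / dim ** 0.5)
            self.beta = beta

        def forward(self, x):                  # x: (B, dim) text/image embeddings
            attn = torch.softmax(self.beta * x @ self.prototypes.t(), dim=-1)
            return attn @ self.prototypes      # (B, dim) retrieved mixture
    ```

    Stacking several such layers with growing prototype counts would mirror the coarse-to-fine hierarchical prototype learning the abstract describes; the authors' actual implementation is in the linked repository.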