Visual attention methods in deep learning: An in-depth survey
Ajmal Univ South Australia, Australia; Fayoum Univ, Egypt.
King Fahad Univ Petr & Minerals, Saudi Arabia; SDAIA KFUPM Joint Res Ctr Artificial Intelligence, Saudi Arabia.
Univ Canberra, Australia.
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. Mohamed Bin Zayed Univ Artificial Intelligence, United Arab Emirates.
2024 (English). In: Information Fusion, ISSN 1566-2535, E-ISSN 1872-6305, Vol. 108, article id 102417. Article in journal (Refereed). Published.
Abstract [en]

Inspired by the human cognitive system, attention is a mechanism that imitates human cognitive awareness of specific information, amplifying critical details so that models focus on the essential aspects of the data. Deep learning employs attention to boost performance across many applications. Notably, the same attention design can process different data modalities and can easily be incorporated into large networks; moreover, multiple complementary attention mechanisms can be combined within a single network. Attention techniques have therefore become extremely attractive. However, the literature lacks a comprehensive survey of attention techniques to guide researchers in employing attention in their deep models. Note that, besides being demanding in terms of training data and computational resources, transformers cover only a single self-attention category out of the many categories available. We fill this gap and provide an in-depth survey of 50 attention techniques, categorizing them by their most prominent features. We begin by introducing the fundamental concepts behind the success of the attention mechanism. We then cover essentials such as the strengths and limitations of each attention category, describe their fundamental building blocks and basic formulations with primary usage, and review their applications, specifically in computer vision. We also discuss the challenges and general open questions related to attention mechanisms. Finally, we recommend possible future research directions for deep attention. All the information about visual attention methods in deep learning is provided at https://github.com/saeed-anwar/VisualAttention
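The self-attention category mentioned in the abstract is most commonly instantiated as scaled dot-product attention. As a minimal illustration (a NumPy sketch of the standard formulation softmax(QK^T / sqrt(d_k))V, not code from the surveyed work), the core computation fits in a few lines:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # similarity of each query to each key
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights              # weighted sum of values, plus the attention map

# Toy example: 3 tokens with 4-dim embeddings; self-attention reuses X as Q, K, and V.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(X, X, X)
print(out.shape, w.shape)  # (3, 4) (3, 3)
```

Each row of the attention map `w` sums to 1, so the output for every token is a convex combination of the value vectors, which is the "amplifying critical details" behavior described above.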

Place, publisher, year, edition, pages
Elsevier, 2024. Vol. 108, article id 102417.
Keywords [en]
Attention mechanisms; Deep attention; Attention modules; Learning
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-210191
DOI: 10.1016/j.inffus.2024.102417
ISI: 001362518700001
OAI: oai:DiVA.org:liu-210191
DiVA id: diva2:1917774
Note

Funding agencies: Australian Council Future Fellowship Award by the Australian Government [FT210100268]

Available from: 2024-12-03 Created: 2024-12-03 Last updated: 2024-12-03

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
