Visual attention methods in deep learning: An in-depth survey
2024 (English). In: Information Fusion, ISSN 1566-2535, E-ISSN 1872-6305, Vol. 108, article id 102417. Article in journal (Refereed). Published.
Abstract [en]
Inspired by the human cognitive system, attention is a mechanism that imitates human cognitive awareness of specific information, amplifying critical details so that models focus on the essential aspects of the data. Deep learning employs attention to boost performance in many applications. Notably, the same attention design can process different data modalities and can easily be incorporated into large networks; moreover, multiple complementary attention mechanisms can be combined in a single network. Attention techniques have therefore become extremely attractive. However, the literature lacks a comprehensive survey of attention techniques to guide researchers in employing attention in their deep models. Note that, besides being demanding in training data and computational resources, transformers cover only a single attention category, self-attention, out of the many categories available. We fill this gap with an in-depth survey of 50 attention techniques, categorizing them by their most prominent features. We begin by introducing the fundamental concepts behind the success of the attention mechanism. We then cover the essentials of each attention category: its strengths and limitations, fundamental building blocks, basic formulations, primary usage, and applications, with a focus on computer vision. We also discuss the challenges and open questions related to attention mechanisms. Finally, we recommend possible future research directions for deep attention. All the information about visual attention methods in deep learning is provided at https://github.com/saeed-anwar/VisualAttention
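To make the self-attention category mentioned in the abstract concrete, the sketch below implements the standard scaled dot-product formulation, Attention(Q, K, V) = softmax(QK^T / sqrt(d)) V, in PyTorch. This is a minimal illustration, not code from the surveyed paper; the function name, shapes, and the choice to skip the learned Q/K/V projections are assumptions made to keep the example self-contained.

import torch
import torch.nn.functional as F

def self_attention(x: torch.Tensor) -> torch.Tensor:
    """x: (batch, tokens, dim). Returns attended features of the same shape.

    A minimal sketch: in practice Q, K, V come from learned linear
    projections of x; here x is used directly for all three.
    """
    d = x.size(-1)
    q, k, v = x, x, x
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # (batch, tokens, tokens)
    weights = F.softmax(scores, dim=-1)           # each query's weights sum to 1
    return weights @ v                            # weighted sum of value vectors

x = torch.randn(2, 16, 64)   # e.g., 16 image patches with 64-dim embeddings
out = self_attention(x)      # same shape: (2, 16, 64)

Other attention categories covered by such surveys (channel, spatial, or branch attention) reweight different axes of the feature tensor, but follow the same pattern of computing data-dependent weights and applying them multiplicatively.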
Place, publisher, year, edition, pages
Elsevier, 2024. Vol. 108, article id 102417
Keywords [en]
Attention mechanisms; Deep attention; Attention modules; Learning
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-210191
DOI: 10.1016/j.inffus.2024.102417
ISI: 001362518700001
OAI: oai:DiVA.org:liu-210191
DiVA, id: diva2:1917774
Note
Funding Agencies: Australian Research Council Future Fellowship Award by the Australian Government [FT210100268]
Available from: 2024-12-03 Created: 2024-12-03 Last updated: 2024-12-03