Multimodal Multi-Head Convolutional Attention with Various Kernel Sizes for Medical Image Super-Resolution
Univ Bucharest, Romania.
Univ Bucharest, Romania.
Carol Davila Univ Med & Pharm, Romania; Coltea Hosp, Romania.
Carol Davila Univ Med & Pharm, Romania; Coltea Hosp, Romania.
2023 (English). In: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), IEEE Computer Society, 2023, p. 2194-2204. Conference paper, Published paper (Refereed).
Abstract [en]

Super-resolving medical images can help physicians provide more accurate diagnostics. In many situations, computed tomography (CT) or magnetic resonance imaging (MRI) techniques capture several scans (modes) during a single investigation, which can be used jointly (in a multimodal fashion) to further boost the quality of super-resolution results. To this end, we propose a novel multimodal multi-head convolutional attention module to super-resolve CT and MRI scans. Our attention module uses the convolution operation to perform joint spatial-channel attention on multiple concatenated input tensors, where the kernel (receptive field) size controls the reduction rate of the spatial attention and the number of convolutional filters controls the reduction rate of the channel attention. We introduce multiple attention heads, each head having a distinct receptive field size corresponding to a particular reduction rate for the spatial attention. We integrate our multimodal multi-head convolutional attention (MMHCA) into two deep neural architectures for super-resolution and conduct experiments on three data sets. Our empirical results show the superiority of our attention module over state-of-the-art attention mechanisms used in super-resolution. Moreover, we conduct an ablation study to assess the impact of the components involved in our attention module, e.g., the number of inputs or the number of heads. Our code is freely available at https://github.com/lilygeorgescu/MHCA.
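The core idea in the abstract can be sketched in a few lines: each attention head convolves the fused input with a kernel of a distinct size (a distinct receptive field), turns the result into a sigmoid-gated attention map, and the gated head outputs are aggregated. The following is a minimal single-channel NumPy illustration, not the authors' implementation: the real MMHCA operates on concatenated multi-channel feature tensors with learned filters and performs joint spatial-channel attention (see the linked repository). The averaging fusion, random kernels, and function names here are simplifying assumptions.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same' 2-D convolution of a single-channel map with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    h, w = x.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mmhca_sketch(modalities, kernel_sizes, rng):
    """Sketch of multi-head convolutional attention (hypothetical helper).

    modalities: list of 2-D arrays (one per scan mode), fused here by a
    simple average; each head uses a kernel of a distinct size to produce
    a spatial attention map in (0, 1) that gates the fused input.
    """
    x = np.mean(modalities, axis=0)            # simplified multimodal fusion
    head_outputs = []
    for ks in kernel_sizes:                    # one head per receptive-field size
        kernel = rng.standard_normal((ks, ks)) / ks  # stand-in for a learned filter
        attn = sigmoid(conv2d_same(x, kernel))       # spatial attention map
        head_outputs.append(attn * x)                # gate the fused input
    return np.sum(head_outputs, axis=0)              # aggregate the heads

rng = np.random.default_rng(0)
ct = rng.standard_normal((8, 8))    # toy CT patch
mri = rng.standard_normal((8, 8))   # toy MRI patch of the same region
out = mmhca_sketch([ct, mri], kernel_sizes=[3, 5], rng=rng)
print(out.shape)
```

The distinct kernel sizes (3 and 5 above) are what give each head a different spatial reduction rate in the paper's formulation; everything else is deliberately stripped down.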

Place, publisher, year, edition, pages
IEEE Computer Society, 2023, p. 2194-2204
Series
IEEE Winter Conference on Applications of Computer Vision, ISSN 2472-6737
National Category
Telecommunications
Identifiers
URN: urn:nbn:se:liu:diva-196946
DOI: 10.1109/WACV56688.2023.00223
ISI: 000971500202030
ISBN: 9781665493468 (electronic)
ISBN: 9781665493475 (print)
OAI: oai:DiVA.org:liu-196946
DiVA id: diva2:1792453
Conference
23rd IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, January 3-7, 2023
Note

Funding Agencies| [2014-2021]; [24/2020]

Available from: 2023-08-29. Created: 2023-08-29. Last updated: 2023-08-29.

Open Access in DiVA

No full text in DiVA

Search in DiVA

By author/editor: Khan, Fahad
By organisation: Computer Vision, Faculty of Science & Engineering