MedContext: Learning Contextual Cues for Efficient Volumetric Medical Segmentation
Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates.
Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates.
Linköping University, Department of Electrical Engineering, Computer Vision; Linköping University, Faculty of Science & Engineering; Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates.
Mohamed Bin Zayed University of Artificial Intelligence, United Arab Emirates; Australian National University, Australia.
2024 (English). In: Medical Image Computing and Computer Assisted Intervention - MICCAI 2024, Part XII, Springer International Publishing AG, 2024, Vol. 15012, pp. 229-239. Conference paper, published paper (refereed).
Abstract [en]

Deep neural networks have significantly improved volumetric medical segmentation, but they generally require large-scale annotated data to perform well, which can be expensive and prohibitive to obtain. To address this limitation, existing works typically perform transfer learning or design dedicated pretraining-finetuning stages to learn representative features. However, the mismatch between the source and target domains can make it challenging to learn optimal representations for volumetric data, while multi-stage training demands more compute as well as careful selection of stage-specific design choices. In contrast, we propose a universal training framework called MedContext that is architecture-agnostic and can be incorporated into any existing training framework for 3D medical segmentation. Our approach effectively learns self-supervised contextual cues jointly with the supervised voxel segmentation task, without requiring large-scale annotated volumetric medical data or dedicated pretraining-finetuning stages. The proposed approach induces contextual knowledge in the network by learning to reconstruct a missing organ, or parts of an organ, in the output segmentation space. The effectiveness of MedContext is validated across multiple 3D medical datasets and four state-of-the-art model architectures. Our approach demonstrates consistent gains in segmentation performance across datasets and architectures, even in few-shot scenarios. Our code is available at https://github.com/hananshafi/medcontext.
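The abstract describes joint training of a supervised segmentation loss with a self-supervised term that reconstructs masked regions in the output segmentation space. A minimal PyTorch sketch of one such joint step is below; the function name, the voxel-level masking (the keywords suggest patch-level masked image modeling), the distillation-style KL term, and the equal loss weighting are all illustrative assumptions, not the authors' exact formulation — see the linked repository for their code.

```python
import torch
import torch.nn.functional as F

def joint_training_step(model, volume, labels, mask_ratio=0.4):
    """One hypothetical joint step: supervised voxel segmentation plus a
    self-supervised term that reconstructs predictions for masked-out
    input regions in the output segmentation space (a sketch, not the
    paper's exact method)."""
    # Supervised branch: segment the full volume.
    # logits_full: (B, C, D, H, W); labels: (B, D, H, W) with class indices.
    logits_full = model(volume)
    sup_loss = F.cross_entropy(logits_full, labels)

    # Self-supervised branch: zero out random voxels of the input
    # (a simplification of patch-level masked image modeling).
    mask = (torch.rand_like(volume) < mask_ratio).float()
    logits_masked = model(volume * (1.0 - mask))

    # Distillation-style consistency in the output segmentation space:
    # the masked-input prediction should match the full-input prediction.
    recon_loss = F.kl_div(
        F.log_softmax(logits_masked, dim=1),
        F.softmax(logits_full.detach(), dim=1),
        reduction="batchmean",
    )

    # Equal weighting of the two terms is an assumption.
    return sup_loss + recon_loss
```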

Place, publisher, year, edition, pages
Springer International Publishing AG, 2024. Vol. 15012, pp. 229-239
Series
Lecture Notes in Computer Science, ISSN 0302-9743
Keywords [en]
Volumetric medical segmentation; Masked image modeling; Knowledge distillation
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-210347
DOI: 10.1007/978-3-031-72390-2_22
ISI: 001344002100022
ISBN: 9783031723896 (print)
ISBN: 9783031723902 (electronic)
OAI: oai:DiVA.org:liu-210347
DiVA, id: diva2:1920001
Conference
27th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Palmeraie Conference Centre, Marrakesh, Morocco, October 6-10, 2024
Available from: 2024-12-10. Created: 2024-12-10. Last updated: 2024-12-10.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Search in DiVA

By author/editor
Khan, Fahad
By organisation
Computer Vision; Faculty of Science & Engineering
Computer Sciences
