Learning Representations with Contrastive Self-Supervised Learning for Histopathology Applications
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Sectra AB, Sweden. ORCID iD: 0000-0003-1066-3070
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). ORCID iD: 0000-0002-7765-1747
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). Sectra AB, Sweden. ORCID iD: 0000-0002-9368-0177
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). ORCID iD: 0000-0002-9217-9997
2022 (English). In: The Journal of Machine Learning for Biomedical Imaging, E-ISSN 2766-905X, Vol. 1, article id 023. Article in journal (Other academic). Published.
Abstract [en]

Unsupervised learning has made substantial progress over the last few years, especially by means of contrastive self-supervised learning. The dominating dataset for benchmarking self-supervised learning has been ImageNet, for which recent methods are approaching the performance achieved by fully supervised training. The ImageNet dataset is, however, largely object-centric, and it is not yet clear what potential those methods have on widely different datasets and tasks that are not object-centric, such as in digital pathology. While self-supervised learning has started to be explored within this area with encouraging results, there is reason to look closer at how this setting differs from natural images and ImageNet. In this paper we make an in-depth analysis of contrastive learning for histopathology, pinpointing how the contrastive objective will behave differently due to the characteristics of histopathology data. Using SimCLR and H&E stained images as a representative setting for contrastive self-supervised learning in histopathology, we bring forward a number of considerations, such as view generation for the contrastive objective and hyper-parameter tuning. In a large battery of experiments, we analyze how the downstream performance in tissue classification will be affected by these considerations. The results point to how contrastive learning can reduce the annotation effort within digital pathology, but that the specific dataset characteristics need to be considered. To take full advantage of the contrastive learning objective, different calibrations of view generation and hyper-parameters are required. Our results pave the way for realizing the full potential of self-supervised learning for histopathology applications. Code and trained models are available at https://github.com/k-stacke/ssl-pathology.
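The contrastive objective the abstract refers to is SimCLR's NT-Xent (normalized temperature-scaled cross-entropy) loss, which pulls together the two augmented views of the same image and pushes apart all other views in the batch. Below is a minimal NumPy sketch of that loss for intuition only; it is not the authors' implementation (their code is at the linked GitHub repository), and the function name, batch layout, and default temperature are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss over a batch of embeddings.

    z: array of shape (2N, d), where rows i and i+N are the two
       augmented views (the "positive pair") of the same image.
    """
    # L2-normalize so the dot product equals cosine similarity
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = z.shape[0] // 2
    sim = z @ z.T / temperature        # pairwise similarities, (2N, 2N)
    np.fill_diagonal(sim, -np.inf)     # exclude self-similarity
    # index of each sample's positive partner: i <-> i+N
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row's softmax against its positive
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(2 * n), pos]).mean()
```

The temperature and the augmentations used to generate the two views are exactly the kinds of knobs the paper argues need different calibration for histopathology than for ImageNet-style data.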

Place, publisher, year, edition, pages
Melba (The Journal of Machine Learning for Biomedical Imaging), 2022. Vol. 1, article id 023
National Category
Medical Imaging
Identifiers
URN: urn:nbn:se:liu:diva-189163
OAI: oai:DiVA.org:liu-189163
DiVA, id: diva2:1702938
Available from: 2022-10-12. Created: 2022-10-12. Last updated: 2025-02-09.

Open Access in DiVA

fulltext (26972 kB), 294 downloads
File information
File name: FULLTEXT01.pdf
File size: 26972 kB
Checksum (SHA-512): 8ea54e7a40405e797a89658ea98bc6d587f9e453a51f2378f078e67b20a768c6079b170ddcd032cd2c1534dfea78a671d9ba3eadc45097ce61e616ad92eb65e7
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text

Authority records

Stacke, Karin; Unger, Jonas; Lundström, Claes; Eilertsen, Gabriel
