A Study of Deep Learning Colon Cancer Detection in Limited Data Access Scenarios
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0003-0298-937X
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0003-1066-3070
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-9217-9997
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). ORCID iD: 0000-0002-7014-8874
2020 (English). Conference paper, Poster (with or without abstract) (Refereed)
Place, publisher, year, edition, pages
2020.
National Category
Medical Image Processing
Identifiers
URN: urn:nbn:se:liu:diva-169838
OAI: oai:DiVA.org:liu-169838
DiVA, id: diva2:1469072
Conference
International Conference on Learning Representations (ICLR) Workshop on AI for Overcoming Global Disparities in Cancer Care (AI4CC)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2020-09-20. Created: 2020-09-20. Last updated: 2023-04-03
In thesis
1. Synthetic data for visual machine learning: A data-centric approach
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Deep learning allows computers to learn from observations, that is, training data. Successful application development requires skills in neural network design, adequate computational resources, and a training data distribution that covers the application domain. We are currently witnessing a surge in artificial intelligence (AI), with enough computational power to train very deep networks and build models that achieve performance similar to or better than that of humans. The crucial factor for these algorithms to succeed has proven to be the training data fed to the learning process. Too little data, low-quality data, or data outside the target distribution leads to poorly performing models, no matter the model capacity or the regularization methods used.

This thesis takes a data-centric approach to AI and presents a set of contributions related to synthesizing images for training supervised visual machine learning. It is motivated by the profound potential of synthetic data in cases of low availability of captured data, expensive acquisition and annotation, and privacy and ethical issues. The presented work aims to generate images similar to samples drawn from the target distribution and to evaluate the generated data both as the sole training data source and in conjunction with captured imagery. For this, two synthesis methods are explored: computer graphics and generative modeling. Computer graphics-based generation methods and synthetic datasets for computer vision tasks are thoroughly reviewed. In the same context, a system employing procedural modeling and physically based rendering is introduced to generate data for urban scene understanding. The scheme is flexible, easily scalable, and produces complex and diverse images with pixel-perfect annotations at no cost. Generative Adversarial Networks (GANs) are also used to generate images for augmentation in small-data scenarios, a strategy that improves the model's performance and robustness. Finally, ensembles of independently trained GANs are investigated as a way to improve image diversity and to create synthetic data that can serve as the sole training source.
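
The GAN-ensemble idea mentioned above can be pictured with a small Python/PyTorch sketch. It is an illustrative assumption, not the thesis code: synthetic training images are drawn round-robin from several independently trained generators so that no single model dominates the batch; the tiny generator architecture and image size are placeholders.

# Illustrative sketch only (assumed setup, not the thesis implementation):
# sample synthetic images from an ensemble of independently trained GANs.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    # Placeholder DCGAN-style generator; the trained models would be far larger.
    def __init__(self, z_dim=64, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, channels, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

def sample_from_ensemble(generators, n_images, z_dim=64):
    # Round-robin over the ensemble to spread samples across all generators.
    images = []
    for i in range(n_images):
        g = generators[i % len(generators)]
        with torch.no_grad():
            images.append(g(torch.randn(1, z_dim)))
    return torch.cat(images, dim=0)

if __name__ == "__main__":
    ensemble = [TinyGenerator().eval() for _ in range(3)]  # stand-ins for trained generators
    synthetic_batch = sample_from_ensemble(ensemble, n_images=8)
    print(synthetic_batch.shape)  # torch.Size([8, 3, 16, 16])

A synthetic batch produced this way can be mixed with captured images for augmentation, or used on its own when captured data cannot be shared.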

The application areas of the presented contributions relate to two image modalities, natural and histopathology images, in order to cover different aspects of the generation methods and of the tasks' characteristics and requirements. Synthesized examples are showcased for natural images in automotive applications and weather classification, and for histopathology images in breast cancer and colon adenocarcinoma metastasis detection. This thesis, as a whole, promotes data-centric supervised deep learning development by highlighting the potential of synthetic data as a training data resource. It emphasizes the control over the formation process, the ability to produce multi-modality formats, and the automatic generation of annotations.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2022. p. 115
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2202
Keywords
Training data, Synthetic images, Computer graphics, Generative modeling, Natural images, Histopathology, Digital pathology, Machine learning, Deep learning
National Category
Medical Image Processing
Identifiers
URN: urn:nbn:se:liu:diva-182336
DOI: 10.3384/9789179291754
ISBN: 9789179291747
ISBN: 9789179291754
Public defence
2022-02-14, Domteatern, Visualiseringscenter C, Kungsgatan 54, Norrköping, 09:15 (English)
Note

The ISBN for the PDF has been added in the PDF version.

Available from: 2022-01-17. Created: 2022-01-17. Last updated: 2023-04-03. Bibliographically approved.
2. Deep Learning for Digital Pathology in Limited Data Scenarios
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The impressive technical advances in machine learning algorithms, in combination with the digitalization of medical images in radiology and pathology departments, show great promise for introducing powerful image analysis tools for image diagnostics. In particular, deep learning, a subfield within machine learning, has shown great success, advancing fields such as image classification and detection. However, these types of algorithms are used only to a very small extent in clinical practice.

One reason is that the unique nature of radiology and pathology images, and the clinical setting in which they are acquired, poses challenges not seen in other image domains. Differences relate to the capturing methods as well as to the image content. In addition, these datasets are unique not only on a per-image basis but also as collective datasets. Characteristics such as size, class balance, and availability of annotated labels make creating robust and generalizable deep learning methods a challenge.

This thesis investigates how deep learning models can be trained for applications in this domain, with a particular focus on histopathology data. We investigate how domain shift between different scanners causes performance drops and present ways of mitigating this. We also present a method to detect when domain shift occurs between different datasets. Another hurdle is the shortage of labeled data for medical applications, and this thesis looks at two different approaches to solving this problem. The first approach investigates how labeled data from one organ and cancer type can boost cancer classification in another organ where labeled data is scarce. The second approach looks at a specific type of unsupervised learning method, self-supervised learning, where the model is trained on unlabeled data. For both of these approaches, we present strategies to handle low-data regimes that may greatly increase the ability to build deep learning models for a wider range of applications.
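
The first, cross-organ approach can be illustrated with a minimal transfer-learning sketch in Python/PyTorch. Everything here (backbone choice, checkpoint path, data shapes) is an assumption for illustration, not the thesis setup: a classifier trained on a label-rich source organ initialises a model for the label-scarce target organ, and only the new classification head is fine-tuned.

# Illustrative sketch only: reuse a source-organ classifier to initialise a
# target-organ model, then fine-tune just the classification head.
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Model assumed to be trained on the label-rich source organ/cancer type.
source_model = resnet18(num_classes=2)
# source_model.load_state_dict(torch.load("source_organ_classifier.pt"))  # hypothetical path

# Transfer: copy the learned weights, then replace the classification head.
target_model = resnet18(num_classes=2)
target_model.load_state_dict(source_model.state_dict())
target_model.fc = nn.Linear(target_model.fc.in_features, 2)

# Freeze the feature extractor; train only the new head on the scarce target data.
for p in target_model.parameters():
    p.requires_grad = False
for p in target_model.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(target_model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Dummy stand-in for a small annotated target-organ batch.
images, labels = torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))
loss = criterion(target_model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))

With more target data, the frozen layers can be gradually unfrozen and fine-tuned at a lower learning rate.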

Furthermore, deep learning technology enables us to go beyond traditional medical domains and combine data from both radiology and pathology. This thesis presents a method for improved cancer characterization on contrast-enhanced CT by incorporating corresponding pathology data during training. The method shows the potential of improving future healthcare through integrated diagnostics made possible by machine-learning technology.
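
One generic way to picture "pathology available only during training" is a privileged-information setup, sketched below in Python/PyTorch. This is an assumed, simplified illustration rather than the thesis architecture, and both the network layers and the loss weighting are placeholders: a pathology encoder is used only at training time to regularise the CT branch's features, and inference requires the CT input alone.

# Illustrative sketch only: the pathology branch guides training, while
# inference uses nothing but the CT branch.
import torch
import torch.nn as nn

class CTBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(8, 2)

    def forward(self, x):
        f = self.features(x)
        return self.head(f), f

# Pathology encoder used only while training.
pathology_encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())

ct_model = CTBranch()
params = list(ct_model.parameters()) + list(pathology_encoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

# Dummy paired training batch: CT slice, matching pathology patch, tumour label.
ct, path_img, y = torch.randn(4, 1, 64, 64), torch.randn(4, 3, 64, 64), torch.randint(0, 2, (4,))

logits, ct_feat = ct_model(ct)
path_feat = pathology_encoder(path_img)
loss = nn.functional.cross_entropy(logits, y) \
     + 0.1 * nn.functional.mse_loss(ct_feat, path_feat)  # pull CT features toward pathology features
loss.backward()
optimizer.step()

# At inference time only the CT image is needed: logits, _ = ct_model(ct)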

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2022. p. 60
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2253
Keywords
Medical imaging, Digital pathology, Radiology, Machine learning, Deep learning
National Category
Cancer and Oncology
Identifiers
URN: urn:nbn:se:liu:diva-189009
DOI: 10.3384/9789179294748
ISBN: 9789179294731
ISBN: 9789179294748
Public defence
2022-11-14, Kåkenhus, K3, Campus Norrköping, Norrköping, 09:15 (English)
Available from: 2022-10-07. Created: 2022-10-07. Last updated: 2023-04-03. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

https://arxiv.org/abs/2005.10326

Authority records

Tsirikoglou, Apostolia; Stacke, Karin; Eilertsen, Gabriel; Lindvall, Martin; Unger, Jonas
