Ensembles of GANs for synthetic training data generation
Eilertsen, Gabriel. Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-9217-9997
Tsirikoglou, Apostolia. Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0003-0298-937X
Lundström, Claes. Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). ORCID iD: 0000-0002-9368-0177
Unger, Jonas. Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-7765-1747
2021 (English). Conference paper, oral presentation with published abstract (Refereed)
Abstract [en]

Insufficient training data is a major bottleneck for most deep learning practices, not least in medical imaging, where data is difficult to collect and publicly available datasets are scarce due to ethics and privacy. This work investigates the use of synthetic images, created by generative adversarial networks (GANs), as the only source of training data. We demonstrate that for this application, it is of great importance to make use of multiple GANs to improve the diversity of the generated data, i.e. to sufficiently cover the data distribution. While a single GAN can generate seemingly diverse image content, training on this data in most cases leads to severe over-fitting. We test the impact of ensembled GANs on synthetic 2D data as well as on common image datasets (SVHN and CIFAR-10), using both DCGANs and progressively growing GANs. As a specific use case, we focus on synthesizing digital pathology patches to provide anonymized training data.
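As a rough illustration of the approach described above, the sketch below pools samples from several independently trained GAN generators to form a synthetic training set that covers more of the data distribution than any single generator. It assumes PyTorch and a minimal DCGAN-style generator for 32x32 RGB images; the latent size, ensemble size, and architecture are illustrative assumptions, not the paper's actual settings, and in practice each generator would be trained separately on the real data before sampling.

    import torch
    import torch.nn as nn

    LATENT_DIM = 100              # assumed latent size
    N_GENERATORS = 5              # ensemble size, chosen for illustration
    SAMPLES_PER_GENERATOR = 1000  # synthetic images drawn from each generator

    class Generator(nn.Module):
        """Minimal DCGAN-style generator for 32x32 RGB images (illustrative only)."""
        def __init__(self, latent_dim: int = LATENT_DIM):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
                nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
            )

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            return self.net(z.view(z.size(0), -1, 1, 1))

    # In practice each generator is trained independently (e.g. different random
    # seeds and data shuffles); freshly initialized networks stand in for them here.
    generators = [Generator().eval() for _ in range(N_GENERATORS)]

    synthetic_batches = []
    with torch.no_grad():
        for gen in generators:
            z = torch.randn(SAMPLES_PER_GENERATOR, LATENT_DIM)
            synthetic_batches.append(gen(z))

    # The pooled samples form the ensemble's synthetic training set.
    synthetic_dataset = torch.cat(synthetic_batches, dim=0)
    print(synthetic_dataset.shape)  # (N_GENERATORS * SAMPLES_PER_GENERATOR, 3, 32, 32)

A downstream classifier would then be trained on this pooled set alone; how labels are attached (per-class generators, conditional GANs, or another scheme) is outside the scope of this sketch.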

Place, publisher, year, edition, pages
2021.
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:liu:diva-175900
OAI: oai:DiVA.org:liu-175900
DiVA, id: diva2:1557585
Conference
ICLR 2021 workshop on Synthetic Data Generation: Quality, Privacy, Bias
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Vinnova, grants 2019-05144 and 2017-02447 (AIDA)
ELLIIT - The Linköping-Lund Initiative on IT and Mobile Communications
Available from: 2021-05-26 Created: 2021-05-26 Last updated: 2022-01-17
In thesis
1. Synthetic data for visual machine learning: A data-centric approach
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Deep learning allows computers to learn from observations, i.e., training data. Successful application development requires skills in neural network design, adequate computational resources, and a training data distribution that covers the application domain. We are currently witnessing an artificial intelligence (AI) boom, with enough computational power to train very deep networks and build models that achieve performance similar to or better than that of humans. The crucial factor for the algorithms to succeed has proven to be the training data fed to the learning process. Too little data, low-quality data, or data outside the target distribution will lead to poorly performing models, no matter the model capacity or the data regularization methods.

This thesis takes a data-centric approach to AI and presents a set of contributions related to synthesizing images for training supervised visual machine learning. It is motivated by the profound potential of synthetic data in cases of low availability of captured data, expensive acquisition and annotation, and privacy and ethical issues. The presented work aims to generate images similar to samples drawn from the target distribution and to evaluate the generated data both as the sole training data source and in conjunction with captured imagery. For this, two synthesis methods are explored: computer graphics and generative modeling. Computer graphics-based generation methods and synthetic datasets for computer vision tasks are thoroughly reviewed. In the same context, a system employing procedural modeling and physically-based rendering is introduced for data generation for urban scene understanding. The scheme is flexible, easily scalable, and produces complex and diverse images with pixel-perfect annotations at no cost. Generative adversarial networks (GANs) are also used to generate images for augmentation in small-data scenarios; this strategy improves the model's performance and robustness. Finally, ensembles of independently trained GANs are investigated as a way to improve the diversity of the generated images and to create synthetic data that can serve as the only training source.

The application areas of the presented contributions cover two image modalities, natural and histopathology images, to address different aspects of the generation methods and of the tasks' characteristics and requirements. Synthesized examples are showcased for natural images in automotive applications and weather classification, and for histopathology images in breast cancer and colon adenocarcinoma metastasis detection. This thesis, as a whole, promotes data-centric supervised deep learning development by highlighting the potential of synthetic data as a training data resource. It emphasizes control over the formation process, the ability to produce multi-modality formats, and the automatic generation of annotations.
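The augmentation scenario mentioned in the abstract (captured data extended with GAN-generated images before supervised training) can be sketched as follows, assuming PyTorch. The tensor shapes, class count, and mixing ratio are placeholder assumptions, and the synthetic labels would in practice come from the generation process rather than being drawn at random.

    import torch
    from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

    # Placeholder tensors standing in for captured images/labels and for
    # GAN-generated images whose labels are provided by the generation pipeline.
    real_images = torch.rand(1000, 3, 32, 32)
    real_labels = torch.randint(0, 10, (1000,))
    synthetic_images = torch.rand(500, 3, 32, 32)   # e.g. samples from an ensemble of GANs
    synthetic_labels = torch.randint(0, 10, (500,))

    # The classifier then trains on the union of the two sources exactly as it
    # would on a purely captured dataset.
    train_set = ConcatDataset([
        TensorDataset(real_images, real_labels),
        TensorDataset(synthetic_images, synthetic_labels),
    ])
    train_loader = DataLoader(train_set, batch_size=64, shuffle=True)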

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2022. p. 115
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2202
Keywords
Training data, Synthetic images, Computer graphics, Generative modeling, Natural images, Histopathology, Digital pathology, Machine learning, Deep learning
National Category
Medical Image Processing
Identifiers
URN: urn:nbn:se:liu:diva-182336
DOI: 10.3384/9789179291754
ISBN: 9789179291747
ISBN: 9789179291754
Public defence
2022-02-14, Domteatern, Visualiseringscenter C, Kungsgatan 54, Norrköping, 09:15 (English)
Note

The ISBN for the PDF has been added in the PDF version.

Available from: 2022-01-17 Created: 2022-01-17 Last updated: 2023-04-03. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

https://arxiv.org/abs/2104.11797

Authority records

Eilertsen, Gabriel; Tsirikoglou, Apostolia; Lundström, Claes; Unger, Jonas
