Search publications in DiVA
Tsirikoglou, Apostolia (ORCID iD: orcid.org/0000-0003-0298-937x)
Publications (8 of 8)
Tsirikoglou, A. (2022). Synthetic data for visual machine learning: A data-centric approach. (Doctoral dissertation). Linköping: Linköping University Electronic Press
Synthetic data for visual machine learning: A data-centric approach
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Deep learning allows computers to learn from observations, also known as training data. Successful application development requires skills in neural network design, adequate computational resources, and a training data distribution that covers the application domain. We are currently witnessing an artificial intelligence (AI) boom, with enough computational power to train very deep networks and build models that achieve performance similar to or better than that of humans. The crucial factor for the algorithms to succeed has proven to be the training data fed to the learning process. Too little data, low-quality data, or data outside the target distribution will lead to poorly performing models, no matter the model capacity and the data regularization methods.

This thesis takes a data-centric approach to AI and presents a set of contributions related to synthesizing images for training supervised visual machine learning. It is motivated by the profound potential of synthetic data in cases of low availability of captured data, expensive acquisition and annotation, and privacy and ethical issues. The presented work aims to generate images similar to samples drawn from the target distribution and to evaluate the generated data both as the sole training data source and in conjunction with captured imagery. For this, two synthesis methods are explored: computer graphics and generative modeling. Computer graphics-based generation methods and synthetic datasets for computer vision tasks are thoroughly reviewed. In the same context, a system employing procedural modeling and physically-based rendering is introduced for data generation for urban scene understanding. The scheme is flexible, easily scalable, and produces complex and diverse images with pixel-perfect annotations at no cost. Generative Adversarial Networks (GANs) are also used to generate images for augmentation in small-data scenarios, a strategy that advances the model's performance and robustness. Finally, ensembles of independently trained GANs are investigated as a way to improve the diversity of the generated images and to create synthetic data that serves as the only training source.

The application areas of the presented contributions relate to two image modalities, natural and histopathology images, to cover different aspects of the generation methods and of the tasks' characteristics and requirements. Synthesized examples are showcased of natural images for automotive applications and weather classification, and of histopathology images for breast cancer and colon adenocarcinoma metastasis detection. This thesis, as a whole, promotes data-centric supervised deep learning development by highlighting the potential of synthetic data as a training data resource. It emphasizes the control over the formation process, the ability to produce multi-modality formats, and the automatic generation of annotations.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2022. p. 115
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524; 2202
Keywords
Training data, Synthetic images, Computer graphics, Generative modeling, Natural images, Histopathology, Digital pathology, Machine learning, Deep learning
National subject category
Medical Imaging
Identifiers
urn:nbn:se:liu:diva-182336 (URN); 10.3384/9789179291754 (DOI); 9789179291747 (ISBN); 9789179291754 (ISBN)
Public defence
2022-02-14, Domteatern, Visualiseringscenter C, Kungsgatan 54, Norrköping, 09:15 (English)
Note

The ISBN for the PDF has been added in the PDF version.

Available from: 2022-01-17. Created: 2022-01-17. Last updated: 2025-02-09. Bibliographically reviewed.
Eilertsen, G., Tsirikoglou, A., Lundström, C. & Unger, J. (2021). Ensembles of GANs for synthetic training data generation. Paper presented at the ICLR 2021 Workshop on Synthetic Data Generation: Quality, Privacy, Bias.
Ensembles of GANs for synthetic training data generation
2021 (English). Conference paper, oral presentation with published abstract (Refereed)
Abstract [en]

Insufficient training data is a major bottleneck for most deep learning practices, not least in medical imaging, where data are difficult to collect and publicly available datasets are scarce due to ethics and privacy. This work investigates the use of synthetic images, created by generative adversarial networks (GANs), as the only source of training data. We demonstrate that for this application it is of great importance to use multiple GANs to improve the diversity of the generated data, i.e., to sufficiently cover the data distribution. While a single GAN can generate seemingly diverse image content, training on this data in most cases leads to severe overfitting. We test the impact of GAN ensembles on synthetic 2D data as well as on common image datasets (SVHN and CIFAR-10), using both DCGANs and progressively growing GANs. As a specific use case, we focus on synthesizing digital pathology patches to provide anonymized training data.
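The core sampling idea can be sketched as follows; `sample_ensemble` and the stub generators are illustrative placeholders, not code from the paper. Each synthetic training sample is drawn from one GAN picked uniformly at random from the ensemble, so the combined set covers more of the data distribution than any single generator would.

```python
import random

def sample_ensemble(generators, n_samples, seed=None):
    """Build a synthetic training set by drawing each sample from a
    generator picked uniformly at random from the ensemble."""
    rng = random.Random(seed)
    return [rng.choice(generators)() for _ in range(n_samples)]

# Stand-ins for independently trained GAN generators (a real one would
# map a latent vector to an image); each stub just tags its origin so
# the sampling behaviour is visible.
gans = [lambda i=i: f"image_from_gan_{i}" for i in range(5)]
data = sample_ensemble(gans, 1000, seed=0)
```

Training then proceeds on `data` exactly as it would on captured imagery; the ensemble only changes how the synthetic set is drawn, not the downstream classifier.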

National subject category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-175900 (URN)
Conference
ICLR 2021 Workshop on Synthetic Data Generation: Quality, Privacy, Bias
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Vinnova, grant 2019-05144 and grant 2017-02447 (AIDA); ELLIIT - The Linköping-Lund Initiative on IT and Mobile Communications
Available from: 2021-05-26. Created: 2021-05-26. Last updated: 2022-01-17.
Tsirikoglou, A., Gladh, M., Sahlin, D., Eilertsen, G. & Unger, J. (2021). Generative inter-class transformations for imbalanced data weather classification. London Imaging Meeting, 2021, 16-20
Generative inter-class transformations for imbalanced data weather classification
2021 (English). In: London Imaging Meeting, E-ISSN 2694-118X, Vol. 2021, pp. 16-20. Article in journal (Refereed). Published.
Abstract [en]

This paper presents an evaluation of how data augmentation and inter-class transformations can be used to synthesize training data in low-data scenarios for single-image weather classification. In such scenarios, augmentation is a critical component, but there is a limit to how much improvement can be gained with classical augmentation strategies. Generative adversarial networks (GANs) have been demonstrated to generate impressive results and have also been successful as a tool for data augmentation, but mostly for images of limited diversity, such as in medical applications. We investigate the possibilities of using generative augmentations for balancing a small weather classification dataset in which one class has a reduced number of images. We compare intra-class augmentations, by means of classical transformations as well as noise-to-image GANs, to inter-class augmentations, where images from another class are transformed to the underrepresented class. The results show that it is possible to take advantage of GANs for inter-class augmentations to balance a small dataset for weather classification. This opens up for future work on GAN-based augmentations in scenarios where data is both diverse and scarce.
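The balancing scheme can be sketched as below. The `translate` callable stands in for a trained inter-class GAN (e.g. sunny-to-snowy); all names and the toy data are illustrative assumptions, not the paper's actual pipeline.

```python
import random

def balance_with_interclass(dataset, target_class, translate, goal, seed=0):
    """Top up an underrepresented class by translating images from the
    other classes into it, until it reaches `goal` examples.

    `dataset` is a list of (image, label) pairs; `translate` maps an
    image from any other class to a target-class image."""
    rng = random.Random(seed)
    balanced = list(dataset)
    count = sum(1 for _, y in dataset if y == target_class)
    donors = [x for x, y in dataset if y != target_class]
    while count < goal and donors:
        fake = translate(rng.choice(donors))  # generative inter-class transform
        balanced.append((fake, target_class))
        count += 1
    return balanced

# Toy imbalanced weather dataset: "snowy" is underrepresented.
data = [(f"sunny_{i}", "sunny") for i in range(50)] + \
       [(f"snowy_{i}", "snowy") for i in range(5)]
balanced = balance_with_interclass(data, "snowy",
                                   lambda img: f"to_snowy({img})", goal=50)
```

The classifier is then trained on `balanced`; the intra-class alternative in the paper would instead generate new target-class images from noise or classical transformations of the few existing ones.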

Place, publisher, year, edition, pages
Springfield, USA: Society for Imaging Science and Technology, 2021
National subject category
Computer Graphics and Computer Vision
Identifiers
urn:nbn:se:liu:diva-182334 (URN); 10.2352/issn.2694-118X.2021.LIM-16 (DOI)
Note

Funding: This project was funded by Knut and Alice Wallenberg Foundation, Wallenberg Autonomous Systems and Software Program, the strategic research environment ELLIIT, and ‘AI for Climate Adaptation’ through VINNOVA grant 2020-03388.

Available from: 2022-01-17. Created: 2022-01-17. Last updated: 2025-02-07. Bibliographically reviewed.
Tsirikoglou, A., Stacke, K., Eilertsen, G., Lindvall, M. & Unger, J. (2020). A Study of Deep Learning Colon Cancer Detection in Limited Data Access Scenarios. Paper presented at the International Conference on Learning Representations (ICLR) Workshop on AI for Overcoming Global Disparities in Cancer Care (AI4CC).
A Study of Deep Learning Colon Cancer Detection in Limited Data Access Scenarios
2020 (English). Conference paper, poster (with or without abstract) (Refereed)
National subject category
Medical Imaging
Identifiers
urn:nbn:se:liu:diva-169838 (URN)
Conference
International Conference on Learning Representations (ICLR) Workshop on AI for Overcoming Global Disparities in Cancer Care (AI4CC)
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2020-09-20. Created: 2020-09-20. Last updated: 2025-02-09.
Tsirikoglou, A., Eilertsen, G. & Unger, J. (2020). A Survey of Image Synthesis Methods for Visual Machine Learning. Computer graphics forum (Print), 39(6), 426-451
A Survey of Image Synthesis Methods for Visual Machine Learning
2020 (English). In: Computer Graphics Forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no. 6, pp. 426-451. Article in journal (Refereed). Published.
Abstract [en]

Image synthesis designed for machine learning applications provides the means to efficiently generate large quantities of training data while controlling the generation process to provide the best distribution and content variety. With the demands of deep learning applications, synthetic data have the potential of becoming a vital component in the training pipeline. Over the last decade, a wide variety of training data generation methods has been demonstrated, and the potential of future development calls for bringing these together for comparison and categorization. This survey provides a comprehensive list of the existing image synthesis methods for visual machine learning. These are categorized in the context of image generation, using a taxonomy based on modelling and rendering, and a classification is also made according to the computer vision applications in which they are used. We focus on the computer graphics aspects of the methods, to promote future image generation for machine learning. Finally, each method is assessed in terms of quality and reported performance, providing a hint of its expected learning potential. The report serves as a comprehensive reference, targeting both the application side and the data development side. A list of all methods and papers reviewed herein can be found at https://computergraphics.on.liu.se/image_synthesis_methods_for_visual_machine_learning/.

Place, publisher, year, edition, pages
John Wiley & Sons, 2020
Keywords
methods and applications
National subject category
Computer Graphics and Computer Vision
Identifiers
urn:nbn:se:liu:diva-169839 (URN); 10.1111/cgf.14047 (DOI); 000565504000001; 2-s2.0-85090446425 (Scopus ID)
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agencies: strategic research environment ELLIIT; Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation

Available from: 2020-09-20. Created: 2020-09-20. Last updated: 2025-02-07. Bibliographically reviewed.
Tsirikoglou, A., Kronander, J., Wrenninge, M. & Unger, J. (2017). Procedural modeling and physically based rendering for synthetic data generation in automotive applications.
Procedural modeling and physically based rendering for synthetic data generation in automotive applications
2017 (English). Other (Other academic)
Abstract [en]

We present an overview and evaluation of a new, systematic approach for the generation of highly realistic, annotated synthetic data for training deep neural networks in computer vision tasks. The main contribution is a procedural world modeling approach enabling high variability, coupled with physically accurate image synthesis; it is a departure from the hand-modeled virtual worlds and approximate image synthesis methods used in real-time applications. The benefits of our approach include flexible, physically accurate and scalable image synthesis, implicit wide coverage of classes and features, and complete data introspection for annotations, which all contribute to quality and cost efficiency. To evaluate our approach and the efficacy of the resulting data, we use semantic segmentation for autonomous vehicles and robotic navigation as the main application, and we train multiple deep learning architectures using synthetic data, with and without fine-tuning on organic (i.e., real-world) data. The evaluation shows that our approach improves the neural networks' performance and that even modest implementation efforts produce state-of-the-art results.
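The procedural generation loop can be sketched as follows. All parameter names and ranges here are illustrative assumptions, not the actual system's; the point is only that every scene is sampled from authored distributions, so variability comes for free and the full scene description is known at render time.

```python
import random

def sample_scene(rng):
    """Sample one procedural scene description from hand-authored
    parameter distributions (illustrative placeholders)."""
    return {
        "road_layout": rng.choice(["grid", "radial", "organic"]),
        "num_buildings": rng.randint(5, 40),
        "num_pedestrians": rng.randint(0, 100),
        "time_of_day_h": round(rng.uniform(0.0, 24.0), 2),
        "weather": rng.choice(["clear", "overcast", "rain", "fog"]),
    }

def generate_scenes(n, seed=0):
    """Sample n independent scene descriptions; a fixed seed makes the
    dataset reproducible."""
    rng = random.Random(seed)
    return [sample_scene(rng) for _ in range(n)]

scenes = generate_scenes(100)
```

Each description would then be handed to a physically based renderer, and because the generator knows the complete scene content, pixel-perfect annotations (e.g. semantic segmentation masks) can be emitted alongside every image without manual labeling.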

Publisher
p. 13
Series
arXiv.org; 1710.06270
National subject category
Other Engineering and Technologies
Identifiers
urn:nbn:se:liu:diva-165751 (URN)
Available from: 2020-05-19. Created: 2020-05-19. Last updated: 2025-02-18. Bibliographically reviewed.
Tsirikoglou, A., Kronander, J., Larsson, P., Tongbuasirilai, T., Gardner, A. & Unger, J. (2016). Differential appearance editing for measured BRDFs. Paper presented at the 43rd International Conference and Exhibition on Computer Graphics & Interactive Techniques, Anaheim, California, 24-28 July 2016. New York, NY, USA, Article ID 51.
Differential appearance editing for measured BRDFs
2016 (English). Conference paper, oral presentation with published abstract (Other academic)
Abstract [en]

Data-driven reflectance models using BRDF data measured from real materials, e.g. [Matusik et al. 2003], are becoming increasingly popular in product visualization, digital design and other applications driven by the need for predictable rendering and highly realistic results. Although recent analytic, parametric BRDFs provide good approximations for many materials, some effects are still not captured well [Löw et al. 2012]. Thus, it is hard to accurately model real materials using analytic models, even if the parameters are fitted to data. In practice, it is often desirable to apply small edits to the measured data for artistic purposes, or to model similar materials that are not available in measured form. A drawback of data-driven models is that they are often difficult to edit and do not lend themselves well to artistic adjustments. Existing editing techniques for measured data [Schmidt et al. 2014] often use complex decompositions, making them difficult to use in practice.

Place, publisher, year, edition, pages
New York, NY, USA, 2016
Series
SIGGRAPH ’16
Keywords
data-driven BRDFs, material editing
National subject category
Applied Mechanics
Identifiers
urn:nbn:se:liu:diva-163324 (URN); 10.1145/2897839.2927455 (DOI); 9781450342827 (ISBN)
Conference
The 43rd International Conference and Exhibition on Computer Graphics & Interactive Techniques, Anaheim, California, 24-28 July 2016
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2020-05-19. Created: 2020-05-19. Last updated: 2022-12-28. Bibliographically reviewed.
Tsirikoglou, A., Ekeberg, S., Vikström, J., Kronander, J. & Unger, J. (2014). S(wi)SS: A flexible and robust sub-surface scattering shader. In: Morten Fjeld (Ed.), Proceedings of SIGRAD 2014. Paper presented at SIGRAD 2014, June 12-13, 2014, Gothenburg, Sweden.
S(wi)SS: A flexible and robust sub-surface scattering shader
2014 (English). In: Proceedings of SIGRAD 2014 / [ed] Morten Fjeld, 2014. Conference paper, published paper (Refereed).
Abstract [en]

S(wi)SS is a new, flexible, artist-friendly multi-layered sub-surface scattering shader that accurately simulates sub-surface scattering for a large range of translucent materials. It is a physically motivated multi-layered approach where the sub-surface scattering effect is generated using one to three layers. It enables seamless mixing of the classical dipole, the better dipole and the quantized diffusion reflectance models in the sub-surface scattering layers, and additionally provides the scattering from front and back illumination, as well as all the BSDF components, in separate render channels, enabling the artist either to use them in a physically accurate way or to tweak them independently during compositing to produce the desired result. To demonstrate the usefulness of our approach, we show a set of high-quality rendering results from different user scenarios.

Keywords
Realistic image synthesis, sub-surface scattering
National subject category
Electrical Engineering and Electronics
Identifiers
urn:nbn:se:liu:diva-106939 (URN)
Conference
SIGRAD 2014, June 12-13, 2014, Gothenburg, Sweden
Project
VPS
Research funder
Swedish Foundation for Strategic Research (SSF)
Available from: 2014-05-27. Created: 2014-05-27. Last updated: 2021-05-26. Bibliographically reviewed.