Tsirikoglou, Apostolia (ORCID iD: orcid.org/0000-0003-0298-937x)
Publications (8 of 8)
Tsirikoglou, A. (2022). Synthetic data for visual machine learning: A data-centric approach. (Doctoral dissertation). Linköping: Linköping University Electronic Press
Synthetic data for visual machine learning: A data-centric approach
2022 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Deep learning allows computers to learn from observations, that is, training data. Successful application development requires skills in neural network design, adequate computational resources, and a training data distribution that covers the application domain. We are currently witnessing an artificial intelligence (AI) boom, with enough computational power to train very deep networks and build models that reach or surpass human performance. The crucial factor for the algorithms to succeed has proven to be the training data fed to the learning process. Data that is too scarce, of low quality, or outside the target distribution will lead to poorly performing models, no matter the model capacity or the regularization methods used.

This thesis takes a data-centric approach to AI and presents a set of contributions related to synthesizing images for training supervised visual machine learning. It is motivated by the profound potential of synthetic data in cases of low availability of captured data, expensive acquisition and annotation, and privacy and ethical issues. The presented work aims to generate images similar to samples drawn from the target distribution and to evaluate the generated data both as the sole training data source and in conjunction with captured imagery. For this, two synthesis methods are explored: computer graphics and generative modeling. Computer graphics-based generation methods and synthetic datasets for computer vision tasks are thoroughly reviewed. In the same context, a system employing procedural modeling and physically-based rendering is introduced for data generation for urban scene understanding. The scheme is flexible, easily scalable, and produces complex and diverse images with pixel-perfect annotations at no cost. Generative Adversarial Networks (GANs) are also used to generate images for augmentation in small-data scenarios, a strategy that improves the model's performance and robustness. Finally, ensembles of independently trained GANs are investigated as a way to improve the diversity of the generated images and to create synthetic data that can serve as the only training source.

The application areas of the presented contributions span two image modalities, natural and histopathology images, to cover different aspects of the generation methods and of the tasks' characteristics and requirements. Showcased examples include synthesized natural images for automotive applications and weather classification, and histopathology images for breast cancer and colon adenocarcinoma metastasis detection. As a whole, this thesis promotes data-centric supervised deep learning development by highlighting the potential of synthetic data as a training data resource. It emphasizes the control over the formation process, the ability to produce multi-modality formats, and the automatic generation of annotations.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2022. p. 115
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2202
Keywords
Training data, Synthetic images, Computer graphics, Generative modeling, Natural images, Histopathology, Digital pathology, Machine learning, Deep learning
National Category
Medical Imaging
Identifiers
urn:nbn:se:liu:diva-182336 (URN), 10.3384/9789179291754 (DOI), 9789179291747 (ISBN), 9789179291754 (ISBN)
Public defence
2022-02-14, Domteatern, Visualiseringscenter C, Kungsgatan 54, Norrköping, 09:15 (English)
Note

The ISBN for the PDF has been added in the PDF version.

Available from: 2022-01-17. Created: 2022-01-17. Last updated: 2025-02-09. Bibliographically approved
Eilertsen, G., Tsirikoglou, A., Lundström, C. & Unger, J. (2021). Ensembles of GANs for synthetic training data generation. Paper presented at the ICLR 2021 workshop on Synthetic Data Generation: Quality, Privacy, Bias.
Ensembles of GANs for synthetic training data generation
2021 (English) Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Insufficient training data is a major bottleneck for most deep learning practices, not least in medical imaging, where data is difficult to collect and publicly available datasets are scarce due to ethics and privacy. This work investigates the use of synthetic images, created by generative adversarial networks (GANs), as the only source of training data. We demonstrate that for this application it is of great importance to make use of multiple GANs to improve the diversity of the generated data, i.e. to sufficiently cover the data distribution. While a single GAN can generate seemingly diverse image content, training on this data in most cases leads to severe over-fitting. We test the impact of GAN ensembles on synthetic 2D data as well as on common image datasets (SVHN and CIFAR-10), using both DCGANs and progressively growing GANs. As a specific use case, we focus on synthesizing digital pathology patches to provide anonymized training data.
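The ensemble idea above can be sketched in a few lines: train several generators independently, then draw each synthetic training sample from a generator picked uniformly at random, so the union of the generators' modes is covered rather than any single generator's subset. The toy one-dimensional "GANs" below are hypothetical stand-ins for trained DCGANs or progressively growing GANs, not the paper's models.

```python
import random

# Hypothetical stand-in for a trained GAN generator: each toy "GAN"
# concentrates on one mode of the data distribution, mimicking how a
# single GAN tends to cover only part of the true distribution.
def make_toy_gan(mode_center, seed):
    rng = random.Random(seed)
    def generate():
        return mode_center + rng.gauss(0.0, 0.1)
    return generate

def sample_from_ensemble(gans, n, seed=0):
    """Draw n synthetic training samples, choosing a GAN uniformly at
    random for each sample so the ensemble's combined mode coverage is
    used instead of any single generator's subset."""
    rng = random.Random(seed)
    return [rng.choice(gans)() for _ in range(n)]

# three independently "trained" generators, each covering one mode
ensemble = [make_toy_gan(center, seed=i)
            for i, center in enumerate([0.0, 5.0, 10.0])]
data = sample_from_ensemble(ensemble, 300)
```

With 300 draws, every mode appears in the synthetic set, whereas sampling only `ensemble[0]` would yield data clustered around a single mode; this is the diversity argument in miniature.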

National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-175900 (URN)
Conference
ICLR 2021 workshop on Synthetic Data Generation: Quality, Privacy, Bias
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Vinnova, grant 2019-05144 and grant 2017-02447 (AIDA); ELLIIT - The Linköping-Lund Initiative on IT and Mobile Communications
Available from: 2021-05-26 Created: 2021-05-26 Last updated: 2022-01-17
Tsirikoglou, A., Gladh, M., Sahlin, D., Eilertsen, G. & Unger, J. (2021). Generative inter-class transformations for imbalanced data weather classification. London Imaging Meeting, 2021, 16-20
Generative inter-class transformations for imbalanced data weather classification
2021 (English) In: London Imaging Meeting, E-ISSN 2694-118X, Vol. 2021, p. 16-20. Article in journal (Refereed). Published
Abstract [en]

This paper presents an evaluation of how data augmentation and inter-class transformations can be used to synthesize training data in low-data scenarios for single-image weather classification. In such scenarios, augmentation is a critical component, but there is a limit to how much improvement can be gained using classical augmentation strategies. Generative adversarial networks (GANs) have been demonstrated to generate impressive results, and have also been successful as a tool for data augmentation, but mostly for images of limited diversity, such as in medical applications. We investigate the possibilities of using generative augmentations for balancing a small weather classification dataset, in which one class has a reduced number of images. We compare intra-class augmentations, by means of classical transformations as well as noise-to-image GANs, to inter-class augmentations, where images from another class are transformed to the underrepresented class. The results show that it is possible to take advantage of GANs for inter-class augmentations to balance a small dataset for weather classification. This opens up future work on GAN-based augmentations in scenarios where data is both diverse and scarce.
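The balancing setup described above can be made concrete with a small scheduling sketch: compute how many synthetic images each class needs to match the largest class, then split that budget between intra-class augmentations and inter-class transformations sourced from another class. This is a hypothetical helper for illustration only, not the paper's pipeline; the transforms themselves (flips, noise-to-image GANs, image-to-image translation) are out of scope here.

```python
import random

def balance_plan(class_counts, target=None):
    """Return how many synthetic images each class needs so all
    classes reach the size of the largest (or a given target)."""
    target = target or max(class_counts.values())
    return {c: target - n for c, n in class_counts.items()}

def augment_schedule(class_counts, inter_class_fraction=0.5, seed=0):
    """Split each class's augmentation budget between intra-class
    transforms and inter-class transforms, where images from another
    class would be translated into this class (e.g. sunny -> snowy)."""
    rng = random.Random(seed)
    plan = balance_plan(class_counts)
    schedule = {}
    for cls, need in plan.items():
        n_inter = round(need * inter_class_fraction)
        # pick a donor class for inter-class translation, if needed
        source = rng.choice([c for c in class_counts if c != cls]) if need else None
        schedule[cls] = {"intra": need - n_inter,
                         "inter": n_inter,
                         "inter_source": source}
    return schedule

# a small weather dataset in which "snow" is underrepresented
counts = {"sunny": 1000, "cloudy": 1000, "snow": 200}
sched = augment_schedule(counts)
```

Here the underrepresented "snow" class receives 800 synthetic images, half from classical intra-class transforms and half translated from a donor class, while the already-balanced classes receive none.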

Place, publisher, year, edition, pages
Springfield, USA: Society for Imaging Science and Technology, 2021
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:liu:diva-182334 (URN), 10.2352/issn.2694-118X.2021.LIM-16 (DOI)
Note

Funding: This project was funded by Knut and Alice Wallenberg Foundation, Wallenberg Autonomous Systems and Software Program, the strategic research environment ELLIIT, and ‘AI for Climate Adaptation’ through VINNOVA grant 2020-03388.

Available from: 2022-01-17. Created: 2022-01-17. Last updated: 2025-02-07. Bibliographically approved
Tsirikoglou, A., Stacke, K., Eilertsen, G., Lindvall, M. & Unger, J. (2020). A Study of Deep Learning Colon Cancer Detection in Limited Data Access Scenarios. Paper presented at the International Conference on Learning Representations (ICLR) Workshop on AI for Overcoming Global Disparities in Cancer Care (AI4CC).
A Study of Deep Learning Colon Cancer Detection in Limited Data Access Scenarios
2020 (English) Conference paper, Poster (with or without abstract) (Refereed)
National Category
Medical Imaging
Identifiers
urn:nbn:se:liu:diva-169838 (URN)
Conference
International Conference on Learning Representations (ICLR) Workshop on AI for Overcoming Global Disparities in Cancer Care (AI4CC)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2020-09-20 Created: 2020-09-20 Last updated: 2025-02-09
Tsirikoglou, A., Eilertsen, G. & Unger, J. (2020). A Survey of Image Synthesis Methods for Visual Machine Learning. Computer graphics forum (Print), 39(6), 426-451
A Survey of Image Synthesis Methods for Visual Machine Learning
2020 (English) In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no 6, p. 426-451. Article in journal (Refereed). Published
Abstract [en]

Image synthesis designed for machine learning applications provides the means to efficiently generate large quantities of training data while controlling the generation process to provide the best distribution and content variety. Given the demands of deep learning applications, synthetic data has the potential of becoming a vital component in the training pipeline. Over the last decade, a wide variety of training data generation methods has been demonstrated, and the potential of future development calls for bringing these together for comparison and categorization. This survey provides a comprehensive list of the existing image synthesis methods for visual machine learning. These are categorized in the context of image generation, using a taxonomy based on modelling and rendering, while a classification is also made with respect to the computer vision applications they are used for. We focus on the computer graphics aspects of the methods, to promote future image generation for machine learning. Finally, each method is assessed in terms of quality and reported performance, providing a hint of its expected learning potential. The report serves as a comprehensive reference, targeting both the application and the data development sides. A list of all methods and papers reviewed herein can be found at https://computergraphics.on.liu.se/image_synthesis_methods_for_visual_machine_learning/.

Place, publisher, year, edition, pages
John Wiley & Sons, 2020
Keywords
methods and applications
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:liu:diva-169839 (URN), 10.1111/cgf.14047 (DOI), 000565504000001, 2-s2.0-85090446425 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Funding agencies: strategic research environment ELLIIT; Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation

Available from: 2020-09-20. Created: 2020-09-20. Last updated: 2025-02-07. Bibliographically approved
Tsirikoglou, A., Kronander, J., Wrenninge, M. & Unger, J. (2017). Procedural modeling and physically based rendering for synthetic data generation in automotive applications.
Procedural modeling and physically based rendering for synthetic data generation in automotive applications
2017 (English) Other (Other academic)
Abstract [en]

We present an overview and evaluation of a new, systematic approach for the generation of highly realistic, annotated synthetic data for training deep neural networks in computer vision tasks. The main contribution is a procedural world modeling approach enabling high variability, coupled with physically accurate image synthesis; it is a departure from the hand-modeled virtual worlds and approximate image synthesis methods used in real-time applications. The benefits of our approach include flexible, physically accurate and scalable image synthesis, implicit wide coverage of classes and features, and complete data introspection for annotations, all of which contribute to quality and cost efficiency. To evaluate our approach and the efficacy of the resulting data, we use semantic segmentation for autonomous vehicles and robotic navigation as the main application, and we train multiple deep learning architectures using synthetic data, with and without fine-tuning on organic (i.e. real-world) data. The evaluation shows that our approach improves the neural networks' performance and that even modest implementation efforts produce state-of-the-art results.
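The "annotations for free" property of procedural generation can be sketched as follows: the scene description that drives rendering already records every object's class and placement, so labels are read back out of the scene graph rather than annotated by hand. The class names and parameter ranges below are illustrative stand-ins, not the paper's actual world model.

```python
import random

# Illustrative object classes for an urban scene; the real procedural
# world model is far richer (road networks, materials, lighting, ...).
CLASSES = ["car", "pedestrian", "building", "vegetation"]

def sample_scene(seed):
    """Procedurally sample a scene description: a seeded random draw
    over object counts, classes, and placements gives high variability
    with full reproducibility."""
    rng = random.Random(seed)
    objects = []
    for _ in range(rng.randint(5, 20)):
        objects.append({
            "class": rng.choice(CLASSES),
            "position": (rng.uniform(-50.0, 50.0), rng.uniform(-50.0, 50.0)),
            "scale": rng.uniform(0.5, 2.0),
        })
    return {"seed": seed, "objects": objects}

def annotations(scene):
    # pixel-perfect masks would come from rendering object IDs;
    # here we simply read the class labels back out of the scene graph
    return [obj["class"] for obj in scene["objects"]]

scene = sample_scene(42)
labels = annotations(scene)
```

Because the generator and the annotator consume the same scene description, the labels cannot drift out of sync with the rendered imagery, which is the core of the "complete data introspection" benefit.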

Publisher
p. 13
Series
arXiv.org ; 1710.06270
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:liu:diva-165751 (URN)
Available from: 2020-05-19. Created: 2020-05-19. Last updated: 2025-02-18. Bibliographically approved
Tsirikoglou, A., Kronander, J., Larsson, P., Tongbuasirilai, T., Gardner, A. & Unger, J. (2016). Differential appearance editing for measured BRDFs. Paper presented at the 43rd International Conference and Exhibition on Computer Graphics & Interactive Techniques (SIGGRAPH 2016), Anaheim, California, 24-28 July 2016. New York, NY, USA, Article ID 51.
Differential appearance editing for measured BRDFs
2016 (English) Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

Data-driven reflectance models using BRDF data measured from real materials, e.g. [Matusik et al. 2003], are becoming increasingly popular in product visualization, digital design and other applications driven by the need for predictable rendering and highly realistic results. Although recent analytic, parametric BRDFs provide good approximations for many materials, some effects are still not captured well [Löw et al. 2012]. Thus, it is hard to accurately model real materials using analytic models, even if the parameters are fitted to data. In practice, it is often desirable to apply small edits to the measured data for artistic purposes, or to model similar materials that are not available in measured form. A drawback of data-driven models is that they are often difficult to edit and do not lend themselves well to artistic adjustments. Existing editing techniques for measured data [Schmidt et al. 2014] often use complex decompositions, making them difficult to use in practice.

Place, publisher, year, edition, pages
New York, NY, USA, 2016
Series
SIGGRAPH ’16
Keywords
data-driven BRDFs, material editing
National Category
Applied Mechanics
Identifiers
urn:nbn:se:liu:diva-163324 (URN), 10.1145/2897839.2927455 (DOI), 9781450342827 (ISBN)
Conference
The 43rd International Conference and Exhibition on Computer Graphics & Interactive Techniques (SIGGRAPH 2016), Anaheim, California, 24-28 July 2016
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2020-05-19. Created: 2020-05-19. Last updated: 2022-12-28. Bibliographically approved
Tsirikoglou, A., Ekeberg, S., Vikström, J., Kronander, J. & Unger, J. (2014). S(wi)SS: A flexible and robust sub-surface scattering shader. In: Morten Fjeld (Ed.), Proceedings of SIGRAD 2014. Paper presented at SIGRAD 2014, June 12-13, 2014, Gothenburg, Sweden.
S(wi)SS: A flexible and robust sub-surface scattering shader
2014 (English) In: Proceedings of SIGRAD 2014 / [ed] Morten Fjeld, 2014. Conference paper, Published paper (Refereed)
Abstract [en]

S(wi)SS is a new, flexible, artist-friendly multi-layered sub-surface scattering shader that accurately simulates sub-surface scattering for a large range of translucent materials. It is a physically motivated multi-layered approach where the sub-surface scattering effect is generated using one to three layers. It enables seamless mixing of the classical dipole, the better dipole and the quantized diffusion reflectance models in the sub-surface scattering layers, and additionally provides the scattering coming from front and back illumination, as well as all the BSDF components, in separate render channels, enabling the artist to either use them in a physically accurate way or tweak them independently during compositing to produce the desired result. To demonstrate the usefulness of our approach, we show a set of high-quality rendering results from different user scenarios.
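The per-layer mixing and separate-channel output described above can be sketched as follows. The one-dimensional profile functions below are hypothetical stand-ins for the three reflectance models (the real classical dipole, better dipole and quantized diffusion profiles depend on the material's scattering parameters and are far more involved), and the simple sum over layers is an illustrative simplification of the shader's layering.

```python
import math

# Hypothetical 1-D stand-ins for the three diffusion profiles the
# shader can mix; r is the distance from the point of incidence.
def classical_dipole(r):
    return math.exp(-1.0 * r) / max(r, 1e-6)

def better_dipole(r):
    return math.exp(-1.2 * r) / max(r, 1e-6)

def quantized_diffusion(r):
    return math.exp(-0.8 * r) / max(r, 1e-6)

def layer_profile(r, weights):
    """Seamlessly blend the three models inside one scattering layer
    using normalized weights."""
    total = sum(weights)
    w = [x / total for x in weights]
    models = (classical_dipole, better_dipole, quantized_diffusion)
    return sum(wi * m(r) for wi, m in zip(w, models))

def shade(r, layers):
    """Evaluate one to three layers, keeping each layer's contribution
    in its own channel so it can be tweaked during compositing, and
    also return the physically combined result."""
    channels = [layer_profile(r, weights) for weights in layers]
    return channels, sum(channels)

# layer 1: pure classical dipole; layer 2: 50/50 classical/better dipole
channels, combined = shade(0.5, [[1, 0, 0], [0.5, 0.5, 0.0]])
```

Keeping the per-layer channels alongside the combined value mirrors the shader's design choice: the artist can use the physically accurate sum or rebalance individual channels in compositing.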

Keywords
Realistic image synthesis, sub-surface scattering
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-106939 (URN)
Conference
SIGRAD 2014, June 12-13, 2014, Gothenburg, Sweden
Projects
VPS
Funder
Swedish Foundation for Strategic Research
Available from: 2014-05-27. Created: 2014-05-27. Last updated: 2021-05-26. Bibliographically approved