liu.se: Search for publications in DiVA
Publications (10 of 76)
Kavoosighafi, B., Frisvad, J. R., Hajisharif, S., Unger, J. & Miandji, E. (2023). SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions. In: : . Paper presented at Eurographics Symposium on Rendering (EGSR), Delft, The Netherlands, 28 - 30 June, 2023.
2023 (English) Conference paper, Published paper (Refereed)
Abstract [en]

We propose a novel dictionary-based representation learning model for Bidirectional Texture Functions (BTFs) aiming at compact storage, real-time rendering performance, and high image quality. Our model is trained once, using a small training set, and then used to obtain a sparse tensor containing the model parameters. Our technique exploits redundancies in the data across all dimensions simultaneously, as opposed to existing methods that use only angular information and ignore correlations in the spatial domain. We show that our model admits efficient angular interpolation directly in the model space, rather than the BTF space, leading to a notably higher rendering speed than in previous work. Additionally, the high quality-storage cost tradeoff enabled by our method facilitates controlling the image quality, storage cost, and rendering speed using a single parameter, the number of coefficients. Previous methods rely on a fixed number of latent variables for training and testing, hence limiting the potential for achieving a favorable quality-storage cost tradeoff and scalability. Our experimental results demonstrate that our method outperforms existing methods both quantitatively and qualitatively, as well as achieving a higher compression ratio and rendering speed.
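The core idea, a sparse coefficient tensor combined with one small dictionary per dimension, can be sketched as follows. The shapes, dictionary sizes, and the Tucker-style mode products below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruct(core, dicts):
    """Multiply a sparse core tensor by one dictionary per mode
    (a Tucker-style reconstruction)."""
    out = core
    for mode, D in enumerate(dicts):
        out = np.tensordot(D, out, axes=(1, mode))
        # tensordot moves the new axis to the front; rotate it back
        out = np.moveaxis(out, 0, mode)
    return out

# Toy 3-mode example: an 8x8x8 signal block, 4 atoms per mode,
# and only 5 nonzero coefficients in the core tensor.
dicts = [rng.standard_normal((8, 4)) for _ in range(3)]
core = np.zeros((4, 4, 4))
idx = rng.integers(0, 4, size=(5, 3))
core[idx[:, 0], idx[:, 1], idx[:, 2]] = rng.standard_normal(5)

btf_block = reconstruct(core, dicts)
print(btf_block.shape)  # (8, 8, 8)
```

Because the core is sparse, evaluating a single texel-direction pair only needs the few nonzero coefficients, which is what makes this kind of representation attractive for real-time rendering.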

National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-195283 (URN)
Conference
Eurographics Symposium on Rendering (EGSR), Delft, The Netherlands, 28 - 30 June, 2023
Available from: 2023-06-19 Created: 2023-06-19 Last updated: 2023-06-28 Bibliographically approved
Vrotsou, K., Navarra, C., Kucher, K., Fedorov, I., Schück, F., Unger, J. & Neset, T.-S. (2023). Towards a Volunteered Geographic Information-Facilitated Visual Analytics Pipeline to Improve Impact-Based Weather Warning Systems. Atmosphere, 14(7), Article ID 1141.
2023 (English) In: Atmosphere, ISSN 2073-4433, E-ISSN 2073-4433, Vol. 14, no 7, article id 1141. Article in journal (Refereed) Published
Abstract [en]

Extreme weather events, such as flooding, are expected to increase in frequency and intensity. Therefore, the prediction of extreme weather events, assessment of their local impacts in urban environments, and implementation of adaptation measures are becoming high-priority challenges for local, regional, and national agencies and authorities. To manage these challenges, access to accurate weather warnings and to information about the occurrence, extent, and impacts of extreme weather events is crucial. As a result, in addition to official sources of information for prediction and monitoring, citizen volunteered geographic information (VGI) has emerged as a complementary source of valuable information. In this work, we propose the formulation of an approach to complement the impact-based weather warning system that was introduced in Sweden in 2021 by making use of such alternative sources of data. We present and discuss design considerations and opportunities towards the creation of a visual analytics (VA) pipeline for the identification and exploration of extreme weather events and their impacts from VGI texts and images retrieved from social media. The envisioned VA pipeline incorporates three main steps: (1) data collection, (2) image/text classification and analysis, and (3) visualization and exploration through an interactive visual interface. We envision that our work has the potential to support three processes that involve multiple stakeholders of the weather warning system: (1) the validation of previously issued warnings, (2) local and regional assessment-support documentation, and (3) the monitoring of ongoing events. The results of this work could thus generate information that is relevant to climate adaptation decision making and provide potential support for the future development of national weather warning systems.

Place, publisher, year, edition, pages
MDPI, 2023
Keywords
weather warning systems, flooding, volunteered geographic information, visualization, visual analytics, artificial intelligence, machine learning, natural language processing, classification, social media
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-196332 (URN) 10.3390/atmos14071141 (DOI) 001037893300001 ()
Projects
AI4ClimateAdaptation
Funder
Vinnova, 2020-03388
Note

This research was funded by Sweden's Innovation Agency, VINNOVA, grant number 2020-03388, 'AI for Climate Adaptation'.

Available from: 2023-07-18 Created: 2023-07-18 Last updated: 2023-12-07
Hanji, P., Mantiuk, R. K., Eilertsen, G., Hajisharif, S. & Unger, J. (2022). Comparison of single image HDR reconstruction methods — the caveats of quality assessment. In: ACM SIGGRAPH ’22 Conference Proceedings. Paper presented at SIGGRAPH 2022, Vancouver, BC, Canada, 8-11 August 2022.
2022 (English) In: ACM SIGGRAPH ’22 Conference Proceedings, 2022. Conference paper, Published paper (Refereed)
Abstract [en]

As the problem of reconstructing high dynamic range (HDR) images from a single exposure has attracted much research effort, it is essential to provide a robust protocol and clear guidelines on how to evaluate and compare new methods. In this work, we compared six recent single image HDR reconstruction (SI-HDR) methods in a subjective image quality experiment on an HDR display. We found that only two methods produced results that are, on average, more preferred than the unprocessed single exposure images. When the same methods are evaluated using image quality metrics, as typically done in papers, the metric predictions correlate poorly with subjective quality scores. The main reason is a significant tone and color difference between the reference and reconstructed HDR images. To improve the predictions of image quality metrics, we propose correcting for the inaccuracies of the estimated camera response curve before computing quality values. We further analyze the sources of prediction noise when evaluating SI-HDR methods and demonstrate that existing metrics can reliably predict only large quality differences.
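The response-correction idea can be illustrated with a simple stand-in: fit a global gain-and-gamma relation between the reconstruction and the reference in the log domain, then apply it before computing metrics. This is only a sketch of compensating an inaccurately estimated camera response; the paper's actual correction procedure may differ:

```python
import numpy as np

def correct_response(est, ref, eps=1e-6):
    """Fit log(ref) ~ a*log(est) + b (a global gain-and-gamma model)
    by least squares, then apply the fitted correction to the estimate.
    A stand-in for compensating an inaccurate inverse camera response
    before computing image quality metric values."""
    est_l = np.log(np.maximum(est, eps)).ravel()
    ref_l = np.log(np.maximum(ref, eps)).ravel()
    a, b = np.polyfit(est_l, ref_l, 1)          # slope, intercept
    return np.exp(a * np.log(np.maximum(est, eps)) + b)

# Synthetic check: an estimate that is a pure gamma-and-gain distortion
# of the reference should be mapped back onto the reference.
rng = np.random.default_rng(3)
ref = rng.uniform(0.01, 10.0, size=(16, 16))
est = 0.5 * ref ** (1 / 2.2)
corrected = correct_response(est, ref)
```

A global two-parameter fit like this removes systematic tone differences without rewarding a method for per-pixel agreement it does not have.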

Keywords
High dynamic range, inverse problems, image quality metrics
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-186401 (URN) 10.1145/3528233.3530729 (DOI) 9781450393379 (ISBN)
Conference
SIGGRAPH 2022, Vancouver, BC, Canada, 8-11 August 2022
Note

Funding: This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement N° 725253–EyeCode)

Available from: 2022-06-23 Created: 2022-06-23 Last updated: 2023-09-29 Bibliographically approved
Jönsson, D., Kronander, J., Unger, J., Schön, T. & Wrenninge, M. (2022). Direct Transmittance Estimation in Heterogeneous Participating Media Using Approximated Taylor Expansions. IEEE Transactions on Visualization and Computer Graphics, 28(7), 2602-2614
2022 (English) In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 28, no 7, p. 2602-2614. Article in journal (Refereed) Published
Abstract [en]

Evaluating the transmittance between two points along a ray is a key component in solving the light transport through heterogeneous participating media and entails computing an intractable exponential of the integrated medium's extinction coefficient. While algorithms for estimating this transmittance exist, there is a lack of theoretical knowledge about their behaviour, which also prevents new theoretically sound algorithms from being developed. For this purpose, we introduce a new class of unbiased transmittance estimators based on random sampling or truncation of a Taylor expansion of the exponential function. In contrast to classical tracking algorithms, these estimators are non-analogous to the physical light transport process and directly sample the underlying extinction function without performing incremental advancement. We present several versions of the new class of estimators, based on either importance sampling or Russian roulette, to provide finite unbiased estimators of the infinite Taylor series expansion. We also show that the well-known ratio tracking algorithm can be seen as a special case of the new class of estimators. Lastly, we conduct performance evaluations on both the central processing unit (CPU) and the graphics processing unit (GPU), and the results demonstrate that the new algorithms outperform traditional algorithms for heterogeneous media.
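The flavor of such estimators can be shown with a generic power-series transmittance estimator: draw the Taylor order from a Poisson distribution, then estimate each factor of the integrated extinction with an independent sample along the ray. This is a minimal sketch in the spirit of the paper, not its exact algorithm; the Poisson order selection and the toy extinction function are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def transmittance_estimate(sigma, a, b, lam, rng):
    """One sample of an unbiased power-series transmittance estimator:
    draw the Taylor order K ~ Poisson(lam), then estimate each factor of
    tau^K with an independent uniform sample of the extinction on [a, b].
    The expectation over samples equals exp(-integral of sigma)."""
    K = rng.poisson(lam)
    est = np.exp(lam)                  # compensates the Poisson pick probability
    for _ in range(K):
        u = rng.uniform(a, b)
        tau_hat = sigma(u) * (b - a)   # unbiased estimate of the line integral
        est *= -tau_hat / lam
    return est

# Toy heterogeneous extinction along a unit-length ray
sigma = lambda x: 0.8 + 0.5 * np.sin(3.0 * x)
a, b = 0.0, 1.0
true_tau = 0.8 + (0.5 / 3.0) * (1.0 - np.cos(3.0))   # analytic integral
samples = [transmittance_estimate(sigma, a, b, 2.0, rng) for _ in range(200_000)]
print(np.mean(samples), np.exp(-true_tau))
```

Unlike delta or ratio tracking, nothing here marches incrementally through the medium; the estimator samples the extinction function directly, which is the property the abstract highlights.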

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
Media, Taylor series, Rendering (computer graphics), Estimation, Upper bound, Monte Carlo methods
National Category
Signal Processing Media and Communication Technology Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-178602 (URN) 10.1109/TVCG.2020.3035516 (DOI) 000801853400005 () 33141672 (PubMedID)
Funder
Knut and Alice Wallenberg Foundation, 2013-0076; Swedish e-Science Research Center; Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Foundation for Strategic Research, RIT15-0012; ELLIIT - The Linköping-Lund Initiative on IT and Mobile Communications
Note

Funding: Knut and Alice Wallenberg Foundation (KAW) [2013-0076]; SeRC (Swedish e-Science Research Center); Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Foundation for Strategic Research (SSF) via the project ASSEMBLE [RIT15-0012]; ELLIIT environment for strategic research in Sweden

Available from: 2021-08-24 Created: 2021-08-24 Last updated: 2023-01-13 Bibliographically approved
Stacke, K., Unger, J., Lundström, C. & Eilertsen, G. (2022). Learning Representations with Contrastive Self-Supervised Learning for Histopathology Applications. The Journal of Machine Learning for Biomedical Imaging, 1, Article ID 023.
2022 (English) In: The Journal of Machine Learning for Biomedical Imaging, E-ISSN 2766-905X, Vol. 1, article id 023. Article in journal (Other academic) Published
Abstract [en]

Unsupervised learning has made substantial progress over the last few years, especially by means of contrastive self-supervised learning. The dominating dataset for benchmarking self-supervised learning has been ImageNet, for which recent methods are approaching the performance achieved by fully supervised training. The ImageNet dataset is however largely object-centric, and it is not yet clear what potential those methods have on widely different datasets and tasks that are not object-centric, such as in digital pathology. While self-supervised learning has started to be explored within this area with encouraging results, there is reason to look closer at how this setting differs from natural images and ImageNet. In this paper we make an in-depth analysis of contrastive learning for histopathology, pinpointing how the contrastive objective will behave differently due to the characteristics of histopathology data. Using SimCLR and H&E stained images as a representative setting for contrastive self-supervised learning in histopathology, we bring forward a number of considerations, such as view generation for the contrastive objective and hyper-parameter tuning. In a large battery of experiments, we analyze how the downstream performance in tissue classification will be affected by these considerations. The results point to how contrastive learning can reduce the annotation effort within digital pathology, but that the specific dataset characteristics need to be considered. To take full advantage of the contrastive learning objective, different calibrations of view generation and hyper-parameters are required. Our results pave the way for realizing the full potential of self-supervised learning for histopathology applications. Code and trained models are available at https://github.com/k-stacke/ssl-pathology.
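SimCLR's contrastive objective, the NT-Xent loss, is compact enough to write out; this NumPy version is a minimal reference sketch (the paper builds on standard SimCLR, but the batch layout and names here are assumptions):

```python
import numpy as np

def nt_xent(z, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross entropy), the
    contrastive objective used by SimCLR.  z holds 2N embeddings where
    rows i and i+N are the two augmented views of the same image."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    n2 = z.shape[0]
    n = n2 // 2
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)       # a view is never its own positive
    # row-wise log-softmax, then pick out each row's positive pair
    logprob = sim - np.log(np.sum(np.exp(sim), axis=1, keepdims=True))
    pos = np.concatenate([np.arange(n, n2), np.arange(0, n)])
    return -np.mean(logprob[np.arange(n2), pos])
```

The temperature and the way views are generated are exactly the knobs the abstract argues must be recalibrated for histopathology data.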

Place, publisher, year, edition, pages
Melba (The Journal of Machine Learning for Biomedical Imaging), 2022
National Category
Medical Image Processing
Identifiers
urn:nbn:se:liu:diva-189163 (URN)
Available from: 2022-10-12 Created: 2022-10-12 Last updated: 2023-04-03
Eilertsen, G., Tsirikoglou, A., Lundström, C. & Unger, J. (2021). Ensembles of GANs for synthetic training data generation. In: : . Paper presented at ICLR 2021 workshop on Synthetic Data Generation: Quality, Privacy, Bias.
2021 (English) Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Insufficient training data is a major bottleneck for most deep learning practices, not least in medical imaging where data is difficult to collect and publicly available datasets are scarce due to ethics and privacy. This work investigates the use of synthetic images, created by generative adversarial networks (GANs), as the only source of training data. We demonstrate that for this application, it is of great importance to make use of multiple GANs to improve the diversity of the generated data, i.e., to sufficiently cover the data distribution. While a single GAN can generate seemingly diverse image content, training on this data in most cases leads to severe over-fitting. We test the impact of ensembled GANs on synthetic 2D data as well as common image datasets (SVHN and CIFAR-10), using both DCGANs and progressively growing GANs. As a specific use case, we focus on synthesizing digital pathology patches to provide anonymized training data.

National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-175900 (URN)
Conference
ICLR 2021 workshop on Synthetic Data Generation: Quality, Privacy, Bias
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Vinnova, grant 2019-05144 and grant 2017-02447 (AIDA); ELLIIT - The Linköping-Lund Initiative on IT and Mobile Communications
Available from: 2021-05-26 Created: 2021-05-26 Last updated: 2022-01-17
Tsirikoglou, A., Gladh, M., Sahlin, D., Eilertsen, G. & Unger, J. (2021). Generative inter-class transformations for imbalanced data weather classification. London Imaging Meeting, 2021, 16-20
2021 (English) In: London Imaging Meeting, E-ISSN 2694-118X, Vol. 2021, p. 16-20. Article in journal (Refereed) Published
Abstract [en]

This paper presents an evaluation of how data augmentation and inter-class transformations can be used to synthesize training data in low-data scenarios for single-image weather classification. In such scenarios, augmentation is a critical component, but there is a limit to how much improvement can be gained using classical augmentation strategies. Generative adversarial networks (GANs) have been demonstrated to generate impressive results, and have also been successful as a tool for data augmentation, but mostly for images of limited diversity, such as in medical applications. We investigate the possibilities of using generative augmentations for balancing a small weather classification dataset, where one class has a reduced number of images. We compare intra-class augmentations by means of classical transformations as well as noise-to-image GANs, to inter-class augmentations where images from another class are transformed to the underrepresented class. The results show that it is possible to take advantage of GANs for inter-class augmentations to balance a small dataset for weather classification. This opens up future work on GAN-based augmentations in scenarios where data is both diverse and scarce.

Place, publisher, year, edition, pages
Springfield, USA: Society for Imaging Science and Technology, 2021
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-182334 (URN)10.2352/issn.2694-118X.2021.LIM-16 (DOI)
Note

Funding: This project was funded by Knut and Alice Wallenberg Foundation, Wallenberg Autonomous Systems and Software Program, the strategic research environment ELLIIT, and ‘AI for Climate Adaptation’ through VINNOVA grant 2020-03388.

Available from: 2022-01-17 Created: 2022-01-17 Last updated: 2023-04-03 Bibliographically approved
Baravdish, G., Unger, J. & Miandji, E. (2021). GPU Accelerated SL0 for Multidimensional Signals. In: 50th International Conference on Parallel Processing Workshop Proceedings (ICPP Workshops ’21). Paper presented at 50th International Conference on Parallel Processing (ICPP), online, August 9-12, 2021. Association for Computing Machinery, Article ID 28.
2021 (English) In: 50th International Conference on Parallel Processing Workshop Proceedings (ICPP Workshops ’21), Association for Computing Machinery, 2021, article id 28. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we propose a novel GPU-based method for highly parallel compressed sensing of n-dimensional (nD) signals based on the smoothed ℓ0 (SL0) algorithm. We demonstrate the efficiency of our approach by showing several examples of nD tensor reconstructions. Moreover, we also consider the traditional 1D compressed sensing and compare the results. We show that the multidimensional SL0 algorithm is computationally superior to the 1D variant due to the small dictionary sizes per dimension. This allows us to fully utilize the GPU and perform massive batch-wise computations, which is not possible for 1D compressed sensing using SL0. For our evaluations, we use light field and light field video data sets. We show that we gain more than an order of magnitude speedup for both one-dimensional and multidimensional data points compared to a parallel CPU implementation. Finally, we present a theoretical analysis of the SL0 algorithm for nD signals, which generalizes previous work for 1D signals.
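For reference, the 1D SL0 iteration is compact: approximate the ℓ0 "norm" with a Gaussian kernel of width σ, take gradient steps on the smoothed objective, project back onto the constraint As = x, and shrink σ. This minimal CPU sketch follows the standard SL0 recipe, not the paper's GPU or multidimensional implementation:

```python
import numpy as np

def sl0(A, x, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Smoothed-l0 (SL0) sparse recovery for a 1D signal: maximize a
    Gaussian approximation of sparsity while staying on the affine
    constraint set A @ s = x, for a decreasing sequence of widths sigma."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                          # minimum-l2 feasible start
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = s * np.exp(-s ** 2 / (2 * sigma ** 2))
            s = s - mu * delta              # step toward a sparser s
            s = s - A_pinv @ (A @ s - x)    # project back onto A s = x
        sigma *= sigma_decrease
    return s

# Toy 1D recovery: a 4-sparse signal from 32 random measurements
rng = np.random.default_rng(5)
n, m, k = 64, 32, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
s0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
s0[support] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1.0, 1.0], size=k)
s_hat = sl0(A, A @ s0)
```

The inner loop is purely element-wise plus small matrix products, which is what makes batching many signals on a GPU attractive.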

Place, publisher, year, edition, pages
Association for Computing Machinery, 2021
Series
International Conference on Parallel Processing Workshops, ISSN 1530-2016
Keywords
GPGPU; Multidimensional signal processing; Compressed sensing
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-179559 (URN) 10.1145/3458744.3474048 (DOI) 9781450384414 (ISBN)
Conference
50th International Conference on Parallel Processing (ICPP), online, August 9-12, 2021
Note

Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation

Available from: 2021-09-24 Created: 2021-09-24 Last updated: 2022-02-09
Stacke, K., Eilertsen, G., Unger, J. & Lundström, C. (2021). Measuring Domain Shift for Deep Learning in Histopathology. IEEE Journal of Biomedical and Health Informatics, 25(2), 325-336
2021 (English) In: IEEE Journal of Biomedical and Health Informatics, ISSN 2168-2194, E-ISSN 2168-2208, Vol. 25, no 2, p. 325-336. Article in journal (Refereed) Published
Abstract [en]

The high capacity of neural networks allows fitting models to data with high precision, but makes generalization to unseen data a challenge. If a domain shift exists, i.e., differences in image statistics between training and test data, care needs to be taken to ensure reliable deployment in real-world scenarios. In digital pathology, domain shift can be manifested in differences between whole-slide images, introduced, for example, by differences in the acquisition pipeline between medical centers or over time. In order to harness the great potential presented by deep learning in histopathology, and ensure consistent model behavior, we need a deeper understanding of domain shift and its consequences, such that a model's predictions on new data can be trusted. This work focuses on the internal representation learned by trained convolutional neural networks, and shows how this can be used to formulate a novel measure, the representation shift, for quantifying the magnitude of model-specific domain shift. We perform a study on domain shift in tumor classification of hematoxylin and eosin stained images, by considering different datasets, models, and techniques for preparing data in order to reduce the domain shift. The results show that the proposed measure has a high correlation with the drop in performance when testing a model across a large number of different types of domain shifts, and that it improves on existing techniques for measuring data shift and uncertainty. The proposed measure can reveal how sensitive a model is to domain variations, and can be used to detect new data that a model will have problems generalizing to. We see techniques for measuring, understanding, and overcoming the domain shift as a crucial step towards reliable use of deep learning in future clinical pathology applications.
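One simple instantiation of such a measure is to compare, feature by feature, the distribution of a trained network's activations on training data against new data; here the per-feature distance is a 1D Wasserstein distance between equally sized samples. The choice of distance and the toy data are assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def representation_shift(feats_train, feats_new):
    """Score domain shift by comparing, per learned feature (column), the
    distribution of activations on training data vs. new data.  The
    per-feature distance is the 1D Wasserstein distance between two
    equally sized samples (mean absolute difference of sorted values),
    averaged over all features."""
    assert feats_train.shape == feats_new.shape
    a = np.sort(feats_train, axis=0)
    b = np.sort(feats_new, axis=0)
    return float(np.mean(np.abs(a - b)))

# Toy activations: a mean shift in every feature yields a large score,
# while a fresh draw from the same distribution scores near zero.
rng = np.random.default_rng(0)
in_domain = rng.normal(0.0, 1.0, size=(1000, 32))
resampled = rng.normal(0.0, 1.0, size=(1000, 32))   # same distribution
shifted = rng.normal(0.8, 1.0, size=(1000, 32))     # shifted activations
```

Because the score is computed from the model's own features rather than raw pixels, it reflects the shifts the model is actually sensitive to.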

Place, publisher, year, edition, pages
IEEE, 2021
Keywords
deep learning, machine learning, domain shift, histopathology
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-170816 (URN) 10.1109/JBHI.2020.3032060 (DOI) 000616310200003 ()
Note

Funding: Wallenberg AI and Autonomous Systems and Software Program (WASP-AI); research environment ELLIIT; AIDA Vinnova [2017-02447]

Available from: 2020-10-23 Created: 2020-10-23 Last updated: 2023-04-03
Tsirikoglou, A., Stacke, K., Eilertsen, G., Lindvall, M. & Unger, J. (2020). A Study of Deep Learning Colon Cancer Detection in Limited Data Access Scenarios. In: : . Paper presented at International Conference on Learning Representations (ICLR) Workshop on AI for Overcoming Global Disparities in Cancer Care (AI4CC).
2020 (English) Conference paper, Poster (with or without abstract) (Refereed)
National Category
Medical Image Processing
Identifiers
urn:nbn:se:liu:diva-169838 (URN)
Conference
International Conference on Learning Representations (ICLR) Workshop on AI for Overcoming Global Disparities in Cancer Care (AI4CC)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2020-09-20 Created: 2020-09-20 Last updated: 2023-04-03
Identifiers
ORCID iD: orcid.org/0000-0002-7765-1747
