Search for publications in DiVA
Publications (10 of 86)
Kavoosighafi, B., Mantiuk, R. K., Hajisharif, S., Miandji, E. & Unger, J. (2025). A Neural Quality Metric for BRDF Models. Journal of Physics: Conference Series, 3128(1), Article ID 012015.
2025 (English). In: Journal of Physics: Conference Series, ISSN 1742-6588, E-ISSN 1742-6596, Vol. 3128, no. 1, Article ID 012015. Article in journal (Refereed). Published.
Abstract [en]

Accurately evaluating the quality of bidirectional reflectance distribution function (BRDF) models is essential for photo-realistic rendering. Traditional BRDF-space metrics often employ numerical error measures that fail to capture perceptual differences evident in rendered images. In this paper, we introduce the first perceptually informed neural quality metric for BRDF evaluation that operates directly in BRDF space, eliminating the need for rendering during quality assessment. Our metric is implemented as a compact multi-layer perceptron (MLP), trained on a dataset of measured BRDFs supplemented with synthetically generated data and labelled using a perceptually validated image-space metric. The network takes as input paired samples of reference and approximated BRDFs and predicts their perceptual quality in terms of just-objectionable-difference (JOD) scores. We show that our neural metric achieves significantly higher correlation with human judgments than existing BRDF-space metrics. While its performance as a loss function for BRDF fitting remains limited, the proposed metric offers a perceptually grounded alternative for evaluating BRDF models.
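The described setup can be illustrated with a toy forward pass, assuming a small fully connected network over paired BRDF samples; the layer sizes, the number of direction pairs, and the random (untrained) weights below are illustrative stand-ins, not the paper's trained metric:

```python
import numpy as np

# Toy forward pass of a BRDF-space quality network, assuming an MLP that maps
# paired (reference, approximation) BRDF samples to a single JOD score.
# All dimensions and weights are hypothetical.

rng = np.random.default_rng(0)

def mlp_jod(ref_samples, approx_samples, weights):
    """Predict a scalar JOD score from paired BRDF samples of shape (n_pairs, 3)."""
    x = np.concatenate([ref_samples, approx_samples], axis=1).ravel()
    for W, b in weights[:-1]:
        x = np.maximum(W @ x + b, 0.0)          # ReLU hidden layers
    W, b = weights[-1]
    return float((W @ x + b)[0])                # scalar JOD prediction

dims = [16 * 6, 64, 32, 1]                      # 16 direction pairs, paired RGB
weights = [(rng.normal(0, 0.1, (m, n)), np.zeros(m))
           for n, m in zip(dims[:-1], dims[1:])]

ref = rng.uniform(0.0, 1.0, (16, 3))            # stand-in measured BRDF samples
approx = ref + rng.normal(0, 0.01, (16, 3))     # a close approximation
score = mlp_jod(ref, approx, weights)
```

In the paper's setting, the weights would be trained against JOD labels produced by a perceptually validated image-space metric; here the output is just a finite scalar.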

National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-219316 (URN)10.1088/1742-6596/3128/1/012015 (DOI)
Available from: 2025-11-06 Created: 2025-11-06 Last updated: 2025-12-18
Kavoosighafi, B., Hajisharif, S., Unger, J. & Miandji, E. (2025). Adaptive Sampling for BRDF Acquisition. Computer graphics forum (Print), Article ID e70289.
2025 (English). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Article ID e70289. Article in journal (Refereed). Published.
Abstract [en]

The bidirectional reflectance distribution function (BRDF) describes the ratio of outgoing radiance to incoming irradiance for all possible pairs of incoming and outgoing directions, defined at a spatial point. The BRDF plays a key role in appearance modelling in computer graphics. Precise BRDF representation typically involves collecting millions of samples over incoming and outgoing directions, taking several hours or days of measurement using a gonioreflectometer. In this paper, we present an adaptive sampling framework for fast and accurate acquisition of BRDFs, where the number of measurements adapts to the complexity of the underlying BRDF function. We enhance the sampling efficiency of existing BRDF sampling techniques by accounting for the diverse reflectance properties of different materials. To achieve this, we categorise BRDFs in measured datasets into distinct clusters based on their sparsity and extract the necessary number of measurements for faithful reconstruction. Using a lightweight neural network, we predict the material's cluster from a single image, which allows us to determine the optimal sample count and sampling pattern, that is, the light/camera configuration. Our evaluation and analysis, compared to state-of-the-art methods, demonstrate a notable performance boost, particularly for challenging materials such as specular BRDFs.
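The sparsity-based clustering step can be sketched as follows; the orthonormal stand-in basis, the cluster boundaries, and the per-cluster sample budgets are all invented for illustration:

```python
import numpy as np

# Illustrative sketch of grouping materials by how sparsely their BRDF
# compresses in some basis. Sparser clusters are assigned smaller
# measurement budgets. Everything below is a hypothetical stand-in.

rng = np.random.default_rng(1)

def sparsity_score(signal, analysis_op, keep=0.99):
    """Fraction of coefficients needed to retain `keep` of the signal energy."""
    coeffs = analysis_op @ signal
    energy = np.sort(coeffs**2)[::-1]
    cum = np.cumsum(energy) / energy.sum()
    return (np.searchsorted(cum, keep) + 1) / len(coeffs)

def assign_cluster(score, edges=(0.05, 0.25)):
    """Map a sparsity score to a cluster id via fixed (hypothetical) edges."""
    return int(np.searchsorted(edges, score))

SAMPLE_BUDGET = {0: 200, 1: 800, 2: 3000}   # cluster id -> measurement count

basis = np.linalg.qr(rng.normal(size=(256, 256)))[0]   # orthonormal stand-in
diffuse = basis[:, :4] @ rng.normal(size=4)            # 4-sparse: "simple"
specular = rng.normal(size=256)                        # dense: "complex"

budget_d = SAMPLE_BUDGET[assign_cluster(sparsity_score(diffuse, basis.T))]
budget_s = SAMPLE_BUDGET[assign_cluster(sparsity_score(specular, basis.T))]
```

The diffuse-like signal compresses into a handful of coefficients and receives a small budget, while the dense specular-like signal lands in the most expensive cluster, mirroring the paper's observation that specular materials are the hardest case.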

Place, publisher, year, edition, pages
Wiley, 2025
Keywords
rendering; reflectance and shading models
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-219333 (URN)10.1111/cgf.70289 (DOI)001610733800001 ()2-s2.0-105021258578 (Scopus ID)
Note

Funding Agencies|Marie Skłodowska-Curie [956585]

Available from: 2025-11-07 Created: 2025-11-07 Last updated: 2026-01-26
Navarra, C., Kucher, K., Neset, T.-S., Greve Villaro, C., Schück, F., Unger, J. & Vrotsou, K. (2025). Leveraging Visual Analytics of Volunteered Geographic Information to Support Impact-Based Weather Warning Systems. International Journal of Disaster Risk Reduction, 126, Article ID 105562.
2025 (English). In: International Journal of Disaster Risk Reduction, E-ISSN 2212-4209, Vol. 126, Article ID 105562. Article in journal (Refereed). Published.
Abstract [en]

As extreme weather events such as floods, storms, and heatwaves proliferate, local and regional authorities face challenges in predicting, monitoring, and assessing these events and their impacts. The introduction of impact-based warning services requires detailed, location-specific information on local vulnerability and impacts. This necessitates complementing conventional data with insights from local actors and exploring novel methods for monitoring relevant public data through social media and news outlets. This paper presents a visual analytics pipeline, co-developed with practitioners, that aims to detect impacts of extreme weather events, particularly floods, using Volunteered Geographic Information (VGI). The pipeline steps include collecting VGI from social media, classifying and analysing the data, and visualizing it through an interactive interface. An empirical evaluation study was performed with meteorological and hydrological experts to assess the developed visual interface. The study collected and analysed feedback on the usability of the interface and identified interaction patterns from the experiment's screen recordings.
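The classify-and-aggregate stage of such a pipeline can be sketched minimally, assuming posts arrive as (text, municipality) pairs; the keyword matcher and place names below are illustrative stand-ins for the paper's trained classifier and real VGI feed:

```python
from collections import Counter

# Minimal sketch of classifying VGI posts as flood-impact reports and
# aggregating them per municipality for an interactive map view.
# FLOOD_TERMS and the sample posts are hypothetical.

FLOOD_TERMS = {"flood", "flooded", "overflow", "water damage"}

def classify(text):
    t = text.lower()
    return "flood-impact" if any(term in t for term in FLOOD_TERMS) else "other"

def aggregate(posts):
    """Count flood-impact reports per municipality."""
    counts = Counter()
    for text, place in posts:
        if classify(text) == "flood-impact":
            counts[place] += 1
    return counts

posts = [
    ("Basement flooded after the storm", "Norrköping"),
    ("Sunny day in the park", "Linköping"),
    ("Roads closed, water damage downtown", "Norrköping"),
]
counts = aggregate(posts)
```

A real deployment would replace the keyword rule with the paper's text/image classifiers, but the aggregation-for-visualization shape stays the same.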

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
visualization, classification, Volunteered Geographic Information (VGI), social media data, extreme weather events, flooding
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-213966 (URN)10.1016/j.ijdrr.2025.105562 (DOI)001503844100001 ()2-s2.0-105006939009 (Scopus ID)
Projects
AI4ClimateAdaptation
Funder
Vinnova, 2020-03388
Note

This research was funded by Sweden's Innovation Agency, VINNOVA, grant number 2020-03388, 'AI for Climate Adaptation'.

Available from: 2025-05-27 Created: 2025-05-27 Last updated: 2025-09-11
Karthikeyan, N. C., Unger, J. & Eilertsen, G. (2025). Towards Controllable Image Generation through Representation-Conditioned Diffusion Models. Paper presented at The 42nd Swedish Symposium on Image Analysis / The 8th Swedish Symposium on Deep Learning.
2025 (English). In: Towards Controllable Image Generation through Representation-Conditioned Diffusion Models, 2025. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

Diffusion models have emerged as powerful tools for high-quality image generation and editing, but guiding these models to produce specific outputs remains a challenge. Conventional approaches rely on conditioning mechanisms, such as text prompts or semantic maps, which require extensively annotated datasets. In this preliminary work, we explore diffusion models conditioned on representations from a pre-trained self-supervised model. The self-conditioning mechanism not only improves the quality of unconditional image generation, but also provides a representation space that can be used to control the generation. We explore this conditioning space by identifying directions of variations, and demonstrate promising properties in terms of smoothness and disentanglement.
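The self-conditioning mechanism can be sketched with a toy denoising step, assuming the denoiser receives an embedding z of the target image from a frozen self-supervised encoder; the stand-in linear maps and dimensions below are hypothetical (real models would use a U-Net or transformer denoiser):

```python
import numpy as np

# Toy sketch of representation conditioning: the denoiser's prediction
# depends on both the noisy input x_t and a self-supervised embedding z.
# All networks are stand-in linear maps; dimensions are illustrative.

rng = np.random.default_rng(2)
D, Z = 16, 4                                  # image dim, embedding dim

W_enc = rng.normal(0, 0.1, (Z, D))            # frozen "self-supervised" encoder
W_x = rng.normal(0, 0.1, (D, D))
W_z = rng.normal(0, 0.1, (D, Z))

def encode(x):
    return W_enc @ x                           # representation used as condition

def denoise_step(x_t, z):
    """One conditioned denoising step; z steers the prediction."""
    return x_t - (W_x @ x_t + W_z @ z)

x0 = rng.normal(size=D)
z = encode(x0)                                 # condition on the clean image
x_t = x0 + rng.normal(0, 0.5, D)               # noised sample
x_prev = denoise_step(x_t, z)
```

The point of the construction is that moving z along identified directions of variation changes the denoiser's output, which is what makes the conditioning space usable as a control handle.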

Keywords
Generative Models, Diffusion Models, Representation-Conditioning
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:liu:diva-217452 (URN)
Conference
The 42nd Swedish Symposium on Image Analysis / The 8th Swedish Symposium on Deep Learning
Available from: 2025-09-08 Created: 2025-09-08 Last updated: 2025-12-19
Baravdish, G., Eilertsen, G., Jaroudi, R., Johansson, T., Malý, L. & Unger, J. (2024). A Hybrid Sobolev Gradient Method for Learning NODEs. Operations Research Forum, 5, Article ID 91.
2024 (English). In: Operations Research Forum, E-ISSN 2662-2556, Vol. 5, Article ID 91. Article in journal (Refereed). Published.
Abstract [en]

The inverse problem of supervised reconstruction of depth-variable (time-dependent) parameters in ordinary differential equations is considered, with the typical application of finding weights of a neural ordinary differential equation (NODE) for a residual network with time-continuous layers. The differential equation is treated as an abstract and isolated entity, termed a standalone NODE (sNODE), to facilitate a wide range of applications. The proposed parameter reconstruction is performed by minimizing a cost functional covering a variety of loss functions and penalty terms. Regularization via penalty terms is incorporated to enhance ethical and trustworthy AI formulations. A nonlinear conjugate gradient mini-batch optimization scheme (NCG) is derived for training, with the benefit of including a sensitivity problem. The model (differential equation)-based approach is thus combined with a data-driven learning procedure. Mathematical properties are stated for the differential equation and the cost functional. The adjoint problem needed is derived together with the sensitivity problem; the sensitivity problem itself can estimate changes in the output under perturbation of the trained parameters. To preserve smoothness during the iterations, the Sobolev gradient is calculated and incorporated. Numerical results validate the procedure for a NODE on synthetic datasets and compare it with standard gradient approaches. To assess stability, a strategy for adversarial attacks is constructed using the sensitivity problem, and it is shown that the given method with Sobolev gradients is more robust than standard approaches for parameter identification.
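The Sobolev gradient idea can be sketched for time-dependent parameters on a uniform grid: the raw L2 gradient g is smoothed by solving (I - lam * d^2/dt^2) g_sob = g. The finite-difference stencil, the simple "keep endpoints fixed" boundary rows, and the value of lam are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

# Sketch of a Sobolev gradient step: smooth the raw gradient by solving a
# (diagonally dominant, hence invertible) tridiagonal system.

def sobolev_gradient(g, dt, lam=1e-2):
    n = len(g)
    A = np.eye(n)
    for i in range(1, n - 1):               # interior second-difference rows
        A[i, i - 1] -= lam / dt**2
        A[i, i] += 2 * lam / dt**2
        A[i, i + 1] -= lam / dt**2
    return np.linalg.solve(A, g)            # boundary rows keep endpoints fixed

t = np.linspace(0, 1, 101)
g = np.sign(np.sin(20 * t))                 # oscillatory, non-smooth raw gradient
g_s = sobolev_gradient(g, dt=t[1] - t[0])   # smoothed descent direction
```

Because the system matrix is an M-matrix, the solve acts as a positive-weighted averaging, so the smoothed direction has strictly lower total variation than the raw gradient, which is exactly the smoothness-preserving behaviour the abstract describes.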

Place, publisher, year, edition, pages
Switzerland: Springer Nature, 2024
Keywords
Adversarial attacks, Deep learning, Inverse problems, Neural ordinary differential equations, Sobolev gradient
National Category
Mathematics; Computer Sciences
Identifiers
urn:nbn:se:liu:diva-208091 (URN)10.1007/s43069-024-00377-x (DOI)2-s2.0-85205866958 (Scopus ID)
Available from: 2024-10-02 Created: 2024-10-02 Last updated: 2025-04-23. Bibliographically approved
Neset, T.-S., Andersson, L., Edström, M. M., Vrotsou, K., Greve Villaro, C., Navarra, C., . . . Linnér, B.-O. (2024). AI för klimatanpassning: Hur kan nya digitala teknologier stödja klimatanpassning? [AI for climate adaptation: How can new digital technologies support climate adaptation?]. Linköping: Linköping University Electronic Press
2024 (Swedish). Report (Other academic).
Abstract [sv] (translated)

Access to weather warnings that include information on the expected consequences of the weather is essential for good crisis preparedness among public authorities, municipalities, businesses, and private individuals. Further development of warning systems that focus on expected disruptions (impact-based warning systems) is therefore an important component of society's response to climate change. The research project AI for Climate Adaptation (AI4CA) has analysed the opportunities and obstacles of incorporating AI-based text and image analysis as support for SMHI's impact-based weather warning system and, in the longer term, also for supporting long-term climate adaptation.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2024
Series
CSPR Brief, E-ISSN 2004-9560 ; 2024:1
National Category
Climate Science
Identifiers
urn:nbn:se:liu:diva-203955 (URN)10.3384/brief-203955 (DOI)
Available from: 2024-05-30 Created: 2024-05-30 Last updated: 2025-02-07. Bibliographically approved
Kavoosighafi, B., Hajisharif, S., Miandji, E., Baravdish, G., Cao, W. & Unger, J. (2024). Deep SVBRDF Acquisition and Modelling: A Survey. Computer graphics forum (Print), 43(6)
2024 (English). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 43, no. 6. Article in journal (Refereed). Published.
Abstract [en]

Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine-learning-driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high-quality measurements of bi-directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi-directional Reflectance Distribution Functions (SVBRDFs). Learning-based approaches for appearance capture will play a key role in the development of new technologies that will exhibit a significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State-of-the-Art Report (STAR) presents an in-depth overview of the state of the art in machine-learning-driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling complex light interaction properties of real-world materials. The overview includes a categorization of current methods along with a summary of each technique, an evaluation of their functionalities, their complexity in terms of acquisition requirements, computational aspects and usability constraints. The STAR is concluded by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at .

[Figure caption: Papers surveyed in this study, with a focus on the extraction of BRDF or SVBRDF from a few measurements, classified according to their specific geometries and lighting conditions. Whole-scene refers to techniques that capture entire indoor or outdoor scenes, which are outside the scope of this survey.]

Place, publisher, year, edition, pages
WILEY, 2024
Keywords
modelling; appearance modelling; rendering
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-207835 (URN)10.1111/cgf.15199 (DOI)001312821700001 ()
Note

Funding Agencies|European Union [956585]

Available from: 2024-09-25 Created: 2024-09-25 Last updated: 2025-05-22
Eilertsen, G., Jönsson, D., Unger, J. & Ynnerman, A. (2024). Model-invariant Weight Distribution Descriptors for Visual Exploration of Neural Networks en Masse. In: Christian Tominski, Manuela Waldner, and Bei Wang (Ed.), EuroVis 2024 - Short Papers. Paper presented at EuroVis. Eurographics - European Association for Computer Graphics
2024 (English). In: EuroVis 2024 - Short Papers / [ed] Christian Tominski, Manuela Waldner, and Bei Wang, Eurographics - European Association for Computer Graphics, 2024. Conference paper, Published paper (Refereed).
Abstract [en]

We present a neural network representation which can be used for visually analyzing the similarities and differences in a large corpus of trained neural networks. The focus is on architecture-invariant comparisons based on network weights, estimating similarities of the statistical footprints encoded by the training setups and stochastic optimization procedures. To make this possible, we propose a novel visual descriptor of neural network weights. The visual descriptor considers local weight statistics in a model-agnostic manner by encoding the distribution of weights over different model depths. We show how such a representation can extract descriptive information, is robust to different parameterizations of a model, and is applicable to different architecture specifications. The descriptor is used to create a model atlas by projecting a model library to a 2D representation, where clusters can be found based on similar weight properties. A cluster analysis strategy makes it possible to understand the weight properties of clusters and how these connect to the different datasets and hyper-parameters used to train the models.
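The core trick of an architecture-invariant descriptor can be sketched as follows, assuming each layer's weights are summarized by a fixed-size histogram and layers are grouped into depth bins, so models of different sizes yield equal-length descriptors; the bin counts and histogram range below are illustrative, not the paper's configuration:

```python
import numpy as np

# Sketch of a model-agnostic weight descriptor: per-depth-bin histograms of
# weight values, concatenated into a fixed-length feature vector that can be
# projected to 2D for a "model atlas". Parameters are hypothetical.

def weight_descriptor(layers, depth_bins=4, hist_bins=16, w_range=(-0.5, 0.5)):
    """layers: list of weight arrays ordered from input to output."""
    groups = np.array_split(np.arange(len(layers)), depth_bins)
    feats = []
    for idx in groups:
        w = np.concatenate([layers[i].ravel() for i in idx])
        h, _ = np.histogram(w, bins=hist_bins, range=w_range, density=True)
        feats.append(h)
    return np.concatenate(feats)             # length depth_bins * hist_bins

rng = np.random.default_rng(3)
small = [rng.normal(0, 0.1, (8, 8)) for _ in range(6)]     # 6-layer model
large = [rng.normal(0, 0.1, (32, 32)) for _ in range(12)]  # 12-layer model

d1, d2 = weight_descriptor(small), weight_descriptor(large)
```

Both models, despite different depths and widths, map to descriptors of identical length, which is what makes corpus-wide comparison and 2D projection possible.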

Place, publisher, year, edition, pages
Eurographics - European Association for Computer Graphics, 2024
Keywords
machine learning, deep learning, visualization
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-205660 (URN)10.2312/evs.20241068 (DOI)978-3-03868-251-6 (ISBN)
Conference
EuroVis
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-06-28 Created: 2024-06-28 Last updated: 2025-08-20
Cao, W., Miandji, E. & Unger, J. (2024). Multidimensional Compressed Sensing for Spectral Light Field Imaging. In: Petia Radeva, A. Furnari, Kadi Bouatouch, A. Augusto Sousa (Ed.), Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2024), Rome, Feb 27-29, 2024 (pp. 349-356). Rome, Italy: Institute for Systems and Technologies of Information, Control and Communication, Vol. 4
2024 (English). In: Multidimensional Compressed Sensing for Spectral Light Field Imaging / [ed] Petia Radeva, A. Furnari, Kadi Bouatouch, A. Augusto Sousa, Rome, Italy: Institute for Systems and Technologies of Information, Control and Communication, 2024, Vol. 4, p. 349-356. Conference paper, Published paper (Refereed).
Abstract [en]

This paper considers a compressive multi-spectral light field camera model that utilizes a one-hot spectral-coded mask and a microlens array to capture spatial, angular, and spectral information using a single monochrome sensor. We propose a model that employs compressed sensing techniques to reconstruct the complete multi-spectral light field from undersampled measurements. Unlike previous work where a light field is vectorized to a 1D signal, our method employs a 5D basis and a novel 5D measurement model, hence matching the intrinsic dimensionality of multispectral light fields. We mathematically and empirically show the equivalence of 5D and 1D sensing models, and most importantly that the 5D framework achieves orders of magnitude faster reconstruction while requiring a small fraction of the memory. Moreover, our new multidimensional sensing model opens new research directions for designing efficient visual data acquisition algorithms and hardware.
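The claimed equivalence of vectorized and multidimensional sensing can be demonstrated for a separable operator: applying the Kronecker product of per-axis measurement matrices to the flattened signal equals applying each matrix along its own axis, without ever forming the huge Kronecker matrix. A 2D signal stands in here for the paper's 5D light field, and all dimensions are illustrative:

```python
import numpy as np

# Demo of the 1D-vs-multidimensional sensing equivalence for a separable
# measurement operator: kron(A, B) @ vec(X) == vec(A @ X @ B.T)
# (with NumPy's row-major ravel as the vectorization).

rng = np.random.default_rng(4)
m, n, p, q = 12, 10, 5, 4                 # signal dims, measurement dims

X = rng.normal(size=(m, n))               # 2D stand-in for the light field
A = rng.normal(size=(p, m))               # per-axis measurement matrices
B = rng.normal(size=(q, n))

y_1d = np.kron(A, B) @ X.ravel()          # vectorized (1D) sensing model
y_nd = (A @ X @ B.T).ravel()              # multidimensional sensing model

# Identical measurements, but the multidimensional form never materializes
# the (p*q) x (m*n) Kronecker matrix.
equal = np.allclose(y_1d, y_nd)
```

The memory and speed gap grows with dimensionality: for a 5D signal the Kronecker matrix is astronomically large, while the per-axis products stay cheap.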

Place, publisher, year, edition, pages
Rome, Italy: Institute for Systems and Technologies of Information, Control and Communication, 2024. p. 8
Keywords
Spectral light field, Compressive sensing
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-201273 (URN)10.5220/0012431300003660 (DOI)978-989-758-679-8 (ISBN)
Conference
The 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2024), Rome, Feb 27-29, 2024.
Available from: 2024-03-03 Created: 2024-03-03 Last updated: 2025-02-18
Lei, D., Miandji, E., Unger, J. & Hotz, I. (2024). Sparse q-ball imaging towards efficient visual exploration of HARDI data. Computer graphics forum (Print), 43(3), Article ID e15082.
2024 (English). In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 43, no. 3, Article ID e15082. Article in journal (Refereed). Published.
Abstract [en]

Diffusion-weighted magnetic resonance imaging (D-MRI) is a technique to measure the diffusion of water in biological tissues. It is used to detect microscopic patterns, such as neural fibers in the living human brain, with many medical and neuroscience applications, e.g. fiber tracking. In this paper, we consider High-Angular Resolution Diffusion Imaging (HARDI), which provides one of the richest representations of water diffusion. It records the movement of water molecules by measuring diffusion along 64 or more directions. A key challenge is that it generates high-dimensional, large, and complex datasets. In our work, we develop a novel representation that exploits the inherent sparsity of the HARDI signal by approximating it as a linear combination of atoms in an overcomplete data-driven dictionary, using only a sparse set of coefficients. We show that this approach can be efficiently integrated into the standard q-ball imaging pipeline to compute the diffusion orientation distribution function (ODF). Sparse representations have the potential to reduce the size of the data while also giving some insight into it. To explore the results, we provide a visualization of the atoms of the dictionary and their frequency in the data to highlight the basic characteristics of the data. We present our proposed pipeline and demonstrate its performance on 5 HARDI datasets.
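The sparse approximation idea can be sketched with orthogonal matching pursuit (OMP), assuming a per-voxel signal is represented with a few atoms from an overcomplete dictionary; the random dictionary and sparsity level below are illustrative (the paper learns its dictionary from data):

```python
import numpy as np

# Sketch of sparse coding a per-voxel signal over an overcomplete dictionary
# via greedy OMP: pick the best-correlated atom, refit, repeat.

def omp(D, y, k):
    """Approximate y with at most k atoms (columns) of dictionary D."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(5)
D = rng.normal(size=(64, 256))             # 64-dim signals, 256 atoms
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
true = np.zeros(256)
true[[10, 100, 200]] = [1.0, -0.5, 0.8]    # 3-sparse ground-truth coefficients
y = D @ true                               # stand-in HARDI signal

x_hat = omp(D, y, k=3)
err = np.linalg.norm(D @ x_hat - y)
```

The recovered code keeps only a handful of nonzero coefficients, which is the compression-plus-interpretability property the abstract highlights: inspecting which atoms fire (and how often across voxels) characterizes the dataset.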

Place, publisher, year, edition, pages
WILEY, 2024
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:liu:diva-204924 (URN)10.1111/cgf.15082 (DOI)001239278600001 ()
Note

Funding Agencies|Swedish Research Council (VR)

Available from: 2024-06-17 Created: 2024-06-17 Last updated: 2025-02-07. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-7765-1747
