Publications (10 of 82)
Baravdish, G., Eilertsen, G., Jaroudi, R., Johansson, T., Malý, L. & Unger, J. (2024). A Hybrid Sobolev Gradient Method for Learning NODEs. Operations Research Forum, 5, Article ID 91.
2024 (English) In: Operations Research Forum, E-ISSN 2662-2556, Vol. 5, article id 91. Article in journal (Refereed). Published.
Abstract [en]

The inverse problem of supervised reconstruction of depth-variable (time-dependent) parameters in ordinary differential equations is considered, with the typical application of finding the weights of a neural ordinary differential equation (NODE) for a residual network with time-continuous layers. The differential equation is treated as an abstract and isolated entity, termed a standalone NODE (sNODE), to facilitate a wide range of applications. The proposed parameter reconstruction is performed by minimizing a cost functional covering a variety of loss functions and penalty terms. Regularization via penalty terms is incorporated to enhance ethical and trustworthy AI formulations. A nonlinear conjugate gradient mini-batch optimization scheme (NCG) is derived for the training, which has the benefit of including a sensitivity problem. The model (differential equation)-based approach is thus combined with a data-driven learning procedure. Mathematical properties are stated for the differential equation and the cost functional. The adjoint problem needed is derived together with the sensitivity problem. The sensitivity problem itself can estimate changes in the output under perturbation of the trained parameters. To preserve smoothness during the iterations, the Sobolev gradient is calculated and incorporated. Numerical results are included to validate the procedure for a NODE and synthetic datasets, and are compared with standard gradient approaches. For stability, a strategy for adversarial attacks is constructed using the sensitivity problem, and it is shown that the given method with Sobolev gradients is more robust than standard approaches for parameter identification.
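The Sobolev gradient step mentioned in the abstract can be sketched numerically: the L2 gradient sampled on a uniform time grid is smoothed by solving (I - d²/dt²)s = g. The function name, the Neumann boundary conditions, and the dense solve below are illustrative assumptions for a minimal sketch, not the paper's exact formulation.

```python
import numpy as np

def sobolev_gradient(g, dt):
    """Smooth an L2 gradient g (sampled on a uniform time grid) into an
    H^1 Sobolev gradient s by solving (I - d^2/dt^2) s = g, discretized
    with second-order finite differences and homogeneous Neumann BCs."""
    n = g.shape[0]
    main = 1.0 + 2.0 / dt**2
    off = -1.0 / dt**2
    A = np.zeros((n, n))
    np.fill_diagonal(A, main)
    A[np.arange(n - 1), np.arange(1, n)] = off   # superdiagonal
    A[np.arange(1, n), np.arange(n - 1)] = off   # subdiagonal
    # Neumann BCs: mirrored ghost points fold into the boundary rows.
    A[0, 1] += off
    A[-1, -2] += off
    return np.linalg.solve(A, g)
```

The system is symmetric positive definite and tridiagonal, so in practice a banded solver would replace the dense `np.linalg.solve`; the effect is that high-frequency components of the gradient are damped, which is what "preserving smoothness during the iterations" refers to.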

Place, publisher, year, edition, pages
Switzerland: Springer Nature, 2024
Keywords
Adversarial attacks, Deep learning, Inverse problems, Neural ordinary differential equations, Sobolev gradient
National Category
Mathematics; Computer Sciences
Identifiers
urn:nbn:se:liu:diva-208091 (URN), 10.1007/s43069-024-00377-x (DOI), 2-s2.0-85205866958 (Scopus ID)
Available from: 2024-10-02 Created: 2024-10-02 Last updated: 2024-12-12. Bibliographically approved.
Neset, T.-S., Andersson, L., Edström, M. M., Vrotsou, K., Greve Villaro, C., Navarra, C., . . . Linnér, B.-O. (2024). AI för klimatanpassning: Hur kan nya digitala teknologier stödja klimatanpassning?. Linköping: Linköping University Electronic Press
2024 (Swedish) Report (Other academic)
Abstract [en]

Access to weather warnings with information about the expected consequences of the weather is necessary for good crisis preparedness among government agencies, municipalities, businesses, and private individuals. Further development of warning systems that focus on expected disruptions (impact-based warning systems) is therefore an important component of society's management of climate change. The research project AI for Climate Adaptation (AI4CA) has analysed the opportunities and obstacles of including AI-based text and image analysis as support for SMHI's impact-based weather warning system, and, in the longer term, also supporting long-term climate adaptation.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2024
Series
CSPR Brief, E-ISSN 2004-9560 ; 2024:1
National Category
Climate Science
Identifiers
urn:nbn:se:liu:diva-203955 (URN), 10.3384/brief-203955 (DOI)
Available from: 2024-05-30 Created: 2024-05-30 Last updated: 2025-02-07. Bibliographically approved.
Kavoosighafi, B., Hajisharif, S., Miandji, E., Baravdish, G., Cao, W. & Unger, J. (2024). Deep SVBRDF Acquisition and Modelling: A Survey. Computer graphics forum (Print), 43(6)
2024 (English) In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 43, no 6. Article in journal (Refereed). Published.
Abstract [en]

Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine-learning-driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high-quality measurements of bi-directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi-directional Reflectance Distribution Functions (SVBRDFs). Learning-based approaches for appearance capture will play a key role in the development of new technologies that will exhibit a significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State-of-the-Art Report (STAR) presents an in-depth overview of the state-of-the-art in machine-learning-driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling complex light interaction properties of real-world materials. The overview includes a categorization of current methods along with a summary of each technique, an evaluation of their functionalities, their complexity in terms of acquisition requirements, computational aspects and usability constraints. The STAR is concluded by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at .
[Figure caption: Papers surveyed in this study, with a focus on the extraction of BRDF or SVBRDF from a few measurements, classified according to their specific geometries and lighting conditions. Whole-scene refers to techniques that capture entire indoor or outdoor scenes, which are outside the scope of this survey.]

Place, publisher, year, edition, pages
Wiley, 2024
Keywords
modelling; appearance modelling; rendering
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-207835 (URN), 10.1111/cgf.15199 (DOI), 001312821700001 ()
Note

Funding Agencies: European Union [956585]

Available from: 2024-09-25 Created: 2024-09-25 Last updated: 2024-10-07
Eilertsen, G., Jönsson, D., Unger, J. & Ynnerman, A. (2024). Model-invariant Weight Distribution Descriptors for Visual Exploration of Neural Networks en Masse. In: Christian Tominski, Manuela Waldner, and Bei Wang (Eds.), EuroVis 2024 - Short Papers. Paper presented at EuroVis 2024. Eurographics - European Association for Computer Graphics
2024 (English) In: EuroVis 2024 - Short Papers / [ed] Christian Tominski, Manuela Waldner, and Bei Wang, Eurographics - European Association for Computer Graphics, 2024. Conference paper, Published paper (Refereed).
Abstract [en]

We present a neural network representation which can be used for visually analyzing the similarities and differences in a large corpus of trained neural networks. The focus is on architecture-invariant comparisons based on network weights, estimating similarities of the statistical footprints encoded by the training setups and stochastic optimization procedures. To make this possible, we propose a novel visual descriptor of neural network weights. The visual descriptor considers local weight statistics in a model-agnostic manner by encoding the distribution of weights over different model depths. We show how such a representation can extract descriptive information, is robust to different parameterizations of a model, and is applicable to different architecture specifications. The descriptor is used to create a model atlas by projecting a model library to a 2D representation, where clusters can be found based on similar weight properties. A cluster analysis strategy makes it possible to understand the weight properties of clusters and how these connect to the different datasets and hyper-parameters used to train the models.
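A minimal sketch of such a model-agnostic weight descriptor, under one assumed design choice: pool weights per relative-depth bin and summarize each bin by a few quantiles, giving a fixed-length vector regardless of architecture. The function name, binning scheme, and statistics are illustrative assumptions; the paper's actual descriptor encodes weight distributions over model depth but may differ in its details.

```python
import numpy as np

def weight_descriptor(layer_weights, depth_bins=4,
                      quantiles=(0.05, 0.25, 0.5, 0.75, 0.95)):
    """Model-agnostic descriptor of a network's weights: assign each layer
    a relative depth in [0, 1], pool all weights falling in each depth bin,
    and summarize every bin by a fixed set of quantiles. The output length
    is depth_bins * len(quantiles), independent of the architecture."""
    n = len(layer_weights)
    depths = np.linspace(0.0, 1.0, n)            # relative depth per layer
    edges = np.linspace(0.0, 1.0, depth_bins + 1)
    desc = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depths >= lo) & (depths <= hi)
        if mask.any():
            pooled = np.concatenate(
                [layer_weights[i].ravel() for i in range(n) if mask[i]])
        else:
            pooled = np.zeros(1)                 # empty bin fallback
        desc.extend(np.quantile(pooled, quantiles))
    return np.array(desc)
```

Because two networks with different layer counts and layer sizes map to vectors of the same length, the descriptors can be compared directly or projected to 2D to build the kind of model atlas the paper describes.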

Place, publisher, year, edition, pages
Eurographics - European Association for Computer Graphics, 2024
Keywords
machine learning, deep learning, visualization
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-205660 (URN), 10.2312/evs.20241068 (DOI), 978-3-03868-251-6 (ISBN)
Conference
EuroVis
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-06-28 Created: 2024-06-28 Last updated: 2025-02-18
Cao, W., Miandji, E. & Unger, J. (2024). Multidimensional Compressed Sensing for Spectral Light Field Imaging. In: Petia Radeva, A. Furnari, Kadi Bouatouch, A. Augusto Sousa (Eds.), Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2024). Paper presented at VISAPP 2024, Rome, Feb 27-29, 2024 (pp. 349-356). Rome, Italy: Institute for Systems and Technologies of Information, Control and Communication, 4
2024 (English) In: Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2024) / [ed] Petia Radeva, A. Furnari, Kadi Bouatouch, A. Augusto Sousa, Rome, Italy: Institute for Systems and Technologies of Information, Control and Communication, 2024, Vol. 4, p. 349-356. Conference paper, Published paper (Refereed).
Abstract [en]

This paper considers a compressive multi-spectral light field camera model that utilizes a one-hot spectral-coded mask and a microlens array to capture spatial, angular, and spectral information using a single monochrome sensor. We propose a model that employs compressed sensing techniques to reconstruct the complete multi-spectral light field from undersampled measurements. Unlike previous work, where a light field is vectorized to a 1D signal, our method employs a 5D basis and a novel 5D measurement model, hence matching the intrinsic dimensionality of multispectral light fields. We mathematically and empirically show the equivalence of the 5D and 1D sensing models, and most importantly that the 5D framework achieves orders of magnitude faster reconstruction while requiring a small fraction of the memory. Moreover, our new multidimensional sensing model opens new research directions for designing efficient visual data acquisition algorithms and hardware.
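The claimed equivalence of the multidimensional and vectorized sensing models can be checked on a toy example: applying a separate measurement matrix along each mode of a tensor matches a single Kronecker-structured matrix acting on the vectorized tensor. A 3D tensor stands in for the paper's 5D light field here, and all sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))   # toy 3D stand-in for the 5D light field
A1 = rng.standard_normal((2, 4))     # one measurement matrix per mode
A2 = rng.standard_normal((3, 5))
A3 = rng.standard_normal((2, 6))

# Multidimensional model: contract each mode of X with its own matrix.
Y = np.einsum('ia,jb,kc,abc->ijk', A1, A2, A3, X)

# Vectorized 1D model: one Kronecker-structured matrix acting on vec(X)
# (column-major vectorization, mode-1 index varying fastest).
y = np.kron(A3, np.kron(A2, A1)) @ X.ravel(order='F')
```

The two results agree exactly, but the multidimensional form never materializes the Kronecker matrix (here 12x120, but astronomically large for real 5D light fields), which is where the reported speed and memory gains come from.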

Place, publisher, year, edition, pages
Rome, Italy: Institute for Systems and Technologies of Information, Control and Communication, 2024. 8 p.
Keywords
Spectral light field, Compressive sensing
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-201273 (URN), 10.5220/0012431300003660 (DOI), 978-989-758-679-8 (ISBN)
Conference
Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2024), Rome, Feb 27-29, 2024.
Available from: 2024-03-03 Created: 2024-03-03 Last updated: 2025-02-18
Lei, D., Miandji, E., Unger, J. & Hotz, I. (2024). Sparse q-ball imaging towards efficient visual exploration of HARDI data. Computer graphics forum (Print), 43(3), Article ID e15082.
2024 (English) In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 43, no 3, article id e15082. Article in journal (Refereed). Published.
Abstract [en]

Diffusion-weighted magnetic resonance imaging (D-MRI) is a technique to measure the diffusion of water in biological tissues. It is used to detect microscopic patterns, such as neural fibers in the living human brain, with many medical and neuroscience applications, e.g., fiber tracking. In this paper, we consider High-Angular Resolution Diffusion Imaging (HARDI), which provides one of the richest representations of water diffusion. It records the movement of water molecules by measuring diffusion under 64 or more directions. A key challenge is that it generates high-dimensional, large, and complex datasets. In our work, we develop a novel representation that exploits the inherent sparsity of the HARDI signal by approximating it as a linear sum of basic atoms in an overcomplete data-driven dictionary, using only a sparse set of coefficients. We show that this approach can be efficiently integrated into the standard q-ball imaging pipeline to compute the diffusion orientation distribution function (ODF). Sparse representations have the potential to reduce the size of the data while also giving some insight into the data. To explore the results, we provide a visualization of the atoms of the dictionary and their frequency in the data to highlight the basic characteristics of the data. We present our proposed pipeline and demonstrate its performance on five HARDI datasets.
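Sparse approximation over an overcomplete dictionary, as used here for the HARDI signal, can be sketched with Orthogonal Matching Pursuit. OMP is one common solver choice assumed for illustration; the abstract does not specify the paper's solver or its dictionary learning step.

```python
import numpy as np

def omp(D, s, n_nonzero):
    """Orthogonal Matching Pursuit: approximate signal s as D @ c with at
    most n_nonzero coefficients, where D is an overcomplete dictionary
    whose unit-norm columns are the 'atoms'."""
    residual = s.copy()
    support = []
    c = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected coefficients jointly (least squares).
        coef, *_ = np.linalg.lstsq(D[:, support], s, rcond=None)
        residual = s - D[:, support] @ coef
    c[support] = coef
    return c
```

Each per-voxel HARDI signal is then stored as a handful of (atom index, coefficient) pairs rather than 64+ raw measurements, which is the source of both the compression and the interpretability of the atom-frequency visualization.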

Place, publisher, year, edition, pages
Wiley, 2024
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:liu:diva-204924 (URN), 10.1111/cgf.15082 (DOI), 001239278600001 ()
Note

Funding Agencies: Swedish Research Council (VR)

Available from: 2024-06-17 Created: 2024-06-17 Last updated: 2025-02-07. Bibliographically approved.
Kavoosighafi, B., Frisvad, J. R., Hajisharif, S., Unger, J. & Miandji, E. (2023). SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions. Paper presented at Eurographics Symposium on Rendering (EGSR), Delft, The Netherlands, 28-30 June, 2023 (pp. 37-50). The Eurographics Association
2023 (English) Conference paper, Published paper (Refereed)
Abstract [en]

We propose a novel dictionary-based representation learning model for Bidirectional Texture Functions (BTFs) aiming at compact storage, real-time rendering performance, and high image quality. Our model is trained once, using a small training set, and then used to obtain a sparse tensor containing the model parameters. Our technique exploits redundancies in the data across all dimensions simultaneously, as opposed to existing methods that use only angular information and ignore correlations in the spatial domain. We show that our model admits efficient angular interpolation directly in the model space, rather than the BTF space, leading to a notably higher rendering speed than in previous work. Additionally, the high quality-storage cost tradeoff enabled by our method facilitates controlling the image quality, storage cost, and rendering speed using a single parameter, the number of coefficients. Previous methods rely on a fixed number of latent variables for training and testing, hence limiting the potential for achieving a favorable quality-storage cost tradeoff and scalability. Our experimental results demonstrate that our method outperforms existing methods both quantitatively and qualitatively, as well as achieving a higher compression ratio and rendering speed.

Place, publisher, year, edition, pages
The Eurographics Association, 2023
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-195283 (URN), 10.2312/sr.20231123 (DOI), 978-3-03868-229-5 (ISBN)
Conference
Eurographics Symposium on Rendering (EGSR), Delft, The Netherlands, 28 - 30 June, 2023
Available from: 2023-06-19 Created: 2023-06-19 Last updated: 2025-02-18. Bibliographically approved.
Vrotsou, K., Navarra, C., Kucher, K., Fedorov, I., Schück, F., Unger, J. & Neset, T.-S. (2023). Towards a Volunteered Geographic Information-Facilitated Visual Analytics Pipeline to Improve Impact-Based Weather Warning Systems. Atmosphere, 14(7), Article ID 1141.
2023 (English) In: Atmosphere, E-ISSN 2073-4433, Vol. 14, no 7, article id 1141. Article in journal (Refereed). Published.
Abstract [en]

Extreme weather events, such as flooding, are expected to increase in frequency and intensity. Therefore, the prediction of extreme weather events, assessment of their local impacts in urban environments, and implementation of adaptation measures are becoming high-priority challenges for local, regional, and national agencies and authorities. To manage these challenges, access to accurate weather warnings and information about the occurrence, extent, and impacts of extreme weather events are crucial. As a result, in addition to official sources of information for prediction and monitoring, citizen volunteered geographic information (VGI) has emerged as a complementary source of valuable information. In this work, we propose the formulation of an approach to complement the impact-based weather warning system that was introduced in Sweden in 2021 by making use of such alternative sources of data. We present and discuss design considerations and opportunities towards the creation of a visual analytics (VA) pipeline for the identification and exploration of extreme weather events and their impacts from VGI texts and images retrieved from social media. The envisioned VA pipeline incorporates three main steps: (1) data collection, (2) image/text classification and analysis, and (3) visualization and exploration through an interactive visual interface. We envision that our work has the potential to support three processes that involve multiple stakeholders of the weather warning system: (1) the validation of previously issued warnings, (2) local and regional assessment-support documentation, and (3) the monitoring of ongoing events. The results of this work could thus generate information that is relevant to climate adaptation decision making and provide potential support for the future development of national weather warning systems.

Place, publisher, year, edition, pages
MDPI, 2023
Keywords
weather warning systems, flooding, volunteered geographic information, visualization, visual analytics, artificial intelligence, machine learning, natural language processing, classification, social media
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-196332 (URN), 10.3390/atmos14071141 (DOI), 001037893300001 ()
Projects
AI4ClimateAdaptation
Funder
Vinnova, 2020-03388
Note

This research was funded by Sweden's Innovation Agency, VINNOVA, grant number 2020-03388, 'AI for Climate Adaptation'.

Available from: 2023-07-18 Created: 2023-07-18 Last updated: 2024-07-04
Hanji, P., Mantiuk, R. K., Eilertsen, G., Hajisharif, S. & Unger, J. (2022). Comparison of single image HDR reconstruction methods — the caveats of quality assessment. In: Munkhtsetseg Nandigjav, Niloy J. Mitra, Aaron Hertzmann (Eds.), SIGGRAPH '22: ACM SIGGRAPH 2022 Conference Proceedings. Paper presented at SIGGRAPH '22: Special Interest Group on Computer Graphics and Interactive Techniques Conference, Vancouver, BC, Canada, August 7-11, 2022 (pp. 1-8). New York, NY, United States: Association for Computing Machinery (ACM), Article ID 1.
2022 (English) In: SIGGRAPH '22: ACM SIGGRAPH 2022 Conference Proceedings / [ed] Munkhtsetseg Nandigjav, Niloy J. Mitra, Aaron Hertzmann, New York, NY, United States: Association for Computing Machinery (ACM), 2022, p. 1-8, article id 1. Conference paper, Published paper (Refereed).
Abstract [en]

As the problem of reconstructing high dynamic range (HDR) images from a single exposure has attracted much research effort, it is essential to provide a robust protocol and clear guidelines on how to evaluate and compare new methods. In this work, we compared six recent single image HDR reconstruction (SI-HDR) methods in a subjective image quality experiment on an HDR display. We found that only two methods produced results that are, on average, more preferred than the unprocessed single exposure images. When the same methods are evaluated using image quality metrics, as typically done in papers, the metric predictions correlate poorly with subjective quality scores. The main reason is a significant tone and color difference between the reference and reconstructed HDR images. To improve the predictions of image quality metrics, we propose correcting for the inaccuracies of the estimated camera response curve before computing quality values. We further analyze the sources of prediction noise when evaluating SI-HDR methods and demonstrate that existing metrics can reliably predict only large quality differences.
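The proposed correction can be illustrated with a deliberately simple stand-in: fit a per-image gain and gamma between reconstruction and reference in the log domain, and apply it before running any quality metric. The paper corrects the estimated camera response curve itself; this gain/gamma model and the function name are assumptions for a minimal sketch.

```python
import numpy as np

def gain_gamma_correct(recon, ref, eps=1e-6):
    """Fit ref ~ gain * recon**gamma by linear regression in the log
    domain and return the corrected reconstruction. A simplified
    stand-in for correcting an estimated camera response curve
    before computing image quality metric values."""
    x = np.log(recon + eps).ravel()
    y = np.log(ref + eps).ravel()
    gamma, log_gain = np.polyfit(x, y, 1)   # slope = gamma, intercept = log gain
    return np.exp(log_gain) * (recon + eps) ** gamma
```

Removing such global tone differences before scoring keeps the metric focused on structural reconstruction errors rather than a benign exposure/response mismatch, which is the failure mode the paper identifies.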

Place, publisher, year, edition, pages
New York, NY, United States: Association for Computing Machinery (ACM), 2022
Keywords
High dynamic range, inverse problems, image quality metrics
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-186401 (URN), 10.1145/3528233.3530729 (DOI), 9781450393379 (ISBN)
Conference
SIGGRAPH '22: Special Interest Group on Computer Graphics and Interactive Techniques Conference, Vancouver, BC, Canada, August 7-11, 2022
Note

Funding: This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement N° 725253–EyeCode)

Available from: 2022-06-23 Created: 2022-06-23 Last updated: 2025-02-18. Bibliographically approved.
Jönsson, D., Kronander, J., Unger, J., Schön, T. & Wrenninge, M. (2022). Direct Transmittance Estimation in Heterogeneous Participating Media Using Approximated Taylor Expansions. IEEE Transactions on Visualization and Computer Graphics, 28(7), 2602-2614
2022 (English) In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 28, no 7, p. 2602-2614. Article in journal (Refereed). Published.
Abstract [en]

Evaluating the transmittance between two points along a ray is a key component in solving the light transport through heterogeneous participating media, and entails computing an intractable exponential of the integrated medium's extinction coefficient. While algorithms for estimating this transmittance exist, there is a lack of theoretical knowledge about their behaviour, which also prevents new theoretically sound algorithms from being developed. For this purpose, we introduce a new class of unbiased transmittance estimators based on random sampling or truncation of a Taylor expansion of the exponential function. In contrast to classical tracking algorithms, these estimators are non-analogous to the physical light transport process and directly sample the underlying extinction function without performing incremental advancement. We present several versions of the new class of estimators, based on either importance sampling or Russian roulette, to provide finite unbiased estimators of the infinite Taylor series expansion. We also show that the well-known ratio tracking algorithm can be seen as a special case of the new class of estimators. Lastly, we conduct performance evaluations on both the central processing unit (CPU) and the graphics processing unit (GPU), and the results demonstrate that the new algorithms outperform traditional algorithms for heterogeneous media.
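The series-based estimators can be illustrated for a known optical depth x: sample a single Taylor term of exp(-x) with a geometric (Russian roulette) distribution over the term index and reweight by its probability, which is unbiased by construction. This scalar toy, with an assumed function name, omits what the paper actually handles: sampling the extinction integral along the ray and the importance-sampled variants.

```python
import numpy as np
from math import factorial

def transmittance_estimates(x, q, n, rng):
    """Unbiased single-sample estimates of the transmittance exp(-x) for a
    known optical depth x. Each estimate samples one Taylor term index K
    with Russian roulette, P(K = k) = (1 - q) * q**k, and returns the
    reweighted term (-x)**K / K! / P(K)."""
    K = rng.geometric(1.0 - q, size=n) - 1          # K in {0, 1, 2, ...}
    fact = np.array([factorial(k) for k in K], dtype=float)
    pK = (1.0 - q) * q ** K
    return (-x) ** K / fact / pK
```

Averaging many such estimates converges to exp(-x) regardless of the continuation probability q (q only affects variance and expected term count), and, as the abstract notes, ratio tracking arises as a special case of this family of estimators.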

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
Media, Taylor series, Rendering (computer graphics), Estimation, Upper bound, Monte Carlo methods
National Category
Signal Processing; Computer and Information Sciences; Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-178602 (URN), 10.1109/TVCG.2020.3035516 (DOI), 000801853400005 (), 33141672 (PubMedID)
Funder
Knut and Alice Wallenberg Foundation, 2013-0076; Swedish e-Science Research Center; Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Foundation for Strategic Research, RIT15-0012; ELLIIT - The Linköping-Lund Initiative on IT and Mobile Communications
Note

Funding: Knut and Alice Wallenberg Foundation (KAW) [2013-0076]; SeRC (Swedish e-Science Research Center); Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Foundation for Strategic Research (SSF) via the project ASSEMBLE [RIT15-0012]; ELLIIT environment for strategic research in Sweden

Available from: 2021-08-24 Created: 2021-08-24 Last updated: 2025-02-18. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-7765-1747
