liu.se - Search for publications in DiVA
Miandji, Ehsan
Publications (10 of 22)
Kavoosighafi, B., Hajisharif, S., Miandji, E., Baravdish, G., Cao, W. & Unger, J. (2024). Deep SVBRDF Acquisition and Modelling: A Survey. Computer graphics forum (Print), 43(6)
Deep SVBRDF Acquisition and Modelling: A Survey
2024 (English) In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 43, no 6. Article in journal (Refereed) Published
Abstract [en]

Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine-learning-driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high-quality measurements of bi-directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi-directional Reflectance Distribution Functions (SVBRDFs). Learning-based approaches for appearance capture will play a key role in the development of new technologies with a significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State-of-the-Art Report (STAR) presents an in-depth overview of the state of the art in machine-learning-driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling the complex light interaction properties of real-world materials. The overview includes a categorization of current methods along with a summary of each technique, and an evaluation of their functionality and their complexity in terms of acquisition requirements, computational aspects and usability constraints. The STAR concludes by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at .

[Figure: Papers surveyed in this study with a focus on the extraction of BRDF or SVBRDF from a few measurements, classified according to their specific geometries and lighting conditions. Whole-scene refers to techniques that capture entire indoor or outdoor scenes, which are outside the scope of this survey.]

Place, publisher, year, edition, pages
WILEY, 2024
Keywords
modelling; appearance modelling; rendering
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-207835 (URN)10.1111/cgf.15199 (DOI)001312821700001 ()
Note

Funding Agencies|European Union [956585]

Available from: 2024-09-25 Created: 2024-09-25 Last updated: 2024-10-07
Cao, W., Miandji, E. & Unger, J. (2024). Multidimensional Compressed Sensing for Spectral Light Field Imaging. In: Petia Radeva, A. Furnari, Kadi Bouatouch, A. Augusto Sousa (Eds.), Multidimensional Compressed Sensing for Spectral Light Field Imaging. Paper presented at the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2024), Rome, Feb 27-29, 2024 (pp. 349-356). Rome, Italy: Institute for Systems and Technologies of Information, Control and Communication, 4
Open this publication in new window or tab >>Multidimensional Compressed Sensing for Spectral Light Field Imaging
2024 (English) In: Multidimensional Compressed Sensing for Spectral Light Field Imaging / [ed] Petia Radeva, A. Furnari, Kadi Bouatouch, A. Augusto Sousa, Rome, Italy: Institute for Systems and Technologies of Information, Control and Communication, 2024, Vol. 4, 8 p., p. 349-356. Conference paper, Published paper (Refereed)
Abstract [en]

This paper considers a compressive multi-spectral light field camera model that utilizes a one-hot spectral-coded mask and a microlens array to capture spatial, angular, and spectral information using a single monochrome sensor. We propose a model that employs compressed sensing techniques to reconstruct the complete multi-spectral light field from undersampled measurements. Unlike previous work where a light field is vectorized to a 1D signal, our method employs a 5D basis and a novel 5D measurement model, hence matching the intrinsic dimensionality of multispectral light fields. We mathematically and empirically show the equivalence of 5D and 1D sensing models, and most importantly that the 5D framework achieves orders of magnitude faster reconstruction while requiring a small fraction of the memory. Moreover, our new multidimensional sensing model opens new research directions for designing efficient visual data acquisition algorithms and hardware.
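The equivalence of the multidimensional and vectorized 1D sensing models claimed in the abstract can be illustrated with a toy 2D example. This is a numpy sketch with invented shapes, not the paper's 5D implementation: applying a measurement matrix along each mode separately matches a single Kronecker-product matrix acting on the vectorized signal.

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.standard_normal((3, 8))   # measurement matrix for dimension 1
A2 = rng.standard_normal((4, 10))  # measurement matrix for dimension 2
X = rng.standard_normal((8, 10))   # signal in its native 2D shape

# Multidimensional sensing: apply each matrix along its own mode.
Y_nd = A1 @ X @ A2.T               # shape (3, 4)

# Equivalent 1D sensing: the Kronecker product acting on the vectorized signal.
y_1d = np.kron(A1, A2) @ X.reshape(-1)   # shape (12,)

# The two models agree up to reshaping (row-major vec identity).
print(np.allclose(Y_nd.reshape(-1), y_1d))  # True
```

The per-mode form never materializes the Kronecker matrix, which is why a multidimensional model becomes dramatically cheaper in time and memory as the number of dimensions grows.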

Place, publisher, year, edition, pages
Rome, Italy: Institute for Systems and Technologies of Information, Control and Communication, 2024. p. 8
Keywords
Spectral light field, Compressive sensing
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-201273 (URN)10.5220/0012431300003660 (DOI)978-989-758-679-8 (ISBN)
Conference
19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2024), Rome, Feb 27-29, 2024
Available from: 2024-03-03 Created: 2024-03-03 Last updated: 2025-01-20
Lei, D., Miandji, E., Unger, J. & Hotz, I. (2024). Sparse q-ball imaging towards efficient visual exploration of HARDI data. Computer graphics forum (Print), 43(3), Article ID e15082.
Open this publication in new window or tab >>Sparse q-ball imaging towards efficient visual exploration of HARDI data
2024 (English) In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 43, no 3, article id e15082. Article in journal (Refereed) Published
Abstract [en]

Diffusion-weighted magnetic resonance imaging (D-MRI) is a technique to measure the diffusion of water in biological tissues. It is used to detect microscopic patterns, such as neural fibers in the living human brain, with many medical and neuroscience applications, e.g. fiber tracking. In this paper, we consider High-Angular Resolution Diffusion Imaging (HARDI), which provides one of the richest representations of water diffusion. It records the movement of water molecules by measuring diffusion under 64 or more directions. A key challenge is that it generates high-dimensional, large, and complex datasets. In our work, we develop a novel representation that exploits the inherent sparsity of the HARDI signal by approximating it as a linear combination of atoms from an overcomplete data-driven dictionary using only a sparse set of coefficients. We show that this approach can be efficiently integrated into the standard q-ball imaging pipeline to compute the diffusion orientation distribution function (ODF). Sparse representations have the potential to reduce the size of the data while also giving some insight into it. To explore the results, we provide a visualization of the atoms of the dictionary and their frequency in the data to highlight its basic characteristics. We present our proposed pipeline and demonstrate its performance on 5 HARDI datasets.
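The core sparse approximation step — expressing a signal with a few atoms from an overcomplete dictionary — can be sketched with a generic greedy pursuit such as Orthogonal Matching Pursuit. This is an illustrative stand-in with made-up sizes, not the authors' q-ball pipeline:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D (unit-norm
    columns) and fit them jointly to x by least squares."""
    residual = x.copy()
    support = []
    coeffs = np.array([])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit all selected atoms jointly.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    c = np.zeros(D.shape[1])
    c[support] = coeffs
    return c

rng = np.random.default_rng(3)
D = rng.standard_normal((40, 60))
D /= np.linalg.norm(D, axis=0)          # normalize atoms to unit norm
c_true = np.zeros(60)
c_true[[5, 17, 42]] = [2.0, -1.5, 1.0]  # 3-sparse ground-truth code
x = D @ c_true                          # signal sparse in the dictionary
c_hat = omp(D, x, k=3)
print(round(float(np.linalg.norm(x - D @ c_hat)), 6))
```

In this easy noiseless regime the greedy pursuit drives the residual to essentially zero; real HARDI coefficients are found against a learned, much larger dictionary.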

Place, publisher, year, edition, pages
WILEY, 2024
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:liu:diva-204924 (URN)10.1111/cgf.15082 (DOI)001239278600001 ()
Note

Funding Agencies|Swedish Research Council (VR)

Available from: 2024-06-17 Created: 2024-06-17 Last updated: 2025-02-07. Bibliographically approved
Kavoosighafi, B., Frisvad, J. R., Hajisharif, S., Unger, J. & Miandji, E. (2023). SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions. Paper presented at the Eurographics Symposium on Rendering (EGSR), Delft, The Netherlands, 28-30 June 2023 (pp. 37-50). The Eurographics Association
SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions
2023 (English) Conference paper, Published paper (Refereed)
Abstract [en]

We propose a novel dictionary-based representation learning model for Bidirectional Texture Functions (BTFs) aiming at compact storage, real-time rendering performance, and high image quality. Our model is trained once, using a small training set, and then used to obtain a sparse tensor containing the model parameters. Our technique exploits redundancies in the data across all dimensions simultaneously, as opposed to existing methods that use only angular information and ignore correlations in the spatial domain. We show that our model admits efficient angular interpolation directly in the model space, rather than the BTF space, leading to a notably higher rendering speed than in previous work. Additionally, the high quality-storage cost tradeoff enabled by our method facilitates controlling the image quality, storage cost, and rendering speed using a single parameter, the number of coefficients. Previous methods rely on a fixed number of latent variables for training and testing, hence limiting the potential for achieving a favorable quality-storage cost tradeoff and scalability. Our experimental results demonstrate that our method outperforms existing methods both quantitatively and qualitatively, as well as achieving a higher compression ratio and rendering speed.

Place, publisher, year, edition, pages
The Eurographics Association, 2023
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-195283 (URN)10.2312/sr.20231123 (DOI)978-3-03868-229-5 (ISBN)
Conference
Eurographics Symposium on Rendering (EGSR), Delft, The Netherlands, 28 - 30 June, 2023
Available from: 2023-06-19 Created: 2023-06-19 Last updated: 2024-09-23. Bibliographically approved
Baravdish, G., Unger, J. & Miandji, E. (2021). GPU Accelerated SL0 for Multidimensional Signals. In: 50th International Conference on Parallel Processing Workshop Proceedings (ICPP Workshops '21). Paper presented at the 50th International Conference on Parallel Processing (ICPP), held online, Aug 9-12, 2021. Association for Computing Machinery, Article ID 28.
Open this publication in new window or tab >>GPU Accelerated SL0 for Multidimensional Signals
2021 (English) In: 50th International Conference on Parallel Processing Workshop Proceedings (ICPP Workshops '21), Association for Computing Machinery, 2021, article id 28. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we propose a novel GPU-based method for highly parallel compressed sensing of n-dimensional (nD) signals based on the smoothed l0 (SL0) algorithm. We demonstrate the efficiency of our approach by showing several examples of nD tensor reconstructions. Moreover, we also consider traditional 1D compressed sensing and compare the results. We show that the multidimensional SL0 algorithm is computationally superior to the 1D variant due to the small dictionary sizes per dimension. This allows us to fully utilize the GPU and perform massive batch-wise computations, which is not possible for 1D compressed sensing using SL0. For our evaluations, we use light field and light field video data sets. We show that we gain more than an order of magnitude speedup for both one-dimensional and multidimensional data compared to a parallel CPU implementation. Finally, we present a theoretical analysis of the SL0 algorithm for nD signals, which generalizes previous work for 1D signals.
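For reference, the 1D SL0 iteration the paper builds on is short: gradient steps on a smooth Gaussian surrogate of the l0 norm, interleaved with projection back onto the measurement constraint, while the smoothing width sigma is annealed. A minimal numpy sketch with typical textbook parameters — not the paper's GPU implementation:

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=5):
    """Smoothed-l0 recovery of a sparse x from noiseless y = A x (1D sketch)."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                      # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # Gradient step on the Gaussian surrogate of the l0 norm ...
            x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))
            # ... then project back onto the feasible set {x : Ax = y}.
            x = x - A_pinv @ (A @ x - y)
        sigma *= sigma_decrease         # anneal the smoothing width
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))      # 40 measurements of a length-100 signal
x_true = np.zeros(100)
x_true[rng.choice(100, 4, replace=False)] = rng.standard_normal(4)
x_hat = sl0(A, A @ x_true)
print(round(float(np.linalg.norm(x_hat - x_true)), 4))
```

The nD version in the paper applies the same iteration with a small dictionary per dimension, which is what makes massive batch-wise GPU execution possible.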

Place, publisher, year, edition, pages
ASSOC COMPUTING MACHINERY, 2021
Series
International Conference on Parallel Processing Workshops, ISSN 1530-2016
Keywords
GPGPU; Multidimensional signal processing; Compressed sensing
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-179559 (URN)10.1145/3458744.3474048 (DOI)000747651900033 ()9781450384414 (ISBN)
Conference
50th International Conference on Parallel Processing (ICPP), held online, Aug 9-12, 2021
Note

Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation

Available from: 2021-09-24 Created: 2021-09-24 Last updated: 2024-11-28
Hajisharif, S., Miandji, E., Baravdish, G., Per, L. & Unger, J. (2020). Compression and Real-Time Rendering of Inward Looking Spherical Light Fields. In: Wilkie, Alexander and Banterle, Francesco (Eds.), Eurographics 2020 - Short Papers. Paper presented at Eurographics 2020.
Compression and Real-Time Rendering of Inward Looking Spherical Light Fields
2020 (English) In: Eurographics 2020 - Short Papers / [ed] Wilkie, Alexander and Banterle, Francesco, 2020. Conference paper, Published paper (Refereed)
Abstract [en]

Photorealistic rendering is an essential tool for immersive virtual reality. In this regard, the data structure of choice is typically light fields since they contain multidimensional information about the captured environment that can provide motion parallax and view-dependent information such as highlights. There are various ways to acquire light fields depending on the nature of the scene, limitations on the capturing setup, and the application at hand. Our focus in this paper is on full-parallax imaging of large-scale static objects for photorealistic real-time rendering. To this end, we introduce and simulate a new design for capturing inward-looking spherical light fields, and propose a system for efficient compression and real-time rendering of such data using consumer-level hardware suitable for virtual reality applications.

Series
Eurographics 2020 - Short Papers, ISSN 1017-4656
Keywords
light field, compression, realtime rendering, rendering, multi camera system, data-driven
National Category
Media Engineering
Identifiers
urn:nbn:se:liu:diva-165799 (URN)10.2312/egs.20201007 (DOI)978-3-03868-101-4 (ISBN)
Conference
Eurographics 2020
Available from: 2020-05-25 Created: 2020-05-25 Last updated: 2021-09-16
Hajisharif, S., Miandji, E., Guillemot, C. & Unger, J. (2020). Single Sensor Compressive Light Field Video Camera. Paper presented at Eurographics 2020. Computer graphics forum (Print), 39(2), 463-474
Single Sensor Compressive Light Field Video Camera
2020 (English) In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no 2, p. 463-474. Article in journal (Refereed) Published
Abstract [en]

This paper presents a novel compressed sensing (CS) algorithm and camera design for light field video capture using a single-sensor consumer camera module. Unlike microlens light field cameras, which sacrifice spatial resolution to obtain angular information, our CS approach is designed for capturing light field videos with high angular, spatial, and temporal resolution. The compressive measurements required by CS are obtained using a random color-coded mask placed between the sensor and aperture planes. The convolution of the incoming light rays from different angles with the mask results in a single image on the sensor, hence achieving a significant reduction in the bandwidth required for capturing light field videos. We propose to change the random pattern on the spectral mask between consecutive frames in a video sequence and to extract spatio-angular-spectral-temporal 6D patches. Our CS reconstruction algorithm for light field videos recovers each frame while taking the neighboring frames into account, achieving significantly higher reconstruction quality with reduced temporal incoherencies compared with previous methods. Moreover, a thorough analysis of various sensing models for compressive light field video acquisition is conducted to highlight the advantages of our method. The results show a clear advantage of our method for monochrome sensors, as well as sensors with color filter arrays.
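The measurement model sketched above — many angular views multiplexed through a coded mask onto one monochrome sensor — reduces, per pixel, to a mask-weighted sum over angles. A toy simulation with invented sizes; the real design additionally handles spectral color coding and per-frame mask changes:

```python
import numpy as np

rng = np.random.default_rng(4)
A, H, W = 9, 32, 32                   # angular views, spatial resolution
views = rng.random((A, H, W))         # incoming light field (one image per angle)
mask = rng.random((A, H, W))          # per-angle transmission of the coded mask

# Each sensor pixel integrates all angles, weighted by the mask: a single
# 2D image now encodes A views, i.e. an A-fold bandwidth reduction that
# the CS reconstruction later inverts.
sensor = (mask * views).sum(axis=0)
print(sensor.shape)  # (32, 32)
```

Recovering `views` from `sensor` is the underdetermined inverse problem that the sparse 6D-patch prior makes tractable.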

Place, publisher, year, edition, pages
Wiley-Blackwell, 2020
Keywords
Light fields, compressive sensing, light field video, single sensor imaging
National Category
Media Engineering
Identifiers
urn:nbn:se:liu:diva-165790 (URN)10.1111/cgf.13944 (DOI)000548709600038 ()
Conference
Eurographics 2020
Note

Funding agencies: Swedish Foundation for Strategic Research [IIS11-0081]; EU H2020 Research and Innovation Programme [694122]

Available from: 2020-05-25 Created: 2020-05-25 Last updated: 2021-07-13
Miandji, E., Hajisharif, S. & Unger, J. (2019). A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos. ACM Transactions on Graphics, 38(3), 1-18, Article ID 23.
A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos
2019 (English) In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 38, no 3, p. 1-18, article id 23. Article in journal (Refereed) Published
Abstract [en]

In this article we present a novel dictionary learning framework designed for compression and sampling of light fields and light field videos. Unlike previous methods, where a single dictionary with one-dimensional atoms is learned, we propose to train a Multidimensional Dictionary Ensemble (MDE). It is shown that learning an ensemble in the native dimensionality of the data promotes sparsity, hence increasing the compression ratio and sampling efficiency. To make maximum use of correlations within the light field data sets, we also introduce a novel nonlocal pre-clustering approach that constructs an Aggregate MDE (AMDE). The pre-clustering not only improves the image quality but also reduces the training time by an order of magnitude in most cases. The decoding algorithm supports efficient local reconstruction of the compressed data, which enables efficient real-time playback of high-resolution light field videos. Moreover, we discuss the application of AMDE for compressed sensing. A theoretical analysis is presented that indicates the required conditions for exact recovery of point-sampled light fields that are sparse under AMDE. The analysis provides guidelines for designing efficient compressive light field cameras. We use various synthetic and natural light field and light field video data sets to demonstrate the utility of our approach in comparison with the state-of-the-art learning-based dictionaries, as well as established analytical dictionaries.
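The ensemble idea can be illustrated in miniature: code a signal against each dictionary in the ensemble and keep the one whose k-term approximation has the least error. The sketch below uses hypothetical sizes and assumes orthonormal dictionaries, for which the best k-sparse code is simply projection followed by hard thresholding; the paper's multidimensional atoms and MDE/AMDE training are beyond this toy.

```python
import numpy as np

def best_dictionary(x, dictionaries, k):
    """Return (index, error, code) of the dictionary whose k-term
    approximation of x has the least reconstruction error.
    Assumes each dictionary is orthonormal."""
    best = None
    for i, D in enumerate(dictionaries):
        c = D.T @ x                          # analysis coefficients
        keep = np.argsort(np.abs(c))[-k:]    # k largest-magnitude entries
        c_k = np.zeros_like(c)
        c_k[keep] = c[keep]                  # hard thresholding
        err = float(np.linalg.norm(x - D @ c_k))
        if best is None or err < best[1]:
            best = (i, err, c_k)
    return best

rng = np.random.default_rng(2)
# Two random orthonormal dictionaries; x is built to be 3-sparse in the second.
D0 = np.linalg.qr(rng.standard_normal((64, 64)))[0]
D1 = np.linalg.qr(rng.standard_normal((64, 64)))[0]
code = np.zeros(64)
code[:3] = [1.0, -2.0, 0.5]
x = D1 @ code
idx, err, _ = best_dictionary(x, [D0, D1], k=3)
print(idx, round(err, 6))  # dictionary 1 wins with ~zero error
```

Picking the member dictionary per patch is what lets a small sparsity budget cover heterogeneous light field content.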

Place, publisher, year, edition, pages
ACM Digital Library, 2019
Keywords
Light field video compression, compressed sensing, dictionary learning, light field photography
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-158026 (URN)10.1145/3269980 (DOI)000495415600005 ()
Available from: 2019-06-24 Created: 2019-06-24 Last updated: 2020-02-18. Bibliographically approved
Baravdish, G., Miandji, E. & Unger, J. (2019). GPU Accelerated Sparse Representation of Light Fields. In: VISIGRAPP - 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, February 25-27, 2019. Paper presented at VISAPP - 14th International Conference on Computer Vision Theory and Applications, Prague, Czech Republic, February 25-27, 2019 (pp. 177-182). SCITEPRESS, 4
GPU Accelerated Sparse Representation of Light Fields
2019 (English) In: VISIGRAPP - 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, February 25-27, 2019, SCITEPRESS, 2019, Vol. 4, p. 177-182. Conference paper, Published paper (Refereed)
Abstract [en]

We present a method for GPU-accelerated compression of light fields using a dictionary learning framework for light field images. The large amount of data produced when capturing light fields makes compression a challenge, and we seek to accelerate the encoding routine with GPGPU computations. We compress the data by projecting each data point onto a set of trained multi-dimensional dictionaries and seek the sparsest representation with the least error. This is done by a parallelization of the tensor-matrix product computed on the GPU. A greedy algorithm optimized for computation on the GPU is also presented. The data is encoded segmentally in parallel for faster computation while maintaining quality. The results show an order of magnitude faster encoding time compared to previous results in the same research field. We conclude that there is room for further improvements in speed, and thus the method is not far from an interactive compression speed.
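The batched projection at the heart of such an encoder — every patch multiplied by per-mode dictionaries at once — maps naturally onto GPU kernels. A numpy stand-in with hypothetical shapes and orthonormal dictionaries for simplicity, not the paper's CUDA code:

```python
import numpy as np

rng = np.random.default_rng(5)
B = 1000                                          # patches encoded in parallel
U = np.linalg.qr(rng.standard_normal((8, 8)))[0]  # mode-1 dictionary (orthonormal)
V = np.linalg.qr(rng.standard_normal((8, 8)))[0]  # mode-2 dictionary (orthonormal)
patches = rng.standard_normal((B, 8, 8))          # batch of 2D light field patches

# Forward projection C_b = U^T X_b V for every patch in one batched contraction
# (the tensor-matrix product that parallelizes well on a GPU).
coeffs = np.einsum('ji,bjk,kl->bil', U, patches, V)

# Reconstruction X_b = U C_b V^T; orthonormal dictionaries make this exact.
recon = np.einsum('ij,bjk,lk->bil', U, coeffs, V)
print(np.allclose(recon, patches))  # True
```

In the actual encoder the coefficient tensors would additionally be sparsified by a greedy selection step; here only the batched projection layout is shown.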

Place, publisher, year, edition, pages
SCITEPRESS, 2019
Keywords
Light Field Compression, GPGPU Computation, Sparse Representation
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-157009 (URN)10.5220/0007393101770182 (DOI)000570779500020 ()978-989-758-354-4 (ISBN)
Conference
VISAPP - 14th International Conference on Computer Vision Theory and Applications, Prague, Czech Republic, February 25-27, 2019.
Available from: 2019-05-22 Created: 2019-05-22 Last updated: 2021-09-16. Bibliographically approved
Hajisharif, S., Miandji, E., Per, L., Tran, K. & Unger, J. (2019). Light Field Video Compression and Real Time Rendering. Paper presented at Pacific Graphics 2019. Computer graphics forum (Print), 38, 265-276
Light Field Video Compression and Real Time Rendering
2019 (English) In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, p. 265-276. Article in journal (Refereed) Published
Abstract [en]

Light field imaging is rapidly becoming an established method for generating flexible image-based descriptions of scene appearance. Compared to classical 2D imaging techniques, the angular information included in light fields enables effects such as post-capture refocusing and the exploration of the scene from different vantage points. In this paper, we describe a novel GPU pipeline for compression and real-time rendering of light field videos with full parallax. To achieve this, we employ a dictionary learning approach and train an ensemble of dictionaries capable of efficiently representing light field video data using highly sparse coefficient sets. A novel, key element in our representation is that we simultaneously compress both the image data (pixel colors) and the auxiliary information (depth, disparity, or optical flow) required for view interpolation. During playback, the coefficients are streamed to the GPU, where the light field and the auxiliary information are reconstructed using the dictionary ensemble and view interpolation is performed. In order to realize the pipeline, we present several technical contributions, including a denoising scheme that enhances the sparsity of the dataset and enables higher compression ratios, and a novel pruning strategy that reduces the size of the dictionary ensemble and leads to significant reductions in computational complexity during the encoding of a light field. Our approach is independent of the light field parameterization and can be used with data from any light field video capture system. To demonstrate the usefulness of our pipeline, we utilize various publicly available light field video datasets and discuss the medical application of documenting heart surgery.

Place, publisher, year, edition, pages
John Wiley & Sons, 2019
Keywords
Computational photography, Light Fields, Light Fields Compression, Light Field Video
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-162100 (URN)10.1111/cgf.13835 (DOI)000496351100025 ()
Conference
Pacific Graphics 2019
Note

Funding agencies: children's heart clinic (Barnhjärtcentrum) at Skåne University Hospital; strategic research environment ELLIIT; Swedish Research Council [201505180]; Vinnova [2017-03728]; Visual Sweden Platform for Augmented Intelligence

Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2021-09-30