Miandji, Ehsan
Publications (10 of 19)
Kavoosighafi, B., Frisvad, J. R., Hajisharif, S., Unger, J. & Miandji, E. (2023). SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions. Paper presented at the Eurographics Symposium on Rendering (EGSR), Delft, The Netherlands, 28-30 June 2023.
SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions
2023 (English) Conference paper, Published paper (Refereed)
Abstract [en]

We propose a novel dictionary-based representation learning model for Bidirectional Texture Functions (BTFs) aiming at compact storage, real-time rendering performance, and high image quality. Our model is trained once, using a small training set, and then used to obtain a sparse tensor containing the model parameters. Our technique exploits redundancies in the data across all dimensions simultaneously, as opposed to existing methods that use only angular information and ignore correlations in the spatial domain. We show that our model admits efficient angular interpolation directly in the model space, rather than the BTF space, leading to a notably higher rendering speed than in previous work. Additionally, the high quality-storage cost tradeoff enabled by our method facilitates controlling the image quality, storage cost, and rendering speed using a single parameter, the number of coefficients. Previous methods rely on a fixed number of latent variables for training and testing, hence limiting the potential for achieving a favorable quality-storage cost tradeoff and scalability. Our experimental results demonstrate that our method outperforms existing methods both quantitatively and qualitatively, as well as achieving a higher compression ratio and rendering speed.
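A small aside on why interpolation can stay in the model space: a dictionary decode is a linear map, so blending sparse coefficient vectors and then decoding gives the same result as decoding and then blending. A minimal sketch of this property (the toy dictionary and names are ours, not the paper's model):

```python
import numpy as np

def decode(D, s):
    """Decode a coefficient vector through a linear dictionary."""
    return D @ s

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 64))              # toy dictionary
s_a, s_b = rng.standard_normal(64), rng.standard_normal(64)

# Interpolating coefficients, then decoding ...
model_space = decode(D, 0.3 * s_a + 0.7 * s_b)
# ... matches decoding both and interpolating the decoded textures,
# so interpolation can run on the small coefficient vectors.
btf_space = 0.3 * decode(D, s_a) + 0.7 * decode(D, s_b)
```

Since coefficient vectors are far smaller than decoded BTF slices, interpolating before decoding saves both memory traffic and compute at render time.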

National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-195283 (URN)
Conference
Eurographics Symposium on Rendering (EGSR), Delft, The Netherlands, 28 - 30 June, 2023
Available from: 2023-06-19 Created: 2023-06-19 Last updated: 2023-06-28. Bibliographically approved.
Baravdish, G., Unger, J. & Miandji, E. (2021). GPU Accelerated SL0 for Multidimensional Signals. In: 50th International Conference on Parallel Processing Workshop Proceedings (ICPP Workshops '21). Paper presented at the 50th International Conference on Parallel Processing (ICPP), held online, Aug. 9-12, 2021. Association for Computing Machinery, Article ID 28.
GPU Accelerated SL0 for Multidimensional Signals
2021 (English) In: 50th International Conference on Parallel Processing Workshop Proceedings (ICPP Workshops '21), Association for Computing Machinery, 2021, article id 28. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we propose a novel GPU-based method for highly parallel compressed sensing of n-dimensional (nD) signals based on the smoothed l0 (SL0) algorithm. We demonstrate the efficiency of our approach with several examples of nD tensor reconstructions. Moreover, we also consider traditional 1D compressed sensing and compare the results. We show that the multidimensional SL0 algorithm is computationally superior to the 1D variant due to the small dictionary sizes per dimension. This allows us to fully utilize the GPU and perform massive batch-wise computations, which is not possible for 1D compressed sensing using SL0. For our evaluations, we use light field and light field video data sets. We show a speedup of more than an order of magnitude for both one-dimensional and multidimensional data points compared to a parallel CPU implementation. Finally, we present a theoretical analysis of the SL0 algorithm for nD signals, which generalizes previous work for 1D signals.
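For readers unfamiliar with SL0, the 1D iteration that the paper parallelizes and generalizes can be sketched as follows; the parameter defaults and the exp(-s^2 / 2σ^2) smoothing kernel are common textbook choices, not taken from the paper's GPU implementation:

```python
import numpy as np

def sl0(A, x, sigma_min=1e-4, sigma_decay=0.6, mu=2.0, inner_iters=3):
    """Smoothed-l0 (SL0) recovery of a sparse s such that A @ s = x.

    Gradient steps on a smooth surrogate of the l0 norm are alternated
    with projections back onto the affine constraint set {s : A s = x},
    while the smoothing width sigma is gradually decreased.
    """
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                       # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))      # begin with a wide Gaussian
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # gradient direction of the Gaussian l0 surrogate
            delta = s * np.exp(-s**2 / (2.0 * sigma**2))
            s = s - mu * delta
            # project back onto the feasible set
            s = s - A_pinv @ (A @ s - x)
        sigma *= sigma_decay
    return s
```

The batching opportunity the paper exploits comes from replacing the single large `A` with small per-dimension dictionaries, so many such iterations fit on the GPU at once.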

Place, publisher, year, edition, pages
Association for Computing Machinery, 2021
Series
International Conference on Parallel Processing Workshops, ISSN 1530-2016
Keywords
GPGPU; Multidimensional signal processing; Compressed sensing
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-179559 (URN)10.1145/3458744.3474048 (DOI)9781450384414 (ISBN)
Conference
50th International Conference on Parallel Processing (ICPP), held online, Aug. 9-12, 2021
Note

Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation

Available from: 2021-09-24 Created: 2021-09-24 Last updated: 2022-02-09
Hajisharif, S., Miandji, E., Baravdish, G., Per, L. & Unger, J. (2020). Compression and Real-Time Rendering of Inward Looking Spherical Light Fields. In: Wilkie, A. & Banterle, F. (Eds.), Eurographics 2020 - Short Papers. Paper presented at Eurographics 2020.
Compression and Real-Time Rendering of Inward Looking Spherical Light Fields
2020 (English) In: Eurographics 2020 - Short Papers / [ed] Wilkie, Alexander and Banterle, Francesco, 2020. Conference paper, Published paper (Refereed)
Abstract [en]

Photorealistic rendering is an essential tool for immersive virtual reality. In this regard, the data structure of choice is typically light fields since they contain multidimensional information about the captured environment that can provide motion parallax and view-dependent information such as highlights. There are various ways to acquire light fields depending on the nature of the scene, limitations on the capturing setup, and the application at hand. Our focus in this paper is on full-parallax imaging of large-scale static objects for photorealistic real-time rendering. To this end, we introduce and simulate a new design for capturing inward-looking spherical light fields, and propose a system for efficient compression and real-time rendering of such data using consumer-level hardware suitable for virtual reality applications.

Series
Eurographics 2020 - Short Papers, ISSN 1017-4656
Keywords
light field, compression, realtime rendering, rendering, multi camera system, data-driven
National Category
Media Engineering
Identifiers
urn:nbn:se:liu:diva-165799 (URN)10.2312/egs.20201007 (DOI)978-3-03868-101-4 (ISBN)
Conference
Eurographics 2020
Available from: 2020-05-25 Created: 2020-05-25 Last updated: 2021-09-16
Hajisharif, S., Miandji, E., Guillemot, C. & Unger, J. (2020). Single Sensor Compressive Light Field Video Camera. Paper presented at Eurographics 2020. Computer graphics forum (Print), 39(2), 463-474
Single Sensor Compressive Light Field Video Camera
2020 (English) In: Computer Graphics Forum, ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no. 2, p. 463-474. Article in journal (Refereed), Published
Abstract [en]

This paper presents a novel compressed sensing (CS) algorithm and camera design for light field video capture using a single-sensor consumer camera module. Unlike microlens light field cameras, which sacrifice spatial resolution to obtain angular information, our CS approach is designed for capturing light field videos with high angular, spatial, and temporal resolution. The compressive measurements required by CS are obtained using a random color-coded mask placed between the sensor and aperture planes. The convolution of the incoming light rays from different angles with the mask results in a single image on the sensor, achieving a significant reduction in the bandwidth required for capturing light field videos. We propose to change the random pattern on the spectral mask between consecutive frames in a video sequence and to extract spatio-angular-spectral-temporal 6D patches. Our CS reconstruction algorithm for light field videos recovers each frame while taking into account the neighboring frames, achieving significantly higher reconstruction quality with reduced temporal incoherencies compared with previous methods. Moreover, a thorough analysis of various sensing models for compressive light field video acquisition is conducted to highlight the advantages of our method. The results show a clear advantage of our method for monochrome sensors, as well as sensors with color filter arrays.
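The forward model described above is linear: each angular view is modulated by the mask pattern it sees, and all modulated views sum on the single sensor. A toy monochrome simulation of that measurement step (the array layout and names are our own simplification, not the paper's sensing pipeline):

```python
import numpy as np

def coded_mask_measurement(light_field, masks):
    """Simulate one coded sensor image from a light field.

    light_field: (n_views, H, W) array, one sub-aperture image per
        angular direction (a monochrome simplification).
    masks: (n_views, H, W) modulation patterns, modeling the shifted
        view of the random mask from each angle.
    Returns the (H, W) sensor image: all modulated views integrated on
    the sensor, i.e. the linear compressive measurement.
    """
    return np.sum(masks * light_field, axis=0)
```

Because the map from light field to sensor image is linear and heavily underdetermined, reconstruction must invert it under a sparsity prior, which is what the paper's CS algorithm does per 6D patch.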

Place, publisher, year, edition, pages
Wiley-Blackwell, 2020
Keywords
Light fields, compressive sensing, light field video, single sensor imaging
National Category
Media Engineering
Identifiers
urn:nbn:se:liu:diva-165790 (URN)10.1111/cgf.13944 (DOI)000548709600038 ()
Conference
Eurographics 2020
Note

Funding agencies: Swedish Foundation for Strategic Research [IIS11-0081]; EU H2020 Research and Innovation Programme [694122]

Available from: 2020-05-25 Created: 2020-05-25 Last updated: 2021-07-13
Miandji, E., Hajisharif, S. & Unger, J. (2019). A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos. ACM Transactions on Graphics, 38(3), 1-18, Article ID 23.
A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos
2019 (English) In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 38, no. 3, p. 1-18, article id 23. Article in journal (Refereed), Published
Abstract [en]

In this article we present a novel dictionary learning framework designed for compression and sampling of light fields and light field videos. Unlike previous methods, where a single dictionary with one-dimensional atoms is learned, we propose to train a Multidimensional Dictionary Ensemble (MDE). It is shown that learning an ensemble in the native dimensionality of the data promotes sparsity, hence increasing the compression ratio and sampling efficiency. To make maximum use of correlations within the light field data sets, we also introduce a novel nonlocal pre-clustering approach that constructs an Aggregate MDE (AMDE). The pre-clustering not only improves the image quality but also reduces the training time by an order of magnitude in most cases. The decoding algorithm supports efficient local reconstruction of the compressed data, which enables efficient real-time playback of high-resolution light field videos. Moreover, we discuss the application of AMDE for compressed sensing. A theoretical analysis is presented that indicates the required conditions for exact recovery of point-sampled light fields that are sparse under AMDE. The analysis provides guidelines for designing efficient compressive light field cameras. We use various synthetic and natural light field and light field video data sets to demonstrate the utility of our approach in comparison with the state-of-the-art learning-based dictionaries, as well as established analytical dictionaries.
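The central operation of a multidimensional dictionary, representing a patch in its native dimensionality through mode-n products, can be sketched as below. We assume square orthonormal per-mode dictionaries so that encoding reduces to projection plus hard thresholding; training the ensemble and the nonlocal pre-clustering are beyond this sketch:

```python
import numpy as np

def encode(X, dicts, n_coeffs):
    """Project a patch onto orthonormal per-mode dictionaries and keep
    the n_coeffs largest-magnitude coefficients (hard thresholding)."""
    S = X
    for mode, D in enumerate(dicts):
        # mode-n product with D.T: contract axis `mode` with D's columns
        S = np.moveaxis(np.tensordot(D.T, S, axes=(1, mode)), 0, mode)
    thresh = np.sort(np.abs(S).ravel())[-n_coeffs]
    S[np.abs(S) < thresh] = 0.0
    return S

def reconstruct(S, dicts):
    """Invert encode: X = S x_1 D1 x_2 D2 x_3 D3 ..."""
    X = S
    for mode, D in enumerate(dicts):
        X = np.moveaxis(np.tensordot(D, X, axes=(1, mode)), 0, mode)
    return X
```

The per-mode dictionaries are tiny compared with a single 1D dictionary over vectorized patches, which is what makes training fast and local decoding cheap enough for real-time playback.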

Place, publisher, year, edition, pages
ACM Digital Library, 2019
Keywords
Light field video compression, compressed sensing, dictionary learning, light field photography
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-158026 (URN)10.1145/3269980 (DOI)000495415600005 ()
Available from: 2019-06-24 Created: 2019-06-24 Last updated: 2020-02-18. Bibliographically approved.
Baravdish, G., Miandji, E. & Unger, J. (2019). GPU Accelerated Sparse Representation of Light Fields. In: VISIGRAPP - 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, February 25-27, 2019. Paper presented at VISAPP - 14th International Conference on Computer Vision Theory and Applications, Prague, Czech Republic, February 25-27, 2019 (pp. 177-182). SCITEPRESS, Vol. 4.
GPU Accelerated Sparse Representation of Light Fields
2019 (English) In: VISIGRAPP - 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, SCITEPRESS, 2019, Vol. 4, p. 177-182. Conference paper, Published paper (Refereed)
Abstract [en]

We present a method for GPU-accelerated compression of light fields, using a dictionary learning framework for light field images. The large amount of data produced when capturing light fields makes compression challenging, and we accelerate the encoding routine with GPGPU computations. We compress the data by projecting each data point onto a set of trained multidimensional dictionaries and seeking the sparsest representation with the least error. This is done by parallelizing the tensor-matrix products on the GPU. An optimized greedy algorithm suited to GPU computation is also presented. The data is encoded segmentally in parallel for faster computation while maintaining quality. The results show an order of magnitude faster encoding time compared to previous results in the same research field. We conclude that further speed improvements are possible, and thus interactive compression speed is not far off.
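The encoder's inner decision, projecting a data point onto each trained dictionary and keeping whichever representation is sparsest within an error tolerance, can be illustrated as follows (in NumPy rather than on the GPU, with 1D orthonormal dictionaries standing in for the multidimensional ones; all names are ours):

```python
import numpy as np

def best_dictionary(x, dictionaries, tol=1e-3):
    """Encode x in each orthonormal dictionary and keep the sparsest.

    For an orthonormal D the projection coefficients are s = D.T @ x,
    and the squared error of keeping only the k largest ones equals the
    energy of the discarded coefficients, so the smallest k reaching
    relative error tol can be read off a cumulative sum.
    Returns (index of the chosen dictionary, thresholded coefficients).
    """
    best = None
    for i, D in enumerate(dictionaries):
        s = D.T @ x                            # analysis coefficients
        order = np.argsort(-np.abs(s))         # largest magnitude first
        energy = np.cumsum(s[order] ** 2)
        target = (1.0 - tol**2) * np.dot(x, x)
        k = int(np.searchsorted(energy, target)) + 1
        if best is None or k < best[0]:
            s_k = np.zeros_like(s)
            keep = order[:k]
            s_k[keep] = s[keep]
            best = (k, i, s_k)
    return best[1], best[2]
```

Because each data point is scored against every dictionary independently, this search is embarrassingly parallel, which is what the GPU batching exploits.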

Place, publisher, year, edition, pages
SCITEPRESS, 2019
Keywords
Light Field Compression, GPGPU Computation, Sparse Representation
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-157009 (URN)10.5220/0007393101770182 (DOI)000570779500020 ()978-989-758-354-4 (ISBN)
Conference
VISAPP - 14th International Conference on Computer Vision Theory and Applications, Prague, Czech Republic, February 25-27, 2019.
Available from: 2019-05-22 Created: 2019-05-22 Last updated: 2021-09-16. Bibliographically approved.
Hajisharif, S., Miandji, E., Per, L., Tran, K. & Unger, J. (2019). Light Field Video Compression and Real Time Rendering. Paper presented at Pacific Graphics 2019. Computer graphics forum (Print), 38, 265-276
Light Field Video Compression and Real Time Rendering
2019 (English) In: Computer Graphics Forum, ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, p. 265-276. Article in journal (Refereed), Published
Abstract [en]

Light field imaging is rapidly becoming an established method for generating flexible image based description of scene appearances. Compared to classical 2D imaging techniques, the angular information included in light fields enables effects such as post‐capture refocusing and the exploration of the scene from different vantage points. In this paper, we describe a novel GPU pipeline for compression and real‐time rendering of light field videos with full parallax. To achieve this, we employ a dictionary learning approach and train an ensemble of dictionaries capable of efficiently representing light field video data using highly sparse coefficient sets. A novel, key element in our representation is that we simultaneously compress both image data (pixel colors) and the auxiliary information (depth, disparity, or optical flow) required for view interpolation. During playback, the coefficients are streamed to the GPU where the light field and the auxiliary information are reconstructed using the dictionary ensemble and view interpolation is performed. In order to realize the pipeline we present several technical contributions including a denoising scheme enhancing the sparsity in the dataset which enables higher compression ratios, and a novel pruning strategy which reduces the size of the dictionary ensemble and leads to significant reductions in computational complexity during the encoding of a light field. Our approach is independent of the light field parameterization and can be used with data from any light field video capture system. To demonstrate the usefulness of our pipeline, we utilize various publicly available light field video datasets and discuss the medical application of documenting heart surgery.

Place, publisher, year, edition, pages
John Wiley & Sons, 2019
Keywords
Computational photography, Light Fields, Light Fields Compression, Light Field Video
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-162100 (URN)10.1111/cgf.13835 (DOI)000496351100025 ()
Conference
Pacific Graphics 2019
Note

Funding agencies: the children's heart clinic (Barnhjärtcentrum) at Skåne University Hospital; strategic research environment ELLIIT; Swedish Research Council [201505180]; Vinnova [2017-03728]; Visual Sweden Platform for Augmented Intelligence

Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2021-09-30
Emadi, M., Miandji, E. & Unger, J. (2018). A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence. Circuits, systems, and signal processing, 37(4), 1562-1574
A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence
2018 (English)In: Circuits, systems, and signal processing, ISSN 0278-081X, E-ISSN 1531-5878, Vol. 37, no 4, p. 1562-1574Article in journal (Refereed) Published
Abstract [en]

In this paper, we present a new performance guarantee for the orthogonal matching pursuit (OMP) algorithm. We use mutual coherence as a metric for determining the suitability of an arbitrary overcomplete dictionary for exact recovery. Specifically, a lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise and an upper bound for the mean square error are derived. Compared to previous work, the new bound takes into account signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a much closer correlation to empirical results of OMP.
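The two objects the guarantee reasons about are concrete: mutual coherence is the largest normalized inner product between distinct dictionary atoms, and OMP greedily builds the support whose correct identification is bounded in probability. A minimal NumPy sketch of both (ours, not the authors' code):

```python
import numpy as np

def mutual_coherence(D):
    """Largest |<d_i, d_j>| over distinct pairs of normalized atoms."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of D to fit y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # least-squares refit of y on all atoms selected so far
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x, sorted(support)
```

A smaller mutual coherence means atoms are harder to confuse in the greedy selection step, which is why the bound improves as coherence decreases.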

Place, publisher, year, edition, pages
Springer, 2018
Keywords
Compressed sensing; Sparse representation; Orthogonal matching pursuit; Sparse recovery
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-147092 (URN)10.1007/s00034-017-0602-x (DOI)000427149100010 ()
Available from: 2018-04-20 Created: 2018-04-20 Last updated: 2018-06-12
Miandji, E. (2018). Sparse representation of visual data for compression and compressed sensing. (Doctoral dissertation). Linköping: Linköping University Electronic Press
Sparse representation of visual data for compression and compressed sensing
2018 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

The ongoing advances in computational photography have introduced a range of new imaging techniques for capturing multidimensional visual data such as light fields, BRDFs, BTFs, and more. A key challenge inherent to such imaging techniques is the large amount of high-dimensional visual data that is produced, often requiring gigabytes, or even terabytes, of storage. Moreover, the utilization of these datasets in real-time applications poses many difficulties due to the large memory footprint. Furthermore, the acquisition of large-scale visual data is very challenging and expensive in most cases. This thesis makes several contributions with regard to the acquisition, compression, and real-time rendering of high-dimensional visual data in computer graphics and imaging applications.

The contributions of this thesis rest on the foundation of sparse representations. Numerous applications are presented that utilize sparse representations for compression and compressed sensing of visual data. Specifically, we present a single-sensor light field camera design, a compressive rendering method, a real-time precomputed photorealistic rendering technique, light field (video) compression and real-time rendering, compressive BRDF capture, and more. Another key contribution of this thesis is a general framework for compression and compressed sensing of visual data, regardless of dimensionality. As a result, any type of discrete visual data with arbitrary dimensionality can be captured, compressed, and rendered in real time.

This thesis makes two theoretical contributions. In particular, uniqueness conditions for recovering a sparse signal under an ensemble of multidimensional dictionaries are presented. These theoretical results are useful for designing efficient capturing devices for multidimensional visual data. Moreover, we derive the probability of successful recovery of a noisy sparse signal using OMP, one of the most widely used algorithms for solving compressed sensing problems.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2018. p. 158
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1963
National Category
Media Engineering
Identifiers
urn:nbn:se:liu:diva-152863 (URN)10.3384/diss.diva-152863 (DOI)9789176851869 (ISBN)
Public defence
2018-12-14, Domteatern, Visualiseringscenter C, Kungsgatan 54, Campus Norrköping, Norrköping, 09:15 (English)
Available from: 2018-11-23 Created: 2018-11-23 Last updated: 2018-11-23. Bibliographically approved.
Miandji, E., Emadi, M., Unger, J. & Ehsan, A. (2017). On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence. IEEE Signal Processing Letters, 24(11), 1646-1650
On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence
2017 (English) In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 24, no. 11, p. 1646-1650. Article in journal (Refereed), Published
Abstract [en]

In this paper we present a new coherence-based performance guarantee for the Orthogonal Matching Pursuit (OMP) algorithm. A lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise is derived. Compared to previous work, the new bound takes into account the signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a closer match to empirically obtained results of the OMP algorithm.

Place, publisher, year, edition, pages
IEEE Signal Processing Society, 2017
Keywords
Compressed Sensing (CS), Sparse Recovery, Orthogonal Matching Pursuit (OMP), Mutual Coherence
National Category
Signal Processing
Identifiers
urn:nbn:se:liu:diva-141613 (URN)10.1109/LSP.2017.2753939 (DOI)000412501600001 ()
Available from: 2017-10-03 Created: 2017-10-03 Last updated: 2018-11-23. Bibliographically approved.