Publications (10 of 51)
Miandji, E., Hajisharif, S. & Unger, J. (2019). A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos. ACM Transactions on Graphics, 38(3), 1-18, Article ID 23.
A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos
2019 (English). In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 38, no 3, p. 1-18, article id 23. Article in journal (Refereed). Published
Abstract [en]

In this article we present a novel dictionary learning framework designed for compression and sampling of light fields and light field videos. Unlike previous methods, where a single dictionary with one-dimensional atoms is learned, we propose to train a Multidimensional Dictionary Ensemble (MDE). It is shown that learning an ensemble in the native dimensionality of the data promotes sparsity, hence increasing the compression ratio and sampling efficiency. To make maximum use of correlations within the light field data sets, we also introduce a novel nonlocal pre-clustering approach that constructs an Aggregate MDE (AMDE). The pre-clustering not only improves the image quality but also reduces the training time by an order of magnitude in most cases. The decoding algorithm supports efficient local reconstruction of the compressed data, which enables efficient real-time playback of high-resolution light field videos. Moreover, we discuss the application of AMDE for compressed sensing. A theoretical analysis is presented that indicates the required conditions for exact recovery of point-sampled light fields that are sparse under AMDE. The analysis provides guidelines for designing efficient compressive light field cameras. We use various synthetic and natural light field and light field video data sets to demonstrate the utility of our approach in comparison with the state-of-the-art learning-based dictionaries, as well as established analytical dictionaries.
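As an illustration of the dictionary-selection step that an MDE implies, the sketch below codes a single 2D patch against a small ensemble of orthonormal dictionary pairs and keeps the pair that gives the lowest error at a fixed sparsity. The names, the restriction to 2D patches, and the hard-thresholding rule are simplifying assumptions for illustration, not the paper's actual training or coding algorithm.

```python
import numpy as np

def code_patch(patch, ensemble, sparsity):
    """Pick, for one 2D patch, the dictionary pair in the ensemble that
    gives the lowest reconstruction error at a fixed sparsity level.
    Hypothetical simplification: orthonormal 2D pairs, hard thresholding."""
    best = (None, None, np.inf)
    for k, (U, V) in enumerate(ensemble):
        C = U.T @ patch @ V                      # 2D analysis transform
        keep = np.abs(C).ravel().argsort()[-sparsity:]
        S = np.zeros(C.size)
        S[keep] = C.ravel()[keep]                # keep largest coefficients
        rec = U @ S.reshape(C.shape) @ V.T       # synthesis
        err = np.linalg.norm(patch - rec)
        if err < best[2]:
            best = (k, rec, err)
    return best[0], best[1]

# Toy usage: a small ensemble of random orthonormal dictionary pairs.
rng = np.random.default_rng(0)
ensemble = []
for _ in range(4):
    U, _ = np.linalg.qr(rng.standard_normal((8, 8)))
    V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
    ensemble.append((U, V))
patch = rng.standard_normal((8, 8))
k, rec = code_patch(patch, ensemble, sparsity=6)
```

The same selection logic extends to higher-order patches by replacing the two matrix products with mode-n tensor products, which is the multidimensional setting the paper actually works in.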

Place, publisher, year, edition, pages
ACM Digital Library, 2019
Keywords
Light field video compression, compressed sensing, dictionary learning, light field photography
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-158026 (URN)
10.1145/3269980 (DOI)
Available from: 2019-06-24 Created: 2019-06-24 Last updated: 2019-06-24. Bibliographically approved
Baravdish, G., Miandji, E. & Unger, J. (2019). GPU Accelerated Sparse Representation of Light Fields. In: VISIGRAPP - 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, February 25-27, 2019. Paper presented at VISAPP - 14th International Conference on Computer Vision Theory and Applications, Prague, Czech Republic, February 25-27, 2019 (pp. 177-182). Vol. 4
GPU Accelerated Sparse Representation of Light Fields
2019 (English). In: VISIGRAPP - 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, February 25-27, 2019, Vol. 4, p. 177-182. Conference paper, Published paper (Refereed)
Abstract [en]

We present a method for GPU accelerated compression of light fields based on a dictionary learning framework. The large amount of data produced when capturing light fields makes compression a challenge, so we accelerate the encoding routine using GPGPU computations. We compress the data by projecting each data point onto a set of trained multidimensional dictionaries and seeking the sparsest representation with the least error. This is done by parallelizing the tensor-matrix products on the GPU; a greedy algorithm optimized for GPU computation is also presented. The data is encoded segment-wise in parallel for faster computation while maintaining quality. The results show an order of magnitude faster encoding times than previously reported in the field. We conclude that further speed improvements are possible, bringing the method close to interactive compression speeds.
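To make the parallelization concrete, the sketch below expresses the analysis transform for a whole batch of patches as a single tensor contraction, which is the structure that maps naturally onto batched GPU matrix multiplies. Shapes, names, and the thresholding rule are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((4096, 8, 8))             # batch of 2D patches
U, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # stand-ins for a trained
V, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # dictionary pair

# C[n] = U.T @ P[n] @ V for every patch n, written as one contraction.
# On a GPU the same computation is a batched GEMM with one thread block
# per patch, which is what makes the encoder parallel.
C = np.einsum('ij,njk,kl->nil', U.T, P, V)

# Greedy sparsification per patch: keep the s largest-magnitude
# coefficients of each patch (ties may keep a few extra).
s = 6
flat = np.abs(C).reshape(len(C), -1)
thresh = np.partition(flat, -s, axis=1)[:, -s][:, None, None]
C_sparse = np.where(np.abs(C) >= thresh, C, 0.0)
print(C_sparse.shape, int((C_sparse != 0).sum(axis=(1, 2)).max()))
```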

Keywords
Light Field Compression, GPGPU Computation, Sparse Representation
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-157009 (URN)
10.5220/0007393101770182 (DOI)
978-989-758-354-4 (ISBN)
Conference
VISAPP - 14th International Conference on Computer Vision Theory and Applications, Prague, Czech Republic, February 25-27, 2019.
Available from: 2019-05-22 Created: 2019-05-22 Last updated: 2019-06-14. Bibliographically approved
Emadi, M., Miandji, E. & Unger, J. (2018). A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence. Circuits, Systems, and Signal Processing, 37(4), 1562-1574
A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence
2018 (English). In: Circuits, Systems, and Signal Processing, ISSN 0278-081X, E-ISSN 1531-5878, Vol. 37, no 4, p. 1562-1574. Article in journal (Refereed). Published
Abstract [en]

In this paper, we present a new performance guarantee for the orthogonal matching pursuit (OMP) algorithm. We use mutual coherence as a metric for determining the suitability of an arbitrary overcomplete dictionary for exact recovery. Specifically, a lower bound for the probability of correctly identifying the support of a sparse signal under additive white Gaussian noise and an upper bound for the mean square error are derived. Compared to previous work, the new bound takes into account signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a much closer correlation to empirical results of OMP.
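For context, mutual coherence is directly computable from the dictionary, and the classical worst-case guarantee it yields is easy to check; the probabilistic, noise-aware bound derived in the paper is considerably less pessimistic. A minimal sketch (variable names and sizes are illustrative):

```python
import numpy as np

def mutual_coherence(D):
    """mu(D): largest absolute inner product between two distinct
    columns of the column-normalized dictionary."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))     # overcomplete dictionary, 64 x 128
mu = mutual_coherence(D)
# Classical worst-case condition: OMP recovers any k-sparse signal exactly
# (in the noiseless setting) whenever k < (1 + 1/mu) / 2.
print(f"mu = {mu:.3f}, worst-case sparsity bound k < {(1 + 1/mu) / 2:.2f}")
```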

Place, publisher, year, edition, pages
Springer, 2018
Keywords
Compressed sensing; Sparse representation; Orthogonal matching pursuit; Sparse recovery
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:liu:diva-147092 (URN)
10.1007/s00034-017-0602-x (DOI)
000427149100010 (ISI)
Available from: 2018-04-20 Created: 2018-04-20 Last updated: 2018-06-12
Eilertsen, G., Forssén, P.-E. & Unger, J. (2017). BriefMatch: Dense binary feature matching for real-time optical flow estimation. In: Puneet Sharma, Filippo Maria Bianchi (Ed.), Proceedings of the Scandinavian Conference on Image Analysis (SCIA17). Paper presented at Scandinavian Conference on Image Analysis (SCIA17), Tromsø, Norway, 12-14 June, 2017 (pp. 221-233). Springer, 10269
BriefMatch: Dense binary feature matching for real-time optical flow estimation
2017 (English). In: Proceedings of the Scandinavian Conference on Image Analysis (SCIA17) / [ed] Puneet Sharma, Filippo Maria Bianchi, Springer, 2017, Vol. 10269, p. 221-233. Conference paper, Published paper (Refereed)
Abstract [en]

Research in optical flow estimation has to a large extent focused on achieving the best possible quality with no regard to running time. Nevertheless, in a number of important applications speed is crucial. To address this problem we present BriefMatch, a real-time optical flow method suitable for live applications. The method combines binary features with the search strategy from PatchMatch to efficiently find a dense correspondence field between images. We show that the BRIEF descriptor provides better candidates (less outlier-prone) in a shorter time than direct pixel comparisons and the Census transform. This allows us to achieve high-quality results from a simple filtering of the initially matched candidates. Currently, BriefMatch has the fastest running time on the Middlebury benchmark, while ranking highest among all methods that run in less than 0.5 seconds.
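The sketch below shows the two ingredients in toy form: per-pixel BRIEF-style binary descriptors and the Hamming matching cost that a PatchMatch-style search would minimize. The sampling-pair layout and sizes are illustrative assumptions, and the PatchMatch propagation step itself is omitted.

```python
import numpy as np

def brief_descriptors(img, pairs):
    """Per-pixel binary descriptor: for each sampling pair of offsets
    (a, b), one bit records whether img[p + a] < img[p + b]. Borders of
    width m are left empty for brevity."""
    H, W = img.shape
    m = int(np.abs(pairs).max())
    desc = np.zeros((H, W, len(pairs)), dtype=bool)
    for i, ((ay, ax), (by, bx)) in enumerate(pairs):
        a = img[m + ay:H - m + ay, m + ax:W - m + ax]
        b = img[m + by:H - m + by, m + bx:W - m + bx]
        desc[m:H - m, m:W - m, i] = a < b
    return desc

def hamming(d1, d2):
    """Matching cost between two binary descriptors: differing bits."""
    return int(np.count_nonzero(d1 != d2))

rng = np.random.default_rng(0)
pairs = rng.integers(-3, 4, size=(32, 2, 2))     # 32 random sampling pairs
img1 = rng.standard_normal((64, 64))
img2 = np.roll(img1, (1, 2), axis=(0, 1))        # known shift of (1, 2)
d1 = brief_descriptors(img1, pairs)
d2 = brief_descriptors(img2, pairs)
# PatchMatch would propagate good offsets between neighbouring pixels;
# here we only verify that the true offset has zero Hamming cost.
print(hamming(d1[20, 20], d2[21, 22]))           # -> 0
```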

Place, publisher, year, edition, pages
Springer, 2017
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
Keywords
computer vision, optical flow, feature matching, real-time computation
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-149418 (URN)
10.1007/978-3-319-59126-1_19 (DOI)
2-s2.0-85020383306 (Scopus ID)
978-3-319-59125-4 (ISBN)
Conference
Scandinavian Conference on Image Analysis (SCIA17), Tromsø, Norway, 12-14 June, 2017
Available from: 2018-06-28 Created: 2018-06-28 Last updated: 2018-08-24. Bibliographically approved
Tongbuasirilai, T., Unger, J. & Kurt, M. (2017). Efficient BRDF Sampling Using Projected Deviation Vector Parameterization. In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW). Paper presented at 16th IEEE International Conference on Computer Vision (ICCV), 22-29 October 2017, Venice, Italy (pp. 153-158). Institute of Electrical and Electronics Engineers (IEEE)
Efficient BRDF Sampling Using Projected Deviation Vector Parameterization
2017 (English). In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 153-158. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a novel approach for efficient sampling of isotropic Bidirectional Reflectance Distribution Functions (BRDFs). Our approach builds upon a new parameterization, the Projected Deviation Vector parameterization, in which isotropic BRDFs can be described by two 1D functions. We show that BRDFs can be efficiently and accurately measured in this space using simple mechanical measurement setups. To demonstrate the utility of our approach, we perform a thorough numerical evaluation and show that the BRDFs reconstructed from measurements along the two 1D bases produce rendering results that are visually comparable to the reference BRDF measurements which are densely sampled over the 4D domain described by the standard hemispherical parameterization.
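As a toy illustration of why two 1D functions can suffice (a hypothetical setup, much simpler than the paper's actual PDV parameterization and reconstruction): if a BRDF slice is separable in a suitable 2D parameterization, two 1D slices through a reference point determine it exactly as a rank-1 outer product.

```python
import numpy as np

# Hypothetical separable BRDF slice in some 2D parameterization: two 1D
# measurements through a reference point reconstruct it exactly. The
# curves g and h are arbitrary illustrative stand-ins.
g = np.exp(-np.linspace(0, 3, 90))               # 1D curve along axis one
h = 1.0 / (1.0 + np.linspace(0, 5, 90) ** 2)     # 1D curve along axis two
brdf = np.outer(g, h)                            # ground-truth 2D slice

row = brdf[0, :]                                 # measured 1D slice
col = brdf[:, 0]                                 # measured 1D slice
recon = np.outer(col, row) / brdf[0, 0]          # rank-1 reconstruction
print(np.abs(brdf - recon).max())                # 0 for separable data
```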

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
IEEE International Conference on Computer Vision Workshops, E-ISSN 2473-9936 ; 2017
National Category
Medical Laboratory and Measurements Technologies
Identifiers
urn:nbn:se:liu:diva-145821 (URN)
10.1109/ICCVW.2017.26 (DOI)
000425239600019 (ISI)
9781538610343 (ISBN)
9781538610350 (ISBN)
Conference
16th IEEE International Conference on Computer Vision (ICCV), 22-29 October 2017, Venice, Italy
Note

Funding Agencies|Scientific and Technical Research Council of Turkey [115E203]; Scientific Research Projects Directorate of Ege University [2015/BIL/043]

Available from: 2018-03-21 Created: 2018-03-21 Last updated: 2019-06-27. Bibliographically approved
Miandji, E., Emadi, M., Unger, J. & Ehsan, A. (2017). On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence. IEEE Signal Processing Letters, 24(11), 1646-1650
On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence
2017 (English). In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 24, no 11, p. 1646-1650. Article in journal (Refereed). Published
Abstract [en]

In this paper we present a new coherence-based performance guarantee for the Orthogonal Matching Pursuit (OMP) algorithm. A lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise is derived. Compared to previous work, the new bound takes into account the signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a closer match to empirically obtained results of the OMP algorithm.
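A Monte Carlo experiment of the kind such bounds are compared against is straightforward to set up; the sketch below estimates the empirical probability that plain OMP identifies the exact support under additive white Gaussian noise. All sizes and the noise level are illustrative assumptions.

```python
import numpy as np

def recovery_rate(n, m, k, sigma, trials=200, seed=0):
    """Monte Carlo estimate of the probability that OMP recovers the exact
    support of a k-sparse signal in Gaussian noise; a hypothetical
    experiment in the spirit of the paper's simulations."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        D = rng.standard_normal((n, m))
        D /= np.linalg.norm(D, axis=0)
        S = rng.choice(m, size=k, replace=False)
        y = D[:, S] @ rng.uniform(1.0, 2.0, k) + sigma * rng.standard_normal(n)
        sup, res = [], y.copy()
        for _ in range(k):                       # plain OMP inner loop
            sup.append(int(np.argmax(np.abs(D.T @ res))))
            x, *_ = np.linalg.lstsq(D[:, sup], y, rcond=None)
            res = y - D[:, sup] @ x
        hits += set(sup) == set(S)
    return hits / trials

print(recovery_rate(n=64, m=128, k=4, sigma=0.05))
```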

Place, publisher, year, edition, pages
IEEE Signal Processing Society, 2017
Keywords
Compressed Sensing (CS), Sparse Recovery, Orthogonal Matching Pursuit (OMP), Mutual Coherence
National Category
Signal Processing
Identifiers
urn:nbn:se:liu:diva-141613 (URN)
10.1109/LSP.2017.2753939 (DOI)
000412501600001 (ISI)
Available from: 2017-10-03 Created: 2017-10-03 Last updated: 2018-11-23. Bibliographically approved
Eilertsen, G., Unger, J. & Mantiuk, R. (2016). Evaluation of tone mapping operators for HDR video (1st ed.). In: Frédéric Dufaux, Patrick Le Callet, Rafal K. Mantiuk, Marta Mrak (Ed.), High dynamic range video: from acquisition to display and applications (pp. 185-206). London, United Kingdom: Academic Press
Evaluation of tone mapping operators for HDR video
2016 (English). In: High dynamic range video: from acquisition to display and applications / [ed] Frédéric Dufaux, Patrick Le Callet, Rafal K. Mantiuk, Marta Mrak, London, United Kingdom: Academic Press, 2016, 1st ed., p. 185-206. Chapter in book (Other academic)
Abstract [en]

Tone mapping of HDR-video is a challenging filtering problem, and it is highly important to develop a framework for the evaluation and comparison of tone mapping operators. This chapter gives an overview of different approaches for how the evaluation of tone mapping operators can be conducted, including experimental setups, choice of input data, choice of tone mapping operators, and the importance of parameter tweaking for fair comparisons. The chapter also gives examples of previous evaluations, with a focus on the results from the most recent evaluation conducted by Eilertsen et al. [reference]. This results in a classification of the currently most commonly used tone mapping operators and an overview of their performance and possible artifacts.
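For readers unfamiliar with the objects being evaluated, the sketch below implements one classic global operator (the photographic curve of Reinhard et al. 2002) purely to make "tone mapping operator" concrete; it is not one of the chapter's evaluated video operators, which additionally need temporal filtering of quantities such as the mean luminance to avoid flicker.

```python
import numpy as np

def reinhard_global(hdr, a=0.18, eps=1e-6):
    """Global photographic tone curve: scale linear luminance by the key
    value over the geometric mean, then compress with L / (1 + L).
    Input: linear-luminance HDR image; output values lie in [0, 1)."""
    log_mean = np.exp(np.mean(np.log(hdr + eps)))   # geometric mean luminance
    scaled = a * hdr / log_mean                     # key-value scaling
    return scaled / (1.0 + scaled)                  # compressive tone curve

hdr = np.random.default_rng(0).gamma(2.0, 2.0, (4, 4)) ** 4   # toy HDR data
print(reinhard_global(hdr))
```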

Place, publisher, year, edition, pages
London, United Kingdom: Academic Press, 2016 Edition: 1st
Keywords
High dynamic range imaging, tone mapping, image reproduction
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-127345 (URN)
10.1016/B978-0-08-100412-8.00007-3 (DOI)
9780081004128 (ISBN)
Projects
VPS
Funder
Swedish Foundation for Strategic Research , IIS11-0081
Available from: 2016-04-21 Created: 2016-04-21 Last updated: 2018-07-19. Bibliographically approved
Miandji, E. & Unger, J. (2016). On Nonlocal Image Completion Using an Ensemble of Dictionaries. In: 2016 IEEE International Conference on Image Processing (ICIP). Paper presented at 23rd IEEE International Conference on Image Processing (ICIP) (pp. 2519-2523). IEEE
On Nonlocal Image Completion Using an Ensemble of Dictionaries
2016 (English). In: 2016 IEEE International Conference on Image Processing (ICIP), IEEE, 2016, p. 2519-2523. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we consider the problem of nonlocal image completion from random measurements and using an ensemble of dictionaries. Utilizing recent advances in the field of compressed sensing, we derive conditions under which one can uniquely recover an incomplete image with overwhelming probability. The theoretical results are complemented by numerical simulations using various ensembles of analytical and training-based dictionaries.
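A minimal single-dictionary version of the completion step can be written in a few lines: sparse-code the observed pixels against the corresponding rows of the dictionary, then synthesize the full patch. The helper below is a hypothetical sketch using scikit-learn's OMP solver; the paper additionally selects among an ensemble of dictionaries and derives the conditions under which this recovery is unique.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def complete_patch(patch, mask, D, k):
    """Recover missing pixels of a vectorized patch that is k-sparse in D:
    fit the sparse code on the observed rows of D only, then synthesize
    every pixel and keep the observed values where available."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    omp.fit(D[mask], patch[mask])            # code only the observed pixels
    return np.where(mask, patch, D @ omp.coef_)

rng = np.random.default_rng(0)
D = np.linalg.qr(rng.standard_normal((64, 64)))[0]   # toy orthonormal dictionary
x = np.zeros(64)
x[rng.choice(64, 3, replace=False)] = rng.uniform(1.0, 2.0, 3)
patch = D @ x                                # exactly 3-sparse patch
mask = rng.random(64) < 0.5                  # ~half the pixels observed
rec = complete_patch(patch, mask, D, k=3)
print(np.abs(rec - patch).max())             # typically ~0: exact completion
```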

Place, publisher, year, edition, pages
IEEE, 2016
Series
IEEE International Conference on Image Processing ICIP, ISSN 1522-4880
Keywords
compressed sensing; image completion; nonlocal; inverse problems; uniqueness conditions
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-134107 (URN)
10.1109/ICIP.2016.7532813 (DOI)
000390782002114 (ISI)
978-1-4673-9961-6 (ISBN)
Conference
23rd IEEE International Conference on Image Processing (ICIP)
Available from: 2017-01-22 Created: 2017-01-22 Last updated: 2018-11-23
Unger, J., Banterle, F., Mantiuk, R. & Eilertsen, G. (2016). The HDR-video pipeline: From capture and image reconstruction to compression and tone mapping. Paper presented at Eurographics 2016, May 9, Lisbon, Portugal (pp. 1-6). The Eurographics Association
The HDR-video pipeline: From capture and image reconstruction to compression and tone mapping
2016 (English). Conference paper, Published paper (Other academic)
Abstract [en]

High dynamic range (HDR) video technology has gone through remarkable developments over the last few years; HDR-video cameras are being commercialized, new algorithms for color grading and tone mapping specifically designed for HDR-video have recently been proposed, and the first open source compression algorithms for HDR-video are becoming available. HDR-video represents a paradigm shift in imaging and computer graphics, which has generated and will continue to generate a range of both new research challenges and applications. This intermediate-level tutorial gives an in-depth overview of the full HDR-video pipeline and presents several examples of state-of-the-art algorithms and technology in HDR-video capture, tone mapping, compression, and specific applications in computer graphics.

Place, publisher, year, edition, pages
The Eurographics Association, 2016
Keywords
High dynamic range imaging, hdr video, tone mapping, video tonemapping, hdr video compression, image-based lighting
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-139785 (URN)
Conference
Eurographics 2016, May 9, Lisbon, Portugal
Note

Presentation slides and more information can be found at the tutorial web page: http://vcg.isti.cnr.it/Publications/2016/UBEM16a

Available from: 2017-08-16 Created: 2017-08-16 Last updated: 2018-01-16. Bibliographically approved
Unger, J., Hajisharif, S. & Kronander, J. (2016). Unified reconstruction of RAW HDR video data (1st ed.). In: Frédéric Dufaux, Patrick Le Callet, Rafal K. Mantiuk, Marta Mrak (Ed.), High dynamic range video: from acquisition to display and applications (pp. 63-82). London, United Kingdom: Academic Press
Unified reconstruction of RAW HDR video data
2016 (English). In: High dynamic range video: from acquisition to display and applications / [ed] Frédéric Dufaux, Patrick Le Callet, Rafal K. Mantiuk, Marta Mrak, London, United Kingdom: Academic Press, 2016, 1st ed., p. 63-82. Chapter in book (Other academic)
Abstract [en]

Traditional HDR capture has mostly relied on merging images captured with different exposure times. While this works well for static scenes, dynamic scenes pose difficult challenges, as registration of differently exposed images often leads to ghosting and other artifacts. This chapter reviews methods that capture HDR-video frames within a single exposure time, using either multiple synchronized sensors or by multiplexing the sensor response spatially across the sensor. Most previous HDR reconstruction methods perform demosaicing, noise reduction, resampling (registration), and HDR fusion in separate steps. This chapter presents a framework for unified HDR reconstruction that combines all steps of the traditional imaging pipeline into a single adaptive filtering operation, and describes an image formation model and a sensor noise model applicable to both single- and multi-sensor systems. The benefits of using raw data directly are demonstrated with examples using input data from multiple synchronized sensors and single images with varying per-pixel gain.
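As a simplified picture of the fusion step that single-shot and multi-sensor methods share, the sketch below merges differently exposed raw frames into one radiance estimate using saturation masking and exposure-time weighting. Function and variable names are illustrative assumptions; the chapter's unified method folds demosaicing, denoising, and resampling into the same local filtering operation instead of treating fusion as a separate step.

```python
import numpy as np

def fuse_exposures(raw, times, sat=0.95, eps=1e-8):
    """Per-pixel fusion of raw frames captured with different exposure
    times: each frame votes for radiance = value / time, weighted by its
    exposure time (longer exposure, better SNR) and masked where the
    sensor saturates. A simplified sketch, not the chapter's filter."""
    raw = np.asarray(raw, dtype=float)
    t = np.asarray(times, dtype=float).reshape(-1, 1, 1)
    w = t * (raw < sat)                       # zero weight at saturation
    return (w * (raw / t)).sum(0) / (w.sum(0) + eps)

rng = np.random.default_rng(0)
radiance = rng.gamma(2.0, 0.5, (4, 4))        # toy ground-truth radiance
times = [0.01, 0.1, 1.0]
frames = [np.clip(radiance * t + 0.01 * rng.standard_normal((4, 4)), 0, 1)
          for t in times]                     # simulated noisy, clipped raws
print(fuse_exposures(frames, times))
```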

Place, publisher, year, edition, pages
London, United Kingdom: Academic Press, 2016 Edition: 1st
Keywords
High dynamic range imaging, image reconstruction
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-127344 (URN)
10.1016/B978-0-08-100412-8.00002-4 (DOI)
9780081004128 (ISBN)
Projects
VPS
Funder
Swedish Foundation for Strategic Research , IIS11-0081
Available from: 2016-04-21 Created: 2016-04-21 Last updated: 2018-07-19. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-7765-1747