liu.se: Search for publications in DiVA
1 - 13 of 13
  • 1.
    Kavoosighafi, Behnaz
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Frisvad, Jeppe Revall
    Technical University of Denmark, Denmark.
    Hajisharif, Saghi
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    SparseBTF: Sparse Representation Learning for Bidirectional Texture Functions, 2023, Conference paper (Refereed)
    Abstract [en]

    We propose a novel dictionary-based representation learning model for Bidirectional Texture Functions (BTFs) aiming at compact storage, real-time rendering performance, and high image quality. Our model is trained once, using a small training set, and then used to obtain a sparse tensor containing the model parameters. Our technique exploits redundancies in the data across all dimensions simultaneously, as opposed to existing methods that use only angular information and ignore correlations in the spatial domain. We show that our model admits efficient angular interpolation directly in the model space, rather than the BTF space, leading to a notably higher rendering speed than in previous work. Additionally, the high quality-storage cost tradeoff enabled by our method facilitates controlling the image quality, storage cost, and rendering speed using a single parameter, the number of coefficients. Previous methods rely on a fixed number of latent variables for training and testing, hence limiting the potential for achieving a favorable quality-storage cost tradeoff and scalability. Our experimental results demonstrate that our method outperforms existing methods both quantitatively and qualitatively, as well as achieving a higher compression ratio and rendering speed.
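
The storage model the abstract describes (a trained dictionary plus a sparse set of coefficients) can be sketched minimally as follows. This is a generic illustration of sparse dictionary coding, not the paper's actual BTF model; all shapes and names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_atoms, k = 64, 256, 8       # illustrative sizes

# A trained, overcomplete dictionary with unit-norm atoms.
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)

# A k-sparse coefficient vector: only k (index, value) pairs are stored.
coeffs = np.zeros(n_atoms)
support = rng.choice(n_atoms, size=k, replace=False)
coeffs[support] = rng.standard_normal(k)

# Decoding is a single matrix-vector product, which is what makes
# real-time evaluation cheap.
x = D @ coeffs

# Storage: k (index, value) pairs instead of n_features raw values.
compression_ratio = n_features / (2 * k)
```

Raising k improves reconstruction quality at the cost of storage and decoding time, mirroring the single-parameter quality/storage/speed tradeoff the abstract describes.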

  • 2.
    Hanji, Param
    et al.
    University of Cambridge, United Kingdom.
    Mantiuk, Rafal K.
    University of Cambridge, United Kingdom.
    Eilertsen, Gabriel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Hajisharif, Saghi
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Comparison of single image HDR reconstruction methods — the caveats of quality assessment, 2022, In: ACM SIGGRAPH ’22 Conference Proceedings, 2022, Conference paper (Refereed)
    Abstract [en]

    As the problem of reconstructing high dynamic range (HDR) images from a single exposure has attracted much research effort, it is essential to provide a robust protocol and clear guidelines on how to evaluate and compare new methods. In this work, we compared six recent single image HDR reconstruction (SI-HDR) methods in a subjective image quality experiment on an HDR display. We found that only two methods produced results that are, on average, more preferred than the unprocessed single exposure images. When the same methods are evaluated using image quality metrics, as typically done in papers, the metric predictions correlate poorly with subjective quality scores. The main reason is a significant tone and color difference between the reference and reconstructed HDR images. To improve the predictions of image quality metrics, we propose correcting for the inaccuracies of the estimated camera response curve before computing quality values. We further analyze the sources of prediction noise when evaluating SI-HDR methods and demonstrate that existing metrics can reliably predict only large quality differences.
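
The response-curve correction mentioned in the abstract can be illustrated with a hedged sketch. The simple power-law model and all names below are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def correct_response(reconstruction, reference, eps=1e-6):
    """Fit a scale a and exponent g so that a * reconstruction**g best
    matches the reference (least squares in the log domain), then apply it.
    A toy stand-in for compensating an inaccurate estimated camera response
    before computing a quality metric."""
    x = np.log(np.maximum(reconstruction, eps)).ravel()
    y = np.log(np.maximum(reference, eps)).ravel()
    g, log_a = np.polyfit(x, y, 1)        # y ≈ g * x + log(a)
    return np.exp(log_a) * np.maximum(reconstruction, eps) ** g

rng = np.random.default_rng(4)
reference = rng.uniform(0.01, 100.0, size=(32, 32))   # toy HDR reference
distorted = 0.5 * reference ** 1.2                    # wrong response curve
corrected = correct_response(distorted, reference)
```

After the fit, tone differences caused purely by the response curve no longer dominate the metric value, which is the bias the abstract identifies.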

  • 3.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Nguyen, Hoai-Nam
    Inria Rennes Bretagne Atlantique, France.
    Hajisharif, Saghi
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Guillemot, Christine
    Inria Rennes Bretagne Atlantique, France.
    Compressive HDR Light Field Imaging Using a Single Multi-ISO Sensor, 2021, In: IEEE Transactions on Computational Imaging, ISSN 2573-0436, E-ISSN 2333-9403, Vol. 7, p. 1369-1384, Article in journal (Refereed)
    Abstract [en]

    In this paper, we propose a new design for single sensor compressive HDR light field cameras, combining multi-ISO photography with coded mask acquisition, placed in a compressive sensing framework. The proposed camera model is based on a main lens, a multi-ISO sensor and a coded mask located in the optical path between the main lens and the sensor that projects the coded spatio-angular information of the light field onto the 2D sensor. The model encompasses different acquisition scenarios with different ISO patterns and gains. Moreover, we assume that the sensor has a built-in color filter array (CFA), making our design more suitable for consumer-level cameras. We propose a reconstruction algorithm to jointly perform color demosaicing, light field angular information recovery, HDR reconstruction, and denoising from the multi-ISO measurements formed on the sensor. This is achieved by enabling the sparse representation of HDR light fields using an overcomplete HDR dictionary. We also provide two HDR light field data sets: one synthetic data set created using the Blender rendering software with two baselines, and a real light field data set created from the fusion of multi-exposure low dynamic range (LDR) images captured using a Lytro Illum light field camera. Experimental results show that, with a sampling rate as low as 2.67%, using two shots, our proposed method yields a higher light field reconstruction quality compared to the fusion of multiple LDR light fields captured with different exposures, and with the fusion of multiple LDR light fields captured with different ISO settings.
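
The sparse recovery at the heart of such a compressive design can be illustrated with a small orthogonal matching pursuit (OMP) solver. This is a textbook sketch under a generic Gaussian sensing matrix, not the paper's joint reconstruction algorithm (which also handles demosaicing, HDR fusion, and denoising):

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: recover a k-sparse x with y ≈ A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the chosen support by least squares.
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = sol
        residual = y - A @ x
    return x

rng = np.random.default_rng(1)
m, n, k = 32, 128, 4                      # m measurements of an n-dim signal
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                            # compressive measurements
x_hat = omp(A, y, k)
```

In the paper's setting the columns of the sensing matrix come from an overcomplete HDR dictionary and the coded-mask optics, rather than from random Gaussians.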

  • 4.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Hajisharif, Saghi
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Hanji, Param
    Univ Cambridge, England.
    Tsirikoglou, Apostolia
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Mantiuk, Rafal K.
    Univ Cambridge, England.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    How to cheat with metrics in single-image HDR reconstruction, 2021, In: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021), IEEE Computer Society, 2021, p. 3981-3990, Conference paper (Refereed)
    Abstract [en]

    Single-image high dynamic range (SI-HDR) reconstruction has recently emerged as a problem well-suited for deep learning methods. Each successive technique demonstrates an improvement over existing methods by reporting higher image quality scores. This paper, however, highlights that such improvements in objective metrics do not necessarily translate to visually superior images. The first problem is the use of disparate evaluation conditions in terms of data and metric parameters, calling for a standardized protocol to make it possible to compare between papers. The second problem, which forms the main focus of this paper, is the inherent difficulty in evaluating SI-HDR reconstructions since certain aspects of the reconstruction problem dominate objective differences, thereby introducing a bias. Here, we reproduce a typical evaluation using existing as well as simulated SI-HDR methods to demonstrate how different aspects of the problem affect objective quality metrics. Surprisingly, we found that methods that do not even reconstruct HDR information can compete with state-of-the-art deep learning methods. We show how such results are not representative of the perceived quality and that SI-HDR reconstruction needs better evaluation protocols.

  • 5.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Inria Rennes.
    Baravdish, Gabriel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Compression and Real-Time Rendering of Inward Looking Spherical Light Fields, 2020, In: Eurographics 2020 - Short Papers / [ed] Wilkie, Alexander and Banterle, Francesco, 2020, Conference paper (Refereed)
    Abstract [en]

    Photorealistic rendering is an essential tool for immersive virtual reality. In this regard, the data structure of choice is typically light fields since they contain multidimensional information about the captured environment that can provide motion parallax and view-dependent information such as highlights. There are various ways to acquire light fields depending on the nature of the scene, limitations on the capturing setup, and the application at hand. Our focus in this paper is on full-parallax imaging of large-scale static objects for photorealistic real-time rendering. To this end, we introduce and simulate a new design for capturing inward-looking spherical light fields, and propose a system for efficient compression and real-time rendering of such data using consumer-level hardware suitable for virtual reality applications.

  • 6.
    Hajisharif, Saghi
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Computational Photography: High Dynamic Range and Light Fields, 2020, Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The introduction and recent advancements of computational photography have revolutionized the imaging industry. Computational photography is a combination of imaging techniques at the intersection of various fields such as optics, computer vision, and computer graphics. These methods enhance the capabilities of traditional digital photography by applying computational techniques both during and after the capturing process. This thesis targets two major subjects in this field: High Dynamic Range (HDR) image reconstruction and Light Field (LF) compressive capturing, compression, and real-time rendering.

    The first part of the thesis focuses on the HDR images that concurrently contain detailed information from the very dark shadows to the brightest areas in the scenes. One of the main contributions presented in this thesis is the development of a unified reconstruction algorithm for spatially variant exposures in a single image. This method is based on a camera noise model, and it simultaneously resamples, reconstructs, denoises, and demosaics the image while extending its dynamic range. Furthermore, the HDR reconstruction algorithm is extended to adapt to the local features of the image, as well as the noise statistics, to preserve the high-frequency edges during reconstruction.

    In the second part of this thesis, the research focus shifts to the acquisition, encoding, reconstruction, and rendering of light field images and videos in a real-time setting. Unlike traditional integral photography, a light field captures the information of the dynamic environment from all angles, all points in space, and all spectral wavelengths and points in time. This thesis employs sparse representation to provide an end-to-end solution to the problem of encoding, real-time reconstruction, and rendering of high-dimensional light field video data sets. These solutions are applied to various types of data sets, such as light fields captured with multi-camera systems or hand-held cameras equipped with micro-lens arrays, and spherical light fields. Finally, sparse representation of light fields was utilized for developing a single-sensor light field video camera equipped with a color-coded mask. A new compressive sensing model is presented that is suitable for dynamic scenes with temporal coherency and is capable of reconstructing high-resolution light field videos.

    List of papers
    1. HDR reconstruction for alternating gain (ISO) sensor readout
    2014 (English), In: Eurographics 2014 short papers, 2014, Conference paper, Published paper (Refereed)
    Abstract [en]

    Modern image sensors are becoming more and more flexible in the way an image is captured. In this paper, we focus on sensors that allow the per-pixel gain to be varied over the sensor and develop a new technique for efficient and accurate reconstruction of high dynamic range (HDR) images based on such input data. Our method estimates the radiant power at each output pixel using a sampling operation which performs color interpolation, re-sampling, noise reduction and HDR-reconstruction in a single step. The reconstruction filter uses a sensor noise model to weight the input pixel samples according to their variances. Our algorithm works in only a small spatial neighbourhood around each pixel and lends itself to efficient implementation in hardware. To demonstrate the utility of our approach we show example HDR-images reconstructed from raw sensor data captured using off-the-shelf consumer hardware which allows for two different gain settings for different rows in the same image. To analyse the accuracy of the algorithm, we also use synthetic images from a camera simulation software.
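
The variance-weighted reconstruction idea can be illustrated with a toy one-pixel example. The noise model and all numbers below are illustrative assumptions, not the calibrated sensor model from the paper:

```python
import numpy as np

def hdr_estimate(samples, gains, read_noise=2.0):
    """Inverse-variance weighted radiance estimate from samples captured at
    different per-pixel gains. Toy noise model: shot noise plus read noise,
    both expressed in digital units (illustrative, not calibrated)."""
    samples = np.asarray(samples, dtype=float)
    gains = np.asarray(gains, dtype=float)
    radiance = samples / gains                 # normalise to radiant power
    # Propagate var(sample) = gain*sample + (gain*read_noise)^2 through
    # the division by gain.
    var = samples / gains + read_noise ** 2
    weights = 1.0 / var
    return float(np.sum(weights * radiance) / np.sum(weights))

# Two readings of the same scene point at ISO-like gains 1x and 4x.
estimate = hdr_estimate(samples=[100.0, 410.0], gains=[1.0, 4.0])
```

The high-gain sample is noisier per digital unit but sees more signal in dark regions, so weighting each sample by its inverse variance uses both readings optimally, which is the core of the filter the abstract describes.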

    Keywords
    HDR, image reconstruction, dual-ISO, image processing
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-104922 (URN)
    Conference
    Eurographics, Strasbourg, France, April 7-11, 2014
    Projects
    VPS
    Available from: 2014-03-03 Created: 2014-03-03 Last updated: 2020-02-18, Bibliographically approved
    2. Adaptive dualISO HDR-reconstruction
    2015 (English), In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281, Article in journal (Refereed), Published
    Abstract [en]

    With the development of modern image sensors enabling flexible image acquisition, single shot HDR imaging is becoming increasingly popular. In this work we capture single shot HDR images using an imaging sensor with spatially varying gain/ISO. In comparison to previous single shot HDR capture based on a single sensor, this allows all incoming photons to be used in the imaging, instead of wasting incoming light using spatially varying ND-filters, commonly used in previous works. The main technical contribution in this work is an extension of previous HDR reconstruction approaches for single shot HDR imaging based on local polynomial approximations [15,10]. Using a sensor noise model, these works deploy a statistically informed filtering operation to reconstruct HDR pixel values. However, instead of using a fixed filter size, we introduce two novel algorithms for adaptive filter kernel selection. Unlike previous works using adaptive filter kernels [16], our algorithms are based on analysing the model fit and the expected statistical deviation of the estimate based on the sensor noise model. Using an iterative procedure we can then adapt the filter kernel according to the image structure and the statistical image noise. Experimental results show that the proposed filter carefully denoises the noisy image while preserving important image features such as edges and corners, outperforming previous methods. To demonstrate the robustness of our approach, we have exploited input images from raw sensor data using a commercial off-the-shelf camera. To further analyze our algorithm, we have also implemented a camera simulator to evaluate different gain patterns and noise properties of the sensor.

    Place, publisher, year, edition, pages
    Springer Publishing Company, 2015
    Keywords
    HDR reconstruction; Single shot HDR imaging; DualISO; Statistical image filtering
    National Category
    Computer Sciences; Computer and Information Sciences
    Identifiers
    urn:nbn:se:liu:diva-122587 (URN), 10.1186/s13640-015-0095-0 (DOI), 000366324500001 ()
    Note

    Funding agencies: Swedish Foundation for Strategic Research (SSF) [IIS11-0081]; Linköping University Center for Industrial Information Technology (CENIIT); Swedish Research Council through the Linnaeus Environment CADICS

    Available from: 2015-11-10 Created: 2015-11-10 Last updated: 2020-02-18, Bibliographically approved
    3. A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos
    2019 (English), In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 38, no 3, p. 1-18, article id 23, Article in journal (Refereed), Published
    Abstract [en]

    In this article we present a novel dictionary learning framework designed for compression and sampling of light fields and light field videos. Unlike previous methods, where a single dictionary with one-dimensional atoms is learned, we propose to train a Multidimensional Dictionary Ensemble (MDE). It is shown that learning an ensemble in the native dimensionality of the data promotes sparsity, hence increasing the compression ratio and sampling efficiency. To make maximum use of correlations within the light field data sets, we also introduce a novel nonlocal pre-clustering approach that constructs an Aggregate MDE (AMDE). The pre-clustering not only improves the image quality but also reduces the training time by an order of magnitude in most cases. The decoding algorithm supports efficient local reconstruction of the compressed data, which enables efficient real-time playback of high-resolution light field videos. Moreover, we discuss the application of AMDE for compressed sensing. A theoretical analysis is presented that indicates the required conditions for exact recovery of point-sampled light fields that are sparse under AMDE. The analysis provides guidelines for designing efficient compressive light field cameras. We use various synthetic and natural light field and light field video data sets to demonstrate the utility of our approach in comparison with the state-of-the-art learning-based dictionaries, as well as established analytical dictionaries.
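
A simplified stand-in for assigning a patch to the best member of a dictionary ensemble: here membership is decided by least-squares residual against each small dictionary. The paper's selection operates on a multidimensional ensemble with sparsity constraints; the shapes and names below are assumptions:

```python
import numpy as np

def pick_dictionary(ensemble, x):
    """Return the index of the ensemble member whose column space fits
    patch x best (a toy proxy for sparsity-based ensemble membership)."""
    errors = []
    for D in ensemble:
        coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
        errors.append(np.linalg.norm(x - D @ coeffs))
    return int(np.argmin(errors))

rng = np.random.default_rng(3)
# Three small tall dictionaries (32-dim patches, 8 atoms each).
ensemble = [rng.standard_normal((32, 8)) for _ in range(3)]
x = ensemble[1] @ rng.standard_normal(8)    # patch generated by dictionary 1
best = pick_dictionary(ensemble, x)          # → 1
```

Pre-clustering patches before training, as the abstract describes, means each member only needs to represent one cluster well, which is what lets the ensemble stay small and the coefficients stay sparse.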

    Place, publisher, year, edition, pages
    ACM Digital Library, 2019
    Keywords
    Light field video compression, compressed sensing, dictionary learning, light field photography
    National Category
    Computer and Information Sciences
    Identifiers
    urn:nbn:se:liu:diva-158026 (URN), 10.1145/3269980 (DOI), 000495415600005 ()
    Available from: 2019-06-24 Created: 2019-06-24 Last updated: 2020-02-18, Bibliographically approved
    4. Light Field Video Compression and Real Time Rendering
    2019 (English), In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, p. 265-276, Article in journal (Refereed), Published
    Abstract [en]

    Light field imaging is rapidly becoming an established method for generating flexible image based description of scene appearances. Compared to classical 2D imaging techniques, the angular information included in light fields enables effects such as post‐capture refocusing and the exploration of the scene from different vantage points. In this paper, we describe a novel GPU pipeline for compression and real‐time rendering of light field videos with full parallax. To achieve this, we employ a dictionary learning approach and train an ensemble of dictionaries capable of efficiently representing light field video data using highly sparse coefficient sets. A novel, key element in our representation is that we simultaneously compress both image data (pixel colors) and the auxiliary information (depth, disparity, or optical flow) required for view interpolation. During playback, the coefficients are streamed to the GPU where the light field and the auxiliary information are reconstructed using the dictionary ensemble and view interpolation is performed. In order to realize the pipeline we present several technical contributions including a denoising scheme enhancing the sparsity in the dataset which enables higher compression ratios, and a novel pruning strategy which reduces the size of the dictionary ensemble and leads to significant reductions in computational complexity during the encoding of a light field. Our approach is independent of the light field parameterization and can be used with data from any light field video capture system. To demonstrate the usefulness of our pipeline, we utilize various publicly available light field video datasets and discuss the medical application of documenting heart surgery.
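
The role of the jointly compressed auxiliary data can be shown with a minimal disparity-guided view interpolation sketch. This is purely illustrative (nearest-pixel warping, one color channel, horizontal disparity only) and does not reproduce the paper's GPU pipeline:

```python
import numpy as np

def interpolate_view(left, right, disparity, t):
    """Blend two neighboring light field views at fractional position t in
    [0, 1], warping each row by a per-pixel horizontal disparity first."""
    h, w = left.shape
    cols = np.arange(w)
    out = np.zeros_like(left)
    for row in range(h):
        d = disparity[row]
        # Shift the left view forward by t*d and the right view back by (1-t)*d.
        src_l = np.clip(np.round(cols - t * d).astype(int), 0, w - 1)
        src_r = np.clip(np.round(cols + (1 - t) * d).astype(int), 0, w - 1)
        out[row] = (1 - t) * left[row, src_l] + t * right[row, src_r]
    return out

rng = np.random.default_rng(5)
left = rng.random((8, 16))
right = left.copy()                 # zero-disparity toy scene
disparity = np.zeros((8, 16))
mid = interpolate_view(left, right, disparity, t=0.5)
```

Because both the pixel colors and the disparity are reconstructed from the same sparse coefficient stream, this interpolation can run entirely on the GPU during playback, which is the point of compressing them together.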

    Place, publisher, year, edition, pages
    John Wiley & Sons, 2019
    Keywords
    Computational photography, Light Fields, Light Fields Compression, Light Field Video
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-162100 (URN), 10.1111/cgf.13835 (DOI), 000496351100025 ()
    Conference
    Pacific Graphics 2019
    Note

    Funding agencies: children's heart clinic at Skåne University Hospital (Barnhjärtcentrum); strategic research environment ELLIIT; Swedish Science Council [201505180]; Vinnova [2017-03728]; Visual Sweden Platform for Augmented Intelligence

    Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2021-09-30
  • 7.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. Inria Rennes.
    Guillemot, Christine
    Inria Rennes.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Single Sensor Compressive Light Field Video Camera, 2020, In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no 2, p. 463-474, Article in journal (Refereed)
    Abstract [en]

    This paper presents a novel compressed sensing (CS) algorithm and camera design for light field video capture using a single sensor consumer camera module. Unlike microlens light field cameras which sacrifice spatial resolution to obtain angular information, our CS approach is designed for capturing light field videos with high angular, spatial, and temporal resolution. The compressive measurements required by CS are obtained using a random color-coded mask placed between the sensor and aperture planes. The convolution of the incoming light rays from different angles with the mask results in a single image on the sensor, hence achieving a significant reduction in the required bandwidth for capturing light field videos. We propose to change the random pattern on the spectral mask between each consecutive frame in a video sequence and to extract spatio-angular-spectral-temporal 6D patches. Our CS reconstruction algorithm for light field videos recovers each frame while taking into account the neighboring frames to achieve significantly higher reconstruction quality with reduced temporal incoherencies, as compared with previous methods. Moreover, a thorough analysis of various sensing models for compressive light field video acquisition is conducted to highlight the advantages of our method. The results show a clear advantage of our method for monochrome sensors, as well as sensors with color filter arrays.
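
The coded-mask forward model can be sketched as a per-view modulation followed by summation on the sensor. This toy model ignores the optics, color filtering, and temporal coding; all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical toy light field: A angular views of an H x W scene patch.
A, H, W = 9, 16, 16
light_field = rng.random((A, H, W))

# A random mask modulates each angular view before the views sum on the
# single 2D sensor; changing the mask per frame yields new measurements.
mask = rng.random((A, H, W))
sensor_image = np.sum(mask * light_field, axis=0)   # one coded 2D measurement
```

Capturing one H x W frame per time step instead of A full views is the bandwidth reduction the abstract refers to; reconstruction then inverts this linear model under a sparsity prior.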

  • 8.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Hajisharif, Saghi
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos, 2019, In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 38, no 3, p. 1-18, article id 23, Article in journal (Refereed)
    Abstract [en]

    In this article we present a novel dictionary learning framework designed for compression and sampling of light fields and light field videos. Unlike previous methods, where a single dictionary with one-dimensional atoms is learned, we propose to train a Multidimensional Dictionary Ensemble (MDE). It is shown that learning an ensemble in the native dimensionality of the data promotes sparsity, hence increasing the compression ratio and sampling efficiency. To make maximum use of correlations within the light field data sets, we also introduce a novel nonlocal pre-clustering approach that constructs an Aggregate MDE (AMDE). The pre-clustering not only improves the image quality but also reduces the training time by an order of magnitude in most cases. The decoding algorithm supports efficient local reconstruction of the compressed data, which enables efficient real-time playback of high-resolution light field videos. Moreover, we discuss the application of AMDE for compressed sensing. A theoretical analysis is presented that indicates the required conditions for exact recovery of point-sampled light fields that are sparse under AMDE. The analysis provides guidelines for designing efficient compressive light field cameras. We use various synthetic and natural light field and light field video data sets to demonstrate the utility of our approach in comparison with the state-of-the-art learning-based dictionaries, as well as established analytical dictionaries.

  • 9.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Tran, Kiet
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Light Field Video Compression and Real Time Rendering, 2019, In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 38, p. 265-276, Article in journal (Refereed)
    Abstract [en]

    Light field imaging is rapidly becoming an established method for generating flexible image based description of scene appearances. Compared to classical 2D imaging techniques, the angular information included in light fields enables effects such as post‐capture refocusing and the exploration of the scene from different vantage points. In this paper, we describe a novel GPU pipeline for compression and real‐time rendering of light field videos with full parallax. To achieve this, we employ a dictionary learning approach and train an ensemble of dictionaries capable of efficiently representing light field video data using highly sparse coefficient sets. A novel, key element in our representation is that we simultaneously compress both image data (pixel colors) and the auxiliary information (depth, disparity, or optical flow) required for view interpolation. During playback, the coefficients are streamed to the GPU where the light field and the auxiliary information are reconstructed using the dictionary ensemble and view interpolation is performed. In order to realize the pipeline we present several technical contributions including a denoising scheme enhancing the sparsity in the dataset which enables higher compression ratios, and a novel pruning strategy which reduces the size of the dictionary ensemble and leads to significant reductions in computational complexity during the encoding of a light field. Our approach is independent of the light field parameterization and can be used with data from any light field video capture system. To demonstrate the usefulness of our pipeline, we utilize various publicly available light field video datasets and discuss the medical application of documenting heart surgery.

  • 10.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Hajisharif, Saghi
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unified reconstruction of RAW HDR video data (2016). In: High dynamic range video: from acquisition to display and applications / [ed] Frédéric Dufaux, Patrick Le Callet, Rafal K. Mantiuk, Marta Mrak, London, United Kingdom: Academic Press, 2016, 1st ed., p. 63-82. Chapter in book (Other academic)
    Abstract [en]

    Traditional HDR capture has mostly relied on merging images captured with different exposure times. While this works well for static scenes, dynamic scenes pose difficult challenges, as registration of differently exposed images often leads to ghosting and other artifacts. This chapter reviews methods which capture HDR-video frames within a single exposure time, using either multiple synchronised sensors or spatial multiplexing of the sensor response across the sensor. Most previous HDR reconstruction methods perform demosaicing, noise reduction, resampling (registration), and HDR-fusion in separate steps. This chapter presents a framework for unified HDR-reconstruction that includes all steps of the traditional imaging pipeline in a single adaptive filtering operation, and describes an image formation model and a sensor noise model applicable to both single- and multi-sensor systems. The benefits of using raw data directly are demonstrated with examples using input data from multiple synchronized sensors, and single images with varying per-pixel gain.
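The core of such unified, noise-model-driven reconstruction can be illustrated with a hedged sketch: each raw sample estimates radiance as its digital value divided by gain, and samples are combined with inverse-variance weights. The function and the numbers below are illustrative assumptions, not the chapter's actual filtering operation (which also handles demosaicing and resampling in the same step).

```python
# Illustrative sketch of noise-weighted fusion of raw samples.

def fuse_samples(samples):
    """Inverse-variance weighted fusion of raw sensor samples.

    samples -- list of (digital_value, gain, variance) tuples; each
               sample estimates radiant power as digital_value / gain
               and is weighted by the reciprocal of its variance, as a
               sensor noise model would prescribe.
    """
    num = 0.0
    den = 0.0
    for value, gain, var in samples:
        estimate = value / gain   # back to the radiance domain
        weight = 1.0 / var        # noisier samples count for less
        num += weight * estimate
        den += weight
    return num / den

# Two samples of the same pixel from differently gained sensors
# (values chosen for the example only).
samples = [(100.0, 1.0, 4.0), (820.0, 8.0, 16.0)]
print(fuse_samples(samples))  # -> 100.5
```

The design choice mirrored here is that all fusion happens on raw, gain-normalized data, so no information is lost to intermediate tone mapping or clipping before the samples are combined.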

  • 11.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Adaptive dualISO HDR-reconstruction (2015). In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281. Article in journal (Refereed)
    Abstract [en]

    With the development of modern image sensors enabling flexible image acquisition, single-shot HDR imaging is becoming increasingly popular. In this work, we capture single-shot HDR images using an imaging sensor with spatially varying gain/ISO. In comparison to previous single-shot HDR capture based on a single sensor, this allows all incoming photons to be used in the imaging, instead of wasting incoming light on spatially varying ND-filters, as commonly done in previous works. The main technical contribution of this work is an extension of previous HDR reconstruction approaches for single-shot HDR imaging based on local polynomial approximations [15,10]. Using a sensor noise model, these works deploy a statistically informed filtering operation to reconstruct HDR pixel values. However, instead of using a fixed filter size, we introduce two novel algorithms for adaptive filter kernel selection. Unlike previous works using adaptive filter kernels [16], our algorithms are based on analysing the model fit and the expected statistical deviation of the estimate under the sensor noise model. Using an iterative procedure, we can then adapt the filter kernel according to the image structure and the statistical image noise. Experimental results show that the proposed filter denoises the image while preserving important image features such as edges and corners, outperforming previous methods. To demonstrate the robustness of our approach, we use raw sensor data captured with a commercial off-the-shelf camera. To further analyse our algorithm, we have also implemented a camera simulator to evaluate different gain patterns and noise properties of the sensor.
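The adaptive kernel selection idea can be sketched in simplified 1D form: grow the filter window while the local model fit stays within the deviation the noise model predicts, and stop as soon as image structure (an edge) makes the fit break down. Everything below — the constant local model, the threshold `k * noise_std`, the function name — is a hypothetical simplification, not the paper's two actual algorithms.

```python
# Simplified 1D sketch of noise-aware adaptive kernel selection.

def select_kernel_radius(signal, center, noise_std, max_radius=4, k=2.0):
    """Return the largest window radius around `center` for which a
    constant local model still fits within the expected noise deviation.
    """
    best = 0
    for radius in range(1, max_radius + 1):
        lo = max(0, center - radius)
        hi = min(len(signal), center + radius + 1)
        window = signal[lo:hi]
        mean = sum(window) / len(window)
        # Worst disagreement between the local model and the data.
        max_dev = max(abs(v - mean) for v in window)
        if max_dev > k * noise_std:  # structure detected: stop growing
            break
        best = radius
    return best

flat = [1.0] * 9                      # smooth region: kernel can grow
step = [0.0] * 4 + [10.0] * 4         # edge at index 4
print(select_kernel_radius(flat, 4, noise_std=0.5))  # -> 4
print(select_kernel_radius(step, 2, noise_std=0.5))  # -> 1
```

In the smooth region the kernel expands to the maximum radius, averaging more samples and suppressing noise; next to the edge it stays small, which is how edges and corners survive the filtering.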

  • 12.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    HDR reconstruction for alternating gain (ISO) sensor readout (2014). In: Eurographics 2014 short papers, 2014. Conference paper (Refereed)
    Abstract [en]

    Modern image sensors are becoming more and more flexible in the way an image is captured. In this paper, we focus on sensors that allow the per-pixel gain to be varied over the sensor and develop a new technique for efficient and accurate reconstruction of high dynamic range (HDR) images based on such input data. Our method estimates the radiant power at each output pixel using a sampling operation which performs color interpolation, re-sampling, noise reduction, and HDR-reconstruction in a single step. The reconstruction filter uses a sensor noise model to weight the input pixel samples according to their variances. Our algorithm works in only a small spatial neighbourhood around each pixel and lends itself to efficient implementation in hardware. To demonstrate the utility of our approach, we show example HDR images reconstructed from raw sensor data captured using off-the-shelf consumer hardware which allows two different gain settings for different rows within the same image. To analyse the accuracy of the algorithm, we also use synthetic images from camera simulation software.
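One reason alternating-gain readout extends dynamic range is that high-gain rows may clip where low-gain rows still hold valid data. A minimal sketch of that idea, assuming a 12-bit clipping level and simple averaging (the paper's filter additionally weights samples by their noise variance), with all names and values invented for the example:

```python
# Hedged sketch: gain-normalized averaging that discards clipped samples.

SATURATION = 4095.0  # illustrative 12-bit saturation level

def estimate_radiance(samples):
    """Estimate radiant power from neighbouring raw samples.

    samples -- list of (raw_value, gain) pairs from rows with
               alternating gain; saturated samples carry no information
               and are discarded, the rest are averaged in the
               radiance domain (raw_value / gain).
    """
    valid = [v / g for v, g in samples if v < SATURATION]
    return sum(valid) / len(valid)

# A bright pixel: the high-gain row clips at 4095, the low-gain row
# still measures the scene.
neighbours = [(3000.0, 1.0), (4095.0, 8.0)]
print(estimate_radiance(neighbours))  # -> 3000.0
```

In dark regions the situation reverses: the high-gain rows dominate because their quantization and read noise are smaller relative to the signal, which is what the variance weighting in the actual method captures.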

  • 13.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time image based lighting with streaming HDR-lightprobe sequences (2012). In: Proceedings of SIGRAD 2012 / [ed] Andreas Kerren, Stefan Seipel, Linköping, Sweden, 2012. Conference paper (Other academic)
    Abstract [en]

    We present a framework for shading of virtual objects using high dynamic range (HDR) light probe sequences in real time. Such images (light probes) are captured using a high resolution HDR camera. In each frame of the HDR video, an optimized CUDA kernel is used to project incident lighting into spherical harmonics in real time. Transfer coefficients are calculated in an offline process. Using precomputed radiance transfer, the radiance calculation reduces to a low order dot product between lighting and transfer coefficients. We exploit temporal coherence between frames to further smooth lighting variation over time. Our results show that the framework can achieve the effects of consistent illumination in real time with flexibility to respond to dynamic changes in the real environment.
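The shading and smoothing steps the abstract describes can be sketched as follows: with precomputed radiance transfer, outgoing radiance is a dot product of spherical harmonic (SH) lighting and transfer coefficient vectors, and an exponential moving average over frames is one simple way to exploit temporal coherence. The coefficient values, the smoothing scheme, and `alpha` are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch of PRT shading with temporally smoothed SH lighting.

def shade(light_coeffs, transfer_coeffs):
    """Precomputed radiance transfer: outgoing radiance is the dot
    product of SH lighting and SH transfer coefficient vectors."""
    return sum(l * t for l, t in zip(light_coeffs, transfer_coeffs))

def smooth(prev, current, alpha=0.2):
    """Blend the newly projected lighting with the previous frame's
    coefficients to reduce frame-to-frame flicker."""
    return [(1 - alpha) * p + alpha * c for p, c in zip(prev, current)]

prev_light = [1.0, 0.0, 0.0, 0.0]   # SH lighting from the previous frame
new_light  = [2.0, 1.0, 0.0, 0.0]   # lighting projected from the new probe
light = smooth(prev_light, new_light)
transfer = [0.5, 0.25, 0.0, 0.0]    # offline-computed transfer coefficients
print(shade(light, transfer))
```

Because the transfer coefficients are fixed per vertex and computed offline, the per-frame cost is only the SH projection of the new light probe plus this low-order dot product, which is what makes the approach real-time.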
