Search for publications in DiVA (liu.se)
  • 1.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    BriefMatch: Dense binary feature matching for real-time optical flow estimation. 2017. In: Proceedings of the Scandinavian Conference on Image Analysis (SCIA17) / [ed] Puneet Sharma, Filippo Maria Bianchi, Springer, 2017, Vol. 10269, p. 221-233. Conference paper (Refereed).
    Abstract [en]

    Research in optical flow estimation has to a large extent focused on achieving the best possible quality with no regard to running time. Nevertheless, in a number of important applications speed is crucial. To address this problem we present BriefMatch, a real-time optical flow method that is suitable for live applications. The method combines binary features with the search strategy from PatchMatch in order to efficiently find a dense correspondence field between images. We show that the BRIEF descriptor provides better candidates (less outlier-prone) in shorter time, when compared to direct pixel comparisons and the Census transform. This allows us to achieve high-quality results from a simple filtering of the initially matched candidates. Currently, BriefMatch has the fastest running time on the Middlebury benchmark, while ranking highest among all methods that run in under 0.5 seconds.
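
    For readers unfamiliar with binary descriptors, the sketch below illustrates the kind of BRIEF-style matching the abstract refers to: descriptors are bit vectors built from pairwise intensity tests and compared with Hamming distance. This is a minimal toy illustration, not the authors' implementation; all names and parameters are made up.

    ```python
    # Minimal sketch of binary-descriptor matching with Hamming distance,
    # in the spirit of BRIEF (illustrative only).
    import numpy as np

    def brief_descriptor(patch, pairs):
        """Binary descriptor: one intensity comparison per bit."""
        a = patch[pairs[:, 0], pairs[:, 1]]
        b = patch[pairs[:, 2], pairs[:, 3]]
        return a < b  # boolean vector, one bit per test

    def hamming(d0, d1):
        return np.count_nonzero(d0 != d1)

    rng = np.random.default_rng(0)
    pairs = rng.integers(0, 8, size=(256, 4))    # 256 random tests in an 8x8 patch
    patch0 = rng.random((8, 8))
    patch1 = patch0 + 0.01 * rng.random((8, 8))  # slightly perturbed patch

    d0 = brief_descriptor(patch0, pairs)
    d1 = brief_descriptor(patch1, pairs)
    print("Hamming distance:", hamming(d0, d1))  # small for similar patches
    ```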

  • 2.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology. Linköping University, Faculty of Science & Engineering.
    Denes, Gyorgy
    University of Cambridge, England.
    Mantiuk, Rafal K.
    University of Cambridge, England.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    HDR image reconstruction from a single exposure using deep CNNs. 2017. In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 36, no 6, article id 178. Article in journal (Refereed).
    Abstract [en]

    Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well-suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed taking into account the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution, visually convincing HDR results in a wide range of situations, and that it generalizes well to reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for HDR expansion, and show high quality results also for image-based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display. This shows that the reconstructed HDR images are visually convincing, with large improvements compared to existing methods.
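
    The augmentation step described above, simulating sensor saturation, can be sketched roughly as below. The virtual camera is a deliberately simple stand-in: a plain gamma curve replaces a measured camera response function, and all values are made up.

    ```python
    # Hypothetical sketch: create LDR training inputs from HDR images by
    # virtual exposure, highlight clipping and quantization.
    import numpy as np

    def simulate_ldr(hdr, exposure, bits=8):
        """Virtual camera: scale, clip saturated highlights, quantize."""
        x = np.clip(hdr * exposure, 0.0, 1.0)   # saturation clips to 1.0
        x = x ** (1.0 / 2.2)                    # toy stand-in for the CRF
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels

    hdr = np.random.default_rng(1).random((4, 4)) * 10.0  # toy radiance map
    ldr = simulate_ldr(hdr, exposure=0.5)
    print("clipped pixels:", np.count_nonzero(hdr * 0.5 > 1.0))
    ```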

  • 3.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    A versatile material reflectance measurement system for use in production. 2011. In: Proceedings of SIGRAD 2011. Evaluations of Graphics and Visualization — Efficiency, Usefulness, Accessibility, Usability, November 17–18, 2011, KTH, Stockholm, Sweden, Linköping University Electronic Press, 2011, p. 69-76. Conference paper (Refereed).
    Abstract [en]

    In this paper we present our bidirectional reflectance distribution capture pipeline. It includes a purpose-built gonioreflectometer for reflectance measurements, as well as extensive software for operation, data visualization and parameter fitting of analytic models. Our focus is on the flexible user interface, aimed at material appearance creation for computer graphics, and targeted at both production and research use.

    Key challenges have been in providing user-friendly and effective software that functions in a production environment, abstracting the details of the calculations involved in reflectance capture and fitting. We show how a combination of well-tuned tools can make complex processes such as reflectance calibration, measurement and fitting highly automated in a fast and easy workflow, from material scanning to model parameters optimized for use in rendering. At the same time, the developed software provides a modifiable interface for detailed control. The importance of good reflectance visualizations is also demonstrated: the software's plotting tools are able to show vital details of a reflectance distribution, giving valuable insight into a material's properties and a model's accuracy of fit to measured data, on both a local and global level.

  • 4.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Mantiuk, R. K.
    University of Cambridge, England.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A comparative review of tone-mapping algorithms for high dynamic range video. 2017. In: Computer Graphics Forum, ISSN 0167-7055, E-ISSN 1467-8659, Vol. 36, no 2, p. 565-592. Article in journal (Refereed).
    Abstract [en]

    Tone-mapping constitutes a key component within the field of high dynamic range (HDR) imaging. Its importance is manifested in the vast number of tone-mapping methods that can be found in the literature, the result of more than two decades of active development in the area. Although these can accommodate most requirements for display of HDR images, new challenges arose with the advent of HDR video, calling for additional considerations in the design of tone-mapping operators (TMOs). Today, a range of TMOs exist that do support video material. We are now reaching a point where most camera-captured HDR videos can be prepared in high quality without visible artifacts, for the constraints of a standard display device. In this report, we set out to summarize and categorize the research in tone-mapping as of today, distilling the most important trends and characteristics of the tone reproduction pipeline. While this gives a broad overview of the area, we then specifically focus on tone-mapping of HDR video and the problems this medium entails. First, we formulate the major challenges a video TMO needs to address. Then, we provide a description and categorization of each of the existing video TMOs. Finally, by constructing a set of quantitative measures, we evaluate the performance of a number of the operators, in order to indicate which can be expected to produce the fewest artifacts. This serves as a comprehensive reference, categorization and comparative assessment of the state-of-the-art in tone-mapping for HDR video.

  • 5.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Mantiuk, Rafal K.
    University of Cambridge, England.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A high dynamic range video codec optimized by large-scale testing. 2016. In: 2016 IEEE International Conference on Image Processing (ICIP), IEEE, 2016, p. 1379-1383. Conference paper (Refereed).
    Abstract [en]

    While a number of existing high-bit depth video compression methods can potentially encode high dynamic range (HDR) video, few of them provide this capability. In this paper, we investigate techniques for adapting HDR video for this purpose. In a large-scale test on 33 HDR video sequences, we compare 2 video codecs, 4 luminance encoding techniques (transfer functions) and 3 color encoding methods, measuring quality in terms of two objective metrics, PU-MSSIM and HDR-VDP-2. From the results we design an open source HDR video encoder, optimized for the best compression performance given the techniques examined.
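
    As a hedged illustration of the luminance encodings (transfer functions) such a pipeline compares, the sketch below applies the PQ curve (SMPTE ST 2084) and quantizes to 10-bit code values before a conventional codec would take over. PQ is shown only because its constants are published; it is one candidate among the several techniques the paper examines.

    ```python
    # Encode linear luminance with the PQ transfer function, then quantize.
    import numpy as np

    def pq_encode(L, L_max=10000.0):
        """Map luminance in cd/m^2 to a perceptually uniform [0,1] signal."""
        m1, m2 = 0.1593017578125, 78.84375
        c1, c2, c3 = 0.8359375, 18.8515625, 18.6875
        Y = np.clip(L / L_max, 0.0, 1.0)
        return ((c1 + c2 * Y**m1) / (1.0 + c3 * Y**m1)) ** m2

    luminance = np.array([0.1, 1.0, 100.0, 4000.0])           # HDR pixels
    code = np.round(pq_encode(luminance) * 1023).astype(int)  # 10-bit values
    print(code)
    ```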

  • 6.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. IRYSTEC, Canada.
    Mantiuk, Rafal K.
    University of Cambridge, England; IRYSTEC, Canada.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering. IRYSTEC, Canada.
    Real-time noise-aware tone-mapping and its use in luminance retargeting. 2016. In: 2016 IEEE International Conference on Image Processing (ICIP), IEEE, 2016, p. 894-898. Conference paper (Refereed).
    Abstract [en]

    With the aid of tone-mapping operators, high dynamic range images can be mapped for reproduction on standard displays. However, under severe restrictions in display dynamic range and peak luminance, limitations of the human visual system have a significant impact on the visual appearance. In this paper, we use components from the real-time noise-aware tone-mapping operator to complement an existing method for perceptual matching of image appearance under different luminance levels. The refined luminance retargeting method improves subjective quality on a display with severe dynamic range limitations, as suggested by our subjective evaluation.

  • 7.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Mantiuk, Rafal
    University of Cambridge.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Real-time noise-aware tone mapping. 2015. In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 34, no 6, p. 198:1-198:15, article id 198. Article in journal (Refereed).
    Abstract [en]

    Real-time high quality video tone mapping is needed for many applications, such as digital viewfinders in cameras, display algorithms which adapt to ambient light, in-camera processing, rendering engines for video games and video post-processing. We propose a viable solution for these applications by designing a video tone-mapping operator that controls the visibility of the noise, adapts to display and viewing environment, minimizes contrast distortions, preserves or enhances image details, and can be run in real-time on an incoming sequence without any preprocessing. To our knowledge, no existing solution offers all these features. Our novel contributions are: a fast procedure for computing local display-adaptive tone-curves which minimize contrast distortions, a fast method for detail enhancement free from ringing artifacts, and an integrated video tone-mapping solution combining all the above features.
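
    A greatly simplified sketch of a display-adaptive tone curve in the spirit of the abstract: display dynamic range is allocated across log-luminance histogram bins, with a slope limit standing in for the paper's contrast-distortion minimization. An illustrative approximation, not the published operator.

    ```python
    # Histogram-based piecewise-linear tone curve with a slope limit.
    import numpy as np

    def tone_curve(log_lum, n_bins=64, display_range=2.0, max_slope=2.0):
        hist, edges = np.histogram(log_lum, bins=n_bins)
        bin_w = edges[1] - edges[0]
        slopes = hist / max(hist.sum(), 1) * display_range / bin_w  # equalize
        slopes = np.minimum(slopes, max_slope)                      # limit contrast
        curve = np.concatenate([[0.0], np.cumsum(slopes * bin_w)])
        curve *= display_range / curve[-1]       # fit the display's log range
        return edges, curve

    log_lum = np.random.default_rng(2).normal(0.0, 1.5, 10000)  # toy image
    edges, curve = tone_curve(log_lum)
    mapped = np.interp(log_lum, edges, curve)    # apply the piecewise curve
    ```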

  • 8.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Mantiuk, Rafal
    University of Cambridge, UK.
    Evaluation of tone mapping operators for HDR video. 2016. In: High dynamic range video: from acquisition to display and applications / [ed] Frédéric Dufaux, Patrick Le Callet, Rafal K. Mantiuk, Marta Mrak, London, United Kingdom: Academic Press, 2016, 1st ed., p. 185-206. Chapter in book (Other academic).
    Abstract [en]

    Tone mapping of HDR-video is a challenging filtering problem. It is highly important to develop a framework for evaluation and comparison of tone mapping operators. This chapter gives an overview of different approaches for how evaluation of tone mapping operators can be conducted, including experimental setups, choice of input data, choice of tone mapping operators, and the importance of parameter tweaking for fair comparisons. The chapter also gives examples of previous evaluations, with a focus on the results from the most recent evaluation conducted by Eilertsen et al. [reference]. This results in a classification of the currently most commonly used tone mapping operators and an overview of their performance and possible artifacts.

  • 9.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Wanat, Robert
    Bangor University, United Kingdom.
    Mantiuk, Rafal
    Bangor University, United Kingdom.
    Perceptually based parameter adjustments for video processing operations. 2014. In: ACM SIGGRAPH Talks 2014, ACM Press, 2014. Conference paper (Refereed).
    Abstract [en]

    Extensive post processing plays a central role in modern video production pipelines. A problem in this context is that many filters and processing operators are very sensitive to parameter settings and that the filter responses in most cases are highly non-linear. Since there is no general solution for performing perceptual calibration of image and video operators automatically, it is often necessary to manually tweak multiple parameters. This is an iterative process which requires instant visual feedback of the result in both the spatial and temporal domains. Due to large filter kernels, computational complexity, high frame rates, and image resolution it is, however, often very time-consuming to iteratively re-process and tweak long video sequences. We present a new method for rapidly finding the perceptual minima in high-dimensional parameter spaces of general video operators. The key idea of our algorithm is that the characteristics of an operator can be accurately described by interpolating between a small set of pre-computed parameter settings. By computing a perceptual linearization of the parameter space of a video operator, the user can explore this interpolated space to find the best set of parameters in a robust way. Since many operators are dependent on two or more parameters, we formulate this as a general optimization problem where we let the objective function be determined by the user's image assessments. To demonstrate the usefulness of our approach we show a set of use cases (see the supplementary material) where our algorithm is applied to computationally expensive video operations.
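
    The core idea, precomputing the operator at a few parameter settings and interpolating between the results for instant feedback, can be sketched as follows; the operator and parameter values are hypothetical stand-ins.

    ```python
    # Precompute an expensive operator at a few settings, then preview any
    # intermediate setting by interpolating the precomputed results.
    import numpy as np

    def expensive_operator(frame, sigma):
        """Stand-in for a costly video filter (simple Gaussian smoothing)."""
        k = int(3 * sigma) | 1                   # odd kernel size
        kernel = np.exp(-0.5 * ((np.arange(k) - k // 2) / sigma) ** 2)
        return np.convolve(frame, kernel / kernel.sum(), mode="same")

    frame = np.random.default_rng(3).random(256)            # toy 1D "frame"
    sigmas = np.array([1.0, 2.0, 4.0, 8.0])                 # precomputed settings
    precomputed = np.stack([expensive_operator(frame, s) for s in sigmas])

    def interpolated_result(sigma):
        """Instant preview between the two nearest precomputations."""
        i = np.clip(np.searchsorted(sigmas, sigma) - 1, 0, len(sigmas) - 2)
        t = (sigma - sigmas[i]) / (sigmas[i + 1] - sigmas[i])
        return (1 - t) * precomputed[i] + t * precomputed[i + 1]

    preview = interpolated_result(3.0)  # responds instantly during tweaking
    ```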

  • 10.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Wanat, Robert
    Bangor University, UK.
    Mantiuk, Rafal
    Bangor University, UK.
    Survey and Evaluation of Tone Mapping Operators for HDR-video. 2013. In: SIGGRAPH 2013 Talks, ACM Press, 2013. Conference paper (Other academic).
    Abstract [en]

    This work presents a survey and a user evaluation of tone mapping operators (TMOs) for high dynamic range (HDR) video, i.e. TMOs that explicitly include a temporal model for processing variations in the input HDR images in the time domain. The main motivations behind this work are that: robust tone mapping is one of the key aspects of HDR imaging [Reinhard et al. 2006]; recent developments in sensor and computing technologies have now made it possible to capture HDR-video, e.g. [Unger and Gustavson 2007; Tocci et al. 2011]; and, as shown by our survey, tone mapping for HDR video poses a set of completely new challenges compared to tone mapping for still HDR images. Furthermore, video tone mapping, though less studied, is highly important for a multitude of applications including gaming, cameras in mobile devices, adaptive display devices and movie post-processing. Our survey is meant to summarize the state-of-the-art in video tone mapping and, as exemplified in Figure 1 (right), analyze differences in the operators' response to temporal variations. In contrast to other studies, we evaluate TMO performance according to each operator's actual intent, such as producing the image that best resembles the real world scene, that subjectively looks best to the viewer, or that fulfills a certain artistic requirement. The unique strength of this work is that we use real high quality HDR video sequences, see Figure 1 (left), as opposed to synthetic images or footage generated from still HDR images.

  • 11.
    Eilertsen, Gabriel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Wanat, Robert
    Bangor University, Wales.
    Mantiuk, Rafal K.
    Bangor University, Wales.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Evaluation of Tone Mapping Operators for HDR-Video. 2013. In: Computer Graphics Forum, ISSN 0167-7055, E-ISSN 1467-8659, Vol. 32, no 7, p. 275-284. Article in journal (Refereed).
    Abstract [en]

    Eleven tone-mapping operators intended for video processing are analyzed and evaluated with camera-captured and computer-generated high-dynamic-range content. After optimizing the parameters of the operators in a formal experiment, we inspect and rate the artifacts (flickering, ghosting, temporal color consistency) and color rendition problems (brightness, contrast and color saturation) they produce. This allows us to identify major problems and challenges that video tone-mapping needs to address. Then, we compare the tone-mapping results in a pair-wise comparison experiment to identify the operators that, on average, can be expected to perform better than the others and to assess the magnitude of differences between the best performing operators.

  • 12.
    Emadi, Mohammad
    et al.
    Qualcomm Technologies Inc., San Jose, CA 95110, USA.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    A Performance Guarantee for Orthogonal Matching Pursuit Using Mutual Coherence. 2018. In: Circuits, Systems, and Signal Processing, ISSN 0278-081X, E-ISSN 1531-5878, Vol. 37, no 4, p. 1562-1574. Article in journal (Refereed).
    Abstract [en]

    In this paper, we present a new performance guarantee for the orthogonal matching pursuit (OMP) algorithm. We use mutual coherence as a metric for determining the suitability of an arbitrary overcomplete dictionary for exact recovery. Specifically, a lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise and an upper bound for the mean square error are derived. Compared to previous work, the new bounds take into account signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a much closer correlation to empirical results of OMP.
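
    For reference, the greedy algorithm whose recovery behavior the paper bounds can be written compactly as below; this is textbook OMP, not the paper's analysis.

    ```python
    # Orthogonal Matching Pursuit on a toy sparse recovery problem.
    import numpy as np

    def omp(A, y, k):
        """Recover a k-sparse x from y = A x + noise (unit-norm columns)."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
            support.append(j)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s          # orthogonal projection
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x, sorted(support)

    rng = np.random.default_rng(4)
    A = rng.normal(size=(64, 128))
    A /= np.linalg.norm(A, axis=0)              # unit-norm dictionary
    x_true = np.zeros(128)
    x_true[[5, 40, 90]] = [1.0, -0.5, 2.0]
    y = A @ x_true + 0.01 * rng.normal(size=64)
    print("recovered support:", omp(A, y, k=3)[1])  # typically [5, 40, 90]
    ```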

  • 13.
    Emadi, Mohammad
    et al.
    Qualcomm Technologies Inc., CA, USA.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    OMP-based DOA estimation performance analysis. 2018. In: Digital Signal Processing, ISSN 1051-2004, E-ISSN 1095-4333, Vol. 79, p. 57-65. Article in journal (Refereed).
    Abstract [en]

    In this paper, we present a new performance guarantee for Orthogonal Matching Pursuit (OMP) in the context of the Direction Of Arrival (DOA) estimation problem. For the first time, the effect of parameters such as sensor array configuration, as well as signal-to-noise ratio and dynamic range of the sources, is thoroughly analyzed. In particular, we formulate a lower bound for the probability of detection and an upper bound for the estimation error. The proposed performance guarantee is further developed to include the estimation error as a user-defined parameter for the probability of detection. Numerical results show acceptable correlation between theoretical predictions and empirical simulations.

  • 14.
    Gardner, Andrew
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Depends: Workflow Management Software for Visual Effects Production. 2014. Conference paper (Refereed).
    Abstract [en]

    In this paper, we present an open source, multi-platform, workflow management application named Depends, designed to clarify and enhance the workflow of artists in a visual effects environment. Depends organizes processes into a directed acyclic graph, enabling artists to quickly identify appropriate changes, make modifications, and improve the look of their work. Recovering information about past revisions of an element is made simple, as the provenance of data is a core focus of a Depends workflow. Sharing work is also facilitated by the clear and consistent structure of Depends. We demonstrate the flexibility of Depends by presenting a number of scenarios where its style of workflow management has been essential to the creation of high-quality results.
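
    A minimal sketch of the DAG bookkeeping such a tool performs: nodes declare their upstream dependencies, and a topological order determines what must be (re)executed and in what sequence. The node names are invented for illustration; this is not Depends' actual API.

    ```python
    # Topological ordering of a toy visual-effects dependency graph.
    from graphlib import TopologicalSorter

    # node -> set of upstream nodes it depends on
    graph = {
        "render":    {"simulate", "shade"},
        "shade":     {"import_geo"},
        "simulate":  {"import_geo"},
        "composite": {"render"},
    }

    order = list(TopologicalSorter(graph).static_order())
    print(order)  # e.g. ['import_geo', 'shade', 'simulate', 'render', 'composite']
    ```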

  • 15.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time image based lighting with streaming HDR-lightprobe sequences. 2012. In: Proceedings of SIGRAD 2012 / [ed] Andreas Kerren, Stefan Seipel, Linköping, Sweden, 2012. Conference paper (Other academic).
    Abstract [en]

    We present a framework for shading of virtual objects using high dynamic range (HDR) light probe sequences in real-time. Such images (light probes) are captured using a high resolution HDR camera. In each frame of the HDR video, an optimized CUDA kernel is used to project incident lighting into spherical harmonics in real time. Transfer coefficients are calculated in an offline process. Using precomputed radiance transfer, the radiance calculation reduces to a low order dot product between lighting and transfer coefficients. We exploit temporal coherence between frames to further smooth lighting variation over time. Our results show that the framework can achieve the effects of consistent illumination in real-time, with flexibility to respond to dynamic changes in the real environment.
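
    The per-frame projection of incident lighting onto spherical harmonics amounts to a Monte Carlo integral over directions, sketched below with NumPy as an illustrative stand-in for the paper's CUDA kernel; `radiance()` is a hypothetical environment lookup. With precomputed transfer coefficients `t`, shading then reduces to the dot product `coeffs @ t`, which is what makes the approach real-time.

    ```python
    # Project a toy environment onto the first 9 real SH basis functions.
    import numpy as np

    def sh_basis(d):
        """Real SH basis up to l=2, evaluated at unit directions d (N, 3)."""
        x, y, z = d[:, 0], d[:, 1], d[:, 2]
        return np.stack([
            0.282095 * np.ones_like(x),
            0.488603 * y, 0.488603 * z, 0.488603 * x,
            1.092548 * x * y, 1.092548 * y * z,
            0.315392 * (3 * z**2 - 1),
            1.092548 * x * z, 0.546274 * (x**2 - y**2),
        ], axis=1)

    def radiance(d):
        """Hypothetical environment: a bright patch around +z."""
        return np.maximum(d[:, 2], 0.0) ** 8

    rng = np.random.default_rng(5)
    d = rng.normal(size=(100000, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)      # uniform on the sphere
    coeffs = (4 * np.pi / len(d)) * sh_basis(d).T @ radiance(d)
    print(coeffs.round(3))
    ```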

  • 16.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Adaptive dualISO HDR-reconstruction. 2015. In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281. Article in journal (Refereed).
    Abstract [en]

    With the development of modern image sensors enabling flexible image acquisition, single shot HDR imaging is becoming increasingly popular. In this work we capture single shot HDR images using an imaging sensor with spatially varying gain/ISO. In comparison to previous single shot HDR capture based on a single sensor, this allows all incoming photons to be used in the imaging, instead of wasting incoming light using spatially varying ND-filters, commonly used in previous works. The main technical contribution of this work is an extension of previous HDR reconstruction approaches for single shot HDR imaging based on local polynomial approximations [15, 10]. Using a sensor noise model, these works deploy a statistically informed filtering operation to reconstruct HDR pixel values. However, instead of using a fixed filter size, we introduce two novel algorithms for adaptive filter kernel selection. Unlike previous work using adaptive filter kernels [16], our algorithms are based on analysing the model fit and the expected statistical deviation of the estimate under the sensor noise model. Using an iterative procedure we can then adapt the filter kernel according to the image structure and the statistical image noise. Experimental results show that the proposed filter de-noises the noisy image carefully while preserving important image features such as edges and corners, outperforming previous methods. To demonstrate the robustness of our approach, we have used input images created from raw sensor data captured with a commercial off-the-shelf camera. To further analyze our algorithm, we have also implemented a camera simulator to evaluate different gain patterns and noise properties of the sensor.
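
    The statistically informed filtering that this and the cited works [15, 10] build on can be caricatured as a variance-weighted local polynomial fit, sketched below; the per-sample noise levels are synthetic stand-ins for a real sensor noise model.

    ```python
    # Estimate a pixel value by an inverse-variance-weighted plane fit
    # (order-1 local polynomial) over neighbouring samples.
    import numpy as np

    def weighted_local_fit(xs, ys, values, variances, x0, y0):
        """Weighted least-squares plane, evaluated at the centre (x0, y0)."""
        X = np.stack([np.ones_like(xs), xs - x0, ys - y0], axis=1)
        W = X * (1.0 / variances)[:, None]       # inverse-variance weights
        beta = np.linalg.solve(X.T @ W, W.T @ values)
        return beta[0]                           # fitted value at the centre

    rng = np.random.default_rng(6)
    xs, ys = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
    truth = 2.0 + 0.5 * xs - 0.3 * ys            # locally linear radiance
    variances = rng.uniform(0.01, 0.1, 50)       # per-sample noise levels
    values = truth + rng.normal(0.0, np.sqrt(variances))
    print(weighted_local_fit(xs, ys, values, variances, 0.0, 0.0))  # ~2.0
    ```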

  • 17.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    HDR reconstruction for alternating gain (ISO) sensor readout. 2014. In: Eurographics 2014 Short Papers, 2014. Conference paper (Refereed).
    Abstract [en]

    Modern image sensors are becoming more and more flexible in the way an image is captured. In this paper, we focus on sensors that allow the per-pixel gain to be varied over the sensor and develop a new technique for efficient and accurate reconstruction of high dynamic range (HDR) images based on such input data. Our method estimates the radiant power at each output pixel using a sampling operation which performs color interpolation, re-sampling, noise reduction and HDR-reconstruction in a single step. The reconstruction filter uses a sensor noise model to weight the input pixel samples according to their variances. Our algorithm works in only a small spatial neighbourhood around each pixel and lends itself to efficient implementation in hardware. To demonstrate the utility of our approach we show example HDR images reconstructed from raw sensor data captured using off-the-shelf consumer hardware which allows two different gain settings for different rows in the same image. To analyse the accuracy of the algorithm, we also use synthetic images from camera simulation software.

  • 18.
    Jones, Andrew
    et al.
    USC Institute for Creative Technologies, CA 90094, USA.
    Nagano, Koki
    USC Institute for Creative Technologies, CA 90094, USA.
    Busch, Jay
    USC Institute for Creative Technologies, CA 90094, USA.
    Yu, Xueming
    USC Institute for Creative Technologies, CA 90094, USA.
    Peng, Hsuan-Yueh
    USC Institute for Creative Technologies, CA 90094, USA.
    Barreto, Joseph
    USC Institute for Creative Technologies, CA 90094, USA.
    Alexander, Oleg
    USC Institute for Creative Technologies, CA 90094, USA.
    Bolas, Mark
    USC Institute for Creative Technologies, CA 90094, USA.
    Debevec, Paul
    USC Institute for Creative Technologies, CA 90094, USA.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Time-Offset Conversations on a Life-Sized Automultiscopic Projector Array. 2016. In: Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2016), IEEE, 2016, p. 927-935. Conference paper (Refereed).
    Abstract [en]

    We present a system for creating and displaying interactive life-sized 3D digital humans based on pre-recorded interviews. We use 30 cameras and an extensive list of questions to record a large set of video responses. Users access videos through a natural conversation interface that mimics face-to-face interaction. Recordings of answers, listening and idle behaviors are linked together to create a persistent visual image of the person throughout the interaction. The interview subjects are rendered using flowed light fields and shown life-size on a special rear-projection screen with an array of 216 video projectors. The display allows multiple users to see different 3D perspectives of the subject in proper relation to their viewpoints, without the need for stereo glasses. The display is effective for interactive conversations since it provides 3D cues such as eye gaze and spatial hand gestures.

  • 19.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Banterle, Francesco
    Visual Computing Lab, ISTI-CNR, Italy.
    Gardner, Andrew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Photorealistic rendering of mixed reality scenes. 2015. In: Computer Graphics Forum, ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 2, p. 643-665. Article in journal (Refereed).
    Abstract [en]

    Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques including methods of advanced image based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.
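
    One classic technique in this family, differential rendering, is compact enough to sketch: render the local scene model with and without the virtual object and add the difference (shadows, interreflections) to the photograph. The toy example below illustrates the compositing equation only; it is not a method lifted from the report.

    ```python
    # Differential rendering composite on a toy 3-pixel "image".
    import numpy as np

    def differential_composite(photo, with_obj, without_obj, obj_mask):
        delta = with_obj - without_obj      # change the object causes
        comp = photo + delta                # apply that change to the photo
        return np.where(obj_mask, with_obj, comp)

    photo = np.array([0.5, 0.5, 0.5])
    with_obj = np.array([0.9, 0.3, 0.5])    # pixel 0: object, pixel 1: shadow
    without_obj = np.array([0.6, 0.5, 0.5])
    mask = np.array([True, False, False])
    print(differential_composite(photo, with_obj, without_obj, mask))
    ```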

  • 20.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Dahlin, Johan
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kok, Manon
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Schön, Thomas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology. Uppsala Universitet.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time video based lighting using GPU raytracing. 2014. In: Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), IEEE Signal Processing Society, 2014. Conference paper (Refereed).
    Abstract [en]

    The recent introduction of HDR video cameras has enabled the development of image based lighting techniques for rendering virtual objects illuminated with temporally varying real world illumination. A key challenge in this context is that rendering realistic objects illuminated with video environment maps is computationally demanding. In this work, we present a GPU based rendering system built on the NVIDIA OptiX framework, enabling real time raytracing of scenes illuminated with video environment maps. For this purpose, we explore and compare several Monte Carlo sampling approaches, including bidirectional importance sampling, multiple importance sampling and sequential Monte Carlo samplers. While previous work has focused on synthetic data and overly simple environment map sequences, we have collected a set of real world dynamic environment map sequences using a state-of-the-art HDR video camera for evaluation and comparisons.
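
    As background to the compared strategies, the sketch below shows plain luminance-proportional importance sampling of an environment map via an inverse-CDF table. It is deliberately simplified: a full implementation would also weight by solid angle, and the bidirectional and sequential Monte Carlo samplers the paper studies are considerably more involved.

    ```python
    # Draw environment-map texels with probability proportional to luminance.
    import numpy as np

    env = np.random.default_rng(7).random((16, 32)) ** 4  # toy luminance map
    pdf = env / env.sum()
    cdf = np.cumsum(pdf.ravel())

    def sample_env(u):
        """Map uniform u in [0, 1) to a texel index and its probability."""
        idx = int(np.searchsorted(cdf, u))
        row, col = divmod(idx, env.shape[1])
        return (row, col), pdf[row, col]

    texel, p = sample_env(0.37)
    print(texel, p)   # bright texels are selected far more often
    ```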

  • 21.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bonnet, Gerhard
    SpheronVR AG.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unified HDR reconstruction from raw CFA data. 2013. In: Proceedings of IEEE International Conference on Computational Photography 2013 / [ed] David Boas, Sylvain Paris, Shmuel Peleg, Todd Zickler, IEEE, 2013, p. 1-9. Conference paper (Refereed).
    Abstract [en]

    HDR reconstruction from multiple exposures poses several challenges. Previous HDR reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion in several steps. We instead present a unifying approach, performing HDR assembly directly from raw sensor data in a single processing operation. Our algorithm includes a spatially adaptive HDR reconstruction based on fitting local polynomial approximations to observed sensor data, using a localized likelihood approach incorporating spatially varying sensor noise. We also present a realistic camera noise model adapted to HDR video. The method allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over state-of-the-art methods, both in terms of flexibility and reconstruction quality.

  • 22.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bonnet, Gerhard
    SpheronVR AG, Germany.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    A unified framework for multi-sensor HDR video reconstruction. 2014. In: Signal Processing: Image Communication, ISSN 0923-5965, Vol. 29, no 2, p. 203-215. Article in journal (Refereed).
    Abstract [en]

    One of the most successful approaches to modern high quality HDR-video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting of local polynomial approximations to observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.

  • 23.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time HDR video reconstruction for multi-sensor systems. 2012. In: ACM SIGGRAPH 2012 Posters, New York, NY, USA: ACM Press, 2012, p. 65. Conference paper (Refereed).
    Abstract [en]

    HDR video is an emerging field of technology, with a few camera systems currently in existence [Myszkowski et al. 2008]. Multi-sensor systems [Tocci et al. 2011] have recently proved to be particularly promising due to superior robustness against temporal artifacts, correct motion blur, and high light efficiency. Previous HDR reconstruction methods for multi-sensor systems have assumed pixel perfect alignment of the physical sensors. This is, however, very difficult to achieve in practice. It may even be the case that reflections in beam splitters make it impossible to match the arrangement of the Bayer filters between sensors. We therefore present a novel reconstruction method specifically designed to handle the case of non-negligible misalignments between the sensors. Furthermore, while previous reconstruction techniques have considered HDR assembly, debayering and denoising as separate problems, our method is capable of simultaneous HDR assembly, debayering and smoothing of the data (denoising). The method is also general in that it allows reconstruction to an arbitrary output resolution and mapping. The algorithm is implemented in CUDA, and shows video-rate performance for an experimental HDR video platform consisting of four 2336x1756 pixel high quality CCD sensors imaging the scene through a common optical system. ND-filters of different densities are placed in front of the sensors to capture a dynamic range of 24 f-stops.

  • 24.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Löw, Joakim
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ljung, Patric
    Siemens.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering. 2012. In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 3, p. 447-462. Article in journal (Refereed).
    Abstract [en]

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multi-resolution grid over the extent of the volume. Our method enables high frequency shadows in the spatial domain, but is limited to a low frequency approximation of visibility and illumination in the angular domain. In a first pass, Level Of Detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid on-line computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.

  • 25.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Schön, Thomas B.
    Division of Systems and Control, Department of Information Technology, Uppsala University.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Pseudo-Marginal Metropolis Light Transport. 2015. In: Proceedings of SA '15 SIGGRAPH Asia 2015 Technical Briefs, ACM Digital Library, 2015, p. 13:1-13:4. Conference paper (Other academic).
    Abstract [en]

    Accurate and efficient simulation of light transport in heterogeneous participating media, such as smoke, clouds and fire, plays a key role in the synthesis of visually interesting renderings for e.g. visual effects, computer games and product visualization. However, rendering of scenes with heterogeneous participating media using Metropolis light transport (MLT) algorithms has previously been limited to primary sample space methods or biased approximations of the transmittance in the scene. This paper presents a new sampling strategy for Markov chain Monte Carlo (MCMC) methods, e.g. MLT, based on pseudo-marginal MCMC. Specifically, we show that any positive and unbiased estimator of the target distribution can replace the exact quantity to simulate a Markov chain with a stationary distribution that has a marginal which is the exact target distribution of interest. This enables us to evaluate the transmittance function with recent unbiased estimators, which leads to significantly shorter rendering times. Compared to previous work relying on (biased) ray-marching for evaluating transmittance, our method enables simulation of longer Markov chains, a better exploration of the path space, and consequently less image noise, for a given computational budget. To demonstrate the usefulness of our pseudo-marginal approach, we compare it to representative methods for efficient rendering of anisotropic heterogeneous participating media and glossy transfer. We show that it performs significantly better in terms of image noise and rendering times compared to previous techniques. Our method is robust, and can easily be implemented in a modern renderer.
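
    The pseudo-marginal idea itself fits in a few lines outside rendering: run Metropolis-Hastings with an unbiased, noisy estimate of the target density in the acceptance ratio, re-using the stored estimate for the current state. The toy target below stands in for a path-space contribution involving estimated transmittance.

    ```python
    # Pseudo-marginal Metropolis-Hastings on a toy 1D target.
    import numpy as np

    rng = np.random.default_rng(8)

    def unbiased_estimate(x):
        """Noisy but unbiased estimate of the unnormalized density exp(-x^2/2)."""
        return np.exp(-0.5 * x * x) * rng.gamma(20.0, 1.0 / 20.0)  # mean-1 noise

    x, fx = 0.0, unbiased_estimate(0.0)
    samples = []
    for _ in range(20000):
        x_new = x + rng.normal(0.0, 0.5)         # symmetric random-walk proposal
        f_new = unbiased_estimate(x_new)
        if rng.random() < f_new / fx:            # ratio of *estimates*
            x, fx = x_new, f_new                 # keep the estimate with the state
        samples.append(x)

    print(np.mean(samples), np.std(samples))     # approx. 0 and 1
    ```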

  • 26.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Moeller, Torsten
    Simon Fraser University, Vancouver.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Estimation and Modeling of Actual Numerical Errors in Volume Rendering. 2010. In: Computer Graphics Forum, ISSN 0167-7055, Vol. 29, no 3, p. 893-902. Article in journal (Refereed).
    Abstract [en]

    In this paper we study the comprehensive effects on volume rendered images due to numerical errors caused by the use of finite precision for data representation and processing. To estimate actual error behavior we conduct a thorough study using a volume renderer implemented with arbitrary floating-point precision. Based on the experimental data we then model the impact of floating-point pipeline precision, sampling frequency and fixed-point input data quantization on the fidelity of rendered images. We introduce three models: an average model, which adapts neither to different data nor to varying transfer functions, as well as two adaptive models that take the intricacies of a new data set and transfer function into account by adapting themselves given a few rendered images. We also test and validate our models on new data that was not used during model building.

  • 27.
    Löw, Joakim
    et al.
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    ABC - BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces. 2013. In: Eurographics 24th Symposium on Rendering: Posters, 2013. Conference paper (Other academic).
    Abstract [en]

    Glossy surface reflectance is hard to model accurately using traditional parametric BRDF models. An alternative is provided by data-driven reflectance models; however, these models offer less user control and generally result in lower efficiency. In our work we propose two new lightweight parametric BRDF models for accurate modeling of glossy surface reflectance, one inspired by Rayleigh-Rice theory for optically smooth surfaces and one inspired by microfacet theory. We base our models on a thorough study of the scattering behaviour of measured reflectance data from the MERL database. The study focuses on two key aspects of BRDF models: parametrization and scatter distribution. We propose a new scattering distribution for glossy BRDFs inspired by the ABC model for surface statistics of optically smooth surfaces. Based on the survey we consider two parameterizations, one based on microfacet theory using the halfway vector and one inspired by the parametrization of the Rayleigh-Rice BRDF model considering the projected deviation vector. To enable efficient rendering we also show how the new models can be approximately sampled for importance sampling the scattering integral.

  • 28.
    Löw, Joakim
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces. 2012. In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 31, no 1. Article in journal (Refereed).
    Abstract [en]

    This article presents two new parametric models of the Bidirectional Reflectance Distribution Function (BRDF), one inspired by the Rayleigh-Rice theory for light scattering from optically smooth surfaces, and one inspired by micro-facet theory. The models represent scattering from a wide range of glossy surface types with high accuracy. In particular, they enable representation of types of surface scattering which previous parametric models have had trouble modeling accurately. In a study of the scattering behavior of measured reflectance data, we investigate what key properties are needed for a model to accurately represent scattering from glossy surfaces. We investigate different parametrizations and how well they match the behavior of measured BRDFs. We also examine the scattering curves which are represented in parametric models by different distribution functions. Based on the insights gained from the study, the new models are designed to provide accurate fittings to the measured data. Importance sampling schemes are developed for the new models, enabling direct use in existing production pipelines. In the resulting renderings we show that the visual quality achieved by the models matches that of the measured data.
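
    For intuition, one simplified reading of the ABC-inspired scatter distribution these models build on is a three-parameter falloff A / (1 + B*x)^C in the deviation from the specular direction; the sketch below is illustrative only, and the parameter values are arbitrary.

    ```python
    # Evaluate an ABC-style glossy lobe: sharp peak with heavy tails.
    import numpy as np

    def abc_lobe(cos_theta_h, A=1.0, B=300.0, C=1.5):
        x = 1.0 - cos_theta_h            # deviation from the mirror direction
        return A / (1.0 + B * x) ** C

    theta = np.linspace(0.0, np.pi / 4, 5)
    print(abc_lobe(np.cos(theta)).round(4))   # rapid but heavy-tailed falloff
    ```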

  • 29.
    Löw, Joakim
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    HDR Light Probe Sequence Resampling for Realtime Incident Light Field Rendering. 2009. In: Proceedings - SCCG 2009: 25th Spring Conference on Computer Graphics / [ed] Helwig Hauser, New York, USA: ACM, 2009, p. 43-50. Conference paper (Refereed).
    Abstract [en]

    This paper presents a method for resampling a sequence of high dynamic range light probe images into a representation of Incident Light Field (ILF) illumination which enables realtime rendering. The light probe sequences are captured at varying positions in a real world environment using a high dynamic range video camera pointed at a mirror sphere. The sequences are then resampled to a set of radiance maps in a regular three dimensional grid before projection onto spherical harmonics. The capture locations and the number of samples in the original data make it inconvenient for direct use in rendering, and resampling is necessary to produce an efficient data structure. Each light probe represents a large set of incident radiance samples from different directions around the capture location. Under the assumption that the spatial volume in which the capture was performed has no internal occlusion, the radiance samples are projected through the volume along their corresponding direction in order to build a new set of radiance maps at selected locations, in this case a three dimensional grid. The resampled data is projected onto a spherical harmonic basis to allow for realtime lighting of synthetic objects inside the incident light field.

  • 30.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Emadi, Mohammad
    Qualcomm Technologies Inc., San Jose, CA, USA.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Afshari, Ehsan
    Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA.
    On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence2017In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 24, no 11, p. 1646-1650Article in journal (Refereed)
    Abstract [en]

    In this paper we present a new coherence-based performance guarantee for the Orthogonal Matching Pursuit (OMP) algorithm. A lower bound for the probability of correctly identifying the support of a sparse signal with additive white Gaussian noise is derived. Compared to previous work, the new bound takes into account the signal parameters such as dynamic range, noise variance, and sparsity. Numerical simulations show significant improvements over previous work and a closer match to empirically obtained results of the OMP algorithm.
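
    For readers unfamiliar with the algorithm being analyzed, a minimal Python sketch of OMP follows, together with the mutual coherence in which the letter's bound is expressed; the unit-norm dictionary columns and known sparsity k are simplifying assumptions.

        import numpy as np

        def omp(A, y, k):
            # Greedy OMP: pick the column most correlated with the residual,
            # then re-fit all selected coefficients by least squares.
            residual = y.astype(float).copy()
            support, coef = [], None
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))
                support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x, sorted(support)

        def mutual_coherence(A):
            # mu(A) = max_{i != j} |<a_i, a_j>| for unit-norm columns.
            G = np.abs(A.T @ A)
            np.fill_diagonal(G, 0.0)
            return float(G.max())

    Support recovery succeeds when sorted(support) equals the true support of the planted sparse signal, which is the event whose probability the letter lower-bounds.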

  • 31.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Compressive Image Reconstruction in Reduced Union of Subspaces2015In: Computer Graphics Forum, ISSN 1467-8659, Vol. 34, no 2, p. 33-44Article in journal (Refereed)
    Abstract [en]

    We present a new compressed sensing framework for reconstruction of incomplete and possibly noisy images and their higher dimensional variants, e.g. animations and light-fields. The algorithm relies on a learning-based basis representation. We train an ensemble of intrinsically two-dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that one can convert the problem of 2D sparse signal recovery to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union of subspaces model, while allowing sparsity in each subspace. Such a model leads to a much more sparse representation than widely used methods such as K-SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light-fields). We show that our method outperforms state-of-the-art algorithms in computer graphics and image processing literature.
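
    The 2D-to-1D conversion mentioned above rests on the standard Kronecker/vec identity vec(U S V^T) = (V kron U) vec(S); a minimal numpy check follows (the sizes and random dictionaries are arbitrary illustrations, not the trained ensemble):

        import numpy as np

        rng = np.random.default_rng(0)
        m, n, p, q = 8, 8, 16, 16            # patch size and dictionary sizes
        U = rng.standard_normal((m, p))      # 2D dictionary acting on patch rows
        V = rng.standard_normal((n, q))      # 2D dictionary acting on patch columns
        S = np.zeros((p, q))                 # sparse 2D coefficient matrix
        S[2, 5], S[7, 1] = 1.3, -0.4

        Y = U @ S @ V.T                      # 2D synthesis model for one patch

        # Column-stacked vectorization turns the 2D model into a 1D one, so any
        # standard 1D sparse solver can be used with dictionary kron(V, U).
        D = np.kron(V, U)
        assert np.allclose(D @ S.flatten(order="F"), Y.flatten(order="F"))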

  • 32.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Geometry Independent Surface Light Fields for Real Time Rendering of Precomputed Global Illumination2011In: Proceedings of SGRAD 2011 / [ed] Thomas Larsson, Lars Kjelldahl, Kai-Mikael Jää-Aro, Royal Institute of Technology, Stockholm, 2011, p. 27-34Conference paper (Refereed)
    Abstract [en]

    We present a framework for generating, compressing and rendering of Surface Light Field (SLF) data. Our method is based on radiance data generated using physically based rendering methods. Thus the SLF data is generated directly instead of re-sampling digital photographs. Our SLF representation decouples spatial resolution from geometric complexity. We achieve this by uniform sampling of the spatial dimension of the SLF function. For compression, we use Clustered Principal Component Analysis (CPCA). The SLF matrix is first clustered to low frequency groups of points across all directions. Then we apply PCA to each cluster. The clustering ensures that the within-cluster frequency of data is low, allowing for projection using a few principal components. Finally we reconstruct the CPCA encoded data using an efficient rendering algorithm. Our reconstruction technique ensures seamless reconstruction of discrete SLF data. We applied our rendering method for fast, high quality off-line rendering and real-time illumination of static scenes. The proposed framework is not limited to complexity of materials or light sources, enabling us to render high quality images describing the full global illumination in a scene.
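
    A minimal sketch of the CPCA step in Python (scikit-learn is used for brevity; the cluster and component counts are placeholder assumptions, and each cluster is assumed to contain at least n_components points):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        def cpca_compress(X, n_clusters=8, n_components=4):
            # X: one radiance vector per surface point (rows). Cluster the rows,
            # then fit a separate low-rank PCA basis inside each cluster.
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
            return labels, {c: PCA(n_components=n_components).fit(X[labels == c])
                            for c in range(n_clusters)}

        def cpca_reconstruct(X, labels, models):
            Y = np.empty_like(X)
            for c, pca in models.items():
                idx = labels == c
                Y[idx] = pca.inverse_transform(pca.transform(X[idx]))
            return Y

    Because each cluster is chosen to have low internal frequency content, a handful of principal components per cluster suffices, which is what makes the projection compact.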

  • 33.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Learning based compression for real-time rendering of surface light fields2013In: Siggraph 2013 Posters, ACM Press, 2013Conference paper (Other academic)
    Abstract [en]

    Photo-realistic image synthesis in real-time is a key challenge in computer graphics. A number of techniques where the light transport in a scene is pre-computed, compressed and used for real-time image synthesis have been proposed. In this work, we extend this idea and present a technique where the radiance distribution in a scene, including arbitrarily complex materials and light sources, is pre-computed using photo-realistic rendering techniques and stored as surface light fields (SLF) at each surface. An SLF describes the full appearance of each surface in a scene as a 4D function over the spatial and angular domains. An SLF is a complex data set with a large memory footprint often in the order of several GB per object in the scene. The key contribution in this work is a novel approach for compression of surface light fields that enables real-time rendering of complex scenes. Our learning-based compression technique is based on exemplar orthogonal bases (EOB), and trains a compact dictionary of full-rank orthogonal basis pairs with sparse coefficients. Our results outperform the widely used CPCA method in terms of storage cost, visual quality and rendering speed. Compared to PRT techniques for real-time global illumination, our approach is limited to static scenes but can represent high frequency materials and any type of light source in a unified framework.

  • 34.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Learning Based Compression of Surface Light Fields for Real-time Rendering of Global Illumination Scenes2013In: Proceedings of ACM SIGGRAPH ASIA 2013, ACM Press, 2013Conference paper (Refereed)
    Abstract [en]

    We present an algorithm for compression and real-time rendering of surface light fields (SLF) encoding the visual appearance of objects in static scenes with high frequency variations. We apply a non-local clustering in order to exploit spatial coherence in the SLF data. To efficiently encode the data in each cluster, we introduce a learning based approach, Clustered Exemplar Orthogonal Bases (CEOB), which trains a compact dictionary of orthogonal basis pairs, enabling efficient sparse projection of the SLF data. In addition, we discuss the application of the traditional Clustered Principal Component Analysis (CPCA) on SLF data, and show that in most cases, CEOB outperforms CPCA, K-SVD and spherical harmonics in terms of memory footprint, rendering performance and reconstruction quality. Our method enables efficient reconstruction and real-time rendering of scenes with complex materials and light sources, not possible to render in real-time using previous methods.

  • 35.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    On Nonlocal Image Completion Using an Ensemble of Dictionaries2016In: 2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), IEEE, 2016, p. 2519-2523Conference paper (Refereed)
    Abstract [en]

    In this paper we consider the problem of nonlocal image completion from random measurements and using an ensemble of dictionaries. Utilizing recent advances in the field of compressed sensing, we derive conditions under which one can uniquely recover an incomplete image with overwhelming probability. The theoretical results are complemented by numerical simulations using various ensembles of analytical and training-based dictionaries.

  • 36.
    Tongbuasirilai, Tanaboon
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kurt, Murat
    Ege Univ, Turkey.
    Efficient BRDF Sampling Using Projected Deviation Vector Parameterization2017In: 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017), IEEE, 2017, p. 153-158Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel approach for efficient sampling of isotropic Bidirectional Reflectance Distribution Functions (BRDFs). Our approach builds upon a new parameterization, the Projected Deviation Vector parameterization, in which isotropic BRDFs can be described by two 1D functions. We show that BRDFs can be efficiently and accurately measured in this space using simple mechanical measurement setups. To demonstrate the utility of our approach, we perform a thorough numerical evaluation and show that the BRDFs reconstructed from measurements along the two 1D bases produce rendering results that are visually comparable to the reference BRDF measurements which are densely sampled over the 4D domain described by the standard hemispherical parameterization.

  • 37.
    Tsirikoglou, Apostolia
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ekeberg, Simon
    Swiss International AB, Sweden.
    Vikström, Johan
    Swiss International AB, Sweden.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    S(wi)SS: A flexible and robust sub-surface scattering shader2014In: Proceedings of SIGRAD 2014 / [ed] Morten Fjeld, 2014Conference paper (Refereed)
    Abstract [en]

    S(wi)SS is a new, flexible, artist-friendly multi-layered sub-surface scattering shader that accurately simulates sub-surface scattering for a large range of translucent materials. It is a physically motivated multi-layered approach where the sub-surface scattering effect is generated using one to three layers. It enables seamless mixing of the classical dipole, the better dipole and the quantized diffusion reflectance models in the sub-surface scattering layers, and additionally provides the scattering from front and back illumination, as well as all the BSDF components, in separate render channels, enabling the artist to either use them physically accurately or tweak them independently during compositing to produce the desired result. To demonstrate the usefulness of our approach, we show a set of high quality rendering results from different user scenarios.
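
    For reference, the classical dipole among the mixed models evaluates Jensen et al.'s diffusion profile (quoted as standard background, not this shader's exact implementation):

        R_d(r) = \frac{\alpha'}{4\pi} \left[ z_r \left(1 + \sigma_{tr} d_r\right) \frac{e^{-\sigma_{tr} d_r}}{d_r^{3}} + z_v \left(1 + \sigma_{tr} d_v\right) \frac{e^{-\sigma_{tr} d_v}}{d_v^{3}} \right]

    where d_r = \sqrt{r^2 + z_r^2} and d_v = \sqrt{r^2 + z_v^2} are the distances to the real and virtual dipole sources; the better dipole and quantized diffusion models replace this profile with more accurate variants that the shader can blend per layer.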

  • 38.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Incident Light Fields2009Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Image based lighting, (IBL), is a computer graphics technique for creating photorealistic renderings of synthetic objects such that they can be placed into real world scenes. IBL has been widely recognized and is today used in commercial production pipelines. However, the current techniques only use illumination captured at a single point in space. This means that traditional IBL cannot capture or recreate effects such as cast shadows, shafts of light or other important spatial variations in the illumination. Such lighting effects are, in many cases, artistically created or are there to emphasize certain features, and are therefore a very important part of the visual appearance of a scene.

    This thesis and the included papers present methods that extend IBL to allow for capture and rendering with spatially varying illumination. This is accomplished by measuring the light field incident onto a region in space, called an Incident Light Field, (ILF), and using it as illumination in renderings. This requires the illumination to be captured at a large number of points in space instead of just one. The complexity of the capture methods and rendering algorithms is then significantly increased.

    The technique for measuring spatially varying illumination in real scenes is based on capture of High Dynamic Range, (HDR), image sequences. For efficient measurement, the image capture is performed at video frame rates. The captured illumination information in the image sequences is processed such that it can be used in computer graphics rendering. By extracting high intensity regions from the captured data and representing them separately, this thesis also describes a technique for increasing rendering efficiency and methods for editing the captured illumination, for example artificially moving or turning on and off individual light sources.

    List of papers
    1. Capturing and Rendering with Incident Light Fields
    2003 (English)In: EGSR’03, The 14th Eurographics Symposium on Rendering 2003, Leuven, Belgium, 2003Conference paper, Published paper (Refereed)
    Abstract [en]

    This paper presents a process for capturing spatially and directionally varying illumination from a real-world scene and using this lighting to illuminate computer-generated objects. We use two devices for capturing such illumination. In the first we photograph an array of mirrored spheres in high dynamic range to capture the spatially varying illumination. In the second, we obtain higher resolution data by capturing images with a high dynamic range omnidirectional camera as it traverses across a plane. For both methods we apply the light field technique to extrapolate the incident illumination to a volume. We render computer-generated objects as illuminated by this captured illumination using a custom shader within an existing global illumination rendering system. To demonstrate our technique we capture several spatially-varying lighting environments with spotlights, shadows, and dappled lighting and use them to illuminate synthetic scenes. We also show comparisons to real objects under the same illumination.

    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-16281 (URN)
    Available from: 2009-01-13 Created: 2009-01-13 Last updated: 2015-09-22. Bibliographically approved
    2. A Real Time Light Probe
    2004 (English)In: The 25th Eurographics Annual Conference 2004 Short papers and Interactive Applications, Grenoble, France, 2004Conference paper, Published paper (Refereed)
    Abstract [en]

    We present a novel system capable of capturing high dynamic range (HDR) Light Probes at video speed. Each Light Probe frame is built from an individual full set of exposures, all of which are captured within the frame time. The exposures are processed and assembled into a mantissa-exponent representation image within the camera unit before output, and then streamed to a standard PC. As an example, the system is capable of capturing Light Probe Images with a resolution of 512x512 pixels using a set of 10 exposures covering 15 f-stops at a frame rate of up to 25 final HDR frames per second. The system is built around commercial special-purpose camera hardware with on-chip programmable image processing logic and tightly integrated frame buffer memory, and the algorithm is implemented as custom downloadable microcode software.
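
    The mantissa-exponent idea is in the spirit of Ward's RGBE shared-exponent format; a Python sketch of that encoding follows as an illustration (the camera's actual in-camera representation is not detailed in the abstract):

        import math

        def encode_rgbe(rgb):
            # Pack linear RGB into three 8-bit mantissas plus a shared exponent
            # biased by 128 (Ward's RGBE convention).
            m = max(rgb)
            if m < 1e-32:
                return (0, 0, 0, 0)
            mant, e = math.frexp(m)          # m = mant * 2**e, mant in [0.5, 1)
            scale = mant * 256.0 / m
            r, g, b = (int(c * scale) for c in rgb)
            return (r, g, b, e + 128)

        def decode_rgbe(r, g, b, exp):
            if exp == 0:
                return (0.0, 0.0, 0.0)
            f = math.ldexp(1.0, exp - 128 - 8)   # undo bias and mantissa scale
            return (r * f, g * f, b * f)

    A shared exponent keeps the per-pixel payload at 32 bits while spanning the many f-stops a light probe requires.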

    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-16282 (URN)
    Conference
    Eurographics Annual Conference 2004
    Available from: 2009-01-13 Created: 2009-01-13 Last updated: 2015-09-22. Bibliographically approved
    3. Performance Relighting and Reflectance Transformation with Time-Multiplexed Illumination
    2005 (English)In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 24, no 3Article in journal (Refereed) Published
    Abstract [en]

    We present a technique for capturing an actor’s live-action performance in such a way that the lighting and reflectance of the actor can be designed and modified in postproduction. Our approach is to illuminate the subject with a sequence of time-multiplexed basis lighting conditions, and to record these conditions with a high-speed video camera so that many conditions are recorded in the span of the desired output frame interval. We investigate several lighting bases for representing the sphere of incident illumination using a set of discrete LED light sources, and we estimate and compensate for subject motion using optical flow and image warping based on a set of tracking frames inserted into the lighting basis. To composite the illuminated performance into a new background, we include a time-multiplexed matte within the basis. We also show that the acquired data enables time-varying surface normals, albedo, and ambient occlusion to be estimated, which can be used to transform the actor’s reflectance to produce both subtle and stylistic effects.
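
    Since light transport is linear in the illumination, relighting the recorded performance reduces to a weighted sum of the basis-lit frames; a minimal numpy sketch (the array shapes and the weight source are illustrative assumptions):

        import numpy as np

        def relight(basis_frames, weights):
            # basis_frames: (n_lights, H, W, 3) images, one per basis condition;
            # weights: (n_lights,) intensities, e.g. a target environment map
            # sampled at the LED directions. Returns the relit (H, W, 3) frame.
            return np.tensordot(weights, basis_frames, axes=1)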

    Keywords
    Relighting, compositing, environmental illumination, image-based rendering, reflectance models
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-16283 (URN), 10.1145/1073204.1073258 (DOI)
    Available from: 2009-01-13 Created: 2009-01-13 Last updated: 2017-12-14. Bibliographically approved
    4. Densely Sampled Light Probe Sequences for Spatially Variant Image Based Lighting
    2006 (English)In: The 4th International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, 2006 Kuala Lumpur, Malaysia, 2006, p. 341-347Conference paper, Published paper (Refereed)
    Abstract [en]

    We present a novel technique for capturing spatially and temporally resolved light probe sequences, and using them for rendering. For this purpose we have designed and built a Real Time Light Probe; a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The Real Time Light Probe uses a digital imaging system which we have programmed to capture high quality, photometrically accurate color images with a dynamic range of 10,000,000:1 at 25 frames per second.

    By tracking the position and orientation of the light probe, it is possible to transform each light probe into a common frame of reference in world coordinates, and map each point in space along the path of motion to a particular frame in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real world lighting, using both traditional image based lighting methods with temporally varying light probe illumination and an extension to handle spatially varying lighting conditions across large objects.

    Keywords
    HDR, Video, Image Based Lighting
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-16284 (URN), 10.1145/1174429.1174487 (DOI), 1-59593-564-9 (ISBN)
    Available from: 2009-01-13 Created: 2009-01-13 Last updated: 2015-09-22. Bibliographically approved
    5. Spatially Varying Image Based Lighting by Light Probe Sequences, Capture, Processing and Rendering
    2007 (English)In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 23, no 7, p. 453-465Article in journal (Refereed) Published
    Abstract [en]

    We present a novel technique for capturing spatially or temporally resolved light probe sequences, and using them for image based lighting. For this purpose we have designed and built a real-time light probe, a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The real-time light probe uses a digital imaging system which we have programmed to capture high quality, photometrically accurate color images of 512×512 pixels with a dynamic range of 10000000:1 at 25 frames per second.

    By tracking the position and orientation of the light probe, it is possible to transform each light probe into a common frame of reference in world coordinates, and map each point and direction in space along the path of motion to a particular frame and pixel in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real world lighting, first by using traditional image based lighting methods and temporally varying light probe illumination, and second an extension to handle spatially varying lighting conditions across large objects and object motion along an extended path.

    Place, publisher, year, edition, pages
    Springer Link, 2007
    Keywords
    High dynamic range imaging, Image based lighting
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-16285 (URN), 10.1007/s00371-007-0127-6 (DOI)
    Available from: 2009-01-13 Created: 2009-01-13 Last updated: 2017-12-14. Bibliographically approved
    6. Free Form Incident Light Fields
    2008 (English)In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 27, no 4, p. 1293-1301Article in journal (Refereed) Published
    Abstract [en]

    This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range, HDR, video camera system with position tracking. Light samples are rearranged into 4-D incident light fields (ILF) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.

    Place, publisher, year, edition, pages
    Wiley InterScience, 2008
    Keywords
    Three-Dimensional Graphics and Realism, Digitization and Image Capture
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-16286 (URN), 10.1111/j.1467-8659.2008.01268.x (DOI)
    Available from: 2009-01-13 Created: 2009-01-13 Last updated: 2017-12-14. Bibliographically approved
  • 39.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Banterle, Francesco
    Visual Computing Laboratory at ISTI-CNR, Italy.
    Mantiuk, Rafal
    Computer Laboratory, University of Cambridge, UK.
    Eilertsen, Gabriel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    The HDR-video pipeline: From capture and image reconstruction to compression and tone mapping2016Conference paper (Other academic)
    Abstract [en]

    High dynamic range (HDR) video technology has gone through remarkable developments over the last few years; HDR-video cameras are being commercialized, new algorithms for color grading and tone mapping specifically designed for HDR-video have recently been proposed, and the first open source compression algorithms for HDR-video are becoming available. HDR-video represents a paradigm shift in imaging and computer graphics, which has generated, and will continue to generate, a range of both new research challenges and applications. This intermediate-level tutorial will give an in-depth overview of the full HDR-video pipeline and present several examples of state-of-the-art algorithms and technology in HDR-video capture, tone mapping, compression and specific applications in computer graphics.

  • 40.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology.
    An optical system for single image environment maps2007In: SIGGRAPH '07 ACM SIGGRAPH 2007 posters, ACM Press, 2007Conference paper (Refereed)
    Abstract [en]

    We present an optical setup for capturing a full 360° environment map in a single image snapshot. The setup, which can be used with any camera device, consists of a curved mirror swept around a negative lens, and is suitable for capturing environment maps and light probes. The setup achieves good sampling density and uniformity for all directions in the environment.

  • 41.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bonnet, Gerhard
    SpheronVR, Germany.
    Kaiser, Gunnar
    SpheronVR, Germany.
    Next Generation Image Based Lighting using HDR Video2011In: Proceeding SIGGRAPH '11 ACM SIGGRAPH 2011 Talks, ACM Special Interest Group on Computer Science Education, 2011, p. article no 60-Conference paper (Refereed)
    Abstract [en]

    We present an overview of our recently developed systems pipeline for capture, reconstruction, modeling and rendering of real world scenes based on state-of-the-art high dynamic range video (HDRV). The reconstructed scene representation allows for photo-realistic Image Based Lighting (IBL) in complex environments with strong spatial variations in the illumination. The pipeline comprises the following essential steps:

    1.) Capture - The scene capture is based on a 4 MPixel global shutter HDRV camera with a dynamic range of more than 24 f-stops at 30 fps. The HDR output stream is stored as individual uncompressed frames for maximum flexibility. A scene is usually captured using a combination of panoramic light probe sequences [1], and sequences with a smaller field of view to maximize the resolution at regions of special interest in the scene. The panoramic sequences ensure full angular coverage at each position and guarantee that the information required for IBL is captured. The position and orientation of the camera are tracked during capture.

    2.) Scene recovery - Taking one or more HDRV sequences as input, a geometric proxy model of the scene is built using a semi-automatic approach. First, traditional computer vision algorithms such as structure from motion [2] and Manhattan world stereo [3] are used. If necessary, the recovered model is then modified using an interaction scheme based on visualizations of a volumetric representation of the scene radiance computed from the input HDRV sequence. The HDR nature of this volume also enables robust extraction of direct light sources and other high intensity regions in the scene.

    3.) Radiance processing - When the scene proxy geometry has been recovered, the radiance data captured in the HDRV sequences are re-projected onto the surfaces and the recovered light sources. Since most surface points have been imaged from a large number of directions, it is possible to reconstruct view dependent texture maps at the proxy geometries. These 4D data sets describe a combination of detailed geometry that has not been recovered and the radiance reflected from the underlying real surfaces. The view dependent textures are then processed and compactly stored in an adaptive data structure.

    4.) Rendering - Once the geometric and radiometric scene information has been recovered, it is possible to place virtual objects into the real scene and create photo-realistic renderings as illustrated above. The extracted light sources enable efficient sampling and rendering times that are fully comparable to that of traditional virtual computer graphics light sources. No previously described method is capable of capturing and reproducing the angular and spatial variation in the scene illumination in comparable detail.

    We believe that the rapid development of high quality HDRV systems will soon have a large impact on both computer vision and graphics. Following this trend, we are developing theory and algorithms for efficient processing of HDRV sequences and for using the abundance of radiance data that is going to be available.
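
    Step 3 of the pipeline above hinges on re-projecting captured radiance onto the recovered proxy geometry; a minimal sketch under a standard pinhole model follows (the K, R, t interface and nearest-pixel lookup are illustrative assumptions, and occlusion handling is omitted):

        import numpy as np

        def sample_radiance(frame_hdr, K, R, t, X):
            # Re-project a proxy-surface point X (world coordinates) into one
            # tracked HDR frame with intrinsics K and pose (R, t), and fetch
            # the radiance that frame observed for it.
            Xc = R @ X + t                    # world -> camera coordinates
            if Xc[2] <= 0.0:                  # point behind the camera
                return None
            u, v, w = K @ Xc
            col, row = int(round(u / w)), int(round(v / w))
            H, W = frame_hdr.shape[:2]
            if 0 <= row < H and 0 <= col < W:
                return frame_hdr[row, col]
            return None

    Accumulating such samples over many frames and directions is what yields the view dependent texture maps described in step 3.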

  • 42.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ollila, Mark
    Integrated Vision Products AB, Sweden.
    Johannesson, Mattias
    Integrated Vision Products AB, Sweden.
    A Real Time Light Probe2004In: The 25th Eurographics Annual Conference 2004 Short papers and Interactive Applications, Grenoble, France, 2004Conference paper (Refereed)
    Abstract [en]

    We present a novel system capable of capturing high dynamic range (HDR) Light Probes at video speed. Each Light Probe frame is built from an individual full set of exposures, all of which are captured within the frame time. The exposures are processed and assembled into a mantissa-exponent representation image within the camera unit before output, and then streamed to a standard PC. As an example, the system is capable of capturing Light Probe Images with a resolution of 512x512 pixels using a set of 10 exposures covering 15 f-stops at a frame rate of up to 25 final HDR frames per second. The system is built around commercial special-purpose camera hardware with on-chip programmable image processing logic and tightly integrated frame buffer memory, and the algorithm is implemented as custom downloadable microcode software.

  • 43.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Free Form Incident Light Fields2008In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 27, no 4, p. 1293-1301Article in journal (Refereed)
    Abstract [en]

    This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range, HDR, video camera system with position tracking. Light samples are rearranged into 4-D incident light fields (ILF) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.

  • 44.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Densely Sampled Light Probe Sequences for Spatially Variant Image Based Lighting2006In: The 4th International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, 2006 Kuala Lumpur, Malaysia, 2006, p. 341-347Conference paper (Refereed)
    Abstract [en]

    We present a novel technique for capturing spatially and temporally resolved light probe sequences, and using them for rendering. For this purpose we have designed and built a Real Time Light Probe; a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The Real Time Light Probe uses a digital imaging system which we have programmed to capture high quality, photometrically accurate color images with a dynamic range of 10,000,000:1 at 25 frames per second.

    By tracking the position and orientation of the light probe, it is possible to transform each light probe into a common frame of reference in world coordinates, and map each point in space along the path of motion to a particular frame in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real world lighting, using both traditional image based lighting methods with temporally varying light probe illumination and an extension to handle spatially varying lighting conditions across large objects.

  • 45.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    High Dynamic Range Video for Photometric Measurement of Illumination2007In: Sensors, Cameras, and Systems for Scientific/Industrial Applications VIII / [ed] Morley M. Blouke, Bellingham, Washington/Springfield, Virginia, USA: SPIE—The International Society for Optical Engineering & IS&T—The Society for Imaging Science and Technology, 2007, p. 65010E-1-65010E-10Conference paper (Refereed)
    Abstract [en]

    We describe the design and implementation of a high dynamic range (HDR) imaging system capable of capturing RGB color images with a dynamic range of 10,000,000 : 1 at 25 frames per second. We use a highly programmable camera unit with high throughput A/D conversion, data processing and data output. HDR acquisition is performed by multiple exposures in a continuous rolling shutter progression over the sensor. All the different exposures for one particular row of pixels are acquired head to tail within the frame time, which means that the time disparity between exposures is minimal, the entire frame time can be used for light integration and the longest exposure is almost the entire frame time. The system is highly configurable, and trade-offs are possible between dynamic range, precision, number of exposures, image resolution and frame rate.
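
    Downstream of such a capture, the per-pixel HDR estimate is typically formed by weighted multi-exposure fusion; a minimal Python sketch (the linear sensor response, [0, 1] normalization and hat weighting are simplifying assumptions; the paper's system assembles the result in-camera):

        import numpy as np

        def fuse_exposures(images, times):
            # images: list of (H, W) or (H, W, 3) arrays normalized to [0, 1];
            # times: matching exposure times. Each sample is divided by its
            # exposure time and weighted by a hat function favoring mid-range
            # values, so under- and over-exposed samples contribute little.
            num = np.zeros_like(images[0], dtype=np.float64)
            den = np.zeros_like(num)
            for img, t in zip(images, times):
                w = 1.0 - np.abs(2.0 * img - 1.0)
                num += w * img / t
                den += w
            return num / np.maximum(den, 1e-8)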

  • 46.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Spatially Varying Image Based Lighting by Light Probe Sequences, Capture, Processing and Rendering2007In: The Visual Computer, ISSN 0178-2789, E-ISSN 1432-2315, Vol. 23, no 7, p. 453-465Article in journal (Refereed)
    Abstract [en]

    We present a novel technique for capturing spatially or temporally resolved light probe sequences, and using them for image based lighting. For this purpose we have designed and built a real-time light probe, a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The real-time light probe uses a digital imaging system which we have programmed to capture high quality, photometrically accurate color images of 512×512 pixels with a dynamic range of 10000000:1 at 25 frames per second.

    By tracking the position and orientation of the light probe, it is possible to transform each light probe into a common frame of reference in world coordinates, and map each point and direction in space along the path of motion to a particular frame and pixel in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real world lighting, first by using traditional image based lighting methods and temporally varying light probe illumination, and second an extension to handle spatially varying lighting conditions across large objects and object motion along an extended path.

  • 47.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Hajisharif, Saghi
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unified reconstruction of RAW HDR video data2016In: High dynamic range video: from acquisition to display and applications / [ed] Frédéric Dufaux, Patrick Le Callet, Rafal K. Mantiuk, Marta Mrak, London, United Kingdom: Academic Press, 2016, 1st, p. 63-82Chapter in book (Other academic)
    Abstract [en]

    Traditional HDR capture has mostly relied on merging images captured with different exposure times. While this works well for static scenes, dynamic scenes pose difficult challenges, as registration of differently exposed images often leads to ghosting and other artifacts. This chapter reviews methods which capture HDR-video frames within a single exposure time, using either multiple synchronized sensors, or by multiplexing the sensor response spatially across the sensor. Most previous HDR reconstruction methods perform demosaicing, noise reduction, resampling (registration), and HDR-fusion in separate steps. This chapter presents a framework for unified HDR-reconstruction, including all steps in the traditional imaging pipeline in a single adaptive filtering operation, and describes an image formation model and a sensor noise model applicable to both single- and multi-sensor systems. The benefits of using raw data directly are demonstrated with examples using input data from multiple synchronized sensors, and single images with varying per-pixel gain.
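
    The flavor of a unified, noise-aware fusion can be conveyed in a few lines; the Poisson-plus-readout noise model below is a simplifying assumption for illustration, not the chapter's exact sensor model, which also folds demosaicing and resampling into the same filtering operation.

        import numpy as np

        def fuse_raw(samples, gains, times, read_var):
            # samples: (n, H, W) raw frames; gains, times: (n,) per-frame gain
            # and exposure. Each frame gives a radiance estimate x = s / (g t);
            # weighting by inverse variance favors the most reliable samples.
            g = gains[:, None, None]
            t = times[:, None, None]
            x = samples / (g * t)
            var = (g * samples + read_var) / (g * t) ** 2   # shot + readout
            w = 1.0 / np.maximum(var, 1e-12)
            return (w * x).sum(axis=0) / w.sum(axis=0)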

  • 48.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Löw, Joakim
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Spatially varying image based lighting using HDR-video2013In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 37, no 7, p. 923-934Article in journal (Refereed)
    Abstract [en]

    Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing and rendering with real world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storage of light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image based lighting. We show reconstructions of example scenes and resulting production quality renderings of virtual furniture with spatially varying real world illumination including occlusions.

  • 49.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Temporally and Spatially Varying Image Based Lighting using HDR-video2013In: Proceedings of the 21st European Signal Processing Conference (EUSIPCO), 2013: Special Session on HDR-video, IEEE , 2013, p. 1-5Conference paper (Refereed)
    Abstract [en]

    In this paper we present novel algorithms and data structures for capturing, processing and rendering with real world lighting conditions based on high dynamic range video sequences. Based on the captured HDR video data we show how traditional image based lighting can be extended to include illumination variations in both the temporal as well as the spatial domain. This enables highly realistic renderings where traditional IBL techniques using a single light probe fail to capture important details in the real world lighting environment. To demonstrate the usefulness of our approach, we show examples of both off-line and real-time rendering applications.

  • 50.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Image Based Lighting using HDR-video2013In: Eurographics 24th Symposium on Rendering: Posters, 2013Conference paper (Other academic)
    Abstract [en]

    It has been widely recognized that lighting plays a key role in the realism and visual interest of computer graphics renderings. This has led to research and development of image based lighting (IBL) techniques where the illumination conditions in real world scenes are captured as high dynamic range (HDR) image panoramas and used as lighting information during rendering. Traditional IBL, where the lighting is captured at a single position in the scene, has now become a widely used tool in most production pipelines. In this poster, we give an overview of a system pipeline where we use HDR-video cameras to extend traditional IBL techniques to capture real world lighting that may include variations in the spatial or temporal domains. We also describe how the capture systems and algorithms for processing and rendering have been incorporated into a robust systems pipeline for production of highly realistic renderings. High dynamic range video based scene capture thus enables highly realistic renderings where traditional image based lighting, using a single light probe, fails to capture important details.
