DiVA (liu.se) publication search result: 1 - 28 of 28
  • 1.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time image based lighting with streaming HDR-lightprobe sequences2012In: Proceedings of SIGRAD 2012 / [ed] Andreas Kerren, Stefan Seipel, Linköping, Sweden, 2012Conference paper (Other academic)
    Abstract [en]

    We present a framework for shading of virtual objects using high dynamic range (HDR) light probe sequences in real-time. Such images (light probes) are captured using a high resolution HDR camera. In each frame of the HDR video, an optimized CUDA kernel is used to project incident lighting into spherical harmonics in real time. Transfer coefficients are calculated in an offline process. Using precomputed radiance transfer the radiance calculation reduces to a low order dot product between lighting and transfer coefficients. We exploit temporal coherence between frames to further smooth lighting variation over time. Our results show that the framework can achieve the effects of consistent illumination in real-time with flexibility to respond to dynamic changes in the real environment.
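
The abstract above describes a two-stage computation: each HDR video frame is projected onto a low-order spherical harmonics (SH) basis, and shading then reduces to a short dot product against precomputed transfer coefficients. The sketch below is a minimal NumPy analogue of that idea, not the paper's CUDA implementation; it assumes a scalar equirectangular probe, three SH bands and diffuse (clamped-cosine) transfer.

```python
import numpy as np

def sh_basis(d):
    """Real spherical harmonics up to band 2 (9 coefficients) for unit
    direction(s) d with x, y, z components in the last axis."""
    x, y, z = d[..., 0], d[..., 1], d[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ], axis=-1)

def project_probe(probe):
    """Project an equirectangular light probe (H x W, scalar radiance for
    brevity) onto the SH basis: L_lm = sum over texels of L * Y_lm * dOmega."""
    H, W = probe.shape
    theta = (np.arange(H) + 0.5) * np.pi / H
    phi = (np.arange(W) + 0.5) * 2 * np.pi / W
    T, P = np.meshgrid(theta, phi, indexing="ij")
    d = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], -1)
    dOmega = np.sin(T) * (np.pi / H) * (2 * np.pi / W)   # solid angle per texel
    return np.tensordot(probe * dOmega, sh_basis(d), axes=([0, 1], [0, 1]))

# Diffuse "transfer": the clamped-cosine kernel per SH band (pi, 2pi/3, pi/4).
A = np.array([np.pi] + [2 * np.pi / 3] * 3 + [np.pi / 4] * 5)

def shade(light_coeffs, normal):
    """Irradiance for a given surface normal: a 9-element dot product."""
    return np.dot(A * light_coeffs, sh_basis(np.asarray(normal)))

probe = np.ones((64, 128))     # uniform unit-radiance probe
print(shade(project_probe(probe), np.array([0.0, 0.0, 1.0])))  # approx pi
```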

  • 2.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Adaptive dualISO HDR-reconstruction2015In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281Article in journal (Refereed)
    Abstract [en]

    With the development of modern image sensors enabling flexible image acquisition, single shot HDR imaging is becoming increasingly popular. In this work we capture single shot HDR images using an imaging sensor with spatially varying gain/ISO. In comparison to previous single shot HDR capture based on a single sensor, this allows all incoming photons to be used in the imaging, instead of wasting incoming light using the spatially varying ND-filters common in previous works. The main technical contribution in this work is an extension of previous HDR reconstruction approaches for single shot HDR imaging based on local polynomial approximations [15,10]. Using a sensor noise model, these works deploy a statistically informed filtering operation to reconstruct HDR pixel values. However, instead of using a fixed filter size, we introduce two novel algorithms for adaptive filter kernel selection. Unlike previous works using adaptive filter kernels [16], our algorithms are based on analysing the model fit and the expected statistical deviation of the estimate under the sensor noise model. Using an iterative procedure we can then adapt the filter kernel according to the image structure and the statistical image noise. Experimental results show that the proposed filter carefully de-noises the image while preserving important image features such as edges and corners, outperforming previous methods. To demonstrate the robustness of our approach, we have evaluated it on raw sensor data captured with a commercial off-the-shelf camera. To further analyze our algorithm, we have also implemented a camera simulator to evaluate different gain patterns and noise properties of the sensor.
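
Stripped of the adaptive kernel selection that is the paper's actual contribution, the statistical core of this kind of reconstruction is an inverse-variance weighted combination of radiance estimates from low- and high-gain samples. The toy NumPy sketch below makes that concrete under an assumed Poisson-plus-read-noise model, an alternating two-row gain pattern and a fixed 3x3 kernel; all constants are illustrative, not calibrated values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dual-ISO raw frame: gain alternates between row pairs, with
# signal-dependent (shot) noise plus read noise, and a saturation level.
H, W, g_lo, g_hi, read, sat = 64, 64, 1.0, 8.0, 4.0, 1023.0
scene = np.exp(4 * rng.random((H, W)))                    # HDR radiant power
gain = np.where((np.arange(H) // 2) % 2 == 0, g_lo, g_hi)[:, None]
raw = gain * scene + np.sqrt(gain * scene + read**2) * rng.standard_normal((H, W))
raw = np.clip(raw, 0.0, sat)

# Per-sample radiance estimate and its variance under the noise model.
est = raw / gain
var = (gain * np.maximum(est, 1e-3) + read**2) / gain**2
weight = (raw < sat) / var            # saturated samples get zero weight

# Fixed 3x3 inverse-variance fusion (the paper instead adapts the kernel
# based on model fit and expected statistical deviation).
num = np.zeros((H, W)); den = np.zeros((H, W))
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        num += np.roll(np.roll(weight * est, dy, 0), dx, 1)
        den += np.roll(np.roll(weight, dy, 0), dx, 1)
hdr = num / np.maximum(den, 1e-12)
print(float(np.mean(np.abs(hdr - scene) / scene)))        # mean relative error
```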

  • 3.
    Hajisharif, Saghi
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    HDR reconstruction for alternating gain (ISO) sensor readout2014In: Eurographics 2014 short papers, 2014Conference paper (Refereed)
    Abstract [en]

    Modern image sensors are becoming more and more flexible in the way an image is captured. In this paper, we focus on sensors that allow the per-pixel gain to be varied over the sensor and develop a new technique for efficient and accurate reconstruction of high dynamic range (HDR) images based on such input data. Our method estimates the radiant power at each output pixel using a sampling operation which performs color interpolation, re-sampling, noise reduction and HDR-reconstruction in a single step. The reconstruction filter uses a sensor noise model to weight the input pixel samples according to their variances. Our algorithm works in only a small spatial neighbourhood around each pixel and lends itself to efficient implementation in hardware. To demonstrate the utility of our approach we show example HDR-images reconstructed from raw sensor data captured using off-the-shelf consumer hardware which allows for two different gain settings for different rows in the same image. To analyse the accuracy of the algorithm, we also use synthetic images from a camera simulation software.

  • 4.
    Jönsson, Daniel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ropinski, Timo
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping2012In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 12, p. 2364-2371Article in journal (Refereed)
    Abstract [en]

    In this paper, we enable interactive volumetric global illumination by extending photon mapping techniques to handle interactive transfer function (TF) and material editing in the context of volume rendering. We propose novel algorithms and data structures for finding and evaluating parts of a scene affected by these parameter changes, and thus support efficient updates of the photon map. In direct volume rendering (DVR) the ability to explore volume data using parameter changes, such as editable TFs, is of key importance. Advanced global illumination techniques are in most cases computationally too expensive, as they prevent the desired interactivity. Our technique decreases the amount of computation caused by parameter changes by introducing Historygrams, which allow us to efficiently reuse previously computed photon media interactions. Along the viewing rays, we utilize properties of the light transport equations to subdivide a view-ray into segments and independently update them when invalid. Unlike segments of a view-ray, photon scattering events within the volumetric medium need to be sequentially updated. Using our Historygram approach, we can identify the first invalid photon interaction caused by a property change, and thus reuse all valid photon interactions. Combining these two novel concepts supports interactive editing of parameters when using volumetric photon mapping in the context of DVR. As a consequence, we can handle arbitrarily shaped and positioned light sources, arbitrary phase functions, bidirectional reflectance distribution functions and multiple scattering, which has previously not been possible in interactive DVR.
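
The reuse argument above, that photon interactions are sequential so only the part of a path after the first invalidated interaction needs recomputation, can be sketched for a single photon. The code below is a simplified stand-in assuming a 1D opacity transfer function and a cached throughput per interaction; the actual Historygram data structure and the view-ray segmentation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def trace(samples, tf, cache=None, start=0):
    """Sequential attenuation of a photon along its volume samples.
    cache[i] holds the throughput after interaction i, so a later call
    can resume from the first invalidated interaction."""
    if cache is None:
        cache, start = np.empty(len(samples)), 0
    t = 1.0 if start == 0 else cache[start - 1]
    for i in range(start, len(samples)):
        t *= 1.0 - tf[samples[i]]          # opacity looked up in the TF
        cache[i] = t
    return cache

def first_invalid(samples, old_tf, new_tf):
    """Index of the first interaction whose TF lookup changed."""
    changed = old_tf[samples] != new_tf[samples]
    return int(np.argmax(changed)) if changed.any() else len(samples)

tf = rng.random(256) * 0.1                   # opacity transfer function
samples = rng.integers(0, 256, size=10_000)  # photon's density lookups
cache = trace(samples, tf)

new_tf = tf.copy()
new_tf[40:60] *= 2.0                         # user edits part of the TF
k = first_invalid(samples, tf, new_tf)       # interactions before k are reused
cache = trace(samples, new_tf, cache, k)
```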

  • 5.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Physically Based Rendering of Synthetic Objects in Real Environments2015Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis presents methods for photorealistic rendering of virtual objects so that they can be seamlessly composited into images of the real world. To generate predictable and consistent results, we study physically based methods, which simulate how light propagates in a mathematical model of the augmented scene. This computationally challenging problem demands both efficient and accurate simulation of the light transport in the scene, as well as detailed modeling of the geometries, illumination conditions, and material properties. In this thesis, we discuss and formulate the challenges inherent in these steps and present several methods to make the process more efficient.

    In particular, the material contained in this thesis addresses four closely related areas: HDR imaging, IBL, reflectance modeling, and efficient rendering. The thesis presents a new, statistically motivated algorithm for HDR reconstruction from raw camera data combining demosaicing, denoising, and HDR fusion in a single processing operation. The thesis also presents practical and robust methods for rendering with spatially and temporally varying illumination conditions captured using omnidirectional HDR video. Furthermore, two new parametric BRDF models are proposed for surfaces exhibiting wide angle gloss. Finally, the thesis also presents a physically based light transport algorithm based on Markov Chain Monte Carlo methods that allows approximations to be used in place of exact quantities, while still converging to the exact result. As illustrated in the thesis, the proposed algorithm enables efficient rendering of scenes with glossy transfer and heterogeneous participating media.

    List of papers
    1. Photorealistic rendering of mixed reality scenes
    2015 (English)In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 2, p. 643-665Article in journal (Refereed) Published
    Abstract [en]

    Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques including methods of advanced image based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.

    Place, publisher, year, edition, pages
    Wiley-Blackwell, 2015
    Keywords
    Picture/Image Generation—Illumination Estimation, Image-Based Lighting, Reflectance and Shading
    National Category
    Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-118542 (URN)10.1111/cgf.12591 (DOI)000358326600060 ()
    Conference
    The 36th Annual Conference of the European Association of Computer Graphics, Eurographics 2015, Zürich, Switzerland, 4th–8th May 2015
    Projects
    VPS
    Funder
    Swedish Foundation for Strategic Research, IIS11-0081; Linnaeus research environment CADICS
    2. Pseudo-Marginal Metropolis Light Transport
    2015 (English)In: Proceedings SA '15: SIGGRAPH Asia 2015 Technical Briefs, ACM Digital Library, 2015, p. 13:1-13:4Conference paper, Published paper (Other academic)
    Abstract [en]

    Accurate and efficient simulation of light transport in heterogeneous participating media, such as smoke, clouds and fire, plays a key role in the synthesis of visually interesting renderings for e.g. visual effects, computer games and product visualization. However, rendering of scenes with heterogeneous participating media using Metropolis light transport (MLT) algorithms has previously been limited to primary sample space methods or to biased approximations of the transmittance in the scene. This paper presents a new sampling strategy for Markov chain Monte Carlo (MCMC) methods, e.g. MLT, based on pseudo-marginal MCMC. Specifically, we show that any positive and unbiased estimator of the target distribution can replace the exact quantity to simulate a Markov chain with a stationary distribution that has a marginal which is the exact target distribution of interest. This enables us to evaluate the transmittance function with recent unbiased estimators, which leads to significantly shorter rendering times. Compared to previous work relying on (biased) ray-marching for evaluating transmittance, our method enables simulation of longer Markov chains, a better exploration of the path space, and consequently less image noise, for a given computational budget. To demonstrate the usefulness of our pseudo-marginal approach, we compare it to representative methods for efficient rendering of anisotropic heterogeneous participating media and glossy transfer. We show that it performs significantly better in terms of image noise and rendering times compared to previous techniques. Our method is robust, and can easily be implemented in a modern renderer.

    Place, publisher, year, edition, pages
    ACM Digital Library, 2015
    National Category
    Computer Sciences; Computer and Information Sciences
    Identifiers
    urn:nbn:se:liu:diva-122586 (URN)10.1145/2820903.2820922 (DOI)978-1-4503-3930-8 (ISBN)
    Conference
    The 8th ACM SIGGRAPH Conference and Exhibition, Asia Technical Briefs, 3-5 November, Kobe, Japan
    3. Temporally and Spatially Varying Image Based Lighting using HDR-video
    2013 (English)In: Proceedings of the 21st European Signal Processing Conference (EUSIPCO), 2013: Special Session on HDR-video, IEEE, 2013, p. 1-5Conference paper, Published paper (Refereed)
    Abstract [en]

    In this paper we present novel algorithms and data structures for capturing, processing and rendering with real world lighting conditions based on high dynamic range video sequences. Based on the captured HDR video data we show how traditional image based lighting can be extended to include illumination variations in both the temporal as well as the spatial domain. This enables highly realistic renderings where traditional IBL techniques using a single light probe fail to capture important details in the real world lighting environment. To demonstrate the usefulness of our approach, we show examples of both off-line and real-time rendering applications.

    Place, publisher, year, edition, pages
    IEEE, 2013
    National Category
    Electrical Engineering, Electronic Engineering, Information Engineering
    Identifiers
    urn:nbn:se:liu:diva-95746 (URN)000341754500314 ()
    Conference
    21st European Signal Processing Conference (EUSIPCO 2013), 9-13 September 2013, Marrakech, Morocco
    Projects
    VPS
    Funder
    Swedish Research Council; Swedish Foundation for Strategic Research, IIS11-0080
    4. Spatially varying image based lighting using HDR-video
    2013 (English)In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 37, no 7, p. 923-934Article in journal (Refereed) Published
    Abstract [en]

    Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing and rendering with real world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storage of light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image based lighting. We show reconstructions of example scenes and resulting production quality renderings of virtual furniture with spatially varying real world illumination including occlusions.

    Place, publisher, year, edition, pages
    Elsevier, 2013
    Keywords
    High dynamic range video, HDR-video, image based lighting, photo realistic image synthesis
    National Category
    Media Engineering; Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-96949 (URN)10.1016/j.cag.2013.07.001 (DOI)000325834400015 ()
    Projects
    VPS
    Funder
    Swedish Foundation for Strategic Research, IIS11-0081; Swedish Research Council
    5. Unified HDR reconstruction from raw CFA data
    2013 (English)In: Proceedings of IEEE International Conference on Computational Photography 2013 / [ed] David Boas, Sylvain Paris, Shmuel Peleg, Todd Zickler, IEEE, 2013, p. 1-9Conference paper, Published paper (Refereed)
    Abstract [en]

    HDR reconstruction from multiple exposures poses several challenges. Previous HDR reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion in several steps. We instead present a unifying approach, performing HDR assembly directly from raw sensor data in a single processing operation. Our algorithm includes a spatially adaptive HDR reconstruction based on fitting local polynomial approximations to observed sensor data, using a localized likelihood approach incorporating spatially varying sensor noise. We also present a realistic camera noise model adapted to HDR video. The method allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over state-of-the-art methods, both in terms of flexibility and reconstruction quality.

    Place, publisher, year, edition, pages
    IEEE, 2013
    National Category
    Engineering and Technology; Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-90106 (URN)10.1109/ICCPhot.2013.6528315 (DOI)978-1-4673-6463-8 (ISBN)
    Conference
    5th IEEE International Conference on Computational Photography, ICCP 2013; Cambridge, MA; United States
    Projects
    VPS
    6. A unified framework for multi-sensor HDR video reconstruction
    2014 (English)In: Signal Processing: Image Communication, ISSN 0923-5965, Vol. 29, no 2, p. 203-215Article in journal (Refereed) Published
    Abstract [en]

    One of the most successful approaches to modern high quality HDR-video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting of local polynomial approximations to observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.

    Place, publisher, year, edition, pages
    Elsevier, 2014
    Keywords
    HDR video, HDR fusion, Kernel regression, Radiometric calibration
    National Category
    Media Engineering
    Identifiers
    urn:nbn:se:liu:diva-104617 (URN)10.1016/j.image.2013.08.018 (DOI)000332999200003 ()
    Projects
    VPS
    Funder
    Swedish Foundation for Strategic Research, IIS11-0081
    7. Adaptive dualISO HDR-reconstruction
    2015 (English)In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281Article in journal (Refereed) Published
    Abstract [en]

    With the development of modern image sensors enabling flexible image acquisition, single shot HDR imaging is becoming increasingly popular. In this work we capture single shot HDR images using an imaging sensor with spatially varying gain/ISO. In comparison to previous single shot HDR capture based on a single sensor, this allows all incoming photons to be used in the imaging, instead of wasting incoming light using the spatially varying ND-filters common in previous works. The main technical contribution in this work is an extension of previous HDR reconstruction approaches for single shot HDR imaging based on local polynomial approximations [15,10]. Using a sensor noise model, these works deploy a statistically informed filtering operation to reconstruct HDR pixel values. However, instead of using a fixed filter size, we introduce two novel algorithms for adaptive filter kernel selection. Unlike previous works using adaptive filter kernels [16], our algorithms are based on analysing the model fit and the expected statistical deviation of the estimate under the sensor noise model. Using an iterative procedure we can then adapt the filter kernel according to the image structure and the statistical image noise. Experimental results show that the proposed filter carefully de-noises the image while preserving important image features such as edges and corners, outperforming previous methods. To demonstrate the robustness of our approach, we have evaluated it on raw sensor data captured with a commercial off-the-shelf camera. To further analyze our algorithm, we have also implemented a camera simulator to evaluate different gain patterns and noise properties of the sensor.

    Place, publisher, year, edition, pages
    Springer Publishing Company, 2015
    Keywords
    HDR reconstruction; Single shot HDR imaging; DualISO; Statistical image filtering
    National Category
    Computer Sciences; Computer and Information Sciences
    Identifiers
    urn:nbn:se:liu:diva-122587 (URN)10.1186/s13640-015-0095-0 (DOI)000366324500001 ()
    Note

    Funding agencies: Swedish Foundation for Strategic Research (SSF) [IIS11-0081]; Linköping University Center for Industrial Information Technology (CENIIT); Swedish Research Council through the Linnaeus Environment CADICS

    8. BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces
    2012 (English)In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 31, no 1Article in journal (Refereed) Published
    Abstract [en]

    This article presents two new parametric models of the Bidirectional Reflectance Distribution Function (BRDF), one inspired by the Rayleigh-Rice theory for light scattering from optically smooth surfaces, and one inspired by micro-facet theory. The models represent scattering from a wide range of glossy surface types with high accuracy. In particular, they enable representation of types of surface scattering which previous parametric models have had trouble modeling accurately. In a study of the scattering behavior of measured reflectance data, we investigate what key properties are needed for a model to accurately represent scattering from glossy surfaces. We investigate different parametrizations and how well they match the behavior of measured BRDFs. We also examine the scattering curves which are represented in parametric models by different distribution functions. Based on the insights gained from the study, the new models are designed to provide accurate fittings to the measured data. Importance sampling schemes are developed for the new models, enabling direct use in existing production pipelines. In the resulting renderings we show that the visual quality achieved by the models matches that of the measured data.

    Place, publisher, year, edition, pages
    Association for Computing Machinery (ACM), 2012
    Keywords
    BRDF, gloss, Rayleigh-Rice, global illumination, Monte Carlo, importance sampling
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-75045 (URN)10.1145/2077341.2077350 (DOI)000300622500009 ()
    Projects
    CADICS; ELLIIT
    Note
    Funding agencies: Swedish Foundation for Strategic Research through the Strategic Research Centre MOVIII (A3:05:193); Swedish Knowledge Foundation (2009/0091); Forskning och Framtid (ITN 2009-00116); Swedish Research Council through the Linnaeus Center for Control, Autonomy, and Decision-making in Complex Systems (CADICS); Excellence Center at Linköping and Lund in Information Technology (ELLIIT)
  • 6.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Banterle, Francesco
    Visual Computing Lab, ISTI-CNR, Italy.
    Gardner, Andrew
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Photorealistic rendering of mixed reality scenes2015In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no 2, p. 643-665Article in journal (Refereed)
    Abstract [en]

    Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques including methods of advanced image based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.

  • 7.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Dahlin, Johan
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kok, Manon
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Schön, Thomas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology. Uppsala Universitet.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time video based lighting using GPU raytracing2014In: Proceedings of the 22nd European Signal Processing Conference (EUSIPCO), 2014, IEEE Signal Processing Society, 2014Conference paper (Refereed)
    Abstract [en]

    The recent introduction of HDR video cameras has enabled the development of image based lighting techniques for rendering virtual objects illuminated with temporally varying real world illumination. A key challenge in this context is that rendering realistic objects illuminated with video environment maps is computationally demanding. In this work, we present a GPU based rendering system based on the NVIDIA OptiX framework, enabling real time raytracing of scenes illuminated with video environment maps. For this purpose, we explore and compare several Monte Carlo sampling approaches, including bidirectional importance sampling, multiple importance sampling and sequential Monte Carlo samplers. While previous work has focused on synthetic data and overly simple environment map sequences, we have collected a set of real world dynamic environment map sequences using a state-of-the-art HDR video camera for evaluation and comparisons.
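
Of the sampling strategies compared in this paper, multiple importance sampling is the easiest to demonstrate in isolation. The sketch below uses a 1D stand-in for an environment map and combines light sampling (proportional to the tabulated radiance) with cosine sampling via the balance heuristic; the peaked radiance profile and sample counts are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1D stand-in for an environment map: radiance over elevation with a
# narrow, bright "sun" peak that defeats cosine sampling on its own.
theta = np.linspace(0.0, np.pi / 2, 512)
dtheta = theta[1] - theta[0]
L_tab = 0.2 + 50.0 * np.exp(-((theta - 0.3) / 0.01) ** 2)
norm_L = L_tab.sum() * dtheta

def L(t):
    return np.interp(t, theta, L_tab)

def integrand(t):                    # radiance times cosine foreshortening
    return L(t) * np.cos(t)

cdf = np.cumsum(L_tab); cdf /= cdf[-1]

n = 5000
# Strategy 1: "light" sampling, proportional to the tabulated radiance.
t1 = np.interp(rng.random(n), cdf, theta)
p1_light, p1_cos = L(t1) / norm_L, np.cos(t1)
# Strategy 2: cosine sampling (pdf = cos on [0, pi/2], CDF = sin).
t2 = np.arcsin(rng.random(n))
p2_light, p2_cos = L(t2) / norm_L, np.cos(t2)

# Balance heuristic: weight each sample by its pdf relative to the sum.
est = np.mean(p1_light / (p1_light + p1_cos) * integrand(t1) / p1_light) \
    + np.mean(p2_cos / (p2_light + p2_cos) * integrand(t2) / p2_cos)
print(est, integrand(theta).sum() * dtheta)   # MIS estimate vs quadrature
```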

  • 8.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bonnet, Gerhard
    SpheronVR AG.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unified HDR reconstruction from raw CFA data2013In: Proceedings of IEEE International Conference on Computational Photography 2013 / [ed] David Boas, Sylvain Paris, Shmuel Peleg, Todd Zickler, IEEE, 2013, p. 1-9Conference paper (Refereed)
    Abstract [en]

    HDR reconstruction from multiple exposures poses several challenges. Previous HDR reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion in several steps. We instead present a unifying approach, performing HDR assembly directly from raw sensor data in a single processing operation. Our algorithm includes a spatially adaptive HDR reconstruction based on fitting local polynomial approximations to observed sensor data, using a localized likelihood approach incorporating spatially varying sensor noise. We also present a realistic camera noise model adapted to HDR video. The method allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over state-of-the-art methods, both in terms of flexibility and reconstruction quality.
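
The reconstruction described above amounts to a locally weighted fit of radiant power at each output pixel, with weights combining a spatial kernel and the per-sample noise variance. A minimal order-zero (locally constant) NumPy version is sketched below; it omits the color filter array, uses a fixed Gaussian kernel of scale h, and shows the shape of the estimator rather than the paper's CUDA implementation.

```python
import numpy as np

def reconstruct_hdr(coords, digital, exposure, sigma2, out_shape, h=1.0):
    """Order-zero locally weighted fit of radiant power per output pixel.

    coords   : (N, 2) sample positions in the output frame
    digital  : (N,) raw sensor values
    exposure : (N,) effective exposure (gain times integration time)
    sigma2   : (N,) noise variance of each raw value
    The fixed Gaussian scale h stands in for the paper's spatially
    adaptive kernels; an order-one (planar) fit is a direct extension.
    """
    radiance = digital / exposure            # per-sample radiance estimate
    var = sigma2 / exposure**2               # its variance under the model
    out = np.zeros(out_shape)
    for y in range(out_shape[0]):
        for x in range(out_shape[1]):
            d2 = ((coords - (y, x)) ** 2).sum(axis=1)
            w = np.exp(-0.5 * d2 / h**2) / var      # spatial * statistical
            out[y, x] = np.sum(w * radiance) / np.sum(w)
    return out

# Toy usage: scattered samples of a constant radiance at two exposures.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 8, size=(500, 2))
exposure = rng.choice([1.0, 16.0], size=500)
truth = 10.0
sigma2 = exposure * truth + 4.0              # shot + read noise variance
digital = exposure * truth + np.sqrt(sigma2) * rng.standard_normal(500)
print(reconstruct_hdr(coords, digital, exposure, sigma2, (8, 8))[4, 4])
```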

  • 9.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bonnet, Gerhard
    AG Spheron VR, Germany.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    A unified framework for multi-sensor HDR video reconstruction2014In: Signal Processing: Image Communication, ISSN 0923-5965, Vol. 29, no 2, p. 203-215Article in journal (Refereed)
    Abstract [en]

    One of the most successful approaches to modern high quality HDR-video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting of local polynomial approximations to observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.

  • 10.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Real-time HDR video reconstruction for multi-sensor systems2012In: ACM SIGGRAPH 2012 Posters, New York, NY, USA: ACM Press, 2012, p. 65Conference paper (Refereed)
    Abstract [en]

    HDR video is an emerging field of technology, with a few camera systems currently in existence [Myszkowski et al. 2008]. Multi-sensor systems [Tocci et al. 2011] have recently proved to be particularly promising due to superior robustness against temporal artifacts, correct motion blur, and high light efficiency. Previous HDR reconstruction methods for multi-sensor systems have assumed pixel perfect alignment of the physical sensors. This is, however, very difficult to achieve in practice. It may even be the case that reflections in beam splitters make it impossible to match the arrangement of the Bayer filters between sensors. We therefore present a novel reconstruction method specifically designed to handle the case of non-negligible misalignments between the sensors. Furthermore, while previous reconstruction techniques have considered HDR assembly, debayering and denoising as separate problems, our method is capable of simultaneous HDR assembly, debayering and smoothing of the data (denoising). The method is also general in that it allows reconstruction to an arbitrary output resolution and mapping. The algorithm is implemented in CUDA, and shows video speed performance for an experimental HDR video platform consisting of four 2336x1756 pixel high quality CCD sensors imaging the scene through a common optical system. ND-filters of different densities are placed in front of the sensors to capture a dynamic range of 24 f-stops.

  • 11.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Jönsson, Daniel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Löw, Joakim
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ljung, Patric
    Siemens.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering2012In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 18, no 3, p. 447-462Article in journal (Refereed)
    Abstract [en]

    We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multi-resolution grid over the extent of the volume. Our method enables high frequency shadows in the spatial domain, but is limited to a low frequency approximation of visibility and illumination in the angular domain. In a first pass, Level Of Detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid on-line computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.

  • 12.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Schön, Thomas B.
    Uppsala Universitet, Sweden.
    Robust auxiliary particle filters using multiple importance sampling2014In: Proceedings of the 2014 IEEE Statistical Signal Processing Workshop, IEEE, 2014, p. 268-271Conference paper (Refereed)
    Abstract [en]

    A poor choice of importance density can have a detrimental effect on the efficiency of a particle filter. While a specific choice of proposal distribution might be close to optimal for certain models, it might fail miserably for other models, possibly even leading to infinite variance. In this paper we show how mixture sampling techniques can be used to derive robust and efficient particle filters that in general perform on par with, or better than, the best of the standard importance densities. We derive several variants of the auxiliary particle filter using both random and deterministic mixture sampling via multiple importance sampling. The resulting robust particle filters are easy to implement and require little parameter tuning.
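
The mechanism behind this robustness, deterministic mixture sampling with multiple-importance-sampling weights, can be shown in a static toy problem: when one proposal matches the target badly, the mixture density in the weight denominator keeps its samples from blowing up the variance. The sketch below is that toy analogue, not the auxiliary particle filter itself; the target and proposal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def target(x):
    """Unnormalized posterior-like density with one narrow mode."""
    return np.exp(-0.5 * ((x - 2.0) / 0.3) ** 2)

def gauss_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Two candidate proposals: a broad "safe" one and a narrow, mismatched one.
proposals = [(0.0, 3.0), (2.5, 0.2)]

n = 4000
xs, ws = [], []
for m, s in proposals:                       # deterministic mixture: n draws each
    x = m + s * rng.standard_normal(n)
    mix = sum(gauss_pdf(x, mm, ss) for mm, ss in proposals) / len(proposals)
    xs.append(x)
    ws.append(target(x) / mix)               # balance-heuristic MIS weight
xs, ws = np.concatenate(xs), np.concatenate(ws)
print(np.sum(ws * xs) / np.sum(ws))          # self-normalized estimate, ~2.0
```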

  • 13.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Schön, Thomas B.
    Uppsala University.
    Dahlin, Johan
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Backward sequential Monte Carlo for marginal smoothing2014In: Proceedings of the 2014 IEEE Statistical Signal Processing Workshop, IEEE Press, 2014, p. 368-371Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a new type of particle smoother with linear computational complexity. The smoother is based on running a sequential Monte Carlo sampler backward in time after an initial forward filtering pass. While this introduces dependencies among the backward trajectories we show through simulation studies that the new smoother can outperform existing forward-backward particle smoothers when targeting the marginal smoothing densities.
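
For context, the quadratic-cost forward-filtering backward-smoothing (FFBSm) recursion that marginal particle smoothers are typically benchmarked against fits in a few lines for a scalar linear Gaussian model. The sketch below implements that standard baseline, not the paper's linear-complexity backward SMC sampler; the model parameters and particle counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar linear Gaussian model: x_t = a*x_{t-1} + v_t, y_t = x_t + e_t.
a, sv, se, T, N = 0.8, 0.5, 0.5, 50, 300
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + sv * rng.standard_normal()
y = x + se * rng.standard_normal(T)

def f(x_next, x_prev):
    """Transition density f(x_{t+1} | x_t), up to a constant."""
    return np.exp(-0.5 * ((x_next - a * x_prev) / sv) ** 2)

# Forward bootstrap particle filter, storing all particles and weights.
P = np.zeros((T, N)); W = np.zeros((T, N))
P[0] = rng.standard_normal(N)
for t in range(T):
    if t > 0:
        anc = rng.choice(N, size=N, p=W[t - 1])        # multinomial resampling
        P[t] = a * P[t - 1, anc] + sv * rng.standard_normal(N)
    W[t] = np.exp(-0.5 * ((y[t] - P[t]) / se) ** 2)
    W[t] /= W[t].sum()

# Backward pass: O(N^2) marginal smoothing weights.
WS = np.zeros((T, N)); WS[-1] = W[-1]
for t in range(T - 2, -1, -1):
    trans = f(P[t + 1][:, None], P[t][None, :])   # trans[j, i] = f(x_{t+1}^j | x_t^i)
    denom = trans @ W[t]                          # normalizer per particle j
    WS[t] = W[t] * ((WS[t + 1] / denom) @ trans)

print((WS * P).sum(axis=1)[:5])                   # smoothed means E[x_t | y_{1:T}]
```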

  • 14.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Schön, Thomas B.
    Division of Systems and Control, Department of Information Technology, Uppsala University.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Pseudo-Marginal Metropolis Light Transport2015In: Proceedings SA '15: SIGGRAPH Asia 2015 Technical Briefs, ACM Digital Library, 2015, p. 13:1-13:4Conference paper (Other academic)
    Abstract [en]

    Accurate and efficient simulation of light transport in heterogeneous participating media, such as smoke, clouds and fire, plays a key role in the synthesis of visually interesting renderings for e.g. visual effects, computer games and product visualization. However, rendering of scenes with heterogeneous participating media using Metropolis light transport (MLT) algorithms has previously been limited to primary sample space methods or to biased approximations of the transmittance in the scene. This paper presents a new sampling strategy for Markov chain Monte Carlo (MCMC) methods, e.g. MLT, based on pseudo-marginal MCMC. Specifically, we show that any positive and unbiased estimator of the target distribution can replace the exact quantity to simulate a Markov chain with a stationary distribution that has a marginal which is the exact target distribution of interest. This enables us to evaluate the transmittance function with recent unbiased estimators, which leads to significantly shorter rendering times. Compared to previous work relying on (biased) ray-marching for evaluating transmittance, our method enables simulation of longer Markov chains, a better exploration of the path space, and consequently less image noise, for a given computational budget. To demonstrate the usefulness of our pseudo-marginal approach, we compare it to representative methods for efficient rendering of anisotropic heterogeneous participating media and glossy transfer. We show that it performs significantly better in terms of image noise and rendering times compared to previous techniques. Our method is robust, and can easily be implemented in a modern renderer.
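
The pseudo-marginal rule stated in this abstract, that any positive unbiased estimate of the target density may replace the exact value without changing the stationary marginal, is easy to demonstrate on a toy target. In the sketch below a mean-one lognormal factor stands in for an unbiased transmittance estimator; the essential detail is that the stored noisy estimate for the current state is recycled rather than re-evaluated.

```python
import numpy as np

rng = np.random.default_rng(5)

def density_estimate(x):
    """Positive, unbiased estimate of the unnormalized target density.
    The true target is a standard normal; the lognormal factor has
    mean one (exp(-0.125 + 0.5**2 / 2) = 1) and models estimator noise."""
    noise = rng.lognormal(mean=-0.125, sigma=0.5)
    return np.exp(-0.5 * x * x) * noise

x, z = 0.0, density_estimate(0.0)
chain = []
for _ in range(50_000):
    x_prop = x + 0.8 * rng.standard_normal()
    z_prop = density_estimate(x_prop)
    # Pseudo-marginal Metropolis-Hastings: accept with the ratio of the
    # *estimates*, reusing the stored estimate z for the current state.
    if rng.random() < z_prop / z:
        x, z = x_prop, z_prop
    chain.append(x)

print(np.mean(chain), np.var(chain))    # approx 0 and 1 despite the noise
```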

  • 15.
    Kronander, Joel
    et al.
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Moeller, Torsten
    Simon Fraser University, Vancouver.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Visual Information Technology and Applications (VITA). Linköping University, The Institute of Technology.
    Estimation and Modeling of Actual Numerical Errors in Volume Rendering2010In: Computer Graphics Forum, ISSN 0167-7055, Vol. 29, no 3, p. 893-902Article in journal (Refereed)
    Abstract [en]

    In this paper we study the comprehensive effects on volume rendered images due to numerical errors caused by the use of finite precision for data representation and processing. To estimate actual error behavior we conduct a thorough study using a volume renderer implemented with arbitrary floating-point precision. Based on the experimental data we then model the impact of floating-point pipeline precision, sampling frequency and fixed-point input data quantization on the fidelity of rendered images. We introduce three models: an average model, which does not adapt to different data or varying transfer functions, as well as two adaptive models that take the intricacies of a new data set and transfer function into account by adapting themselves given a few rendered images. We also test and validate our models on new data that was not used during model building.
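
The experimental setup this paper relies on, rendering the same data at several arithmetic precisions and comparing against a high-precision reference, can be mimicked for a single ray. The sketch below does front-to-back emission/absorption compositing with every intermediate value rounded to float16/float32/float64, standing in for the paper's arbitrary-precision renderer; the sample count and opacities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def composite(emission, alpha, dtype):
    """Front-to-back emission/absorption compositing along one viewing
    ray, with every intermediate value held at the given precision."""
    color, transp = dtype(0.0), dtype(1.0)
    for e, a in zip(emission.astype(dtype), alpha.astype(dtype)):
        color = dtype(color + transp * a * e)
        transp = dtype(transp * (dtype(1.0) - a))
    return float(color)

n = 2000                                   # samples along the ray
emission = rng.random(n)
alpha = np.full(n, 0.01)
ref = composite(emission, alpha, np.float64)
for dt in (np.float32, np.float16):
    err = abs(composite(emission, alpha, dt) - ref)
    print(dt.__name__, err)                # error grows as precision drops
```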

  • 16.
    Lindholm, Stefan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Accounting for Uncertainty in Medical Data: A CUDA Implementation of Normalized Convolution2011In: Evaluations of graphics and visualization - efficiency, usefulness, accessibility, usability, 2011Conference paper (Refereed)
    Abstract [en]

    The domain of medical imaging is naturally moving towards methods that can represent, and account for, local uncertainties in the image data. Even so, fast and efficient solutions that take uncertainty into account are not readily available even for common problems such as gradient estimation. In this work we present a CUDA implementation of Normalized Convolution, an uncertainty-aware image processing technique well established in the signal processing domain. Our results show that up to 100X speedups are possible, which enables full resolution CT images to be processed at interactive processing speeds, fulfilling demands of both efficiency and interactivity that exist in the medical domain.
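
Normalized convolution itself is compact enough to state directly: convolve the certainty-weighted signal, convolve the certainty, and divide. The sketch below is a NumPy/SciPy version of the operation (not the paper's CUDA kernel), with a binary certainty map marking missing pixels; the kernel size and fraction of missing data are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=7, sigma=1.5):
    ax = np.arange(size) - size // 2
    k = np.exp(-0.5 * (ax[:, None] ** 2 + ax[None, :] ** 2) / sigma**2)
    return k / k.sum()

def normalized_convolution(signal, certainty, kernel):
    """Filter the certainty-weighted signal and renormalize by the
    filtered certainty, so uncertain or missing samples do not bias
    the estimate (certainty 0 = missing, 1 = fully trusted)."""
    num = convolve(signal * certainty, kernel, mode="nearest")
    den = convolve(certainty, kernel, mode="nearest")
    return num / np.maximum(den, 1e-12)

rng = np.random.default_rng(7)
img = rng.random((64, 64))
cert = (rng.random((64, 64)) > 0.3).astype(float)   # ~30% of samples missing
smoothed = normalized_convolution(img, cert, gaussian_kernel())
```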

  • 17.
    Löw, Joakim
    et al.
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    ABC - BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces2013In: Eurographics 24th Symposium on Rendering: Posters, 2013Conference paper (Other academic)
    Abstract [en]

    Glossy surface reflectance is hard to model accurately using traditional parametric BRDF models. An alternative is provided by data driven reflectance models; however, these models offer less user control and generally result in lower efficiency. In our work we propose two new lightweight parametric BRDF models for accurate modeling of glossy surface reflectance, one inspired by Rayleigh-Rice theory for optically smooth surfaces and one inspired by microfacet theory. We base our models on a thorough study of the scattering behaviour of measured reflectance data from the MERL database. The study focuses on two key aspects of BRDF models: parametrization and scatter distribution. We propose a new scattering distribution for glossy BRDFs inspired by the ABC model for surface statistics of optically smooth surfaces. Based on the survey we consider two parameterizations, one based on micro-facet theory using the halfway vector and one inspired by the parametrization of the Rayleigh-Rice BRDF model considering the projected deviation vector. To enable efficient rendering we also show how the new models can be approximately sampled for importance sampling the scattering integral.

  • 18.
    Löw, Joakim
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces2012In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 31, no 1Article in journal (Refereed)
    Abstract [en]

    This article presents two new parametric models of the Bidirectional Reflectance Distribution Function (BRDF), one inspired by the Rayleigh-Rice theory for light scattering from optically smooth surfaces, and one inspired by micro-facet theory. The models represent scattering from a wide range of glossy surface types with high accuracy. In particular, they enable representation of types of surface scattering which previous parametric models have had trouble modeling accurately. In a study of the scattering behavior of measured reflectance data, we investigate what key properties are needed for a model to accurately represent scattering from glossy surfaces. We investigate different parametrizations and how well they match the behavior of measured BRDFs. We also examine the scattering curves which are represented in parametric models by different distribution functions. Based on the insights gained from the study, the new models are designed to provide accurate fittings to the measured data. Importance sampling schemes are developed for the new models, enabling direct use in existing production pipelines. In the resulting renderings we show that the visual quality achieved by the models matches that of the measured data.
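
As a rough illustration of the kind of model the article studies, the sketch below evaluates a half-vector gloss lobe with an ABC-style falloff A / (1 + B*x)^C. This is an assumed simplification for illustration only: the published models use more detailed parameterizations (including the projected deviation vector for the Rayleigh-Rice variant) and come with importance sampling schemes that are not reproduced here.

```python
import numpy as np

def abc_lobe(n_dot_h, A, B, C):
    """ABC-style falloff A / (1 + B*x)^C with x = 1 - (n . h);
    large B gives a sharp specular peak, C controls the tail."""
    return A / (1.0 + B * (1.0 - n_dot_h)) ** C

def brdf(wi, wo, n, kd=0.1, A=1.0, B=3000.0, C=1.5):
    """Lambertian term plus an ABC half-vector lobe (unit vectors assumed)."""
    h = wi + wo
    h = h / np.linalg.norm(h)
    return kd / np.pi + abc_lobe(np.dot(n, h), A, B, C)

n = np.array([0.0, 0.0, 1.0])
wi = np.array([0.0, 0.0, 1.0])
for az in (0.0, 0.05, 0.2):     # widening in/out angle: sharp peak, long tail
    wo = np.array([np.sin(az), 0.0, np.cos(az)])
    print(az, brdf(wi, wo, n))
```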

  • 19.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Compressive Image Reconstruction in Reduced Union of Subspaces2015In: Computer Graphics Forum, ISSN 1467-8659, Vol. 34, no 2, p. 33-44Article in journal (Refereed)
    Abstract [en]

    We present a new compressed sensing framework for reconstruction of incomplete and possibly noisy images and their higher dimensional variants, e.g. animations and light-fields. The algorithm relies on a learning-based basis representation. We train an ensemble of intrinsically two-dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that one can convert the problem of 2D sparse signal recovery to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union of subspaces model, while allowing sparsity in each subspace. Such a model leads to a much more sparse representation than widely used methods such as K-SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light-fields). We show that our method outperforms state-of-the-art algorithms in computer graphics and image processing literature.
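
The reduction described above, converting 2D sparse recovery to an equivalent 1D form so that generic sparse solvers apply, is illustrated below with one such solver: orthogonal matching pursuit recovering a sparse coefficient vector from incomplete observations. The dictionary here is random rather than learned, and the masking scheme is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(8)

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms by correlation
    with the residual, refitting all selected coefficients each step."""
    resid, support, coef = y.copy(), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

m, d, k = 256, 512, 8
D = rng.standard_normal((m, d))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x_true = np.zeros(d)
x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
y = D @ x_true

keep = rng.random(m) < 0.5                      # observe only half the entries
x_rec = omp(D[keep], y[keep], k)
print(np.linalg.norm(x_rec - x_true))           # near-exact recovery expected
```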

  • 20.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Geometry Independent Surface Light Fields for Real Time Rendering of Precomputed Global Illumination2011In: Proceedings of SIGRAD 2011 / [ed] Thomas Larsson, Lars Kjelldahl, Kai-Mikael Jää-Aro, Royal Institute of Technology, Stockholm, 2011, p. 27-34Conference paper (Refereed)
    Abstract [en]

    We present a framework for generating, compressing and rendering of Surface Light Field (SLF) data. Our method is based on radiance data generated using physically based rendering methods. Thus the SLF data is generated directly instead of re-sampling digital photographs. Our SLF representation decouples spatial resolution from geometric complexity. We achieve this by uniform sampling of the spatial dimension of the SLF function. For compression, we use Clustered Principal Component Analysis (CPCA). The SLF matrix is first clustered to low frequency groups of points across all directions. Then we apply PCA to each cluster. The clustering ensures that the within-cluster frequency of data is low, allowing for projection using a few principal components. Finally we reconstruct the CPCA encoded data using an efficient rendering algorithm. Our reconstruction technique ensures seamless reconstruction of discrete SLF data. We applied our rendering method for fast, high quality off-line rendering and real-time illumination of static scenes. The proposed framework is not limited by the complexity of materials or light sources, enabling us to render high quality images describing the full global illumination in a scene.
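
The CPCA step used here, clustering the SLF matrix and then applying PCA within each cluster, is sketched below on random data. The k-means initialization and the cluster and component counts are illustrative assumptions; the point is the shape of the compressed representation: a per-cluster mean plus a small orthogonal basis, with a short coefficient vector per point.

```python
import numpy as np

rng = np.random.default_rng(9)

def cpca(X, n_clusters=8, n_components=4, iters=25):
    """Clustered PCA: k-means over the rows of X, then a truncated PCA
    basis (mean plus top principal directions) for each cluster."""
    centers = X[rng.choice(len(X), n_clusters, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    models = {}
    for c in np.unique(labels):
        Xc = X[labels == c]
        mu = Xc.mean(0)
        _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
        models[c] = (mu, Vt[:n_components])
    return labels, models

# Each row plays the role of one surface point's sampled radiance vector.
X = rng.standard_normal((2000, 32))
labels, models = cpca(X)
mu, B = models[labels[0]]
code = (X[0] - mu) @ B.T                        # few coefficients per point
print(np.linalg.norm(X[0] - (mu + code @ B)))   # reconstruction error
```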

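    As a reading aid, here is a minimal sketch of the CPCA compression step described above: cluster the SLF matrix, then apply PCA within each cluster. The matrix layout (one row per surface point, one column per direction), the cluster count and the component count are illustrative assumptions, not values from the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    def cpca_compress(slf, n_clusters=8, n_components=4):
        # Cluster points with similar behaviour across all directions
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(slf)
        model = []
        for c in range(n_clusters):
            block = slf[labels == c]
            mean = block.mean(axis=0)
            # PCA via SVD of the centered cluster; low within-cluster variation
            # means a few principal components suffice
            _, _, Vt = np.linalg.svd(block - mean, full_matrices=False)
            basis = Vt[:n_components]
            coeffs = (block - mean) @ basis.T
            model.append((mean, basis, coeffs))
        return labels, model

    def cpca_reconstruct(labels, model, shape):
        out = np.empty(shape)
        for c, (mean, basis, coeffs) in enumerate(model):
            out[labels == c] = mean + coeffs @ basis
        return out
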
  • 21.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Learning based compression for real-time rendering of surface light fields2013In: Siggraph 2013 Posters, ACM Press, 2013Conference paper (Other academic)
    Abstract [en]

    Photo-realistic image synthesis in real-time is a key challenge in computer graphics. A number of techniques have been proposed where the light transport in a scene is pre-computed, compressed and used for real-time image synthesis. In this work, we extend this idea and present a technique where the radiance distribution in a scene, including arbitrarily complex materials and light sources, is pre-computed using photo-realistic rendering techniques and stored as surface light fields (SLF) at each surface. An SLF describes the full appearance of each surface in a scene as a 4D function over the spatial and angular domains. An SLF is a complex data set with a large memory footprint, often on the order of several GB per object in the scene. The key contribution in this work is a novel approach for compression of surface light fields that enables real-time rendering of complex scenes. Our learning-based compression technique is based on exemplar orthogonal bases (EOB), and trains a compact dictionary of full-rank orthogonal basis pairs with sparse coefficients. Our results outperform the widely used CPCA method in terms of storage cost, visual quality and rendering speed. Compared to PRT techniques for real-time global illumination, our approach is limited to static scenes but can represent high frequency materials and any type of light source in a unified framework.

  • 22.
    Miandji, Ehsan
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Learning Based Compression of Surface Light Fields for Real-time Rendering of Global Illumination Scenes2013In: Proceedings of ACM SIGGRAPH ASIA 2013, ACM Press, 2013Conference paper (Refereed)
    Abstract [en]

    We present an algorithm for compression and real-time rendering of surface light fields (SLF) encoding the visual appearance of objects in static scenes with high frequency variations. We apply a non-local clustering in order to exploit spatial coherence in the SLF data. To efficiently encode the data in each cluster, we introduce a learning based approach, Clustered Exemplar Orthogonal Bases (CEOB), which trains a compact dictionary of orthogonal basis pairs, enabling efficient sparse projection of the SLF data. In addition, we discuss the application of the traditional Clustered Principal Component Analysis (CPCA) on SLF data, and show that in most cases, CEOB outperforms CPCA, K-SVD and spherical harmonics in terms of memory footprint, rendering performance and reconstruction quality. Our method enables efficient reconstruction and real-time rendering of scenes with complex materials and light sources, not possible to render in real-time using previous methods.

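    Entries 21 and 22 both build on dictionaries of orthogonal basis pairs. The sketch below shows the sparse projection a single such pair (U, V) enables; the actual methods train many pairs (EOB) and additionally cluster the data first (CEOB). Function names and the hard-thresholding rule are illustrative assumptions.

    import numpy as np

    def sparse_project(X, U, V, k):
        # Analysis against the orthogonal pair gives the full coefficient matrix
        S = U.T @ X @ V
        # Keep only the k largest-magnitude coefficients (ties may keep a few more)
        thresh = np.sort(np.abs(S), axis=None)[-k]
        return np.where(np.abs(S) >= thresh, S, 0.0)

    def reconstruct(S_sparse, U, V):
        # Decoding is two small dense products, cheap enough for real-time use
        return U @ S_sparse @ V.T
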
  • 23.
    Tsirikoglou, Apostolia
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ekeberg, Simon
    Swiss International AB, Sweden.
    Vikström, Johan
    Swiss International AB, Sweden.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Unger, Jonas
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    S(wi)SS: A flexible and robust sub-surface scattering shader2014In: Proceedings of SIGRAD 2014 / [ed] Morten Fjeld, 2014Conference paper (Refereed)
    Abstract [en]

    S(wi)SS is a new, flexible, artist friendly multi-layered sub-surface scattering shader that accurately simulates sub-surface scattering for a large range of translucent materials. It is a physically motivated multi-layered approach where the sub-surface scattering effect is generated using one to three layers. It enables seamless mixing of the classical dipole, the better dipole and the quantized diffusion reflectance models in the sub-surface scattering layers. It additionally provides the scattering resulting from front and back illumination, as well as all the BSDF components, in separate render channels, enabling the artist to either use them in a physically accurate way or tweak them independently during compositing to produce the desired result. To demonstrate the usefulness of our approach, we show a set of high quality rendering results from different user scenarios.

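    Of the three diffusion models the shader can mix, the classical dipole is the simplest. Below is a reference-style implementation of its diffuse reflectance profile (Jensen et al. 2001); this is the published formula, not the S(wi)SS shader code, and the default index of refraction is an assumption.

    import numpy as np

    def classical_dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
        # Reduced extinction, reduced albedo and effective transport coefficient
        sigma_t_prime = sigma_a + sigma_s_prime
        alpha_prime = sigma_s_prime / sigma_t_prime
        sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_prime)
        # Diffuse Fresnel reflectance approximation and boundary factor A
        F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
        A = (1.0 + F_dr) / (1.0 - F_dr)
        z_r = 1.0 / sigma_t_prime            # depth of the real source
        z_v = z_r * (1.0 + 4.0 / 3.0 * A)    # height of the mirrored virtual source
        d_r = np.sqrt(r**2 + z_r**2)
        d_v = np.sqrt(r**2 + z_v**2)
        def contrib(z, d):
            return z * (sigma_tr * d + 1.0) * np.exp(-sigma_tr * d) / d**3
        return alpha_prime / (4.0 * np.pi) * (contrib(z_r, d_r) + contrib(z_v, d_v))
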
  • 24.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Bonnet, Gerhard
    SpheronVR, Germany.
    Kaiser, Gunnar
    SpheronVR, Germany.
    Next Generation Image Based Lighting using HDR Video2011In: Proceedings SIGGRAPH '11 ACM SIGGRAPH 2011 Talks, ACM Special Interest Group on Computer Science Education, 2011, p. article no 60-Conference paper (Refereed)
    Abstract [en]

    We present an overview of our recently developed systems pipeline for capture, reconstruction, modeling and rendering of real world scenes based on state-of-the-art high dynamic range video (HDRV). The reconstructed scene representation allows for photo-realistic Image Based Lighting (IBL) in complex environments with strong spatial variations in the illumination. The pipeline comprises the following essential steps:

    1.) Capture - The scene capture is based on a 4MPixel global shutter HDRV camera with a dynamic range of more than 24 f-stops at 30 fps. The HDR output stream is stored as individual uncompressed frames for maximum flexibility. A scene is usually captured using a combination of panoramic light probe sequences [1], and sequences with a smaller field of view to maximize the resolution at regions of special interest in the scene. The panoramic sequences ensure full angular coverage at each position and guarantee that the information required for IBL is captured. The position and orientation of the camera are tracked during capture.

    2.) Scene recovery - Taking one or more HDRV sequences as input, a geometric proxy model of the scene is built using a semi-automatic approach. First, traditional computer vision algorithms such as structure from motion [2] and Manhattan world stereo [3] are used. If necessary, the recovered model is then modified using an interaction scheme based on visualizations of a volumetric representation of the scene radiance computed from the input HDRV sequence. The HDR nature of this volume also enables robust extraction of direct light sources and other high intensity regions in the scene.

    3.) Radiance processing - When the scene proxy geometry has been recovered, the radiance data captured in the HDRV sequences are re-projected onto the surfaces and the recovered light sources. Since most surface points have been imaged from a large number of directions, it is possible to reconstruct view dependent texture maps at the proxy geometries. These 4D data sets describe a combination of detailed geometry that has not been recovered and the radiance reflected from the underlying real surfaces. The view dependent textures are then processed and compactly stored in an adaptive data structure.

    4.) Rendering - Once the geometric and radiometric scene information has been recovered, it is possible to place virtual objects into the real scene and create photo-realistic renderings as illustrated above. The extracted light sources enable efficient sampling and rendering times that are fully comparable to those of traditional virtual computer graphics light sources. No previously described method is capable of capturing and reproducing the angular and spatial variation in the scene illumination in comparable detail.

    We believe that the rapid development of high quality HDRV systems will soon have a large impact on both computer vision and graphics. Following this trend, we are developing theory and algorithms for efficient processing of HDRV sequences and for using the abundance of radiance data that is going to be available.

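    To make step 3 concrete, the sketch below gathers, for each point on the proxy geometry, the radiance samples observed in every tracked HDRV frame. A pinhole camera model and all names are assumptions for illustration; the paper's adaptive 4D storage and filtering are omitted.

    import numpy as np

    def reproject_radiance(frames, poses, K, points):
        # frames: list of HxWx3 HDR images; poses: list of (R, t) world->camera;
        # K: 3x3 intrinsics; points: Nx3 world-space points on the proxy geometry
        samples = [[] for _ in points]
        for frame, (R, t) in zip(frames, poses):
            cam = points @ R.T + t                  # world -> camera coordinates
            in_front = cam[:, 2] > 0
            pix = (cam / cam[:, 2:3]) @ K.T         # perspective divide + intrinsics
            u = pix[:, 0].astype(int)
            v = pix[:, 1].astype(int)
            h, w = frame.shape[:2]
            ok = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
            view_dir = -cam / np.linalg.norm(cam, axis=1, keepdims=True)
            for i in np.nonzero(ok)[0]:
                # Each point accumulates (direction, radiance) pairs, which later
                # form its view dependent texture entry
                samples[i].append((view_dir[i], frame[v[i], u[i]]))
        return samples
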
  • 25.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Hajisharif, Saghi
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Unified reconstruction of RAW HDR video data2016In: High dynamic range video: from acquisition to display and applications / [ed] Frédéric Dufaux, Patrick Le Callet, Rafal K. Mantiuk, Marta Mrak, London, United Kingdom: Academic Press, 2016, 1st, p. 63-82Chapter in book (Other academic)
    Abstract [en]

    Traditional HDR capture has mostly relied on merging images captured with different exposure times. While this works well for static scenes, dynamic scenes pose difficult challenges, as registration of differently exposed images often leads to ghosting and other artifacts. This chapter reviews methods which capture HDR-video frames within a single exposure time, using either multiple synchronized sensors, or multiplexing of the sensor response spatially across the sensor. Most previous HDR reconstruction methods perform demosaicing, noise reduction, resampling (registration), and HDR-fusion in separate steps. This chapter presents a framework for unified HDR-reconstruction, including all steps of the traditional imaging pipeline in a single adaptive filtering operation, and describes an image formation model and a sensor noise model applicable to both single- and multi-sensor systems. The benefits of using raw data directly are demonstrated with examples using input data from multiple synchronized sensors, and single images with varying per-pixel gain.

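    The flavor of the unified reconstruction can be conveyed by a toy per-pixel fusion under an assumed Poisson-plus-read-noise model. The chapter's method additionally folds demosaicing, noise reduction and resampling into a single local adaptive filter, which this sketch deliberately leaves out; all names and default values are illustrative.

    import numpy as np

    def fuse_raw_samples(raw, exposure_time, gain, sigma_read=1.0, saturation=4095):
        # Model: raw_i ~ g_i * t_i * E + noise, so each sample yields an
        # independent radiance estimate raw_i / (g_i * t_i)
        raw = np.asarray(raw, dtype=float)
        t = np.asarray(exposure_time, dtype=float)
        g = np.asarray(gain, dtype=float)
        valid = raw < saturation                       # discard saturated samples
        estimate = raw / (g * t)
        # Variance of each estimate under Poisson shot noise plus Gaussian read noise
        var = (g * raw + sigma_read**2) / (g * t)**2
        w = valid / np.maximum(var, 1e-12)             # inverse-variance weights
        return np.sum(w * estimate) / np.sum(w)

    # e.g. three raw samples of one scene point captured with varying per-pixel gain
    E = fuse_raw_samples(raw=[240.0, 950.0, 3800.0],
                         exposure_time=[1.0, 1.0, 1.0],
                         gain=[1.0, 4.0, 16.0])
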
  • 26.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Löw, Joakim
    Linköping University, Department of Science and Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Spatially varying image based lighting using HDR-video2013In: Computers & graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 37, no 7, p. 923-934Article in journal (Refereed)
    Abstract [en]

    Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing and rendering with real world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storage of light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image based lighting. We show reconstructions of example scenes and resulting production quality renderings of virtual furniture with spatially varying real world illumination including occlusions.

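    One ingredient mentioned above, fitting sampled reflectance to a parametric BRDF model, amounts to a small nonlinear least-squares problem per surface. A Lambertian-plus-Phong model is assumed below purely for illustration; the paper does not prescribe this particular model or code.

    import numpy as np
    from scipy.optimize import least_squares

    def fit_phong(wi, wo, n, L):
        # wi, wo: Nx3 unit light/view directions; n: Nx3 unit normals;
        # L: N observed reflected values for a unit-strength light
        refl = 2.0 * np.sum(wi * n, axis=1, keepdims=True) * n - wi  # mirror of wi
        cos_r = np.clip(np.sum(refl * wo, axis=1), 0.0, 1.0)
        cos_i = np.clip(np.sum(wi * n, axis=1), 0.0, 1.0)

        def residual(p):
            kd, ks, shininess = p
            return (kd / np.pi + ks * cos_r**shininess) * cos_i - L

        fit = least_squares(residual, x0=[0.5, 0.5, 20.0],
                            bounds=([0.0, 0.0, 1.0], [1.0, 10.0, 1000.0]))
        return fit.x  # diffuse albedo, specular weight, shininess
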
  • 27.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Temporally and Spatially Varying Image Based Lighting using HDR-video2013In: Proceedings of the 21st European Signal Processing Conference (EUSIPCO), 2013: Special Session on HDR-video, IEEE , 2013, p. 1-5Conference paper (Refereed)
    Abstract [en]

    In this paper we present novel algorithms and data structures for capturing, processing and rendering with real world lighting conditions based on high dynamic range video sequences. Based on the captured HDR video data, we show how traditional image based lighting can be extended to include illumination variations in both the temporal and the spatial domain. This enables highly realistic renderings where traditional IBL techniques using a single light probe fail to capture important details in the real world lighting environment. To demonstrate the usefulness of our approach, we show examples of both off-line and real-time rendering applications.

  • 28.
    Unger, Jonas
    et al.
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Kronander, Joel
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Larsson, Per
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Gustavson, Stefan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Ynnerman, Anders
    Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology.
    Image Based Lighting using HDR-video2013In: Eurographics 24th Symposium on Rendering: Posters, 2013Conference paper (Other academic)
    Abstract [en]

    It has been widely recognized that lighting plays a key role in the realism and visual interest of computer graphics renderings. This has led to research and development of image based lighting (IBL) techniques where the illumination conditions in real world scenes are captured as high dynamic range (HDR) image panoramas and used as lighting information during rendering. Traditional IBL, where the lighting is captured at a single position in the scene, has now become a widely used tool in most production pipelines. In this poster, we give an overview of a system pipeline where we use HDR-video cameras to extend traditional IBL techniques to capture real world lighting that may include variations in the spatial or temporal domains. We also describe how the capture systems and algorithms for processing and rendering have been incorporated into a robust systems pipeline for production of highly realistic renderings. High dynamic range video based scene capture thus enables highly realistic renderings where traditional image based lighting, using a single light probe, fails to capture important details.
