Physically Based Rendering of Synthetic Objects in Real Environments
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
2015 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis presents methods for photorealistic rendering of virtual objects so that they can be seamlessly composited into images of the real world. To generate predictable and consistent results, we study physically based methods, which simulate how light propagates in a mathematical model of the augmented scene. This computationally challenging problem demands both efficient and accurate simulation of the light transport in the scene and detailed modeling of the geometries, illumination conditions, and material properties. In this thesis, we discuss and formulate the challenges inherent in these steps and present several methods to make the process more efficient.

In particular, the material contained in this thesis addresses four closely related areas: high dynamic range (HDR) imaging, image based lighting (IBL), reflectance modeling, and efficient rendering. The thesis presents a new, statistically motivated algorithm for HDR reconstruction from raw camera data, combining demosaicing, denoising, and HDR fusion in a single processing operation. The thesis also presents practical and robust methods for rendering with spatially and temporally varying illumination conditions captured using omnidirectional HDR video. Furthermore, two new parametric BRDF models are proposed for surfaces exhibiting wide angle gloss. Finally, the thesis presents a physically based light transport algorithm based on Markov chain Monte Carlo methods that allows approximations to be used in place of exact quantities while still converging to the exact result. As illustrated in the thesis, the proposed algorithm enables efficient rendering of scenes with glossy transfer and heterogeneous participating media.

Abstract [sv]

One of the greatest challenges in computer graphics is to synthesize, or render, photorealistic images. Photorealistic rendering is used today in many application areas, such as special effects in film, computer games, product visualization, and virtual reality. In many practical applications of photorealistic rendering, it is important to be able to place virtual objects into photographs so that the virtual objects look real. The IKEA catalogue, for example, is produced in many different versions to suit different countries and regions. The basis for most of the images in the catalogue is usually the same, but symbols and standard furniture dimensions often vary between versions. Instead of photographing each version separately, one can use a base photograph and insert different virtual objects, such as furniture, into the photo. By furnishing a room virtually in this way, instead of physically, one can also quickly test different furnishings and thereby save costs.

This thesis contributes methods and algorithms for rendering photorealistic images of virtual objects that can be blended with real photographs. To render such images, physically based simulations are used of how light interacts with the virtual and real objects in the scene. For photorealistic results, the simulations require careful modeling of the objects' geometry, illumination, and material properties, such as color, texture, and reflectance.

For the virtual objects to look real, it is important to illuminate them with the same light they would have received had they been part of the real environment. It is therefore important to accurately measure and model the lighting conditions at the locations in the scene where the virtual objects are to be placed. For this we use High Dynamic Range (HDR) photography. With HDR photography we can accurately measure the full range of the incident light at a point, from dark shadows to direct light sources. This is not possible with traditional digital cameras, as the dynamic range of ordinary camera sensors is limited. The thesis describes new methods for reconstructing HDR images that produce less noise and fewer artifacts than previous methods. We also present methods for rendering virtual objects that move between regions with different illumination, or where the illumination varies over time. Methods for compactly representing spatially varying illumination are also presented. To accurately describe how glossy surfaces scatter or reflect light, two new parametric models are described that are more faithful to reality than previous reflectance models. The thesis also presents a new method for efficient rendering of scenes that are very computationally demanding, for example scenes with measured lighting conditions, complicated materials, and volumetric models such as smoke, clouds, textiles, biological tissue, and liquids. The method builds on a class of so-called Markov chain Monte Carlo methods for simulating the light transport in the scene, and is inspired by recent results in mathematical statistics.

The methods described in the thesis are presented in the context of photorealistic rendering of virtual objects in real environments, as the majority of the research was carried out in this area. Several of the methods presented in this thesis are, however, applicable in other domains, such as physics simulation, computer vision, and scientific visualization.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2015, p. 135
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1717
National Category
Signal Processing
Identifiers
URN: urn:nbn:se:liu:diva-122588
DOI: 10.3384/diss.diva-122588
ISBN: 978-91-7685-912-4 (print)
OAI: oai:DiVA.org:liu-122588
DiVA id: diva2:868287
Public defence
2015-12-04, Domteatern, Visualiseringscenter C, Kungsgatan 54, Norrköping, 09:15 (English)
Available from: 2015-11-10. Created: 2015-11-10. Last updated: 2019-11-15. Bibliographically approved.
List of papers
1. Photorealistic rendering of mixed reality scenes
2015 (English). In: Computer Graphics Forum, ISSN 0167-7055, E-ISSN 1467-8659, Vol. 34, no. 2, p. 643-665. Article in journal (Refereed). Published.
Abstract [en]

Photo-realistic rendering of virtual objects into real scenes is one of the most important research problems in computer graphics. Methods for capture and rendering of mixed reality scenes are driven by a large number of applications, ranging from augmented reality to visual effects and product visualization. Recent developments in computer graphics, computer vision, and imaging technology have enabled a wide range of new mixed reality techniques including methods of advanced image based lighting, capturing spatially varying lighting conditions, and algorithms for seamlessly rendering virtual objects directly into photographs without explicit measurements of the scene lighting. This report gives an overview of the state-of-the-art in this field, and presents a categorization and comparison of current methods. Our in-depth survey provides a tool for understanding the advantages and disadvantages of each method, and gives an overview of which technique is best suited to a specific problem.

Place, publisher, year, edition, pages
Wiley-Blackwell, 2015
Keywords
Picture/Image Generation—Illumination Estimation, Image-Based Lighting, Reflectance and Shading
National Category
Signal Processing
Identifiers
urn:nbn:se:liu:diva-118542 (URN)
10.1111/cgf.12591 (DOI)
000358326600060 ()
Conference
The 36th Annual Conference of the European Association of Computer Graphics, Eurographics 2015, Zürich, Switzerland, 4th–8th May 2015
Projects
VPS
Funder
Swedish Foundation for Strategic Research, IIS11-0081; Linnaeus research environment CADICS
Available from: 2015-05-31. Created: 2015-05-31. Last updated: 2017-12-04. Bibliographically approved.
2. Pseudo-Marginal Metropolis Light Transport
2015 (English). In: Proceedings SA '15, SIGGRAPH Asia 2015 Technical Briefs, ACM, 2015, p. 13:1-13:4. Conference paper, Published paper (Other academic).
Abstract [en]

Accurate and efficient simulation of light transport in heterogeneous participating media, such as smoke, clouds and fire, plays a key role in the synthesis of visually interesting renderings for e.g. visual effects, computer games and product visualization. However, rendering of scenes with heterogeneous participating media using Metropolis light transport (MLT) algorithms has previously been limited to primary sample space methods or biased approximations of the transmittance in the scene. This paper presents a new sampling strategy for Markov chain Monte Carlo (MCMC) methods, e.g. MLT, based on pseudo-marginal MCMC. Specifically, we show that any positive and unbiased estimator of the target distribution can replace the exact quantity to simulate a Markov chain with a stationary distribution that has a marginal which is the exact target distribution of interest. This enables us to evaluate the transmittance function with recent unbiased estimators, which leads to significantly shorter rendering times. Compared to previous work relying on (biased) ray marching for evaluating transmittance, our method enables simulation of longer Markov chains, a better exploration of the path space, and consequently less image noise for a given computational budget. To demonstrate the usefulness of our pseudo-marginal approach, we compare it to representative methods for efficient rendering of anisotropic heterogeneous participating media and glossy transfer. We show that it performs significantly better in terms of image noise and rendering times than previous techniques. Our method is robust and can easily be implemented in a modern renderer.
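The core pseudo-marginal idea described above, that a positive and unbiased estimator of the target density can stand in for the exact value in the Metropolis-Hastings acceptance test, can be sketched in a few lines. This is an illustrative toy chain on a 1D density, not the paper's renderer; the density `exp(-x*x/2)` and the noisy estimator are invented for the example:

```python
import math
import random

def density_estimate(x, n_samples=8):
    """Positive, unbiased Monte Carlo estimate of an unnormalized target
    density pi(x) = exp(-x*x/2). Each sample multiplies the exact value by
    a noisy factor (1 + 0.1*u), u ~ Uniform(-1, 1), whose mean is 1, so
    the estimator is noisy but unbiased and always strictly positive."""
    total = 0.0
    for _ in range(n_samples):
        u = random.uniform(-1.0, 1.0)
        total += math.exp(-x * x / 2.0) * (1.0 + 0.1 * u)
    return total / n_samples

def pseudo_marginal_mh(steps=10000, step_size=0.8):
    """Random-walk Metropolis-Hastings where the acceptance ratio uses
    *estimates* of the target density. The estimate for the current state
    is cached and reused, never recomputed; this is what makes the chain's
    stationary distribution have the exact target as its marginal."""
    x = 0.0
    w = density_estimate(x)            # cached estimate for current state
    chain = []
    for _ in range(steps):
        x_new = x + random.gauss(0.0, step_size)
        w_new = density_estimate(x_new)
        if random.random() < min(1.0, w_new / w):
            x, w = x_new, w_new        # accept: adopt proposal's estimate
        chain.append(x)
    return chain
```

Replacing `density_estimate` with an exact evaluation recovers ordinary Metropolis-Hastings; in the paper's setting the estimated quantity is the transmittance along light paths through heterogeneous media.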

Place, publisher, year, edition, pages
ACM Digital Library, 2015
National Category
Computer Sciences; Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-122586 (URN)
10.1145/2820903.2820922 (DOI)
978-1-4503-3930-8 (ISBN)
Conference
The 8th ACM SIGGRAPH Conference and Exhibition in Asia (SIGGRAPH Asia 2015), Technical Briefs, 3-5 November 2015, Kobe, Japan
Available from: 2015-11-10. Created: 2015-11-10. Last updated: 2018-01-10. Bibliographically approved.
3. Temporally and Spatially Varying Image Based Lighting using HDR-video
2013 (English). In: Proceedings of the 21st European Signal Processing Conference (EUSIPCO), 2013: Special Session on HDR-video, IEEE, 2013, p. 1-5. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper we present novel algorithms and data structures for capturing, processing and rendering with real world lighting conditions based on high dynamic range video sequences. Based on the captured HDR video data we show how traditional image based lighting can be extended to include illumination variations in both the temporal as well as the spatial domain. This enables highly realistic renderings where traditional IBL techniques using a single light probe fail to capture important details in the real world lighting environment. To demonstrate the usefulness of our approach, we show examples of both off-line and real-time rendering applications.
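As a minimal illustration of how captured HDR illumination drives rendering, the sketch below shades a diffuse surface point from one frame of an omnidirectional HDR video via Monte Carlo sampling. The nearest-neighbour latitude-longitude lookup and single-channel radiance are simplifications invented for the example; they are not the paper's data structures:

```python
import math
import random

def diffuse_shade(env_video, t, normal, albedo, n_samples=1024):
    """Monte Carlo estimate of diffuse image based lighting:
        L_out = (albedo / pi) * integral of L_in(t, w) * max(0, n . w) dw
    L_in comes from frame t of an omnidirectional HDR video, so the
    illumination may vary over time. Directions are sampled uniformly
    over the sphere (pdf = 1 / (4 pi))."""
    frame = env_video[min(t, len(env_video) - 1)]
    rows, cols = len(frame), len(frame[0])
    total = 0.0
    for _ in range(n_samples):
        # uniform direction on the unit sphere
        z = 1.0 - 2.0 * random.random()
        phi = 2.0 * math.pi * random.random()
        s = math.sqrt(max(0.0, 1.0 - z * z))
        w = (s * math.cos(phi), s * math.sin(phi), z)
        cos_term = max(0.0, sum(n * c for n, c in zip(normal, w)))
        if cos_term == 0.0:
            continue
        # nearest-neighbour lat-long lookup into the HDR frame
        theta = math.acos(max(-1.0, min(1.0, w[2])))
        r = min(rows - 1, int(theta / math.pi * rows))
        c = min(cols - 1, int(phi / (2.0 * math.pi) * cols))
        total += frame[r][c] * cos_term
    # divide by N and the uniform pdf, apply the Lambertian albedo / pi
    return (albedo / math.pi) * (4.0 * math.pi / n_samples) * total

# Constant unit-radiance environment: the result approaches the albedo.
env = [[[1.0] * 8 for _ in range(4)]]
print(diffuse_shade(env, 0, (0.0, 0.0, 1.0), albedo=0.5))  # ~ 0.5
```

Swapping `env_video[t]` per frame is what makes the lighting temporally varying; a spatially varying system additionally selects or interpolates the probe by the shaded point's position.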

Place, publisher, year, edition, pages
IEEE, 2013
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-95746 (URN)
000341754500314 ()
Conference
21st European Signal Processing Conference (EUSIPCO 2013), 9-13 September 2013, Marrakech, Morocco
Projects
VPS
Funder
Swedish Research Council; Swedish Foundation for Strategic Research, IIS11-0080
Available from: 2013-07-18. Created: 2013-07-18. Last updated: 2015-11-10. Bibliographically approved.
4. Spatially varying image based lighting using HDR-video
2013 (English). In: Computers & Graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 37, no. 7, p. 923-934. Article in journal (Refereed). Published.
Abstract [en]

Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing and rendering with real world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storage of light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image based lighting. We show reconstructions of example scenes and resulting production quality renderings of virtual furniture with spatially varying real world illumination including occlusions.

Place, publisher, year, edition, pages
Elsevier, 2013
Keywords
High dynamic range video, HDR-video, image based lighting, photo realistic image synthesis
National Category
Media Engineering; Signal Processing
Identifiers
urn:nbn:se:liu:diva-96949 (URN)
10.1016/j.cag.2013.07.001 (DOI)
000325834400015 ()
Projects
VPS
Funder
Swedish Foundation for Strategic Research, IIS11-0081; Swedish Research Council
Available from: 2013-08-30. Created: 2013-08-30. Last updated: 2017-12-06. Bibliographically approved.
5. Unified HDR reconstruction from raw CFA data
2013 (English). In: Proceedings of IEEE International Conference on Computational Photography 2013 / [ed] David Boas, Sylvain Paris, Shmuel Peleg, Todd Zickler, IEEE, 2013, p. 1-9. Conference paper, Published paper (Refereed).
Abstract [en]

HDR reconstruction from multiple exposures poses several challenges. Previous HDR reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion in several steps. We instead present a unifying approach, performing HDR assembly directly from raw sensor data in a single processing operation. Our algorithm includes a spatially adaptive HDR reconstruction based on fitting local polynomial approximations to observed sensor data, using a localized likelihood approach incorporating spatially varying sensor noise. We also present a realistic camera noise model adapted to HDR video. The method allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over state-of-the-art methods, both in terms of flexibility and reconstruction quality.
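The statistical flavour of this kind of HDR assembly can be sketched with a toy single-pixel example: each exposure yields a radiance estimate whose weight is its inverse noise variance under a simple shot-plus-read-noise sensor model. The constants and the model are illustrative, not the paper's calibrated noise model, and the real reconstruction fits local polynomials over neighbourhoods of raw CFA samples rather than fusing one pixel at a time:

```python
def radiance_estimates(raw_values, exposure_times, gain=1.0,
                       read_noise=3.0, full_well=4095):
    """Per-exposure radiance estimates with inverse-variance weights.

    Simple sensor model (illustrative): a raw value y at exposure time t
    estimates radiance x = y / t, with variance dominated by Poisson shot
    noise (~ gain * y) plus Gaussian read noise. Saturated samples carry
    no information and get zero weight."""
    estimates = []
    for y, t in zip(raw_values, exposure_times):
        if y >= full_well:                 # clipped: unusable
            estimates.append((y / t, 0.0))
            continue
        variance = (gain * y + read_noise ** 2) / (t * t)
        estimates.append((y / t, 1.0 / variance))
    return estimates

def fuse_hdr(raw_values, exposure_times, **kwargs):
    """Maximum-likelihood (inverse-variance weighted) fusion of the
    per-exposure radiance estimates into one HDR pixel value."""
    pairs = radiance_estimates(raw_values, exposure_times, **kwargs)
    wsum = sum(w for _, w in pairs)
    if wsum == 0.0:
        # all samples saturated: the shortest exposure gives the
        # largest, and therefore tightest, lower bound on the radiance
        return max(x for x, _ in pairs)
    return sum(x * w for x, w in pairs) / wsum

# Three exposures of one pixel: the short exposure is noisy but valid,
# the long exposure is saturated and ignored.
print(fuse_hdr([40, 640, 4095], [0.001, 0.016, 0.256]))  # ~ 40000
```

Doing this weighting jointly with demosaicing and denoising, instead of as a separate fusion pass, is the unification the paper refers to.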

Place, publisher, year, edition, pages
IEEE, 2013
National Category
Engineering and Technology; Signal Processing
Identifiers
urn:nbn:se:liu:diva-90106 (URN)
10.1109/ICCPhot.2013.6528315 (DOI)
978-1-4673-6463-8 (ISBN)
Conference
5th IEEE International Conference on Computational Photography, ICCP 2013; Cambridge, MA; United States
Projects
VPS
Available from: 2013-03-19. Created: 2013-03-19. Last updated: 2015-11-10.
6. A unified framework for multi-sensor HDR video reconstruction
2014 (English). In: Signal Processing: Image Communication, ISSN 0923-5965, Vol. 29, no. 2, p. 203-215. Article in journal (Refereed). Published.
Abstract [en]

One of the most successful approaches to modern high quality HDR-video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting of local polynomial approximations to observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.

Place, publisher, year, edition, pages
Elsevier, 2014
Keywords
HDR video, HDR fusion, Kernel regression, Radiometric calibration
National Category
Media Engineering
Identifiers
urn:nbn:se:liu:diva-104617 (URN)
10.1016/j.image.2013.08.018 (DOI)
000332999200003 ()
Projects
VPS
Funder
Swedish Foundation for Strategic Research, IIS11-0081
Available from: 2014-02-19. Created: 2014-02-19. Last updated: 2015-11-10. Bibliographically approved.
7. Adaptive dualISO HDR-reconstruction
2015 (English). In: EURASIP Journal on Image and Video Processing, ISSN 1687-5176, E-ISSN 1687-5281. Article in journal (Refereed). Published.
Abstract [en]

With the development of modern image sensors enabling flexible image acquisition, single shot HDR imaging is becoming increasingly popular. In this work we capture single shot HDR images using an imaging sensor with spatially varying gain/ISO. In comparison to previous single shot HDR capture based on a single sensor, this allows all incoming photons to be used in the imaging, instead of wasting incoming light using spatially varying ND-filters, as is common in previous works. The main technical contribution of this work is an extension of previous HDR reconstruction approaches for single shot HDR imaging based on local polynomial approximations [15, 10]. Using a sensor noise model, these works deploy a statistically informed filtering operation to reconstruct HDR pixel values. However, instead of using a fixed filter size, we introduce two novel algorithms for adaptive filter kernel selection. Unlike previous works using adaptive filter kernels [16], our algorithms are based on analyzing the model fit and the expected statistical deviation of the estimate under the sensor noise model. Using an iterative procedure we can then adapt the filter kernel according to the image structure and the statistical image noise. Experimental results show that the proposed filter carefully denoises the noisy image while preserving important image features such as edges and corners, outperforming previous methods. To demonstrate the robustness of our approach, we have used input images captured as raw sensor data with a commercial off-the-shelf camera. To further analyze our algorithm, we have also implemented a camera simulator to evaluate different gain patterns and noise properties of the sensor.
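The adaptive-kernel idea, growing the filter support while the local model still fits the data to within the deviation expected from sensor noise, can be illustrated in 1D. This toy uses a locally constant model and a fixed threshold `k * noise_sigma`; the paper's algorithms operate on 2D raw data with local polynomial fits and a full noise model:

```python
def adaptive_kernel_radius(signal, i, noise_sigma, k=3.0, max_radius=8):
    """Return the largest window radius around index i for which a locally
    constant fit explains every sample to within k standard deviations of
    the assumed sensor noise. Growing stops at image structure (edges)."""
    radius = 0
    for r in range(1, max_radius + 1):
        lo, hi = max(0, i - r), min(len(signal), i + r + 1)
        window = signal[lo:hi]
        mean = sum(window) / len(window)
        # if the window no longer fits the smooth model to within the
        # expected noise deviation, an edge was crossed: stop growing
        if max(abs(s - mean) for s in window) > k * noise_sigma:
            break
        radius = r
    return radius

signal = [10.0] * 12 + [200.0] * 12      # step edge at index 12
print(adaptive_kernel_radius(signal, 5, noise_sigma=1.0))  # stops before the edge
```

In flat regions the kernel grows to the maximum radius (strong denoising); next to an edge it stays small, which is how edges and corners survive the filtering.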

Place, publisher, year, edition, pages
Springer Publishing Company, 2015
Keywords
HDR reconstruction; Single shot HDR imaging; DualISO; Statistical image filtering
National Category
Computer Sciences; Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-122587 (URN)
10.1186/s13640-015-0095-0 (DOI)
000366324500001 ()
Note

Funding agencies: Swedish Foundation for Strategic Research (SSF) [IIS11-0081]; Linköping University Center for Industrial Information Technology (CENIIT); Swedish Research Council through the Linnaeus Environment CADICS

Available from: 2015-11-10. Created: 2015-11-10. Last updated: 2020-02-18. Bibliographically approved.
8. BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces
2012 (English). In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 31, no. 1. Article in journal (Refereed). Published.
Abstract [en]

This article presents two new parametric models of the Bidirectional Reflectance Distribution Function (BRDF), one inspired by the Rayleigh-Rice theory for light scattering from optically smooth surfaces, and one inspired by micro-facet theory. The models represent scattering from a wide range of glossy surface types with high accuracy. In particular, they enable representation of types of surface scattering which previous parametric models have had trouble modeling accurately. In a study of the scattering behavior of measured reflectance data, we investigate what key properties are needed for a model to accurately represent scattering from glossy surfaces. We investigate different parametrizations and how well they match the behavior of measured BRDFs. We also examine the scattering curves which are represented in parametric models by different distribution functions. Based on the insights gained from the study, the new models are designed to provide accurate fittings to the measured data. Importance sampling schemes are developed for the new models, enabling direct use in existing production pipelines. In the resulting renderings we show that the visual quality achieved by the models matches that of the measured data.
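Importance sampling a glossy lobe, as mentioned above, means drawing scattering directions with a pdf that mimics the BRDF's shape. The sketch below uses the standard cosine-power (Phong-style) lobe as an illustrative stand-in; the thesis derives tailored schemes for its Rayleigh-Rice and micro-facet inspired models, which this is not. The helper functions `_cross` and `_normalize` are invented for the example:

```python
import math
import random

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def sample_glossy_lobe(reflection_dir, shininess):
    """Draw a direction with pdf proportional to cos(alpha)^shininess
    around 'reflection_dir' (alpha = angle to the reflection direction),
    returning (direction, pdf). Inverse-CDF sampling of the classic
    cosine-power lobe."""
    u1, u2 = random.random(), random.random()
    n = shininess
    cos_a = u1 ** (1.0 / (n + 1.0))          # inverse CDF in cos(alpha)
    sin_a = math.sqrt(max(0.0, 1.0 - cos_a * cos_a))
    phi = 2.0 * math.pi * u2
    # orthonormal frame (t, b, r) around the reflection direction
    r = _normalize(reflection_dir)
    helper = (1.0, 0.0, 0.0) if abs(r[0]) < 0.9 else (0.0, 1.0, 0.0)
    t = _normalize(_cross(helper, r))
    b = _cross(r, t)
    d = tuple(sin_a * math.cos(phi) * t[i]
              + sin_a * math.sin(phi) * b[i]
              + cos_a * r[i] for i in range(3))
    pdf = (n + 1.0) / (2.0 * math.pi) * cos_a ** n
    return d, pdf

d, pdf = sample_glossy_lobe((0.0, 0.0, 1.0), shininess=32)
print(d, pdf)   # a direction clustered near +z, and its pdf
```

In a Monte Carlo renderer the sample's contribution is weighted by BRDF/pdf; the closer the sampling pdf tracks the actual lobe, the lower the variance, which is why each new BRDF model ships with its own scheme.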

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2012
Keywords
BRDF, gloss, Rayleigh-Rice, global illumination, Monte Carlo, importance sampling
National Category
Computer Systems
Identifiers
urn:nbn:se:liu:diva-75045 (URN)
10.1145/2077341.2077350 (DOI)
000300622500009 ()
Projects
CADICS, ELLIIT
Note
Funding agencies: Swedish Foundation for Strategic Research through the Strategic Research Centre MOVIII (A3:05:193); Swedish Knowledge Foundation (2009/0091); Forskning och Framtid (ITN 2009-00116); Swedish Research Council through the Linnaeus Center for Control, Autonomy, and Decision-making in Complex Systems (CADICS); Excellence Center at Linköping and Lund in Information Technology (ELLIIT)

Available from: 2012-02-15. Created: 2012-02-15. Last updated: 2017-12-07.

Open Access in DiVA

fulltext (1481 kB)

Other links

Publisher's full text

Authority records

Kronander, Joel