Publications (10 of 17)
Kronander, J., Gustavson, S., Bonnet, G., Ynnerman, A. & Unger, J. (2014). A unified framework for multi-sensor HDR video reconstruction. Signal Processing: Image Communication, 29(2), 203-215
A unified framework for multi-sensor HDR video reconstruction
2014 (English). In: Signal Processing: Image Communication, ISSN 0923-5965, Vol. 29, no 2, p. 203-215. Article in journal (Refereed). Published.
Abstract [en]

One of the most successful approaches to modern high quality HDR-video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting of local polynomial approximations to observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.
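The reconstruction described above fits local polynomial approximations to raw samples, weighted by a camera noise model. As a rough illustration, the zeroth-order (weighted-average) case reduces to inverse-variance exposure fusion. A minimal Python sketch, assuming a simple gain-plus-readout noise model; the names `hdr_fuse`, `readout_sigma` and `sat_level` are illustrative, not from the paper:

```python
def hdr_fuse(samples, exposures, readout_sigma=1.0, gain=1.0, sat_level=4095):
    """Combine raw samples of one scene point taken at different exposures.

    Zeroth-order special case of a local polynomial fit: each unsaturated
    sample y with exposure t estimates the radiance as y / t, with variance
    (gain * y + readout_sigma**2) / t**2 under an assumed shot-plus-readout
    noise model. Samples are combined with inverse-variance weights.
    """
    num = den = 0.0
    for y, t in zip(samples, exposures):
        if y >= sat_level:          # saturated samples carry no information
            continue
        var = (gain * y + readout_sigma ** 2) / t ** 2
        w = 1.0 / var
        num += w * (y / t)
        den += w
    return num / den
```

For two consistent samples, e.g. `hdr_fuse([100.0, 1000.0], [1.0, 10.0])`, both estimate the same radiance, and the fused value stays at that level while favouring the lower-noise long exposure.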

Place, publisher, year, edition, pages
Elsevier, 2014
Keywords
HDR video, HDR fusion, Kernel regression, Radiometric calibration
National Category
Media Engineering
Identifiers
urn:nbn:se:liu:diva-104617 (URN)
10.1016/j.image.2013.08.018 (DOI)
000332999200003 ()
Projects
VPS
Funder
Swedish Foundation for Strategic Research, IIS11-0081
Available from: 2014-02-19. Created: 2014-02-19. Last updated: 2015-11-10. Bibliographically approved.
Unger, J., Kronander, J., Larsson, P., Gustavson, S. & Ynnerman, A. (2013). Image Based Lighting using HDR-video. In: Eurographics 24th Symposium on Rendering: Posters. Paper presented at 24th Eurographics Symposium on Rendering, 19-21 June 2013, Zaragoza, Spain.
Image Based Lighting using HDR-video
2013 (English). In: Eurographics 24th Symposium on Rendering: Posters, 2013. Conference paper, Poster (with or without abstract) (Other academic).
Abstract [en]

It has been widely recognized that lighting plays a key role in the realism and visual interest of computer graphics renderings. This has led to research and development of image based lighting (IBL) techniques, where the illumination conditions in real world scenes are captured as high dynamic range (HDR) image panoramas and used as lighting information during rendering. Traditional IBL, where the lighting is captured at a single position in the scene, has now become a widely used tool in most production pipelines. In this poster, we give an overview of a system pipeline where we use HDR-video cameras to extend traditional IBL techniques to capture real world lighting that may include variations in the spatial or temporal domains. We also describe how the capture systems and algorithms for processing and rendering have been incorporated into a robust system pipeline for production of highly realistic renderings. High dynamic range video based scene capture thus enables highly realistic renderings where traditional image based lighting, using a single light probe, fails to capture important details.

Keywords
High dynamic range video, Photo-realistic image synthesis
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-102870 (URN)
Conference
24th Eurographics Symposium on Rendering, 19-21 June 2013, Zaragoza, Spain
Available from: 2014-01-04. Created: 2014-01-04. Last updated: 2015-09-22. Bibliographically approved.
Unger, J., Kronander, J., Larsson, P., Gustavson, S., Löw, J. & Ynnerman, A. (2013). Spatially varying image based lighting using HDR-video. Computers & Graphics, 37(7), 923-934
Spatially varying image based lighting using HDR-video
2013 (English). In: Computers & Graphics, ISSN 0097-8493, E-ISSN 1873-7684, Vol. 37, no 7, p. 923-934. Article in journal (Refereed). Published.
Abstract [en]

Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing and rendering with real world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storage of light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image based lighting. We show reconstructions of example scenes and resulting production quality renderings of virtual furniture with spatially varying real world illumination including occlusions.

Place, publisher, year, edition, pages
Elsevier, 2013
Keywords
High dynamic range video, HDR-video, image based lighting, photo realistic image synthesis
National Category
Media Engineering; Signal Processing
Identifiers
urn:nbn:se:liu:diva-96949 (URN)
10.1016/j.cag.2013.07.001 (DOI)
000325834400015 ()
Projects
VPS
Funder
Swedish Foundation for Strategic Research, IIS11-0081; Swedish Research Council
Available from: 2013-08-30. Created: 2013-08-30. Last updated: 2017-12-06. Bibliographically approved.
Unger, J., Kronander, J., Larsson, P., Gustavson, S. & Ynnerman, A. (2013). Temporally and Spatially Varying Image Based Lighting using HDR-video. In: Proceedings of the 21st European Signal Processing Conference (EUSIPCO), 2013: Special Session on HDR-video. Paper presented at 21st European Signal Processing Conference (EUSIPCO 2013), 9-13 September 2013, Marrakech, Morocco (pp. 1-5). IEEE
Temporally and Spatially Varying Image Based Lighting using HDR-video
2013 (English). In: Proceedings of the 21st European Signal Processing Conference (EUSIPCO), 2013: Special Session on HDR-video, IEEE, 2013, p. 1-5. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper we present novel algorithms and data structures for capturing, processing and rendering with real world lighting conditions based on high dynamic range video sequences. Based on the captured HDR video data we show how traditional image based lighting can be extended to include illumination variations in both the temporal as well as the spatial domain. This enables highly realistic renderings where traditional IBL techniques using a single light probe fail to capture important details in the real world lighting environment. To demonstrate the usefulness of our approach, we show examples of both off-line and real-time rendering applications.

Place, publisher, year, edition, pages
IEEE, 2013
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-95746 (URN)
000341754500314 ()
Conference
21st European Signal Processing Conference (EUSIPCO 2013), 9-13 September 2013, Marrakech, Morocco
Projects
VPS
Funder
Swedish Research Council; Swedish Foundation for Strategic Research, IIS11-0080
Available from: 2013-07-18. Created: 2013-07-18. Last updated: 2015-11-10. Bibliographically approved.
Kronander, J., Gustavson, S., Bonnet, G. & Unger, J. (2013). Unified HDR reconstruction from raw CFA data. In: David Boas, Sylvain Paris, Shmuel Peleg, Todd Zickler (Ed.), Proceedings of IEEE International Conference on Computational Photography 2013. Paper presented at 5th IEEE International Conference on Computational Photography, ICCP 2013; Cambridge, MA; United States (pp. 1-9). IEEE
Unified HDR reconstruction from raw CFA data
2013 (English). In: Proceedings of IEEE International Conference on Computational Photography 2013 / [ed] David Boas, Sylvain Paris, Shmuel Peleg, Todd Zickler, IEEE, 2013, p. 1-9. Conference paper, Published paper (Refereed).
Abstract [en]

HDR reconstruction from multiple exposures poses several challenges. Previous HDR reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion in several steps. We instead present a unifying approach, performing HDR assembly directly from raw sensor data in a single processing operation. Our algorithm includes a spatially adaptive HDR reconstruction based on fitting local polynomial approximations to observed sensor data, using a localized likelihood approach incorporating spatially varying sensor noise. We also present a realistic camera noise model adapted to HDR video. The method allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over state-of-the-art methods, both in terms of flexibility and reconstruction quality.

Place, publisher, year, edition, pages
IEEE, 2013
National Category
Engineering and Technology Signal Processing
Identifiers
urn:nbn:se:liu:diva-90106 (URN)
10.1109/ICCPhot.2013.6528315 (DOI)
978-1-4673-6463-8 (ISBN)
Conference
5th IEEE International Conference on Computational Photography, ICCP 2013; Cambridge, MA; United States
Projects
VPS
Available from: 2013-03-19. Created: 2013-03-19. Last updated: 2015-11-10.
Gustavson, S. (2012). 2D Shape Rendering by Distance Fields. In: Patrick Cozzi and Christophe Riccio (Ed.), OpenGL Insights: OpenGL, OpenGL ES, and WebGL community experiences (pp. 173-182). CRC Press
2D Shape Rendering by Distance Fields
2012 (English). In: OpenGL Insights: OpenGL, OpenGL ES, and WebGL community experiences / [ed] Patrick Cozzi and Christophe Riccio, CRC Press, 2012, p. 173-182. Chapter in book (Other academic).
Abstract [en]

We present a method for real time rendering of anti-aliased curved contours, combining recent results from research on distance transforms and modern GPU shading using GLSL. The method is capable of rendering glyphs and symbols of very high quality at arbitrary levels of magnification and minification, and it is both versatile and easy to use.
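The rendering step of this approach maps an interpolated distance value to a pixel opacity in the fragment shader. A Python sketch of that mapping (the chapter's actual code is GLSL; the name `shape_alpha` and the default filter width are illustrative assumptions):

```python
def smoothstep(edge0, edge1, x):
    """Hermite interpolation between two edges, as in GLSL's smoothstep()."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def shape_alpha(signed_dist, aa_width=0.7):
    """Opacity for a pixel at the given signed distance from the contour
    (negative inside the shape). Distances beyond +/- aa_width clamp to
    fully opaque/transparent; in between, the edge is smoothly anti-aliased.
    In a shader, aa_width would be derived from screen-space derivatives so
    the transition stays about one pixel wide at any magnification."""
    return smoothstep(aa_width, -aa_width, signed_dist)
```

Because the opacity is recomputed from the distance field each frame, the contour stays sharp and anti-aliased at arbitrary zoom, which is the property the abstract highlights.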

Place, publisher, year, edition, pages
CRC Press, 2012
Keywords
distance fields, level sets, contour rendering, anti-aliasing, GLSL shader
National Category
Media and Communication Technology; Other Computer and Information Science
Identifiers
urn:nbn:se:liu:diva-91558 (URN)
978-1-4398-9376-0 (ISBN)
Available from: 2013-04-26. Created: 2013-04-26. Last updated: 2018-01-11.
McEwan, I., Sheets, D., Gustavson, S. & Richardson, M. (2012). Efficient Computational Noise in GLSL. Journal of Graphics Tools, 16(2), 85-94
Efficient Computational Noise in GLSL
2012 (English). In: Journal of Graphics Tools, ISSN 2165-347X, Vol. 16, no 2, p. 85-94. Article in journal (Refereed). Published.
Abstract [en]

We present GLSL implementations of Perlin noise and Perlin simplex noise that run fast enough for practical consideration on current generation GPU hardware. The key benefits are that the functions are purely computational (i.e., they use neither textures nor lookup tables) and that they are implemented in GLSL version 1.20, which means they are compatible with all current GLSL-capable platforms, including OpenGL ES 2.0 and WebGL 1.0. Their performance is on par with previously presented GPU implementations of noise, they are very convenient to use, and they scale well with increasing parallelism in present and upcoming GPU architectures.
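To indicate what "purely computational" means here: lattice gradients are derived on the fly from a hash instead of a lookup table or texture. A 1D Python sketch of table-free gradient noise; the integer hash and the function names are illustrative stand-ins, not the paper's permutation-polynomial construction:

```python
import math

def hash01(i):
    """Integer hash to [0, 1) -- an illustrative stand-in for the paper's
    table-free permutation polynomial."""
    i = (i ^ 61) ^ (i >> 16)
    i = (i * 9) & 0xFFFFFFFF
    i ^= i >> 4
    i = (i * 0x27D4EB2D) & 0xFFFFFFFF
    i ^= i >> 15
    return i / 2 ** 32

def gradient_noise_1d(x):
    """1D Perlin-style gradient noise: hash a pseudo-random gradient at each
    lattice point, then blend the two ramps with Perlin's quintic fade."""
    i0 = math.floor(x)
    f = x - i0
    g0 = 2.0 * hash01(i0) - 1.0          # gradients in [-1, 1)
    g1 = 2.0 * hash01(i0 + 1) - 1.0
    u = f * f * f * (f * (f * 6.0 - 15.0) + 10.0)  # quintic fade curve
    return (1.0 - u) * g0 * f + u * g1 * (f - 1.0)
```

As with all gradient noise, the value is exactly zero at the lattice points and varies smoothly in between; no state beyond the input coordinate is needed, which is what makes the approach suitable for stateless shader invocations.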

Place, publisher, year, edition, pages
Taylor & Francis, 2012
Keywords
Perlin noise, GLSL, GPU, real time, shading
National Category
Media and Communication Technology
Identifiers
urn:nbn:se:liu:diva-78295 (URN)
10.1080/2151237X.2012.649621 (DOI)
Available from: 2013-04-02. Created: 2012-06-08. Last updated: 2018-01-12. Bibliographically approved.
Gustavson, S. (2012). Procedural Textures in GLSL. In: Patrick Cozzi and Christophe Riccio (Ed.), OpenGL Insights: OpenGL, OpenGL ES and WebGL community experiences (pp. 105-119). CRC Press
Procedural Textures in GLSL
2012 (English). In: OpenGL Insights: OpenGL, OpenGL ES and WebGL community experiences / [ed] Patrick Cozzi and Christophe Riccio, CRC Press, 2012, p. 105-119. Chapter in book (Other academic).
Abstract [en]

Procedural shading has been a versatile and popular tool for off-line rendering for decades. With the ever increasing speed and computational capabilities of modern GPUs, it is now becoming possible to use procedural shading also for real time rendering. This chapter is an introduction to some classic procedural shading techniques, adapted for real time use.
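One such classic technique is analytic anti-aliasing of a procedural pattern: instead of point-sampling a stripe function, integrate it over the pixel's filter footprint in closed form. A Python sketch (the chapter's code is GLSL; the function name and the period-1 stripe pattern are illustrative):

```python
import math

def stripes_box_filtered(x, filter_width):
    """Box-filtered period-1 stripe pattern (1 on [0, 0.5), 0 on [0.5, 1)):
    the average of the square wave over [x - w/2, x + w/2], computed
    analytically, so the pattern fades towards its mean of 0.5 instead of
    aliasing when the stripes become narrower than a pixel."""
    def integral(t):
        # Integral of the square wave from 0 to t.
        k = math.floor(t)
        return 0.5 * k + min(t - k, 0.5)
    w = filter_width
    return (integral(x + 0.5 * w) - integral(x - 0.5 * w)) / w
```

With a very large filter_width the result approaches 0.5, the pattern's average, which is exactly the minification behaviour that suppresses aliasing in real time use.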

Place, publisher, year, edition, pages
CRC Press, 2012
Keywords
OpenGL, GLSL, procedural shading, noise
National Category
Media and Communication Technology; Other Computer and Information Science
Identifiers
urn:nbn:se:liu:diva-91530 (URN)
978-1-4398-9376-0 (ISBN)
Available from: 2013-04-26. Created: 2013-04-26. Last updated: 2018-01-11. Bibliographically approved.
Kronander, J., Gustavson, S. & Unger, J. (2012). Real-time HDR video reconstruction for multi-sensor systems. In: ACM SIGGRAPH 2012 Posters. Paper presented at ACM SIGGRAPH 2012 (pp. 65). New York, NY, USA: ACM Press
Real-time HDR video reconstruction for multi-sensor systems
2012 (English). In: ACM SIGGRAPH 2012 Posters, New York, NY, USA: ACM Press, 2012, p. 65. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

HDR video is an emerging field of technology, with a few camera systems currently in existence [Myszkowski et al. 2008]. Multi-sensor systems [Tocci et al. 2011] have recently proved to be particularly promising due to superior robustness against temporal artifacts, correct motion blur, and high light efficiency. Previous HDR reconstruction methods for multi-sensor systems have assumed pixel perfect alignment of the physical sensors. This is, however, very difficult to achieve in practice. It may even be the case that reflections in beam splitters make it impossible to match the arrangement of the Bayer filters between sensors. We therefore present a novel reconstruction method specifically designed to handle the case of non-negligible misalignments between the sensors. Furthermore, while previous reconstruction techniques have considered HDR assembly, debayering and denoising as separate problems, our method is capable of simultaneous HDR assembly, debayering and smoothing of the data (denoising). The method is also general in that it allows reconstruction to an arbitrary output resolution and mapping. The algorithm is implemented in CUDA, and shows video-rate performance for an experimental HDR video platform consisting of four high quality CCD sensors of 2336x1756 pixels each, imaging the scene through a common optical system. ND-filters of different densities are placed in front of the sensors to capture a dynamic range of 24 f-stops.

Place, publisher, year, edition, pages
New York, NY, USA: ACM Press, 2012
Keywords
High dynamic range video, image reconstruction
National Category
Media Engineering
Identifiers
urn:nbn:se:liu:diva-87172 (URN)
10.1145/2342896.2342975 (DOI)
Conference
ACM SIGGRAPH 2012
Projects
VPS
Available from: 2013-01-14. Created: 2013-01-11. Last updated: 2015-09-22. Bibliographically approved.
Gustavson, S. & Strand, R. (2011). Anti-aliased Euclidean distance transform. Pattern Recognition Letters, 32(2), 252-257
Anti-aliased Euclidean distance transform
2011 (English). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 32, no 2, p. 252-257. Article in journal (Refereed). Published.
Abstract [en]

We present a modified distance measure for use with distance transforms of anti-aliased, area sampled grayscale images of arbitrary binary contours. The modified measure can be used in any vector-propagation Euclidean distance transform. Our test implementation in the traditional SSED8 algorithm shows a considerable improvement in accuracy and homogeneity of the distance field compared to a traditional binary image transform. At the expense of a 10x slowdown for a particular image resolution, we achieve an accuracy comparable to a binary transform on a supersampled image with 16 × 16 times higher resolution, which would require 256 times more computations and memory.
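The core of the modified measure is that a partially covered edge pixel yields a sub-pixel distance estimate from its grayscale value, rather than 0 or infinity. A simplified Python sketch of the seeding step, assuming a straight, axis-aligned edge through the pixel (the paper additionally corrects for the edge direction using the local gradient; `edt_seed` is an illustrative name):

```python
import math

def edt_seed(coverage):
    """Seed distance for an anti-aliased Euclidean distance transform.

    coverage is the area-sampled pixel value in [0, 1]. Fully outside
    pixels start at +inf (distance not yet known), fully inside pixels at
    0, and partially covered edge pixels get the signed sub-pixel estimate
    0.5 - coverage: a half-covered pixel has the contour passing exactly
    through its centre."""
    if coverage <= 0.0:
        return math.inf
    if coverage >= 1.0:
        return 0.0
    return 0.5 - coverage
```

These seeds can then be propagated by any vector-propagation EDT such as SSED8, which is how the test implementation in the abstract uses the modified measure.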

Place, publisher, year, edition, pages
Elsevier, 2011
Keywords
distance transform, level set
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-66968 (URN)
10.1016/j.patrec.2010.08.010 (DOI)
Available from: 2011-03-23. Created: 2011-03-23. Last updated: 2018-01-12. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-2559-6122
