Real-time HDR video reconstruction for multi-sensor systems
2012 (English). In: ACM SIGGRAPH 2012 Posters, New York, NY, USA: ACM Press, 2012, p. 65. Conference paper, Poster (Refereed)
HDR video is an emerging field of technology, with a few camera systems currently in existence [Myszkowski et al. 2008]. Multi-sensor systems [Tocci et al. 2011] have recently proved to be particularly promising due to their superior robustness against temporal artifacts, correct motion blur, and high light efficiency. Previous HDR reconstruction methods for multi-sensor systems have assumed pixel-perfect alignment of the physical sensors. This is, however, very difficult to achieve in practice. It may even be the case that reflections in beam splitters make it impossible to match the arrangement of the Bayer filters between sensors. We therefore present a novel reconstruction method specifically designed to handle the case of non-negligible misalignments between the sensors. Furthermore, while previous reconstruction techniques have treated HDR assembly, debayering, and denoising as separate problems, our method performs simultaneous HDR assembly, debayering, and smoothing of the data (denoising). The method is also general in that it allows reconstruction to an arbitrary output resolution and mapping. The algorithm is implemented in CUDA and achieves video-rate performance for an experimental HDR video platform consisting of four 2336×1756-pixel high-quality CCD sensors imaging the scene through a common optical system. ND filters of different densities are placed in front of the sensors to capture a dynamic range of 24 f-stops.
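The basic idea of assembling an HDR radiance estimate from sensors behind ND filters of different densities can be sketched as follows. This is a minimal illustration of exposure-weighted HDR assembly in general, not the authors' misalignment-aware method; the hat-shaped weighting function, the function name, and the transmittance parameterization are assumptions for the example.

```python
import numpy as np

def assemble_hdr(images, nd_transmittances, saturation=1.0):
    """Combine per-sensor images of the same scene, each taken behind
    an ND filter, into one relative-radiance estimate per pixel.

    images: list of float arrays with values in [0, saturation].
    nd_transmittances: fraction of light each sensor's ND filter passes
        (e.g. 1.0 for no filter, 0.25 for a 2-stop ND filter).
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, nd_transmittances):
        # Hat-shaped weight: trust mid-range pixels most, down-weight
        # values near the noise floor and near sensor saturation.
        w = img * (saturation - img)
        radiance = img / t  # undo the ND attenuation
        num += w * radiance
        den += w
    # Avoid division by zero where all sensors are dark or saturated.
    return num / np.maximum(den, 1e-12)
```

For a pixel that is well exposed on several sensors, the per-sensor radiance estimates agree after dividing out the ND transmittance, and the weighted average simply reduces noise; where one sensor saturates, its weight goes to zero and the darker sensors take over.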
Place, publisher, year, edition, pages
New York, NY, USA: ACM Press, 2012. p. 65.
Keywords
High dynamic range video, image reconstruction
Identifiers
URN: urn:nbn:se:liu:diva-87172
DOI: 10.1145/2342896.2342975
OAI: oai:DiVA.org:liu-87172
DiVA: diva2:586566
ACM SIGGRAPH 2012