HDR image reconstruction from a single exposure using deep CNNs
Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
Linköping University, Department of Science and Technology. Linköping University, Faculty of Science & Engineering.
University of Cambridge, England.
University of Cambridge, England.
2017 (English). In: ACM Transactions on Graphics, ISSN 0730-0301, E-ISSN 1557-7368, Vol. 36, no. 6, article id 178. Article in journal (Refereed). Published.
Abstract [en]

Camera sensors can capture only a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed to account for the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution, visually convincing HDR results in a wide range of situations, and that it generalizes well to the reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare with existing methods for HDR expansion, and show high-quality results also for image-based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display, which shows that the reconstructed HDR images are visually convincing, with large improvements compared to existing methods.
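The augmentation step described in the abstract, simulating sensor saturation from HDR ground truth, can be sketched as follows. The exposure scaling, simple gamma camera response, and 8-bit quantization below are illustrative assumptions for a minimal camera model, not the paper's exact simulation pipeline:

```python
import numpy as np

def simulate_saturation(hdr, exposure=1.0, gamma=1 / 2.2, bits=8):
    """Simulate a low dynamic range capture of an HDR image:
    scale by exposure, apply a gamma-curve camera response,
    clip saturated values, and quantize to the sensor bit depth.
    (Hypothetical camera model for illustration.)"""
    ldr = (hdr * exposure) ** gamma        # assumed camera response function
    ldr = np.clip(ldr, 0.0, 1.0)           # sensor saturation: highlights are lost
    levels = 2 ** bits - 1
    return np.round(ldr * levels) / levels # quantization to discrete sensor levels

# Toy HDR patch with luminance values exceeding the sensor range
hdr = np.array([[0.1, 0.5],
                [2.0, 8.0]])
ldr = simulate_saturation(hdr)
mask = ldr >= 1.0  # saturated pixels, i.e. where the CNN must predict lost content
```

A network trained on such (LDR, HDR) pairs learns to fill in the clipped regions marked by `mask`, which is the single-exposure reconstruction problem the paper addresses.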

Place, publisher, year, edition, pages
Association for Computing Machinery, 2017. Vol. 36, no. 6, article id 178.
Keyword [en]
HDR reconstruction; inverse tone-mapping; deep learning; convolutional network
National Category
Media Engineering
Identifiers
URN: urn:nbn:se:liu:diva-143943
DOI: 10.1145/3130800.3130816
ISI: 000417448700008
OAI: oai:DiVA.org:liu-143943
DiVA: diva2:1169758
Conference
10th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia
Note

Funding Agencies: Linköping University Center for Industrial Information Technology (CENIIT); Swedish Science Council [2015-05180]; Wallenberg Autonomous Systems Program (WASP)

Available from: 2017-12-29 Created: 2017-12-29 Last updated: 2017-12-29

Open Access in DiVA

No full text in DiVA

By author/editor
Eilertsen, Gabriel; Kronander, Joel; Unger, Jonas