Improving RGB-D Scene Reconstruction using Rolling Shutter Rectification
Ovrén, Hannes (Linköping University, Department of Electrical Engineering, Computer Vision; The Institute of Technology)
Forssén, Per-Erik (Linköping University, Department of Electrical Engineering, Computer Vision; The Institute of Technology) ORCID iD: 0000-0002-5698-5983
Törnqvist, David (Linköping University, Department of Electrical Engineering, Automatic Control; The Institute of Technology)
2015 (English). In: New Development in Robot Vision / [ed] Yu Sun, Aman Behal & Chi-Kit Ronald Chung, Springer Berlin/Heidelberg, 2015, pp. 55-71. Chapter in book (Refereed)
Abstract [en]

Scene reconstruction, i.e. the process of creating a 3D representation (mesh) of some real-world scene, has recently become easier with the advent of cheap RGB-D sensors (e.g. the Microsoft Kinect).

Many such sensors use rolling shutter cameras, which produce geometrically distorted images when they are moving. To mitigate these rolling shutter distortions, we propose a method that uses an attached gyroscope to rectify the depth scans. We also present a simple scheme to calibrate the relative pose and time synchronization between the gyro and a rolling shutter RGB-D sensor.
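To make the idea concrete, here is a minimal Python sketch (not the authors' code) of gyro-based rolling shutter rectification: gyro rates are integrated into rotations, and each image row's back-projected points are rotated into the frame of the first row. All names and parameters are illustrative; the Euler integration, the nearest-sample rotation lookup, and the assumption that the gyro rates are already expressed in the camera frame and time-synchronized (which the chapter's calibration scheme would provide) are simplifications, and translation, which a gyro alone cannot observe, is ignored.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: rotation vector (rad) -> 3x3 rotation matrix."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def integrate_gyro(gyro_times, gyro_rates):
    """Integrate angular rates (camera frame, rad/s) into absolute rotations
    relative to the first gyro sample, by simple Euler steps."""
    Rs = [np.eye(3)]
    for i in range(len(gyro_times) - 1):
        dt = gyro_times[i + 1] - gyro_times[i]
        Rs.append(Rs[-1] @ so3_exp(gyro_rates[i] * dt))
    return Rs

def rectify_depth_scan(depth, K, gyro_times, Rs, t0, t_readout):
    """Rotate each row's back-projected points into the first row's frame.

    depth: (H, W) depth image; K: 3x3 intrinsics; t0: capture time of row 0;
    t_readout: time to read out all rows. Returns an (M, 3) point cloud;
    resampling back into a depth image is left out of this sketch.
    """
    H, W = depth.shape
    Kinv = np.linalg.inv(K)
    u = np.arange(W)
    R0 = Rs[max(np.searchsorted(gyro_times, t0) - 1, 0)]
    points = []
    for v in range(H):
        t_row = t0 + (v / H) * t_readout            # capture time of row v
        i = np.clip(np.searchsorted(gyro_times, t_row) - 1, 0, len(Rs) - 1)
        R_rel = R0.T @ Rs[i]                        # row-v frame -> row-0 frame
        valid = depth[v] > 0
        n = valid.sum()
        rays = Kinv @ np.stack([u[valid], np.full(n, v), np.ones(n)])
        X = rays * depth[v, valid]                  # 3D points in row-v frame
        points.append((R_rel @ X).T)
    return np.concatenate(points)
```

A full pipeline would then resample the rectified point cloud back into a depth image before handing it to Kinect Fusion; that step is omitted here.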

For scene reconstruction we use the Kinect Fusion algorithm to produce meshes. We create meshes from both raw and rectified depth scans, and these are then compared to a ground truth mesh. The types of motion we investigate are pan, tilt, and wobble (shaking).
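As a rough illustration of how such a mesh comparison could be scored (the chapter's actual evaluation protocol is not reproduced here, and all names below are hypothetical), one crude proxy is the RMS nearest-neighbour distance from reconstructed vertices to ground-truth vertices:

```python
import numpy as np
from scipy.spatial import cKDTree

def mesh_rms_error(vertices, gt_vertices):
    """RMS nearest-neighbour distance from reconstructed vertices (M, 3)
    to ground-truth vertices (N, 3); a crude stand-in for a proper
    point-to-surface comparison, and it assumes the meshes are aligned."""
    dists, _ = cKDTree(gt_vertices).query(vertices)
    return np.sqrt(np.mean(dists ** 2))
```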

As our method relies on gyroscope readings, the amount of computation required is negligible compared to the cost of running Kinect Fusion.

This chapter is an extension of a paper at the IEEE Workshop on Robot Vision [10]. Compared to that paper, we have improved the rectification to also correct for lens distortion, and we use a coarse-to-fine search to find the time shift more quickly. We have extended our experiments to also investigate the effects of lens distortion, and to use more accurate ground truth. The experiments demonstrate that correction of rolling shutter effects yields a larger improvement of the 3D model than correction for lens distortion.
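The abstract does not spell out the coarse-to-fine time-shift search; the general pattern is sketched below under the assumption of some misalignment cost over candidate shifts (e.g. between image-based and gyro-based rotation estimates). The `cost` callable and all parameter names are illustrative, not the chapter's interface.

```python
import numpy as np

def coarse_to_fine_shift(cost, lo, hi, levels=4, samples=11):
    """Sample the shift interval uniformly, keep the best candidate,
    then shrink the interval around it and refine, repeating per level."""
    best = lo
    for _ in range(levels):
        shifts = np.linspace(lo, hi, samples)
        costs = [cost(s) for s in shifts]
        best = shifts[int(np.argmin(costs))]
        step = shifts[1] - shifts[0]
        lo, hi = best - step, best + step
    return best
```

Each level narrows the search window to one coarse step around the current best, so the resolution improves geometrically while the number of cost evaluations stays fixed per level.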

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2015. pp. 55-71.
Series
Cognitive Systems Monographs, ISSN 1867-4925 ; 23
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-114344
DOI: 10.1007/978-3-662-43859-6_4
ISBN: 978-3-662-43858-9 (print)
ISBN: 978-3-662-43859-6 (print)
OAI: oai:DiVA.org:liu-114344
DiVA: diva2:789457
Projects
Learnable Camera Motion Models
Available from: 2015-02-19. Created: 2015-02-19. Last updated: 2015-12-10. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Publisher's full text
Find book at a Swedish library
Find book in another country
