Ringaby, Erik
Publications (10 of 15)
Zografos, V., Lenz, R., Ringaby, E., Felsberg, M. & Nordberg, K. (2015). Fast segmentation of sparse 3D point trajectories using group theoretical invariants. In: D. Cremers, I. Reid, H. Saito, M.-H. Yang (Eds.), COMPUTER VISION - ACCV 2014, PT IV. Paper presented at 12th Asian Conference on Computer Vision (ACCV), Singapore, Singapore, November 1-5, 2014 (pp. 675-691). Springer, 9006
2015 (English). In: COMPUTER VISION - ACCV 2014, PT IV / [ed] D. Cremers, I. Reid, H. Saito, M.-H. Yang, Springer, 2015, Vol. 9006, p. 675-691. Conference paper, Published paper (Refereed).
Abstract [en]

We present a novel approach for segmenting different motions from 3D trajectories. Our approach uses the theory of transformation groups to derive a set of invariants of 3D points located on the same rigid object. These invariants are inexpensive to calculate, involving primarily QR factorizations of small matrices. The invariants are easily converted into a set of robust motion affinities and with the use of a local sampling scheme and spectral clustering, they can be incorporated into a highly efficient motion segmentation algorithm. We have also captured a new multi-object 3D motion dataset, on which we have evaluated our approach, and compared against state-of-the-art competing methods from the literature. Our results show that our approach outperforms all methods while being robust to perspective distortions and degenerate configurations.
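The idea of turning rigidity into motion affinities for spectral clustering can be illustrated with a toy sketch. It does not reproduce the paper's group-theoretical invariants or QR-based computation; instead it uses a simpler rigidity cue (points on the same rigid object keep their mutual distances over time) purely to show the affinity-plus-spectral-split pipeline:

```python
import numpy as np

def rigid_affinity(trajs):
    """Pairwise affinity for 3D point trajectories.

    trajs: (N, F, 3) array -- N points tracked over F frames.
    Points on the same rigid object keep their mutual distances
    constant, so the variance of each inter-point distance across
    frames is a simple rigidity cue (a stand-in for the paper's
    invariants).
    """
    N = trajs.shape[0]
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            d = np.linalg.norm(trajs[i] - trajs[j], axis=1)  # per frame
            A[i, j] = A[j, i] = np.exp(-np.var(d))
    return A

def spectral_split(A):
    """Two-way spectral clustering: sign of the second eigenvector
    of the normalized graph Laplacian (the Fiedler vector)."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - Dinv @ A @ Dinv
    w, V = np.linalg.eigh(L)
    return (V[:, 1] > 0).astype(int)

# Two rigid objects: a static point set and a translating copy.
rng = np.random.default_rng(0)
base = rng.standard_normal((4, 3))
F = 10
obj1 = np.repeat(base[:, None, :], F, axis=1)                    # static
shift = np.linspace(0, 20, F)[None, :, None] * np.array([1., 0, 0])
obj2 = np.repeat(base[:, None, :] + 10, F, axis=1) + shift       # moving
trajs = np.concatenate([obj1, obj2], axis=0)

labels = spectral_split(rigid_affinity(trajs))
```

With this data the first four trajectories land in one cluster and the last four in the other; the real method replaces the distance-variance cue with invariants that are robust under perspective projection.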

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9006
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-114313 (URN), 10.1007/978-3-319-16817-3_44 (DOI), 000362444500044 (), 978-3-319-16816-6 (ISBN), 978-3-319-16817-3 (ISBN)
Conference
12th Asian Conference on Computer Vision (ACCV), Singapore, Singapore, November 1-5, 2014
Projects
VPSCUASETT
Available from: 2015-02-18 Created: 2015-02-18 Last updated: 2018-10-15
Ringaby, E. & Forssén, P.-E. (2014). A Virtual Tripod for Hand-held Video Stacking on Smartphones. In: 2014 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL PHOTOGRAPHY (ICCP). Paper presented at IEEE International Conference on Computational Photography (ICCP 2014), May 2-4, 2014, Intel, Santa Clara, USA. IEEE
2014 (English). In: 2014 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL PHOTOGRAPHY (ICCP), IEEE, 2014. Conference paper, Published paper (Refereed).
Abstract [en]

We propose an algorithm that can capture sharp, low-noise images in low-light conditions on a hand-held smartphone. We make use of the recent ability to acquire bursts of high resolution images on high-end models such as the iPhone 5s. Frames are aligned, or stacked, using rolling shutter correction, based on motion estimated from the built-in gyro sensors and image feature tracking. After stacking, the images may be combined, using e.g. averaging to produce a sharp, low-noise photo. We have tested the algorithm on a variety of different scenes, using several different smartphones. We compare our method to denoising, direct stacking, as well as a global-shutter based stacking, with favourable results.
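The payoff of the stacking step can be sketched in a few lines. Assuming the frames are already aligned (the gyro- and feature-based alignment is the hard part and is not shown), averaging N noisy exposures of the same scene reduces the noise standard deviation by roughly a factor of sqrt(N); all image sizes and noise levels below are invented for illustration:

```python
import numpy as np

# Once frames are rolling-shutter corrected and registered, averaging
# N exposures of a static scene suppresses sensor noise ~ 1/sqrt(N).
rng = np.random.default_rng(1)
scene = rng.uniform(0.2, 0.8, size=(64, 64))    # "true" image
sigma = 0.1                                     # per-frame noise level
frames = [scene + rng.normal(0, sigma, scene.shape) for _ in range(16)]

stacked = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - scene)
noise_stacked = np.std(stacked - scene)
# With 16 frames, noise_stacked should be near sigma / 4 = 0.025.
```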

Place, publisher, year, edition, pages
IEEE, 2014
Series
IEEE International Conference on Computational Photography, ISSN 2164-9774
National Category
Engineering and Technology; Electrical Engineering, Electronic Engineering, Information Engineering; Signal Processing
Identifiers
urn:nbn:se:liu:diva-108109 (URN), 10.1109/ICCPHOT.2014.6831799 (DOI), 000356494100001 (), 978-1-4799-5188-8 (ISBN)
Conference
IEEE International Conference on Computational Photography (ICCP 2014), May 2-4, 2014, Intel, Santa Clara, USA
Projects
VPS
Available from: 2014-06-25 Created: 2014-06-25 Last updated: 2015-12-10. Bibliographically approved
Ringaby, E., Forssén, P.-E., Friman, O., Olsvik Opsahl, T., Vegard Haavardsholm, T. & Kåsen, I. (2014). Anisotropic Scattered Data Interpolation for Pushbroom Image Rectification. IEEE Transactions on Image Processing, 23(5), 2302-2314
2014 (English). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 23, no 5, p. 2302-2314. Article in journal (Refereed), Published.
Abstract [en]

This article deals with fast and accurate visualization of pushbroom image data from airborne and spaceborne platforms. A pushbroom sensor acquires images in a line-scanning fashion, and this results in scattered input data that needs to be resampled onto a uniform grid for geometrically correct visualization. To this end, we model the anisotropic spatial dependence structure caused by the acquisition process. Several methods for scattered data interpolation are then adapted to handle the induced anisotropic metric and compared for the pushbroom image rectification problem. A trick that exploits the semi-ordered line structure of pushbroom data to improve the computational complexity by several orders of magnitude is also presented.
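The role of the anisotropic metric can be shown with a toy interpolator. This is not one of the methods compared in the paper; it is plain inverse-distance weighting where distances are measured as sqrt((p-q)^T M (p-q)), so a stretched M penalises across-track offsets more than along-track ones, as in line-scanned data:

```python
import numpy as np

def idw_aniso(pts, vals, query, M, eps=1e-9):
    """Inverse-distance weighting under the metric d^2 = (p-q)^T M (p-q).

    Stretching M across-track mimics the anisotropic spatial
    dependence of pushbroom data, where samples are much denser
    along the scan line than between lines.
    """
    diff = pts - query                            # (N, 2) offsets
    d2 = np.einsum('ni,ij,nj->n', diff, M, diff)  # anisotropic distances
    w = 1.0 / (d2 + eps)                          # inverse-distance weights
    return np.sum(w * vals) / np.sum(w)

# Scattered samples on a grid; the value field is simply x.
xs, ys = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
vals = pts[:, 0]

M = np.diag([1.0, 25.0])       # across-track (y) distance costs 25x more
v = idw_aniso(pts, vals, np.array([0.5, 0.5]), M)   # ~0.5 by symmetry

# The metric ranks an across-track step as farther than an equal
# along-track step:
step = np.array([0.1, 0.0])
d_along = np.einsum('i,ij,j->', step, M, step)
d_across = np.einsum('i,ij,j->', step[::-1], M, step[::-1])
```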

Place, publisher, year, edition, pages
IEEE, 2014
Keywords
pushbroom, rectification, hyperspectral, interpolation, anisotropic, scattered data
National Category
Engineering and Technology; Electrical Engineering, Electronic Engineering, Information Engineering; Signal Processing
Identifiers
urn:nbn:se:liu:diva-108105 (URN), 10.1109/TIP.2014.2316377 (DOI), 000350284400001 ()
Available from: 2014-06-25 Created: 2014-06-25 Last updated: 2018-09-25. Bibliographically approved
Ringaby, E. (2014). Geometric Models for Rolling-shutter and Push-broom Sensors. (Doctoral dissertation). Linköping: Linköping University Electronic Press
2014 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Almost all cell-phones and camcorders sold today are equipped with a CMOS (Complementary Metal Oxide Semiconductor) image sensor and there is also a general trend to incorporate CMOS sensors in other types of cameras. The CMOS sensor has many advantages over the more conventional CCD (Charge-Coupled Device) sensor such as lower power consumption, cheaper manufacturing and the potential for on-chip processing. Nearly all CMOS sensors make use of what is called a rolling shutter readout. Unlike a global shutter readout, which images all the pixels at the same time, a rolling shutter exposes the image row-by-row. If a mechanical shutter is not used this will lead to geometric distortions in the image when either the camera or the objects in the scene are moving. Smaller cameras, like those in cell-phones, do not have mechanical shutters and systems that do have them will not use them when recording video. The result will look wobbly (jello effect), skewed or otherwise strange and this is often not desirable. In addition, many computer vision algorithms assume that the camera used has a global shutter and will break down if the distortions are too severe.

In airborne remote sensing it is common to use push-broom sensors. These sensors exhibit a similar kind of distortion as that of a rolling-shutter camera, due to the motion of the aircraft. If the acquired images are to be registered to maps or other images, the distortions need to be suppressed.

The main contributions in this thesis are the development of three-dimensional models for rolling-shutter distortion correction. Previous attempts modelled the distortions as taking place in the image plane, and we have shown that our techniques give better results for hand-held camera motions. The basic idea is to estimate the camera motion, not only between frames, but also the motion during frame capture. The motion is estimated using image correspondences and with these a non-linear optimisation problem is formulated and solved. All rows in the rolling-shutter image are imaged at different times, and when the motion is known, each row can be transformed to its rectified position. The same is true when using depth sensors such as the Microsoft Kinect, and the thesis describes how to estimate its 3D motion and how to rectify 3D point clouds.
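The row-wise rectification step can be sketched under a pure-rotation camera model. The thesis estimates the rotation from correspondences; here it is assumed known, and all numbers (intrinsics, readout time, pan rate) are invented for illustration. Each image row r is exposed at its own time t_r, so each row gets its own correcting homography K R(t_r)^T K^{-1}:

```python
import numpy as np

K = np.array([[500., 0, 320.],
              [0, 500., 240.],
              [0, 0, 1.]])          # illustrative camera intrinsics
Kinv = np.linalg.inv(K)
H_rows, readout = 480, 0.03        # image height, frame readout time (s)

def rot_y(a):
    """Rotation about the y-axis (camera panning)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0., s], [0., 1., 0.], [-s, 0., c]])

omega = 0.5                         # assumed pan rate during readout (rad/s)

def rectify(u, v):
    """Map a distorted pixel (u, v) to its global-shutter position."""
    t = (v / H_rows) * readout                 # this row's timestamp
    H = K @ rot_y(omega * t).T @ Kinv          # undo the rotation at t
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

u0, v0 = rectify(320.0, 0.0)       # first row: no rotation yet, unchanged
u1, v1 = rectify(320.0, 479.0)     # last row: largest horizontal correction
```

The top row maps to itself, while the bottom row is shifted sideways by the accumulated pan, which is exactly the skew a rolling shutter introduces for a panning camera.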

In the thesis it has also been explored how to use similar techniques as for the rolling-shutter case, to correct push-broom images. When a transformation has been found, the images need to be resampled to a regular grid in order to be visualised. This can be done in many ways and different methods have been tested and adapted to the push-broom setup.

In addition to rolling-shutter distortions, hand-held footage often has shaky camera motion. It is possible to do efficient video stabilisation in combination with the rectification using rotation smoothing. Apart from these distortions, motion blur is a big problem for hand-held photography. The images will be blurry due to the camera motion and also noisy if taken in low light conditions. One of the contributions in the thesis is a method which uses gyroscope measurements and feature tracking to combine several images, taken with a smartphone, into one resulting image with less blur and noise. This enables the user to take photos which would have otherwise required a tripod.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2014. p. 41
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1615
National Category
Computer Vision and Robotics (Autonomous Systems); Computer Engineering
Identifiers
urn:nbn:se:liu:diva-110085 (URN), 10.3384/diss.diva-110085 (DOI), 978-91-7519-255-0 (ISBN)
Public defence
2014-09-19, Visionen, hus B, Campus Valla, Linköpings universitet, Linköping, 10:15 (English)
Note

The research leading to this thesis has received funding from CENIIT through the Virtual Global Shutters for CMOS Cameras project.

Available from: 2014-09-02 Created: 2014-09-02 Last updated: 2019-11-19. Bibliographically approved
Ringaby, E. & Forssén, P.-E. (2012). Efficient Video Rectification and Stabilisation for Cell-Phones. International Journal of Computer Vision, 96(3), 335-352
2012 (English). In: International Journal of Computer Vision, ISSN 0920-5691, E-ISSN 1573-1405, Vol. 96, no 3, p. 335-352. Article in journal (Refereed), Published.
Abstract [en]

This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near constant focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can, at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.
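The stabilisation-by-smoothing step can be illustrated with a 1-DoF toy: per-frame pan angles stand in for the full 3D reference rotations, a moving-average low-pass filter stands in for the smoothing, and the compensating rotation applied to each frame is the difference between the raw and smoothed orientation. All constants are invented for illustration:

```python
import numpy as np

# Per-frame camera pan: a deliberate slow pan plus hand-shake jitter.
rng = np.random.default_rng(2)
n = 50
intended = np.linspace(0.0, 0.5, n)
raw = intended + rng.normal(0, 0.02, n)

# Low-pass filter the orientation sequence (7-tap moving average,
# edge-padded so the output has the same length).
kernel = np.ones(7) / 7.0
pad = np.pad(raw, 3, mode='edge')
smooth = np.convolve(pad, kernel, mode='valid')

# Rotation to apply to each frame so it follows the smooth trajectory.
compensation = smooth - raw

raw_jitter = np.std(np.diff(raw))
residual_jitter = np.std(np.diff(smooth))   # much smaller after filtering
```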

Place, publisher, year, edition, pages
Springer Verlag (Germany), 2012
Keywords
Cell-phone, Rolling shutter, CMOS, Video stabilisation
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-75277 (URN), 10.1007/s11263-011-0465-8 (DOI), 000299769400005 ()
Note
Funding agencies: CENIIT organisation at Linköping Institute of Technology; Swedish Research Council
Available from: 2012-02-27 Created: 2012-02-24 Last updated: 2017-12-07
Ringaby, E. (2012). Geometric Computer Vision for Rolling-shutter and Push-broom Sensors. (Licentiate dissertation). Linköping: Linköping University Electronic Press
2012 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

Almost all cell-phones and camcorders sold today are equipped with a CMOS (Complementary Metal Oxide Semiconductor) image sensor and there is also a general trend to incorporate CMOS sensors in other types of cameras. The sensor has many advantages over the more conventional CCD (Charge-Coupled Device) sensor such as lower power consumption, cheaper manufacturing and the potential for on-chip processing. Almost all CMOS sensors make use of what is called a rolling shutter. Compared to a global shutter, which images all the pixels at the same time, a rolling-shutter camera exposes the image row-by-row. This leads to geometric distortions in the image when either the camera or the objects in the scene are moving. The recorded videos and images will look wobbly (jello effect), skewed or otherwise strange and this is often not desirable. In addition, many computer vision algorithms assume that the camera used has a global shutter, and will break down if the distortions are too severe.

In airborne remote sensing it is common to use push-broom sensors. These sensors exhibit a similar kind of distortion as a rolling-shutter camera, due to the motion of the aircraft. If the acquired images are to be matched with maps or other images, then the distortions need to be suppressed.

The main contributions in this thesis are the development of three-dimensional models for rolling-shutter distortion correction. Previous attempts modelled the distortions as taking place in the image plane, and we have shown that our techniques give better results for hand-held camera motions.

The basic idea is to estimate the camera motion, not only between frames, but also the motion during frame capture. The motion can be estimated using inter-frame image correspondences and with these a non-linear optimisation problem can be formulated and solved. All rows in the rolling-shutter image are imaged at different times, and when the motion is known, each row can be transformed to the rectified position.

In addition to rolling-shutter distortions, hand-held footage often has shaky camera motion. It has been shown how to do efficient video stabilisation, in combination with the rectification, using rotation smoothing.

In the thesis it has been explored how to use similar techniques as for the rolling-shutter case in order to correct push-broom images, and also how to rectify 3D point clouds from e.g. the Kinect depth sensor.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2012. p. 85
Series
Linköping Studies in Science and Technology. Thesis, ISSN 0280-7971 ; 1535
Keywords
rolling shutter, CMOS, video, rectification, stabilisation, push-broom, Kinect
National Category
Engineering and Technology; Computer Vision and Robotics (Autonomous Systems); Signal Processing
Identifiers
urn:nbn:se:liu:diva-77391 (URN), 978-91-7519-872-9 (ISBN)
Presentation
2012-06-08, Visionen, Hus B, Campus Valla, Linköpings universitet, Linköping, 13:00 (English)
Projects
VGS
Available from: 2012-05-28 Created: 2012-05-14 Last updated: 2019-12-19. Bibliographically approved
Hedborg, J., Forssén, P.-E., Felsberg, M. & Ringaby, E. (2012). Rolling Shutter Bundle Adjustment. Paper presented at IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012 (pp. 1434-1441). IEEE Computer Society
2012 (English). Conference paper, Published paper (Refereed).
Abstract [en]

This paper introduces a bundle adjustment (BA) method that obtains accurate structure and motion from rolling shutter (RS) video sequences: RSBA. When a classical BA algorithm processes a rolling shutter video, the resultant camera trajectory is brittle, and complete failures are not uncommon. We exploit the temporal continuity of the camera motion to define residuals of image point trajectories with respect to the camera trajectory. We compare the camera trajectories from RSBA to those from classical BA, and from classical BA on rectified videos. The comparisons are done on real video sequences from an iPhone 4, with ground truth obtained from a global shutter camera, rigidly mounted to the iPhone 4. Compared to classical BA, the rolling shutter model requires just six extra parameters. It also degrades the sparsity of the system Jacobian slightly, but as we demonstrate, the increase in computation time is moderate. Decisive advantages are that RSBA succeeds in cases where competing methods diverge, and consistently produces more accurate results.
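The core change from classical BA can be sketched as a residual function: instead of one pose per frame, the pose is interpolated to the timestamp of the image row where each point was observed. The sketch below uses linear interpolation of a 1-DoF rotation angle and translation (the paper uses a proper camera trajectory model), with all intrinsics and poses invented for illustration:

```python
import numpy as np

K = np.array([[500., 0, 320.], [0, 500., 240.], [0, 0, 1.]])

def pose_at(t, pose0, pose1):
    """Interpolate between start- and end-of-frame poses (angle, t_xyz)."""
    a = np.clip(t, 0.0, 1.0)
    return (1 - a) * pose0 + a * pose1

def project(X, pose):
    """Pinhole projection of 3D point X under (y-rotation angle, translation)."""
    ang, trans = pose[0], pose[1:]
    c, s = np.cos(ang), np.sin(ang)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    x = K @ (R @ X + trans)
    return x[:2] / x[2]

def rs_residual(obs_uv, X, pose0, pose1, img_h=480):
    """Rolling-shutter reprojection residual: the observed row picks
    the timestamp, the timestamp picks the interpolated pose."""
    t = obs_uv[1] / img_h
    return obs_uv - project(X, pose_at(t, pose0, pose1))

X = np.array([0.0, 0.0, 5.0])
pose0 = np.array([0.0, 0.0, 0.0, 0.0])
pose1 = np.array([0.02, 0.0, 0.0, 0.0])   # small rotation over the frame

# An observation generated with the rolling-shutter model itself
# yields a (near-)zero residual, as it should at the optimum.
obs = project(X, pose_at(240 / 480, pose0, pose1))
r = rs_residual(obs, X, pose0, pose1)
```

In a full RSBA these residuals would be minimised jointly over all points and trajectory parameters with non-linear least squares.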

Place, publisher, year, edition, pages
IEEE Computer Society, 2012
Series
Computer Vision and Pattern Recognition, ISSN 1063-6919
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-76903 (URN), 10.1109/CVPR.2012.6247831 (DOI), 000309166201074 (), 978-1-4673-1227-1 (ISBN)
Conference
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012
Projects
VPS
Available from: 2012-04-24 Created: 2012-04-24 Last updated: 2017-06-01. Bibliographically approved
Ringaby, E. & Forssén, P.-E. (2011). Rectifying rolling shutter video from hand-held devices. In: Proceedings SSBA'11 Symposium on Image Analysis. Paper presented at SSBA'11 Symposium on Image Analysis, Linköping, 17-18 March 2011.
2011 (English). In: Proceedings SSBA'11 Symposium on Image Analysis, 2011. Conference paper, Published paper (Other academic).
Abstract [en]

This paper presents a method for rectifying video sequences from rolling shutter (RS) cameras. In contrast to previous RS rectification attempts we model distortions as being caused by the 3D motion of the camera. The camera motion is parametrised as a continuous curve, with knots at the last row of each frame. Curve parameters are solved for using non-linear least squares over inter-frame correspondences obtained from a KLT tracker. We have generated synthetic RS sequences with associated ground-truth to allow controlled evaluation. Using these sequences, we demonstrate that our algorithm improves over two previously published methods. The RS dataset is available on the web to allow comparison with other methods.

National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-70707 (URN)
Conference
SSBA'11 Symposium on Image Analysis, Linköping, 17-18 March 2011
Available from: 2011-09-15 Created: 2011-09-15 Last updated: 2015-12-10. Bibliographically approved
Ringaby, E. & Forssén, P.-E. (2011). Scan Rectification for Structured Light Range Sensors with Rolling Shutters. In: IEEE International Conference on Computer Vision. Paper presented at IEEE International Conference on Computer Vision (ICCV11), 8-11 November 2011, Barcelona, Spain (pp. 1575-1582). Barcelona, Spain
2011 (English). In: IEEE International Conference on Computer Vision, Barcelona, Spain, 2011, p. 1575-1582. Conference paper, Published paper (Other academic).
Abstract [en]

Structured light range sensors, such as the Microsoft Kinect, have recently become popular as perception devices for computer vision and robotic systems. These sensors use CMOS imaging chips with electronic rolling shutters (ERS). When using such a sensor on a moving platform, both the image, and the depth map, will exhibit geometric distortions. We introduce an algorithm that can suppress such distortions, by rectifying the 3D point clouds from the range sensor. This is done by first estimating the time continuous 3D camera trajectory, and then transforming the 3D points to where they would have been, if the camera had been stationary. To ensure that image and range data are synchronous, the camera trajectory is computed from KLT tracks on the structured-light frames, after suppressing the structured-light pattern. We evaluate our rectification, by measuring angles between the visible sides of a cube, before and after rectification. We also measure how much better the 3D point clouds can be aligned after rectification. The obtained improvement is also related to the actual rotational velocity, measured using a MEMS gyroscope.
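The transformation at the heart of the rectification can be sketched directly: every 3D point carries the timestamp of the image row it came from, and with the camera rotation R(t) over the scan (estimated in the paper, assumed known here), applying R(t)^T moves each point to where it would have been had the camera been stationary. All motion parameters below are invented for illustration:

```python
import numpy as np

def rot_z(a):
    """Rotation about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

rng = np.random.default_rng(3)
pts_true = rng.uniform(-1, 1, size=(100, 3)) + np.array([0, 0, 3.0])
ts = rng.uniform(0, 1, size=100)    # per-point row timestamps (s)
omega = 0.2                          # assumed rotation rate (rad/s)

# Distorted cloud: each point is seen through the rotation at its time.
pts_scan = np.array([rot_z(omega * t) @ p for t, p in zip(ts, pts_true)])

# Rectification: undo the per-point rotation.
pts_rect = np.array([rot_z(omega * t).T @ p for t, p in zip(ts, pts_scan)])

dist_before = np.max(np.linalg.norm(pts_scan - pts_true, axis=1))
err_after = np.max(np.linalg.norm(pts_rect - pts_true, axis=1))
```

The sketch is circular by construction (the distortion model and its inverse match exactly); the practical difficulty the paper addresses is estimating the continuous trajectory R(t) from the structured-light frames themselves.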

Place, publisher, year, edition, pages
Barcelona, Spain, 2011
Series
International Conference on Computer Vision (ICCV), ISSN 1550-5499
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-77059 (URN), 10.1109/ICCV.2011.6126417 (DOI), 978-1-4577-1101-5 (ISBN)
Conference
IEEE International Conference on Computer Vision (ICCV11), 8-11 November 2011, Barcelona, Spain
Available from: 2012-05-07 Created: 2012-05-03 Last updated: 2015-12-10. Bibliographically approved
Hanning, G., Forslöw, N., Forssén, P.-E., Ringaby, E., Törnqvist, D. & Callmer, J. (2011). Stabilizing Cell Phone Video using Inertial Measurement Sensors. In: The Second IEEE International Workshop on Mobile Vision. Paper presented at The Second IEEE International Workshop on Mobile Vision (IWMV11), November 2011, Barcelona, Spain (pp. 1-8). Barcelona, Spain
2011 (English). In: The Second IEEE International Workshop on Mobile Vision, Barcelona, Spain, 2011, p. 1-8. Conference paper, Published paper (Other academic).
Abstract [en]

We present a system that rectifies and stabilizes video sequences on mobile devices with rolling-shutter cameras. The system corrects for rolling-shutter distortions using measurements from accelerometer and gyroscope sensors, and a 3D rotational distortion model. In order to obtain a stabilized video, and at the same time keep most content in view, we propose an adaptive low-pass filter algorithm to obtain the output camera trajectory. The accuracy of the orientation estimates has been evaluated experimentally using ground truth data from a motion capture system. We have conducted a user study, where the output from our system, implemented in iOS, has been compared to that of three other applications, as well as to the uncorrected video. The study shows that users prefer our sensor-based system.
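The adaptive low-pass idea can be illustrated with a 1-DoF toy: integrate gyro rate samples to an orientation, then smooth it with a first-order IIR filter whose gain increases whenever the smoothed orientation drifts too far from the true one, so the stabilised view never leaves the captured frame. The filter constants and the single-angle model are invented for illustration and are not the paper's actual filter:

```python
import numpy as np

def adaptive_lowpass(angles, alpha=0.05, max_dev=0.05):
    """First-order IIR smoother with an adaptive gain: when the output
    deviates from the input by more than max_dev (about to crop the
    frame), track faster."""
    out = np.empty_like(angles)
    y = angles[0]
    for k, x in enumerate(angles):
        a = alpha
        if abs(y - x) > max_dev:
            a = 0.5                 # deviation too large: catch up
        y += a * (x - y)
        out[k] = y
    return out

rng = np.random.default_rng(4)
dt = 1 / 30                          # 30 Hz gyro samples
gyro = 0.3 * np.sin(np.linspace(0, 6, 180)) + rng.normal(0, 0.05, 180)
orientation = np.cumsum(gyro * dt)   # integrate gyro rate to orientation
smooth = adaptive_lowpass(orientation)

max_dev = np.max(np.abs(smooth - orientation))   # stays bounded
```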

Place, publisher, year, edition, pages
Barcelona, Spain, 2011
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-77060 (URN), 10.1109/ICCVW.2011.6130215 (DOI), 978-1-4673-0062-9 (ISBN)
Conference
The Second IEEE International Workshop on Mobile Vision (IWMV11), November 2011, Barcelona, Spain
Available from: 2012-05-07 Created: 2012-05-03 Last updated: 2015-12-10