liu.se: Search for publications in DiVA
1 - 15 of 15
  • 1.
    Forssén, Per-Erik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ringaby, Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Rectifying rolling shutter video from hand-held devices. 2010. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA: IEEE Computer Society, 2010, p. 507-514. Conference paper (Other academic)
    Abstract [en]

    This paper presents a method for rectifying video sequences from rolling shutter (RS) cameras. In contrast to previous RS rectification attempts we model distortions as being caused by the 3D motion of the camera. The camera motion is parametrised as a continuous curve, with knots at the last row of each frame. Curve parameters are solved for using non-linear least squares over inter-frame correspondences obtained from a KLT tracker. We have generated synthetic RS sequences with associated ground-truth to allow controlled evaluation. Using these sequences, we demonstrate that our algorithm improves over two previously published methods. The RS dataset is available on the web to allow comparison with other methods.
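The row-wise correction the abstract describes can be sketched in a few lines. This is a minimal stand-in, not the paper's method: `row_angle`, `rectify_point`, the in-plane rotation, and the intrinsic matrix `K` are illustrative assumptions (the paper uses a full 3D rotation curve with knots at the last row of each frame).

```python
import numpy as np

# Assumed example intrinsics: focal length 500 px, principal point at the
# centre of a 640x480 sensor.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def row_angle(angle_a, angle_b, row, n_rows):
    """Linearly interpolate the camera angle for a given row: each row is
    read out at a different time, so each row gets its own pose."""
    t = row / (n_rows - 1)
    return (1.0 - t) * angle_a + t * angle_b

def rectify_point(x, y, angle):
    """Undo the rotation the camera had when this pixel's row was read out:
    x' ~ K R(angle)^T K^-1 x (pure-rotation model). An in-plane rotation
    stands in here for the full 3D rotation."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    p = np.linalg.solve(K, np.array([x, y, 1.0]))  # back-project pixel
    q = K @ (R.T @ p)                              # rotate and re-project
    return q[:2] / q[2]
```

With a zero angle the mapping is the identity, which is a useful sanity check before plugging in estimated per-row angles.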

  • 2.
    Hanning, Gustav
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forslöw, Nicklas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ringaby, Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Törnqvist, David
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Callmer, Jonas
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Stabilizing Cell Phone Video using Inertial Measurement Sensors. 2011. In: The Second IEEE International Workshop on Mobile Vision, Barcelona, Spain, 2011, p. 1-8. Conference paper (Other academic)
    Abstract [en]

    We present a system that rectifies and stabilizes video sequences on mobile devices with rolling-shutter cameras. The system corrects for rolling-shutter distortions using measurements from accelerometer and gyroscope sensors, and a 3D rotational distortion model. In order to obtain a stabilized video, and at the same time keep most content in view, we propose an adaptive low-pass filter algorithm to obtain the output camera trajectory. The accuracy of the orientation estimates has been evaluated experimentally using ground truth data from a motion capture system. We have conducted a user study, where the output from our system, implemented in iOS, has been compared to that of three other applications, as well as to the uncorrected video. The study shows that users prefer our sensor-based system.
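The trajectory-smoothing idea can be illustrated with a one-dimensional stand-in: a first-order low-pass filter whose correction is clamped so the virtual (stabilised) camera never strays far from the real one, keeping most content in view. The function name, the clamping rule, and the parameter values are assumptions; the paper's adaptive filter operates on 3D rotations.

```python
import numpy as np

def smooth_trajectory(angles, alpha=0.1, max_offset=0.05):
    """Low-pass a 1-D orientation trajectory (radians). The correction
    applied to each sample is clamped to +/- max_offset, a simple
    stand-in for an adaptive, content-preserving low-pass filter."""
    out = np.empty(len(angles), dtype=np.float64)
    state = float(angles[0])
    for i, a in enumerate(angles):
        state = (1.0 - alpha) * state + alpha * a            # low-pass update
        state = a + np.clip(state - a, -max_offset, max_offset)  # clamp drift
        out[i] = state
    return out
```

A constant trajectory passes through unchanged, and for any input the smoothed output stays within `max_offset` of the measured orientation.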

  • 3.
    Hedborg, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ringaby, Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Rolling Shutter Bundle Adjustment. 2012. Conference paper (Refereed)
    Abstract [en]

    This paper introduces a bundle adjustment (BA) method that obtains accurate structure and motion from rolling shutter (RS) video sequences: RSBA. When a classical BA algorithm processes a rolling shutter video, the resultant camera trajectory is brittle, and complete failures are not uncommon. We exploit the temporal continuity of the camera motion to define residuals of image point trajectories with respect to the camera trajectory. We compare the camera trajectories from RSBA to those from classical BA, and from classical BA on rectified videos. The comparisons are done on real video sequences from an iPhone 4, with ground truth obtained from a global shutter camera, rigidly mounted to the iPhone 4. Compared to classical BA, the rolling shutter model requires just six extra parameters. It also degrades the sparsity of the system Jacobian slightly, but as we demonstrate, the increase in computation time is moderate. Decisive advantages are that RSBA succeeds in cases where competing methods diverge, and consistently produces more accurate results.

  • 4.
    Hedborg, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ringaby, Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Structure and Motion Estimation from Rolling Shutter Video. 2011. In: IEEE International Conference on Computer Vision Workshops (ICCV Workshops), IEEE Xplore, 2011, p. 17-23. Conference paper (Refereed)
    Abstract [en]

    The majority of consumer quality cameras sold today have CMOS sensors with rolling shutters. In a rolling shutter camera, images are read out row by row, and thus each row is exposed during a different time interval. A rolling-shutter exposure causes geometric image distortions when either the camera or the scene is moving, and this causes state-of-the-art structure and motion algorithms to fail. We demonstrate a novel method for solving the structure and motion problem for rolling-shutter video. The method relies on exploiting the continuity of the camera motion, both between frames, and across a frame. We demonstrate the effectiveness of our method by controlled experiments on real video sequences. We show, both visually and quantitatively, that our method outperforms standard structure and motion, and is more accurate and efficient than a two-step approach that first rectifies the images and then runs structure and motion.

  • 5.
    Ringaby, Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Geometric Computer Vision for Rolling-shutter and Push-broom Sensors. 2012. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Almost all cell-phones and camcorders sold today are equipped with a CMOS (Complementary Metal Oxide Semiconductor) image sensor and there is also a general trend to incorporate CMOS sensors in other types of cameras. The sensor has many advantages over the more conventional CCD (Charge-Coupled Device) sensor such as lower power consumption, cheaper manufacturing and the potential for on-chip processing. Almost all CMOS sensors make use of what is called a rolling shutter. Compared to a global shutter, which images all the pixels at the same time, a rolling-shutter camera exposes the image row-by-row. This leads to geometric distortions in the image when either the camera or the objects in the scene are moving. The recorded videos and images will look wobbly (jello effect), skewed or otherwise strange and this is often not desirable. In addition, many computer vision algorithms assume that the camera used has a global shutter, and will break down if the distortions are too severe.

    In airborne remote sensing it is common to use push-broom sensors. These sensors exhibit a similar kind of distortion as a rolling-shutter camera, due to the motion of the aircraft. If the acquired images are to be matched with maps or other images, then the distortions need to be suppressed.

    The main contributions in this thesis are the development of three-dimensional models for rolling-shutter distortion correction. Previous attempts modelled the distortions as taking place in the image plane, and we have shown that our techniques give better results for hand-held camera motions.

    The basic idea is to estimate the camera motion, not only between frames, but also the motion during frame capture. The motion can be estimated using inter-frame image correspondences and with these a non-linear optimisation problem can be formulated and solved. All rows in the rolling-shutter image are imaged at different times, and when the motion is known, each row can be transformed to the rectified position.

    In addition to rolling-shutter distortions, hand-held footage often has shaky camera motion. It has been shown how to do efficient video stabilisation, in combination with the rectification, using rotation smoothing.

    In the thesis it has been explored how to use similar techniques as for the rolling-shutter case in order to correct push-broom images, and also how to rectify 3D point clouds from e.g. the Kinect depth sensor.

    List of papers
    1. Rectifying rolling shutter video from hand-held devices
    2010 (English). In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA: IEEE Computer Society, 2010, p. 507-514. Conference paper, Published paper (Other academic)
    Abstract [en]

    This paper presents a method for rectifying video sequences from rolling shutter (RS) cameras. In contrast to previous RS rectification attempts we model distortions as being caused by the 3D motion of the camera. The camera motion is parametrised as a continuous curve, with knots at the last row of each frame. Curve parameters are solved for using non-linear least squares over inter-frame correspondences obtained from a KLT tracker. We have generated synthetic RS sequences with associated ground-truth to allow controlled evaluation. Using these sequences, we demonstrate that our algorithm improves over two previously published methods. The RS dataset is available on the web to allow comparison with other methods.

    Place, publisher, year, edition, pages
    Los Alamitos, CA, USA: IEEE Computer Society, 2010
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-70572 (URN), 10.1109/CVPR.2010.5540173 (DOI), 978-1-4244-6984-0 (ISBN)
    Conference
    CVPR10, San Francisco, USA, June 13-18, 2010
    Available from: 2011-09-13 Created: 2011-09-13 Last updated: 2015-12-10
    2. Efficient Video Rectification and Stabilisation for Cell-Phones
    2012 (English). In: International Journal of Computer Vision, ISSN 0920-5691, E-ISSN 1573-1405, Vol. 96, no 3, p. 335-352. Article in journal (Refereed), Published
    Abstract [en]

    This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near constant focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.

    Place, publisher, year, edition, pages
    Springer Verlag (Germany), 2012
    Keywords
    Cell-phone, Rolling shutter, CMOS, Video stabilisation
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-75277 (URN), 10.1007/s11263-011-0465-8 (DOI), 000299769400005 ()
    Note
    Funding agencies: CENIIT organisation at Linköping Institute of Technology; Swedish Research Council. Available from: 2012-02-27 Created: 2012-02-24 Last updated: 2017-12-07
    3. Scan Rectification for Structured Light Range Sensors with Rolling Shutters
    2011 (English). In: IEEE International Conference on Computer Vision, Barcelona, Spain, 2011, p. 1575-1582. Conference paper, Published paper (Other academic)
    Abstract [en]

    Structured light range sensors, such as the Microsoft Kinect, have recently become popular as perception devices for computer vision and robotic systems. These sensors use CMOS imaging chips with electronic rolling shutters (ERS). When using such a sensor on a moving platform, both the image, and the depth map, will exhibit geometric distortions. We introduce an algorithm that can suppress such distortions, by rectifying the 3D point clouds from the range sensor. This is done by first estimating the time continuous 3D camera trajectory, and then transforming the 3D points to where they would have been, if the camera had been stationary. To ensure that image and range data are synchronous, the camera trajectory is computed from KLT tracks on the structured-light frames, after suppressing the structured-light pattern. We evaluate our rectification, by measuring angles between the visible sides of a cube, before and after rectification. We also measure how much better the 3D point clouds can be aligned after rectification. The obtained improvement is also related to the actual rotational velocity, measured using a MEMS gyroscope.

    Place, publisher, year, edition, pages
    Barcelona, Spain, 2011
    Series
    International Conference on Computer Vision (ICCV), ISSN 1550-5499
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-77059 (URN), 10.1109/ICCV.2011.6126417 (DOI), 978-1-4577-1101-5 (ISBN)
    Conference
    IEEE International Conference on Computer Vision (ICCV11), 8-11 November 2011, Barcelona, Spain
    Available from: 2012-05-07 Created: 2012-05-03 Last updated: 2015-12-10. Bibliographically approved
    4. Co-alignment of Aerial Push-broom Strips using Trajectory Smoothness Constraints
    2010 (English). Conference paper, Published paper (Other academic)
    Abstract [en]

    We study the problem of registering a sequence of scan lines (a strip) from an airborne push-broom imager to another sequence partly covering the same area. Such a registration has to compensate for deformations caused by attitude and speed changes in the aircraft. The registration is challenging, as both strips contain such deformations. Our algorithm estimates the 3D rotation of the camera for each scan line, by parametrising it as a linear spline with a number of knots evenly distributed in one of the strips. The rotations are estimated from correspondences between strips of the same area. Once the rotations are known, they can be compensated for, and each line of pixels can be transformed such that the ground traces of the two strips are registered with respect to each other.

    Place, publisher, year, edition, pages
    Swedish Society for automated image analysis, 2010
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-70706 (URN)
    Conference
    SSBA10, Symposium on Image Analysis 11-12 March, Uppsala
    Available from: 2011-09-15 Created: 2011-09-15 Last updated: 2018-01-12. Bibliographically approved
    5. Co-aligning Aerial Hyperspectral Push-broom Strips for Change Detection
    2010 (English). In: Proc. SPIE 7835, Electro-Optical Remote Sensing, Photonic Technologies, and Applications IV / [ed] Gary W. Kamerman; Ove Steinvall; Keith L. Lewis; Richard C. Hollins; Thomas J. Merlet; Gary J. Bishop; John D. Gonglewski, SPIE - International Society for Optical Engineering, 2010, Art. nr 7835B-36. Conference paper, Published paper (Refereed)
    Abstract [en]

    We have performed a field trial with an airborne push-broom hyperspectral sensor, making several flights over the same area and with known changes (e.g., moved vehicles) between the flights. Each flight results in a sequence of scan lines forming an image strip, and in order to detect changes between two flights, the two resulting image strips must be geometrically aligned and radiometrically corrected. The focus of this paper is the geometrical alignment, and we propose an image- and gyro-based method for geometric co-alignment (registration) of two image strips. The method is particularly useful when the sensor is not stabilized, thus reducing the need for expensive mechanical stabilization. The method works in several steps, including gyro-based rectification, global alignment using SIFT matching, and a local alignment using KLT tracking. Experimental results are shown but not quantified, as ground truth is, by the nature of the trial, lacking.

    Place, publisher, year, edition, pages
    SPIE - International Society for Optical Engineering, 2010
    Series
    Proceedings Spie, ISSN 0277-786X ; 7835
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-70464 (URN)10.1117/12.865034 (DOI)978-0-8194-8353-9 (ISBN)
    Conference
    Electro-Optical Remote Sensing, Photonic Technologies, and Applications IV, 20-23 September, Toulouse, France
    Available from: 2011-09-13 Created: 2011-09-09 Last updated: 2018-01-12. Bibliographically approved
  • 6.
    Ringaby, Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Geometric Models for Rolling-shutter and Push-broom Sensors. 2014. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Almost all cell-phones and camcorders sold today are equipped with a CMOS (Complementary Metal Oxide Semiconductor) image sensor and there is also a general trend to incorporate CMOS sensors in other types of cameras. The CMOS sensor has many advantages over the more conventional CCD (Charge-Coupled Device) sensor such as lower power consumption, cheaper manufacturing and the potential for on-chip processing. Nearly all CMOS sensors make use of what is called a rolling shutter readout. Unlike a global shutter readout, which images all the pixels at the same time, a rolling-shutter exposes the image row-by-row. If a mechanical shutter is not used this will lead to geometric distortions in the image when either the camera or the objects in the scene are moving. Smaller cameras, like those in cell-phones, do not have mechanical shutters and systems that do have them will not use them when recording video. The result will look wobbly (jello effect), skewed or otherwise strange and this is often not desirable. In addition, many computer vision algorithms assume that the camera used has a global shutter and will break down if the distortions are too severe.

    In airborne remote sensing it is common to use push-broom sensors. These sensors exhibit a similar kind of distortion as that of a rolling-shutter camera, due to the motion of the aircraft. If the acquired images are to be registered to maps or other images, the distortions need to be suppressed.

    The main contributions in this thesis are the development of three-dimensional models for rolling-shutter distortion correction. Previous attempts modelled the distortions as taking place in the image plane, and we have shown that our techniques give better results for hand-held camera motions. The basic idea is to estimate the camera motion, not only between frames, but also the motion during frame capture. The motion is estimated using image correspondences and with these a non-linear optimisation problem is formulated and solved. All rows in the rolling-shutter image are imaged at different times, and when the motion is known, each row can be transformed to its rectified position. The same is true when using depth sensors such as the Microsoft Kinect, and the thesis describes how to estimate its 3D motion and how to rectify 3D point clouds.

    The thesis also explores how techniques similar to those used in the rolling-shutter case can correct push-broom images. When a transformation has been found, the images need to be resampled to a regular grid in order to be visualised. This can be done in many ways and different methods have been tested and adapted to the push-broom setup.

    In addition to rolling-shutter distortions, hand-held footage often has shaky camera motion. It is possible to do efficient video stabilisation in combination with the rectification using rotation smoothing. Apart from these distortions, motion blur is a big problem for hand-held photography. The images will be blurry due to the camera motion and also noisy if taken in low light conditions. One of the contributions in the thesis is a method which uses gyroscope measurements and feature tracking to combine several images, taken with a smartphone, into one resulting image with less blur and noise. This enables the user to take photos which would have otherwise required a tripod.

    List of papers
    1. Rectifying rolling shutter video from hand-held devices
    2010 (English). In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA: IEEE Computer Society, 2010, p. 507-514. Conference paper, Published paper (Other academic)
    Abstract [en]

    This paper presents a method for rectifying video sequences from rolling shutter (RS) cameras. In contrast to previous RS rectification attempts we model distortions as being caused by the 3D motion of the camera. The camera motion is parametrised as a continuous curve, with knots at the last row of each frame. Curve parameters are solved for using non-linear least squares over inter-frame correspondences obtained from a KLT tracker. We have generated synthetic RS sequences with associated ground-truth to allow controlled evaluation. Using these sequences, we demonstrate that our algorithm improves over two previously published methods. The RS dataset is available on the web to allow comparison with other methods.

    Place, publisher, year, edition, pages
    Los Alamitos, CA, USA: IEEE Computer Society, 2010
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-70572 (URN), 10.1109/CVPR.2010.5540173 (DOI), 978-1-4244-6984-0 (ISBN)
    Conference
    CVPR10, San Francisco, USA, June 13-18, 2010
    Available from: 2011-09-13 Created: 2011-09-13 Last updated: 2015-12-10
    2. Efficient Video Rectification and Stabilisation for Cell-Phones
    2012 (English). In: International Journal of Computer Vision, ISSN 0920-5691, E-ISSN 1573-1405, Vol. 96, no 3, p. 335-352. Article in journal (Refereed), Published
    Abstract [en]

    This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near constant focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.

    Place, publisher, year, edition, pages
    Springer Verlag (Germany), 2012
    Keywords
    Cell-phone, Rolling shutter, CMOS, Video stabilisation
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-75277 (URN), 10.1007/s11263-011-0465-8 (DOI), 000299769400005 ()
    Note
    Funding agencies: CENIIT organisation at Linköping Institute of Technology; Swedish Research Council. Available from: 2012-02-27 Created: 2012-02-24 Last updated: 2017-12-07
    3. Scan Rectification for Structured Light Range Sensors with Rolling Shutters
    2011 (English). In: IEEE International Conference on Computer Vision, Barcelona, Spain, 2011, p. 1575-1582. Conference paper, Published paper (Other academic)
    Abstract [en]

    Structured light range sensors, such as the Microsoft Kinect, have recently become popular as perception devices for computer vision and robotic systems. These sensors use CMOS imaging chips with electronic rolling shutters (ERS). When using such a sensor on a moving platform, both the image, and the depth map, will exhibit geometric distortions. We introduce an algorithm that can suppress such distortions, by rectifying the 3D point clouds from the range sensor. This is done by first estimating the time continuous 3D camera trajectory, and then transforming the 3D points to where they would have been, if the camera had been stationary. To ensure that image and range data are synchronous, the camera trajectory is computed from KLT tracks on the structured-light frames, after suppressing the structured-light pattern. We evaluate our rectification, by measuring angles between the visible sides of a cube, before and after rectification. We also measure how much better the 3D point clouds can be aligned after rectification. The obtained improvement is also related to the actual rotational velocity, measured using a MEMS gyroscope.

    Place, publisher, year, edition, pages
    Barcelona, Spain, 2011
    Series
    International Conference on Computer Vision (ICCV), ISSN 1550-5499
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-77059 (URN), 10.1109/ICCV.2011.6126417 (DOI), 978-1-4577-1101-5 (ISBN)
    Conference
    IEEE International Conference on Computer Vision (ICCV11), 8-11 November 2011, Barcelona, Spain
    Available from: 2012-05-07 Created: 2012-05-03 Last updated: 2015-12-10. Bibliographically approved
    4. Co-alignment of Aerial Push-broom Strips using Trajectory Smoothness Constraints
    2010 (English). Conference paper, Published paper (Other academic)
    Abstract [en]

    We study the problem of registering a sequence of scan lines (a strip) from an airborne push-broom imager to another sequence partly covering the same area. Such a registration has to compensate for deformations caused by attitude and speed changes in the aircraft. The registration is challenging, as both strips contain such deformations. Our algorithm estimates the 3D rotation of the camera for each scan line, by parametrising it as a linear spline with a number of knots evenly distributed in one of the strips. The rotations are estimated from correspondences between strips of the same area. Once the rotations are known, they can be compensated for, and each line of pixels can be transformed such that the ground traces of the two strips are registered with respect to each other.

    Place, publisher, year, edition, pages
    Swedish Society for automated image analysis, 2010
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-70706 (URN)
    Conference
    SSBA10, Symposium on Image Analysis 11-12 March, Uppsala
    Available from: 2011-09-15 Created: 2011-09-15 Last updated: 2018-01-12. Bibliographically approved
    5. Anisotropic Scattered Data Interpolation for Pushbroom Image Rectification
    2014 (English). In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 23, no 5, p. 2302-2314. Article in journal (Refereed), Published
    Abstract [en]

    This article deals with fast and accurate visualization of pushbroom image data from airborne and spaceborne platforms. A pushbroom sensor acquires images in a line-scanning fashion, and this results in scattered input data that needs to be resampled onto a uniform grid for geometrically correct visualization. To this end, we model the anisotropic spatial dependence structure caused by the acquisition process. Several methods for scattered data interpolation are then adapted to handle the induced anisotropic metric and compared for the pushbroom image rectification problem. A trick that exploits the semi-ordered line structure of pushbroom data to reduce the computational cost by several orders of magnitude is also presented.
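The core idea of interpolating under an anisotropic metric can be sketched with inverse-distance weighting, one of the simplest scattered-data interpolators. This is an illustrative stand-in for the adapted interpolators in the paper; the function name and the scale factors `s_along` / `s_across` are assumed values, chosen to reflect the direction-dependent sample density of a line-scanning sensor.

```python
import numpy as np

def anisotropic_idw(points, values, query, s_along=1.0, s_across=4.0,
                    eps=1e-12):
    """Inverse-distance weighting where distances along the two axes are
    scaled differently, i.e. d^2 = (s_along*dx)^2 + (s_across*dy)^2.
    Samples close under this anisotropic metric dominate the estimate."""
    d = np.asarray(points, dtype=np.float64) - np.asarray(query, dtype=np.float64)
    dist2 = (s_along * d[:, 0]) ** 2 + (s_across * d[:, 1]) ** 2
    w = 1.0 / (dist2 + eps)            # inverse squared anisotropic distance
    return float(np.sum(w * np.asarray(values, dtype=np.float64)) / np.sum(w))
```

Querying exactly at a sample returns (essentially) that sample's value, and interpolated values always stay within the range of the input values, which makes the behaviour easy to sanity-check on a small point set.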

    Place, publisher, year, edition, pages
    IEEE, 2014
    Keywords
    pushbroom, rectification, hyperspectral, interpolation, anisotropic, scattered data
    National Category
    Engineering and Technology; Electrical Engineering, Electronic Engineering, Information Engineering; Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-108105 (URN); 10.1109/TIP.2014.2316377 (DOI); 000350284400001 ()
    Available from: 2014-06-25. Created: 2014-06-25. Last updated: 2018-09-25. Bibliographically approved
    6. A Virtual Tripod for Hand-held Video Stacking on Smartphones
    2014 (English). In: 2014 IEEE International Conference on Computational Photography (ICCP), IEEE, 2014. Conference paper, Published paper (Refereed)
    Abstract [en]

    We propose an algorithm that can capture sharp, low-noise images in low-light conditions on a hand-held smartphone. We make use of the recent ability to acquire bursts of high-resolution images on high-end models such as the iPhone 5s. Frames are aligned, or stacked, using rolling shutter correction, based on motion estimated from the built-in gyro sensors and image feature tracking. After stacking, the images may be combined, using e.g. averaging, to produce a sharp, low-noise photo. We have tested the algorithm on a variety of different scenes, using several different smartphones. We compare our method to denoising, direct stacking, and global-shutter-based stacking, with favourable results.
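The alignment itself (gyro-based rolling shutter correction plus feature tracking) is the hard part of the method; the payoff of stacking is easy to demonstrate. A minimal sketch, assuming perfectly pre-aligned frames and synthetic Gaussian noise, shows the expected noise reduction of roughly 1/sqrt(N) from averaging N frames:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.2, 0.8, size=(32, 32))            # stand-in for the true scene
burst = [clean + rng.normal(0.0, 0.05, clean.shape)     # 16 noisy, pre-aligned frames
         for _ in range(16)]
stacked = np.mean(burst, axis=0)                        # simple average after alignment

noise_single = np.std(burst[0] - clean)                 # roughly sigma = 0.05
noise_stacked = np.std(stacked - clean)                 # roughly sigma / sqrt(16)
```

In the real pipeline each frame would first be warped by its estimated per-row camera rotation before this averaging step.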

    Place, publisher, year, edition, pages
    IEEE, 2014
    Series
    IEEE International Conference on Computational Photography, ISSN 2164-9774
    National Category
    Engineering and Technology; Electrical Engineering, Electronic Engineering, Information Engineering; Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-108109 (URN); 10.1109/ICCPHOT.2014.6831799 (DOI); 000356494100001 (); 978-1-4799-5188-8 (ISBN)
    Conference
    IEEE International Conference on Computational Photography (ICCP 2014), May 2-4, 2014, Intel, Santa Clara, USA
    Projects
    VPS
    Available from: 2014-06-25. Created: 2014-06-25. Last updated: 2015-12-10. Bibliographically approved
  • 7.
    Ringaby, Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Optical Flow Computation on CUDA, 2009. In: SSBA, 2009, p. 81-84. Conference paper (Other academic)
    Abstract [en]

    Graphics processors have progressed rapidly over the last few years, largely because of the demands of computer games for speed and realistic rendering. Owing to its special architecture, the graphics processor is much faster at solving parallel problems than an ordinary CPU, and its increasing generality makes it possible to use it for tasks other than those it was originally designed for.

    Even though graphics processors have been programmable for some time, it has been quite difficult to learn how to use them. CUDA (Compute Unified Device Architecture) enables the programmer to use C code, with a few extensions, to program NVIDIA's graphics processors while bypassing the traditional graphics programming models. This paper investigates whether the graphics processor can be used for calculations without knowledge of how its hardware mechanisms work. An image processing algorithm that calculates optical flow has been implemented. The results show that it is rather easy to implement programs using CUDA, but that some knowledge of how the graphics processor works is required to achieve high performance.
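The paper does not specify which optical flow algorithm was ported to CUDA. For reference, the core computation of one classic local method (a single Lucas-Kanade window, solving the brightness-constancy least-squares problem Ix*dx + Iy*dy + It = 0) can be sketched in NumPy; each such window solve is independent, which is exactly the kind of data parallelism a GPU exploits. Function and variable names are illustrative only.

```python
import numpy as np

def lucas_kanade_window(I0, I1):
    """Estimate a single (dx, dy) translation between two image patches by
    least-squares on the brightness constancy constraint Ix*dx + Iy*dy = -It."""
    Iy, Ix = np.gradient(I0)                 # axis 0 is y (rows), axis 1 is x (cols)
    It = I1 - I0                             # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow                              # (dx, dy)

# Toy check: a smooth horizontal pattern shifted right by one pixel.
x = np.arange(16, dtype=float)
I0 = np.tile(np.sin(x / 3.0), (16, 1))
I1 = np.tile(np.sin((x - 1.0) / 3.0), (16, 1))
dx, dy = lucas_kanade_window(I0, I1)
```

On a GPU, thousands of such windows (one per pixel neighbourhood) would be solved concurrently, one per thread block.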

  • 8.
    Ringaby, Erik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ahlberg, Jörgen
    Sensor Informatics Group, Swedish Defence Research Agency (FOI), Linköping.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Wadströmer, Niclas
    Sensor Informatics Group, Swedish Defence Research Agency (FOI), Linköping.
    Co-alignment of Aerial Push-broom Strips using Trajectory Smoothness Constraints, 2010. Conference paper (Other academic)
    Abstract [en]

    We study the problem of registering a sequence of scan lines (a strip) from an airborne push-broom imager to another sequence partly covering the same area. Such a registration has to compensate for deformations caused by attitude and speed changes in the aircraft. The registration is challenging, as both strips contain such deformations. Our algorithm estimates the 3D rotation of the camera for each scan line, by parametrising it as a linear spline with a number of knots evenly distributed in one of the strips. The rotations are estimated from correspondences between strips of the same area. Once the rotations are known, they can be compensated for, and each line of pixels can be transformed such that the ground traces of the two strips are registered with respect to each other.

  • 9.
    Ringaby, Erik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ahlberg, Jörgen
    FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Wadströmer, Niclas
    FOI, Swedish Defence Research Agency, Linköping, Sweden.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Co-aligning Aerial Hyperspectral Push-broom Strips for Change Detection, 2010. In: Proc. SPIE 7835, Electro-Optical Remote Sensing, Photonic Technologies, and Applications IV / [ed] Gary W. Kamerman; Ove Steinvall; Keith L. Lewis; Richard C. Hollins; Thomas J. Merlet; Gary J. Bishop; John D. Gonglewski, SPIE - International Society for Optical Engineering, 2010, Art. no. 7835B-36. Conference paper (Refereed)
    Abstract [en]

    We have performed a field trial with an airborne push-broom hyperspectral sensor, making several flights over the same area and with known changes (e.g., moved vehicles) between the flights. Each flight results in a sequence of scan lines forming an image strip, and in order to detect changes between two flights, the two resulting image strips must be geometrically aligned and radiometrically corrected. The focus of this paper is the geometrical alignment, and we propose an image- and gyro-based method for geometric co-alignment (registration) of two image strips. The method is particularly useful when the sensor is not stabilized, thus reducing the need for expensive mechanical stabilization. The method works in several steps, including gyro-based rectification, global alignment using SIFT matching, and a local alignment using KLT tracking. Experimental results are shown but not quantified, as ground truth is, by the nature of the trial, lacking.

  • 10.
    Ringaby, Erik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    A Virtual Tripod for Hand-held Video Stacking on Smartphones, 2014. In: 2014 IEEE International Conference on Computational Photography (ICCP), IEEE, 2014. Conference paper (Refereed)
    Abstract [en]

    We propose an algorithm that can capture sharp, low-noise images in low-light conditions on a hand-held smartphone. We make use of the recent ability to acquire bursts of high-resolution images on high-end models such as the iPhone 5s. Frames are aligned, or stacked, using rolling shutter correction, based on motion estimated from the built-in gyro sensors and image feature tracking. After stacking, the images may be combined, using e.g. averaging, to produce a sharp, low-noise photo. We have tested the algorithm on a variety of different scenes, using several different smartphones. We compare our method to denoising, direct stacking, and global-shutter-based stacking, with favourable results.

  • 11.
    Ringaby, Erik
    et al.
    Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Efficient Video Rectification and Stabilisation for Cell-Phones, 2012. In: International Journal of Computer Vision, ISSN 0920-5691, E-ISSN 1573-1405, Vol. 96, no 3, p. 335-352. Article in journal (Refereed)
    Abstract [en]

    This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near-constant, focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can, at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.
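The stabilisation step described above (smoothing a sequence of reference rotations and compensating each frame by the residual) can be sketched in one dimension. This toy example treats the camera orientation as a scalar yaw angle per frame and uses a moving average as the smoother; the actual method smooths full 3D rotations from the estimated curve, and the function name is our own.

```python
import numpy as np

def stabilising_rotations(yaw, window=5):
    """Smooth a 1-D sequence of per-frame camera yaw angles and return,
    per frame, the compensating rotation (smoothed minus measured)."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(yaw, pad, mode='edge')        # replicate ends to keep length
    smoothed = np.convolve(padded, kernel, mode='valid')
    return smoothed - yaw                         # rotate each frame by this residual

shaky = np.array([0.00, 0.05, -0.04, 0.06, -0.05, 0.04, 0.00])
comp = stabilising_rotations(shaky, window=5)
stabilised = shaky + comp                         # equals the smoothed trajectory
```

Applying the compensating rotation to each frame leaves the smoothed, low-frequency camera path, which is what a viewer perceives as a stable video.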

  • 12.
    Ringaby, Erik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Rectifying rolling shutter video from hand-held devices, 2011. In: Proceedings SSBA'11, Symposium on Image Analysis, 2011. Conference paper (Other academic)
    Abstract [en]

    This paper presents a method for rectifying video sequences from rolling shutter (RS) cameras. In contrast to previous RS rectification attempts we model distortions as being caused by the 3D motion of the camera. The camera motion is parametrised as a continuous curve, with knots at the last row of each frame. Curve parameters are solved for using non-linear least squares over inter-frame correspondences obtained from a KLT tracker. We have generated synthetic RS sequences with associated ground-truth to allow controlled evaluation. Using these sequences, we demonstrate that our algorithm improves over two previously published methods. The RS dataset is available on the web to allow comparison with other methods.

  • 13.
    Ringaby, Erik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Scan Rectification for Structured Light Range Sensors with Rolling Shutters, 2011. In: IEEE International Conference on Computer Vision, Barcelona, Spain, 2011, p. 1575-1582. Conference paper (Other academic)
    Abstract [en]

    Structured light range sensors, such as the Microsoft Kinect, have recently become popular as perception devices for computer vision and robotic systems. These sensors use CMOS imaging chips with electronic rolling shutters (ERS). When using such a sensor on a moving platform, both the image and the depth map will exhibit geometric distortions. We introduce an algorithm that can suppress such distortions, by rectifying the 3D point clouds from the range sensor. This is done by first estimating the time-continuous 3D camera trajectory, and then transforming the 3D points to where they would have been if the camera had been stationary. To ensure that image and range data are synchronous, the camera trajectory is computed from KLT tracks on the structured-light frames, after suppressing the structured-light pattern. We evaluate our rectification by measuring angles between the visible sides of a cube, before and after rectification. We also measure how much better the 3D point clouds can be aligned after rectification. The obtained improvement is also related to the actual rotational velocity, measured using a MEMS gyroscope.
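The core rectification step, moving each 3D point to where it would have been had the camera been stationary, can be sketched as follows. This is a toy illustration under the convention that capture at a given image row applies that row's rotation R to the point, so rectification applies its inverse Rᵀ; the function names and the per-row lookup table are our own assumptions.

```python
import numpy as np

def rectify_points(points, rows, row_rotations):
    """Undo, for each 3D point, the camera rotation that was active when
    its image row was read out (R is orthonormal, so R.T is its inverse)."""
    out = np.empty_like(points)
    for i, (p, r) in enumerate(zip(points, rows)):
        R = row_rotations[r]
        out[i] = R.T @ p
    return out

def rotz(a):
    """Rotation by angle a about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Simulate one static 3D point, observed once at row 0 (no rotation yet)
# and once at row 1 (camera has rotated 0.1 rad about z in between).
p_true = np.array([1.0, 2.0, 5.0])
R1 = rotz(0.1)
pts = np.array([p_true, R1 @ p_true])
rows = [0, 1]
row_R = {0: np.eye(3), 1: R1}
rect = rectify_points(pts, rows, row_R)
```

After rectification both observations coincide with the true point, which is exactly the effect measured in the paper via cube-side angles and point-cloud alignment.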

  • 14.
    Ringaby, Erik
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Friman, Ola
    Linköping University, Department of Biomedical Engineering, Medical Informatics. Linköping University, The Institute of Technology. Sick IVP AB, Linköping, Sweden.
    Olsvik Opsahl, Thomas
    Norwegian Defence Research Establishment.
    Vegard Haavardsholm, Trym
    Norwegian Defence Research Establishment.
    Kåsen, Ingebjørg
    Norwegian Defence Research Establishment.
    Anisotropic Scattered Data Interpolation for Pushbroom Image Rectification, 2014. In: IEEE Transactions on Image Processing, ISSN 1057-7149, E-ISSN 1941-0042, Vol. 23, no 5, p. 2302-2314. Article in journal (Refereed)
    Abstract [en]

    This article deals with fast and accurate visualization of pushbroom image data from airborne and spaceborne platforms. A pushbroom sensor acquires images in a line-scanning fashion, and this results in scattered input data that needs to be resampled onto a uniform grid for geometrically correct visualization. To this end, we model the anisotropic spatial dependence structure caused by the acquisition process. Several methods for scattered data interpolation are then adapted to handle the induced anisotropic metric and compared for the pushbroom image rectification problem. A trick that exploits the semi-ordered line structure of pushbroom data to improve the computational complexity by several orders of magnitude is also presented.

  • 15.
    Zografos, Vasileios
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Lenz, Reiner
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, The Institute of Technology. Linköping University, Center for Medical Image Science and Visualization (CMIV).
    Ringaby, Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Felsberg, Michael
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Nordberg, Klas
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Fast segmentation of sparse 3D point trajectories using group theoretical invariants, 2015. In: Computer Vision - ACCV 2014, Pt IV / [ed] D. Cremers, I. Reid, H. Saito, M.-H. Yang, Springer, 2015, Vol. 9006, p. 675-691. Conference paper (Refereed)
    Abstract [en]

    We present a novel approach for segmenting different motions from 3D trajectories. Our approach uses the theory of transformation groups to derive a set of invariants of 3D points located on the same rigid object. These invariants are inexpensive to calculate, involving primarily QR factorizations of small matrices. The invariants are easily converted into a set of robust motion affinities, and with the use of a local sampling scheme and spectral clustering they can be incorporated into a highly efficient motion segmentation algorithm. We have also captured a new multi-object 3D motion dataset, on which we have evaluated our approach and compared it against state-of-the-art competing methods from the literature. Our results show that our approach outperforms all methods while being robust to perspective distortions and degenerate configurations.
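The abstract does not spell out the invariant construction, but the flavour of a QR-based rigid-motion invariant can be illustrated: centre a set of 3D points to remove translation, then take the triangular factor of a QR factorisation. Left-multiplying the centred points by any rotation U changes the orthogonal factor (Q becomes UQ) but leaves R unchanged up to column signs, so R is a signature shared by all rigid placements of the same point set. This is an illustrative construction, not necessarily the paper's exact invariant.

```python
import numpy as np

def rigid_invariant(P):
    """Rotation/translation-invariant signature of a 3 x N point set:
    centre the points, QR-factorise, and normalise the sign ambiguity
    so that the diagonal of the triangular factor is positive."""
    Pc = P - P.mean(axis=1, keepdims=True)    # remove translation
    Q, R = np.linalg.qr(Pc)                   # Pc = Q R; U Pc = (U Q) R
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    return signs[:, None] * R                 # canonical: positive diagonal

# A rigidly moved copy of a random point set yields the same invariant.
rng = np.random.default_rng(1)
P = rng.normal(size=(3, 6))
U, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # a random orthogonal matrix
t = rng.normal(size=(3, 1))
P2 = U @ P + t
same = np.allclose(rigid_invariant(P), rigid_invariant(P2))
```

Comparing such signatures across sampled point groups is one way to build the motion affinities that feed spectral clustering.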
