Independently Moving Object Trajectories from Sequential Hierarchical Ransac
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering. ORCID iD: 0000-0002-5698-5983
2021 (English). In: VISAPP: Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Vol. 5: VISAPP, SciTePress, 2021, p. 722-731. Conference paper, Published paper (Refereed)
Abstract [en]

Safe robot navigation in a dynamic environment requires the trajectories of each independently moving object (IMO). We present the novel and effective system Sequential Hierarchical Ransac Estimation (Shire) designed for this purpose. The system uses a stereo camera stream to find the objects and trajectories in real time. Shire detects moving objects using geometric consistency and finds their trajectories using bundle adjustment. Relying on geometric consistency allows the system to handle objects regardless of semantic class, unlike approaches based on semantic segmentation. Most Visual Odometry (VO) systems are inherently limited to a single motion by the choice of tracker. This limitation allows for efficient and robust ego-motion estimation in real time, but precludes tracking the multiple motions sought. Shire instead uses a generic tracker and achieves accurate VO and IMO estimates using track analysis. This removes the restriction to a single motion while retaining the real-time performance required for live navigation. We evaluate the system by bounding box intersection over union and ID persistence on a public dataset, collected from an autonomous test vehicle driving in real traffic. We also show the velocities of estimated IMOs. We investigate variations of the system that provide trade-offs between accuracy, performance and limitations.
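The bounding-box intersection-over-union metric used in the evaluation above can be sketched as follows. This helper is illustrative only (not the paper's code) and assumes boxes given as (x_min, y_min, x_max, y_max) tuples:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero when the boxes do not overlap.
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7: unit overlap, union of 7
```

An estimated IMO bounding box is typically counted as a match when its IoU with a ground-truth box exceeds a threshold such as 0.5.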

Place, publisher, year, edition, pages
SciTePress, 2021. p. 722-731
Keywords [en]
Robot Navigation; Moving Object Trajectory Estimation; Visual Odometry; SLAM
National Category
Robotics
Identifiers
URN: urn:nbn:se:liu:diva-180066
DOI: 10.5220/0010253407220731
ISI: 000661288200077
ISBN: 9789897584886 (print)
OAI: oai:DiVA.org:liu-180066
DiVA, id: diva2:1602445
Conference
16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP) / 16th International Conference on Computer Vision Theory and Applications (VISAPP), held online, February 8-10, 2021
Available from: 2021-10-12 Created: 2021-10-12 Last updated: 2022-02-07
In thesis
1. Visual Odometry in Principle and Practice
2022 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Vision is the primary means by which we know where we are, what is nearby, and how we are moving. The corresponding computer-vision task is the simultaneous mapping of the surroundings and the localization of the camera. This task goes by many names; this thesis uses Visual Odometry, a name that implies the images are sequential and emphasizes pose accuracy and real-time requirements. The field has seen substantial improvements over the past decade, and visual odometry is used extensively in robotics for localization, navigation and obstacle detection.

The main purpose of this thesis is the study and advancement of visual odometry systems, and it makes several contributions. The first is a high-performance stereo visual odometry system, which through geometrically supported tracking achieved the top rank on the KITTI odometry benchmark.

The second is a state-of-the-art perspective-three-point (P3P) solver. Such solvers find the pose of a camera given the projections of three known 3D points and are a core part of many visual odometry systems. By reformulating the underlying problem we avoided a problematic quartic polynomial, and as a result achieved substantially higher computational performance and numerical accuracy.
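The P3P problem underlying such solvers can be stated via the law of cosines: with unknown depths d_i along the observed unit bearing vectors b_i, each pair of known 3D points P_i, P_j must satisfy ||P_i - P_j||² = d_i² + d_j² - 2 d_i d_j cos θ_ij, and eliminating variables from this system classically yields a quartic polynomial. The sketch below (a toy setup, not the thesis's solver) verifies these constraints for a camera placed at the origin, where the true depths are simply the point norms:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def norm(a): return math.sqrt(sum(x * x for x in a))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

# Toy geometry: known world points seen by a camera at the origin.
P = [(0.0, 0.0, 5.0), (1.0, 0.0, 6.0), (0.0, 1.0, 4.0)]
d = [norm(p) for p in P]                                  # true depths
bearings = [tuple(x / di for x in p) for p, di in zip(P, d)]  # unit rays

# Each pairwise law-of-cosines constraint must hold at the true depths.
ok = True
for i, j in [(0, 1), (0, 2), (1, 2)]:
    cos_ij = dot(bearings[i], bearings[j])
    lhs = norm(sub(P[i], P[j])) ** 2
    rhs = d[i] ** 2 + d[j] ** 2 - 2 * d[i] * d[j] * cos_ij
    ok = ok and abs(lhs - rhs) < 1e-9
print(ok)  # True
```

A P3P solver runs this reasoning in reverse: given only the bearings and the inter-point distances, it recovers the depths (up to a small number of candidate solutions) and from them the camera pose.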

The third is a system which generalizes stereo visual odometry to the simultaneous estimation of multiple independently moving objects. The main contribution is a system that identifies generic moving rigid objects and predicts their trajectories in real time, with applications to robotic navigation in dynamic environments.

The fourth is an improved spline-type continuous pose trajectory estimation framework, which simplifies the integration of general dynamic models. The framework is used to show that visual odometry systems based on continuous pose trajectories are both practical and able to operate in real time.
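The positional part of such a spline trajectory can be illustrated with a uniform cubic B-spline, whose value in each segment is a fixed blend of four neighboring control points (rotations require the analogous cumulative formulation on SO(3), omitted here). This is a generic sketch, not the thesis's framework:

```python
def bspline_point(c0, c1, c2, c3, u):
    """Uniform cubic B-spline position at normalized time u in [0, 1)
    within the segment controlled by the four points c0..c3."""
    b0 = (1 - u) ** 3 / 6.0
    b1 = (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0
    b2 = (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0
    b3 = u ** 3 / 6.0
    # The basis functions sum to 1, so the result is an affine
    # combination of the control points.
    return tuple(b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3
                 for p0, p1, p2, p3 in zip(c0, c1, c2, c3))

# Coincident control points yield a stationary trajectory.
p = bspline_point((1.0, 2.0), (1.0, 2.0), (1.0, 2.0), (1.0, 2.0), 0.3)
print(p)  # (1.0, 2.0) up to floating point
```

Because the trajectory is smooth in time, derivatives of the basis functions give velocity and acceleration analytically, which is what makes integrating dynamic models and asynchronous sensor measurements convenient.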

The visual odometry pipeline is considered from both a theoretical and a practical perspective. The systems described have been tested on both benchmarks and real vehicles. This thesis places the published work into context, highlighting key insights and practical observations.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2022. p. 133
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 2201
Keywords
Visual Odometry, Continuous Pose Trajectory, P3P, PNP, VO, Tracking, Calibration
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-182731
DOI: 10.3384/9789179291693
ISBN: 9789179291686, 9789179291693
Public defence
2022-03-04, Ada Lovelace, B-building and Zoom: https://liuse.zoom.us/j/66219624757, Campus Valla, Linköping, 09:00 (English)
Note

ISBN has been added for the PDF-version.

URL has been corrected in the PDF-version.

Available from: 2022-02-07 Created: 2022-02-07 Last updated: 2022-02-17. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links: Publisher's full text

Search in DiVA
By author/editor: Persson, Mikael; Forssén, Per-Erik
By organisation: Computer Vision; Faculty of Science & Engineering
On the subject: Robotics
