Probabilistic Hough Voting for Attitude Estimation from Aerial Fisheye Images
Grelsson, Bertil. Linköping University, Department of Electrical Engineering, Computer Vision; Linköping University, The Institute of Technology.
Felsberg, Michael. Linköping University, Department of Electrical Engineering, Computer Vision; Linköping University, The Institute of Technology. ORCID iD: 0000-0002-6096-3648
2013 (English). In: Image Analysis: 18th Scandinavian Conference, SCIA 2013, Espoo, Finland, June 17-20, 2013. Proceedings / [ed] Joni-Kristian Kämäräinen and Markus Koskela, Springer Berlin/Heidelberg, 2013, 478-488 p. Conference paper, Published paper (Refereed)
Abstract [en]

For navigation of unmanned aerial vehicles (UAVs), attitude estimation is essential. We present a method for attitude estimation (pitch and roll angle) from aerial fisheye images through horizon detection. The method is based on edge detection and a probabilistic Hough voting scheme. In a flight scenario, there is often some prior knowledge of the vehicle altitude and attitude. We exploit this prior knowledge to make the attitude estimation more robust, by weighting the edge pixel votes with the probability distributions of the altitude and of the pitch and roll angles. Unlike most horizon detection methods, our method does not require any sky/ground segmentation. The method has been evaluated on aerial fisheye images from the internet. The horizon is robustly detected in all tested images, and the deviation between the attitude estimates from our automated horizon detection and from a manual detection is less than 1 degree.
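To make the prior-weighted voting idea concrete, a minimal sketch follows. It is not the published implementation: it assumes, for illustration only, that the horizon projects to a circle in the fisheye image whose centre offset encodes pitch and roll and whose radius encodes the horizon dip (altitude), and every function and parameter name (prior_radius, prior_offset, bin_size, ...) is a placeholder rather than something taken from the paper.

import numpy as np

# Illustrative sketch only (not the authors' implementation): a probabilistic
# Hough voting scheme in which each edge pixel casts votes for horizon-circle
# parameters, and each vote is weighted by priors on the implied altitude
# (circle radius) and attitude (circle-centre offset).
def weighted_hough_horizon(edge_xy, edge_dir, radii, prior_radius, prior_offset,
                           img_shape, bin_size=4):
    """Accumulate prior-weighted votes for the horizon circle.

    edge_xy      : (N, 2) array of edge pixel coordinates (x, y).
    edge_dir     : (N,) gradient direction at each edge pixel [rad].
    radii        : candidate horizon radii [px], sampled from the altitude prior.
    prior_radius : r -> scalar weight from the altitude prior.
    prior_offset : (dx, dy) arrays -> per-pixel weights from the pitch/roll prior.
    """
    h, w = img_shape
    acc = np.zeros((len(radii), h // bin_size, w // bin_size))
    for k, r in enumerate(radii):
        w_r = prior_radius(r)
        # Each edge pixel votes for a circle centre r pixels away along its
        # gradient, in both directions (the sign of the horizon edge is unknown).
        for sign in (+1.0, -1.0):
            cx = edge_xy[:, 0] + sign * r * np.cos(edge_dir)
            cy = edge_xy[:, 1] + sign * r * np.sin(edge_dir)
            dx, dy = cx - w / 2.0, cy - h / 2.0        # offset from principal point
            weight = w_r * prior_offset(dx, dy)        # prior-weighted vote
            ix = (cx // bin_size).astype(int)
            iy = (cy // bin_size).astype(int)
            ok = (ix >= 0) & (ix < w // bin_size) & (iy >= 0) & (iy < h // bin_size)
            np.add.at(acc[k], (iy[ok], ix[ok]), weight[ok])
    # The accumulator maximum gives the most probable horizon circle, from which
    # pitch and roll would be recovered via the fisheye camera model.
    k, iy, ix = np.unravel_index(acc.argmax(), acc.shape)
    return radii[k], (ix + 0.5) * bin_size, (iy + 0.5) * bin_size

The essential point of the sketch is the weighting: with uninformative priors it degenerates to an ordinary Hough circle transform, whereas concentrated altitude and attitude priors suppress votes from edge pixels that cannot belong to a plausible horizon.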

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2013. 478-488 p.
Series
Lecture Notes in Computer Science, ISSN 0302-9743 (print), 1611-3349 (online) ; 7944
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:liu:diva-98066
DOI: 10.1007/978-3-642-38886-6_45
ISI: 000342988500045
ISBN: 978-3-642-38885-9 (print)
ISBN: 978-3-642-38886-6 (print)
OAI: oai:DiVA.org:liu-98066
DiVA: diva2:651774
Conference
18th Scandinavian Conference on Image Analysis (SCIA 2013), 17-20 June 2013, Espoo, Finland.
Projects
CIMSMAP
Available from: 2013-09-27 Created: 2013-09-27 Last updated: 2016-05-04. Bibliographically approved.
In thesis
1. Global Pose Estimation from Aerial Images: Registration with Elevation Models
2014 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Over the last decade, the use of unmanned aerial vehicles (UAVs) has increased drastically. Originally, the use of these aircraft was mainly military, but today many civil applications have emerged. UAVs are frequently the preferred choice for surveillance missions in disaster areas, after earthquakes or hurricanes, and in hazardous environments, e.g. for detection of nuclear radiation. The UAVs employed in these missions are often relatively small in size, which implies payload restrictions.

For navigation of the UAVs, continuous global pose (position and attitude) estimation is mandatory. Cameras can be fabricated both small in size and light in weight. This makes vision-based methods well suited for pose estimation onboard these vehicles. It is obvious that no single method can be used for pose estimation in all different phases throughout a flight. The image content will be very different on the runway, during ascent, during flight at low or high altitude, above urban or rural areas, etc. In total, a multitude of pose estimation methods is required to handle all these situations. Over the years, a large number of vision-based pose estimation methods for aerial images have been developed. But there are still open research areas within this field, e.g. the use of omnidirectional images for pose estimation is relatively unexplored.

The contributions of this thesis are three vision-based methods for global ego-positioning and/or attitude estimation from aerial images. The first method, for full 6DoF (degrees of freedom) pose estimation, is based on registration of local height information with a geo-referenced 3D model. A dense local height map is computed using motion stereo. A pose estimate from navigation sensors is used as an initialization. The global pose is inferred from the 3D similarity transform between the local height map and the 3D model. Aligning height information is assumed to be more robust to seasonal variations than feature matching in a single-view based approach.
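As a rough illustration of the registration step in the first method, the sketch below computes a closed-form 3D similarity transform (scale, rotation, translation) between corresponding points from a local height map and a geo-referenced 3D model, following Umeyama (1991). It is a generic alignment routine under the assumption that point correspondences are already available, not the registration pipeline of the thesis; all names are illustrative.

import numpy as np

def similarity_transform(src, dst):
    """Return (s, R, t) such that dst[i] ≈ s * R @ src[i] + t in a least-squares sense.

    src : (N, 3) points from the local height map (local frame).
    dst : (N, 3) corresponding points in the geo-referenced 3D model.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                      # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:    # enforce a proper rotation
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (xs ** 2).sum() / len(src)            # variance of the source cloud
    s = np.trace(np.diag(D) @ S) / var_src          # optimal isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t

In the setting described above, the pose prior from the navigation sensors would supply the initial alignment from which correspondences can be established; the estimated similarity transform then carries the local height map, and hence the camera, into the global frame.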

The second contribution is a method for attitude (pitch and roll angle) estimation via horizon detection. It is one of only a few methods in the literature that use an omnidirectional (fisheye) camera for horizon detection in aerial images. The method is based on edge detection and a probabilistic Hough voting scheme. In a flight scenario, there is often some knowledge of the probability densities of the altitude and the attitude angles. The proposed method allows this prior information to be used to make the attitude estimation more robust.

The third contribution is a further development of the second method. It is the very first method presented where the attitude estimates from the detected horizon in omnidirectional images are refined through registration with the geometrically expected horizon from a digital elevation model. It is one of few methods where the ray refraction in the atmosphere is taken into account, which contributes to the highly accurate pose estimates. The attitude errors obtained are about one order of magnitude smaller than for any previous vision-based method for attitude estimation from horizon detection in aerial images.
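As a back-of-the-envelope illustration of why ray refraction matters here, the snippet below compares the purely geometric dip of the horizon below the local horizontal with the dip obtained from the common effective-Earth-radius refraction model. The refraction coefficient k = 0.13 is an assumed textbook default, not a value taken from the thesis.

import math

R_EARTH = 6_371_000.0   # mean Earth radius [m]

def horizon_dip_deg(h, k=0.0):
    """Dip angle [deg] of the horizon for an observer at altitude h [m].

    Refraction is modelled by the usual effective-radius substitution
    R -> R / (1 - k), which reduces the apparent dip; k = 0 gives the
    purely geometric dip, arccos(R / (R + h)) ≈ sqrt(2h / R).
    """
    r = R_EARTH / (1.0 - k)
    return math.degrees(math.acos(r / (r + h)))

# At 500 m altitude: geometric dip ≈ 0.72 deg, refracted dip ≈ 0.67 deg.
# A difference of a few hundredths of a degree is significant when the goal
# is attitude errors well below one degree.
print(horizon_dip_deg(500.0), horizon_dip_deg(500.0, k=0.13))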

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2014. 53 p.
Series
Linköping Studies in Science and Technology. Thesis, ISSN 0280-7971 ; 1672
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-108213 (URN)
10.3384/lic.diva-108213 (DOI)
978-91-7519-279-6 (ISBN)
Presentation
2014-08-22, Visionen, B-huset, Campus Valla, Linköpings universitet, Linköping, 13:15 (Swedish)
Available from: 2014-06-26 Created: 2014-06-26 Last updated: 2016-05-04. Bibliographically approved.

Open Access in DiVA

fulltext (1675 kB), 306 downloads
File information
File name: FULLTEXT01.pdf
File size: 1675 kB
Checksum (SHA-512): 27e3c3824eaf399084e145ffd1436ff42680eb2897ff3833f3d2e66110246cae125d409957f140f6b22c9dbfa7712274f39fc39ce85a71e6970fdf273627400e
Type: fulltext
Mimetype: application/pdf


Authority records
Grelsson, Bertil; Felsberg, Michael

