Fusing Object Detections to Obtain Geolocated Salient Points Using Aerial Images
2025 (English). In: Sombattheera, Chattrakul; Weng, Paul; Pang, Jun (eds.), Multi-disciplinary Trends in Artificial Intelligence (MIWAI 2024), Lecture Notes in Computer Science, Vol. 15432. Singapore: Springer Nature, 2025, p. 155-166. Conference paper, published paper (refereed).
Abstract [en]
This paper addresses the problem of vision-based object geolocation using Unmanned Aerial Vehicles in Search and Rescue settings. It focuses on the task of automatically and accurately geolocating objects of different classes, in particular human bodies, to provide a map of the detected objects as salient locations. Such maps can be used by responders to plan rescue operations or by other robotic platforms where geolocation is necessary, such as for the delivery of medical supplies. The proposed solution strategy combines recent developments in Convolutional Neural Networks for vision-based object detection with a method for fusing detections. Occupancy probabilities of locations in the environment containing objects of specific classes, or lacking them, are also computed. This is achieved with a novel sensor model that fuses vision-based detections using both positive and negative observations. The method is validated in simulation as well as in real field experiments.
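Fusing positive and negative observations into an occupancy probability, as described in the abstract, resembles a standard log-odds Bayesian update. The sketch below illustrates that general technique only; the function names and the detector probabilities `p_tp` (true-positive rate) and `p_fp` (false-positive rate) are illustrative assumptions, not the paper's actual sensor model.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def fuse_detections(observations, p_prior=0.5, p_tp=0.9, p_fp=0.1):
    """Fuse binary detections for one map cell via a log-odds update.

    observations: list of bool, True = detector fired (positive
    observation), False = cell was observed but no detection
    (negative observation).
    Returns the posterior probability that the cell contains an
    object of the class of interest.
    """
    l = logit(p_prior)
    for fired in observations:
        if fired:
            # likelihood ratio of a positive observation
            l += math.log(p_tp / p_fp)
        else:
            # likelihood ratio of a negative observation
            l += math.log((1.0 - p_tp) / (1.0 - p_fp))
    return 1.0 / (1.0 + math.exp(-l))
```

With these assumed rates, repeated positive observations drive the probability toward one, while a matching number of negatives pulls it back to the prior; for example, `fuse_detections([True, True, True])` exceeds 0.99.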
Place, publisher, year, edition, pages
Singapore: Springer Nature, 2025. Vol. 15432, p. 155-166
Series
Lecture Notes in Artificial Intelligence ; 15432. Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
Keywords [en]
Detection fusion; Geolocation; UAVs; Drones; CNN
National Category
Artificial Intelligence; Computer Sciences
Identifiers
URN: urn:nbn:se:liu:diva-211975
DOI: 10.1007/978-981-96-0695-5_13
ISBN: 978-981-96-0695-5 (electronic)
OAI: oai:DiVA.org:liu-211975
DiVA, id: diva2:1941599
Conference
International Conference on Multi-disciplinary Trends in Artificial Intelligence
Note
Funding Agencies: ELLIIT Network Organization for Information and Communication Technology, Sweden (Project B09); the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation; and Sweden's Innovation Agency Vinnova (Projects: 2022-00086, 2023-01035, 2024-01322, 2024-01775). The third author is also supported by a research grant from Mahasarakham University, Thailand.
2025-03-01