Rudol, Piotr
Publications (10 of 22)
Doherty, P., Kvarnström, J., Rudol, P., Wzorek, M., Conte, G., Berger, C., . . . Stastny, T. (2016). A Collaborative Framework for 3D Mapping using Unmanned Aerial Vehicles. In: Baldoni, M., Chopra, A.K., Son, T.C., Hirayama, K., Torroni, P. (Ed.), PRIMA 2016: Principles and Practice of Multi-Agent Systems: . Paper presented at PRIMA 2016: Principles and Practice of Multi-Agent Systems (pp. 110-130). Springer Publishing Company
A Collaborative Framework for 3D Mapping using Unmanned Aerial Vehicles
2016 (English)In: PRIMA 2016: Principles and Practice of Multi-Agent Systems / [ed] Baldoni, M., Chopra, A.K., Son, T.C., Hirayama, K., Torroni, P., Springer Publishing Company, 2016, p. 110-130Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an overview of a generic framework for collaboration among humans and multiple heterogeneous robotic systems, based on a formal characterization of delegation as a speech act. The system contains a complex set of integrated software modules that include delegation managers for each platform, a task specification language for characterizing distributed tasks, a task planner, a multi-agent scan trajectory generation and region partitioning module, and a system infrastructure used to distributively instantiate any number of robotic systems and user interfaces in a collaborative team. The application focuses on 3D reconstruction in alpine environments and is intended for use by alpine rescue teams. Two complex UAV systems used in the experiments are described, along with a fully autonomous collaborative mission executed in the Italian Alps using the framework.

Place, publisher, year, edition, pages
Springer Publishing Company, 2016
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9862
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-130558 (URN); 10.1007/978-3-319-44832-9_7 (DOI); 000388796200007 (); 978-3-319-44831-2 (ISBN)
Conference
PRIMA 2016: Principles and Practice of Multi-Agent Systems
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; EU, FP7, Seventh Framework Programme; VINNOVA; Swedish Research Council
Note

Accepted for publication.

Available from: 2016-08-16 Created: 2016-08-16 Last updated: 2018-02-20
Häger, G., Bhat, G., Danelljan, M., Khan, F. S., Felsberg, M., Rudol, P. & Doherty, P. (2016). Combining Visual Tracking and Person Detection for Long Term Tracking on a UAV. In: Proceedings of the 12th International Symposium on Advances in Visual Computing: . Paper presented at International Symposium on Advances in Visual Computing.
Combining Visual Tracking and Person Detection for Long Term Tracking on a UAV
2016 (English)In: Proceedings of the 12th International Symposium on Advances in Visual Computing, 2016Conference paper, Published paper (Refereed)
Abstract [en]

Visual object tracking performance has improved significantly in recent years. Most trackers are based on one of two paradigms: online learning of an appearance model or the use of a pre-trained object detector. Methods based on online learning provide high accuracy, but are prone to model drift, which occurs when the tracker fails to correctly estimate the tracked object's position. Methods based on a detector, on the other hand, typically have good long-term robustness but reduced accuracy compared to online methods.

Despite the complementarity of the aforementioned approaches, the problem of fusing them into a single framework is largely unexplored. In this paper, we propose a novel fusion between an online tracker and a pre-trained detector for tracking humans from a UAV. The system operates in real time on a UAV platform. In addition, we present a novel dataset for long-term tracking in a UAV setting that includes scenarios which are typically not well represented in standard visual tracking datasets.
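The fusion idea described above can be sketched as a simple control flow: run the online tracker every frame, and fall back to the pre-trained detector whenever the tracker's confidence suggests drift. This is an illustrative sketch under that assumption, not the authors' implementation; all names and thresholds here are made up.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x, y, w, h)

@dataclass
class FusedTracker:
    """Runs an online tracker every frame; falls back to a detector
    whenever the tracker's confidence drops below a threshold."""
    track: Callable[[int, Box], Tuple[Box, float]]  # (frame, prev box) -> (box, confidence)
    detect: Callable[[int], Optional[Box]]          # frame -> box or None
    conf_threshold: float = 0.5
    box: Optional[Box] = None

    def step(self, frame: int) -> Optional[Box]:
        if self.box is not None:
            box, conf = self.track(frame, self.box)
            if conf >= self.conf_threshold:
                self.box = box
                return self.box
        # Tracker drifted or was never initialised: let the detector recover.
        self.box = self.detect(frame)
        return self.box

# Toy demo: the "tracker" loses the target on frame 2; the "detector"
# re-initialises the track at the target's true position (here, x = frame).
def toy_track(frame, prev):
    return prev, (0.9 if frame != 2 else 0.1)

def toy_detect(frame):
    return (float(frame), 0.0, 10.0, 10.0)

fused = FusedTracker(track=toy_track, detect=toy_detect)
boxes = [fused.step(f) for f in range(4)]
```

Injecting the tracker and detector as callables keeps the fusion logic independent of any particular vision backend.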

National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-137897 (URN); 10.1007/978-3-319-50835-1_50 (DOI); 2-s2.0-85007039301 (Scopus ID); 978-3-319-50834-4 (ISBN); 978-3-319-50835-1 (ISBN)
Conference
International Symposium on Advances in Visual Computing
Available from: 2017-05-31 Created: 2017-05-31 Last updated: 2018-01-13. Bibliographically approved
Berger, C., Rudol, P., Wzorek, M. & Kleiner, A. (2016). Evaluation of Reactive Obstacle Avoidance Algorithms for a Quadcopter. In: Proceedings of the 14th International Conference on Control, Automation, Robotics and Vision 2016 (ICARCV): . Paper presented at 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), Phuket, Thailand, November 13-15, 2016. IEEE conference proceedings, Article ID Tu31.3.
Evaluation of Reactive Obstacle Avoidance Algorithms for a Quadcopter
2016 (English)In: Proceedings of the 14th International Conference on Control, Automation, Robotics and Vision 2016 (ICARCV), IEEE conference proceedings, 2016, article id Tu31.3Conference paper, Published paper (Refereed)
Abstract [en]

In this work we investigate reactive avoidance techniques which can be used on board a small quadcopter and which do not require absolute localisation. We propose a local map representation which can be updated with proprioceptive sensors. The local map is centred around the robot and uses spherical coordinates to represent a point cloud. The local map is updated using a depth sensor, an inertial measurement unit and a registration algorithm. We propose an extension of the Dynamic Window Approach to compute a velocity vector based on the current local map. We also propose using an OctoMap structure to compute a two-pass A* search, which provides a path that is converted to a velocity vector. Both approaches are reactive as they only make use of local information. The algorithms were evaluated in a simulator which offers a realistic environment, both in terms of control and sensors. The results obtained were also validated by running the algorithms on a real platform.
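The velocity-selection step of a Dynamic Window Approach variant like the one described can be sketched as follows: sample candidate velocities, roll each one out for a short horizon, discard rollouts that end too close to an obstacle, and score the rest by goal progress plus clearance. This is a minimal hypothetical sketch; the weights, horizon and safety distance are made-up parameters, not the paper's.

```python
import math

def dwa_velocity(pos, goal, obstacles, speeds, headings,
                 horizon=1.0, safe_dist=0.5):
    """Pick the admissible (speed, heading) whose short rollout scores best."""
    best, best_score = (0.0, 0.0), -math.inf
    for v in speeds:
        for h in headings:
            # Forward-simulate the candidate velocity for `horizon` seconds.
            end = (pos[0] + v * math.cos(h) * horizon,
                   pos[1] + v * math.sin(h) * horizon)
            clearance = min((math.dist(end, o) for o in obstacles),
                            default=math.inf)
            if clearance < safe_dist:          # rollout ends too close: discard
                continue
            progress = -math.dist(end, goal)   # closer to the goal is better
            score = progress + 0.3 * min(clearance, 2.0)
            if score > best_score:
                best_score, best = score, (v, h)
    return best

# Obstacle straight ahead: the fast straight candidate is inadmissible,
# so the planner swerves at full speed instead.
v, h = dwa_velocity(pos=(0.0, 0.0), goal=(5.0, 0.0), obstacles=[(1.0, 0.0)],
                    speeds=[0.5, 1.0], headings=[0.0, math.pi / 4, -math.pi / 4])
```

Because only the local obstacle set is consulted, the selection stays purely reactive, matching the paper's premise of using local information only.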

Place, publisher, year, edition, pages
IEEE conference proceedings, 2016
Series
International Conference on Control Automation Robotics and Vision, ISSN 2474-2953
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-130956 (URN); 10.1109/ICARCV.2016.7838803 (DOI); 000405520900204 (); 2-s2.0-85015170851 (Scopus ID); 9781509035496 (ISBN); 9781509047574 (ISBN); 9781509035502 (ISBN)
Conference
14th International Conference on Control, Automation, Robotics and Vision (ICARCV), Phuket, Thailand, November 13-15, 2016
Note

Funding agencies: This work is partially supported by the Swedish Research Council (VR) Linnaeus Center CADICS, the ELLIIT network organization for Information and Communication Technology, and the Swedish Foundation for Strategic Research (CUAS Project, SymbiKCloud Project).

Available from: 2016-09-01 Created: 2016-09-01 Last updated: 2018-01-10. Bibliographically approved
Andersson, O., Wzorek, M., Rudol, P. & Doherty, P. (2016). Model-Predictive Control with Stochastic Collision Avoidance using Bayesian Policy Optimization. In: IEEE International Conference on Robotics and Automation (ICRA), 2016: . Paper presented at IEEE International Conference on Robotics and Automation (ICRA), 2016, Stockholm, May 16-21 (pp. 4597-4604). Institute of Electrical and Electronics Engineers (IEEE)
Model-Predictive Control with Stochastic Collision Avoidance using Bayesian Policy Optimization
2016 (English)In: IEEE International Conference on Robotics and Automation (ICRA), 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 4597-4604Conference paper, Published paper (Refereed)
Abstract [en]

Robots are increasingly expected to move out of the controlled environment of research labs and into populated streets and workplaces. Collision avoidance in such cluttered and dynamic environments is of increasing importance as robots gain more autonomy. However, efficient avoidance is fundamentally difficult since computing safe trajectories may require considering both dynamics and uncertainty. While heuristics are often used in practice, we take a holistic stochastic trajectory optimization perspective that merges both collision avoidance and control. We examine dynamic obstacles moving without prior coordination, like pedestrians or vehicles. We find that common stochastic simplifications lead to poor approximations when obstacle behavior is difficult to predict. We instead compute efficient approximations by drawing upon techniques from machine learning. We propose to combine policy search with model-predictive control. This allows us to use recent fast constrained model-predictive control solvers, while gaining the stochastic properties of policy-based methods. We exploit recent advances in Bayesian optimization to efficiently solve the resulting probabilistically-constrained policy optimization problems. Finally, we present a real-time implementation of an obstacle avoiding controller for a quadcopter. We demonstrate the results in simulation as well as with real flight experiments.
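The problem shape described above — maximise expected reward subject to a bound on estimated collision probability — can be sketched in a few lines. The paper uses Bayesian optimization; this toy substitutes plain random search for the Bayesian optimiser to keep the sketch self-contained, and the dynamics, reward and bound are all illustrative assumptions.

```python
import random

def optimize_policy(rollout, n_params, n_iters=200, n_rollouts=40,
                    p_collide_max=0.05, seed=0):
    """Random search over policy parameters; keep the best parameters whose
    Monte-Carlo collision-probability estimate satisfies the bound."""
    rng = random.Random(seed)
    best_theta, best_reward = None, float("-inf")
    for _ in range(n_iters):
        theta = [rng.uniform(-1.0, 1.0) for _ in range(n_params)]
        # Monte-Carlo estimates of expected reward and collision probability.
        results = [rollout(theta, rng) for _ in range(n_rollouts)]
        reward = sum(r for r, _ in results) / n_rollouts
        p_collide = sum(c for _, c in results) / n_rollouts
        if p_collide <= p_collide_max and reward > best_reward:
            best_theta, best_reward = theta, reward
    return best_theta, best_reward

# Toy problem: reward peaks at theta = 0.5, but any theta above 0.8
# always "collides", so the probabilistic constraint rules that region out.
def toy_rollout(theta, rng):
    reward = -(theta[0] - 0.5) ** 2 + rng.gauss(0.0, 0.01)
    collided = 1.0 if theta[0] > 0.8 else 0.0
    return reward, collided

theta, reward = optimize_policy(toy_rollout, n_params=1)
```

A Bayesian optimiser would replace the uniform sampling with a surrogate-model-guided proposal, which is exactly where the paper's sample efficiency comes from.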

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Series
Proceedings of IEEE International Conference on Robotics and Automation, ISSN 1050-4729
Keyword
Robot Learning, Collision Avoidance, Robotics, Bayesian Optimization, Model Predictive Control
National Category
Robotics; Computer Sciences
Identifiers
urn:nbn:se:liu:diva-126769 (URN); 10.1109/ICRA.2016.7487661 (DOI); 000389516203138 ()
Conference
IEEE International Conference on Robotics and Automation (ICRA), 2016, Stockholm, May 16-21
Projects
CADICS; ELLIIT; NFFP6; CUAS; SHERPA
Funder
Linnaeus research environment CADICS; ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; EU, FP7, Seventh Framework Programme; Swedish Foundation for Strategic Research
Available from: 2016-04-04 Created: 2016-04-04 Last updated: 2018-01-10. Bibliographically approved
Danelljan, M., Khan, F. S., Felsberg, M., Granström, K., Heintz, F., Rudol, P., . . . Doherty, P. (2015). A Low-Level Active Vision Framework for Collaborative Unmanned Aircraft Systems. In: Lourdes Agapito, Michael M. Bronstein and Carsten Rother (Ed.), COMPUTER VISION - ECCV 2014 WORKSHOPS, PT I: . Paper presented at 13th European Conference on Computer Vision (ECCV) Switzerland, September 6-7 and 12 (pp. 223-237). Springer Publishing Company, 8925
A Low-Level Active Vision Framework for Collaborative Unmanned Aircraft Systems
2015 (English)In: COMPUTER VISION - ECCV 2014 WORKSHOPS, PT I / [ed] Lourdes Agapito, Michael M. Bronstein and Carsten Rother, Springer Publishing Company, 2015, Vol. 8925, p. 223-237Conference paper, Published paper (Refereed)
Abstract [en]

Micro unmanned aerial vehicles are becoming increasingly interesting for aiding and collaborating with human agents in a myriad of applications; in particular, they are useful for monitoring inaccessible or dangerous areas. In order to interact with and monitor humans, these systems need robust and real-time computer vision subsystems that allow them to detect and follow persons.

In this work, we propose a low-level active vision framework to accomplish these challenging tasks. Based on the LinkQuad platform, we present a system study that implements the detection and tracking of people under fully autonomous flight conditions, keeping the vehicle within a certain distance of a person. The framework integrates state-of-the-art methods from visual detection and tracking, Bayesian filtering, and AI-based control. The results from our experiments clearly suggest that the proposed framework performs real-time detection and tracking of persons in complex scenarios.
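The distance-keeping behaviour mentioned above can be illustrated with a toy proportional controller: command a forward speed that drives the measured distance to the person towards a desired standoff. This is only a sketch of that one behaviour; the paper's AI-based control is far richer, and the gains and limits below are invented.

```python
def standoff_velocity(distance, desired=8.0, kp=0.5, v_max=2.0):
    """Proportional control of forward speed towards a standoff distance."""
    v = kp * (distance - desired)        # too far -> move forward; too close -> back off
    return max(-v_max, min(v_max, v))    # saturate to the platform's speed limits

# Commands for a person measured at 20 m, 10 m, 8 m (on target) and 5 m.
commands = [standoff_velocity(d) for d in (20.0, 10.0, 8.0, 5.0)]
```

In a full system the distance estimate would come from the Bayesian filter over detections, not a raw measurement.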

Place, publisher, year, edition, pages
Springer Publishing Company, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 8925
Keyword
Visual tracking; Visual surveillance; Micro UAV; Active vision
National Category
Computer Vision and Robotics (Autonomous Systems); Computer Sciences
Identifiers
urn:nbn:se:liu:diva-115847 (URN); 10.1007/978-3-319-16178-5_15 (DOI); 000362493800015 (); 978-3-319-16177-8 (ISBN); 978-3-319-16178-5 (ISBN)
Conference
13th European Conference on Computer Vision (ECCV) Switzerland, September 6-7 and 12
Available from: 2015-03-20 Created: 2015-03-20 Last updated: 2018-02-07. Bibliographically approved
Conte, G., Rudol, P. & Doherty, P. (2014). Evaluation of a Light-weight Lidar and a Photogrammetric System for Unmanned Airborne Mapping Applications: [Bewertung eines Lidar-systems mit geringem Gewicht und eines photogrammetrischen Systems für Anwendungen auf einem UAV]. Photogrammetrie - Fernerkundung - Geoinformation (4), 287-298
Evaluation of a Light-weight Lidar and a Photogrammetric System for Unmanned Airborne Mapping Applications: [Bewertung eines Lidar-systems mit geringem Gewicht und eines photogrammetrischen Systems für Anwendungen auf einem UAV]
2014 (English)In: Photogrammetrie - Fernerkundung - Geoinformation, ISSN 1432-8364, no 4, p. 287-298Article in journal (Refereed) Published
Abstract [en]

This paper presents a comparison of two light-weight and low-cost airborne mapping systems. One is based on lidar technology and the other on a video camera. The airborne lidar system consists of a high-precision global navigation satellite system (GNSS) receiver, a microelectromechanical system (MEMS) inertial measurement unit, a magnetic compass and a low-cost lidar scanner. The vision system is based on a consumer-grade video camera. A commercial photogrammetric software package is used to process the acquired images and generate a digital surface model. The two systems are described and compared in terms of hardware requirements and data processing. The systems are also tested and compared with respect to their application on board an unmanned aerial vehicle (UAV). An evaluation of the accuracy of the two systems is presented. Additionally, the multi-echo capability of the lidar sensor is evaluated in a test site covered with dense vegetation. The lidar and camera systems were mounted and tested on board an industrial unmanned helicopter with a maximum take-off weight of around 100 kilograms. The presented results are based on real flight-test data.
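The core of an airborne lidar mapping system like the one described is direct georeferencing: each range return is rotated from the sensor frame into the world frame using the GNSS/IMU pose. The sketch below is a deliberately simplified, hypothetical version handling only heading (yaw); a real system applies the full roll/pitch/yaw rotation plus boresight and lever-arm calibration.

```python
import math

def georeference(platform_pos, yaw, scan_angle, rng):
    """Map one (scan_angle, range) lidar return into world coordinates.

    platform_pos: (east, north, up) from GNSS; yaw: heading in radians from
    the IMU/compass; scan_angle: beam angle in the sensor's scan plane
    (0 = straight down); rng: measured range in metres."""
    # Beam direction in the sensor frame: scanning sideways in a plane,
    # tilted scan_angle away from nadir.
    lateral = rng * math.sin(scan_angle)
    down = rng * math.cos(scan_angle)
    # Rotate the lateral offset by the platform heading (yaw only here).
    east = platform_pos[0] + lateral * math.cos(yaw)
    north = platform_pos[1] + lateral * math.sin(yaw)
    up = platform_pos[2] - down
    return (east, north, up)

# Hovering at 50 m altitude, a nadir return of 50 m lands directly below.
ground_point = georeference((100.0, 200.0, 50.0), 0.0, 0.0, 50.0)
```

Accumulating such points over a flight line yields the point cloud from which the terrain model is computed.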

Abstract [de]

This paper presents a comparison of two light-weight and low-cost airborne mapping systems. One of the systems is based on laser scanner technology, while the other uses a video camera. The airborne laser scanner system consists of a high-precision receiver for global navigation satellite systems (GNSS), an inertial measurement unit (IMU) based on a microelectromechanical system (MEMS), a magnetic compass and a low-cost laser scanner. The optical system is based on a consumer-grade video camera. A commercial photogrammetric software package is used to process the acquired images and derive digital surface models. The two systems are described and compared with regard to their hardware and data-processing requirements. They are also tested and compared with regard to their characteristics when used on unmanned aerial vehicles (UAVs). The accuracy of both systems is evaluated. In addition, the multi-echo capability of the laser scanner sensor is examined in a test area with dense vegetation. Both systems were mounted on an unmanned industrial helicopter with a maximum take-off weight of approximately 100 kg. All results presented here are based on data actually acquired during test flights.

Place, publisher, year, edition, pages
Stuttgart, Germany: E. Schweizerbart'sche Verlagsbuchhandlung, 2014
Keyword
UAS, lidar, sensor fusion, photogrammetry
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-109086 (URN); 000340440700007 ()
Projects
SHERPA; CADICS; ELLIIT; NFFP6 KISA; CUAS
Funder
EU, FP7, Seventh Framework Programme, 600958; Linnaeus research environment CADICS; ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Swedish Foundation for Strategic Research
Available from: 2014-08-06 Created: 2014-08-06 Last updated: 2018-01-11
Doherty, P., Kvarnström, J., Wzorek, M., Rudol, P., Heintz, F. & Conte, G. (2014). HDRC3 - A Distributed Hybrid Deliberative/Reactive Architecture for Unmanned Aircraft Systems. In: Kimon P. Valavanis, George J. Vachtsevanos (Ed.), Handbook of Unmanned Aerial Vehicles: (pp. 849-952). Dordrecht: Springer Science+Business Media B.V.
HDRC3 - A Distributed Hybrid Deliberative/Reactive Architecture for Unmanned Aircraft Systems
2014 (English)In: Handbook of Unmanned Aerial Vehicles / [ed] Kimon P. Valavanis, George J. Vachtsevanos, Dordrecht: Springer Science+Business Media B.V., 2014, p. 849-952Chapter in book (Other academic)
Abstract [en]

This chapter presents a distributed architecture for unmanned aircraft systems that provides full integration of both low and high autonomy. The architecture has been instantiated and used in a rotor-based aerial vehicle, but is not limited to particular aircraft systems. Various generic functionalities essential to the integration of both low and high autonomy in a single system are isolated and described. The architecture has also been extended for use with multi-platform systems. The chapter covers the full spectrum of functionalities required for operation in missions requiring high autonomy. A control kernel is presented with diverse flight modes integrated with a navigation subsystem. Specific interfaces and languages are introduced which provide seamless transitions between deliberative and reactive capabilities, and between reactive and control capabilities. Hierarchical Concurrent State Machines are introduced as a real-time mechanism for specifying and executing low-level reactive control. Task Specification Trees are introduced as both a declarative and a procedural mechanism for the specification of high-level tasks. Task planners and motion planners which are tightly integrated into the architecture are described. Generic middleware capability for specifying data and knowledge flow within the architecture, based on a stream abstraction, is also described. Temporal logic is used throughout, both as a specification language and as an integral part of an execution monitoring mechanism. Emphasis is placed on the robust integration and interaction between these diverse functionalities using a principled architectural framework. The architecture has been empirically tested in several complex missions, some of which are described in the chapter.

Place, publisher, year, edition, pages
Dordrecht: Springer Science+Business Media B.V., 2014
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-113613 (URN); 10.1007/978-90-481-9707-1_118 (DOI); 978-90-481-9706-4 (ISBN); 978-90-481-9707-1 (ISBN)
Funder
EU, FP7, Seventh Framework Programme, 600958; Swedish Foundation for Strategic Research; Linnaeus research environment CADICS; ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2015-01-26 Created: 2015-01-26 Last updated: 2018-01-11
Kolling, A., Kleiner, A. & Rudol, P. (2013). Fast Guaranteed Search With Unmanned Aerial Vehicles. In: Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013): . Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), November 3-8, 2013, Tokyo, Japan (pp. 6013-6018). IEEE
Fast Guaranteed Search With Unmanned Aerial Vehicles
2013 (English)In: Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), IEEE , 2013, p. 6013-6018Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we consider the problem of searching for an arbitrarily smart and fast evader in a large environment with a team of unmanned aerial vehicles (UAVs) while providing guarantees of detection. Our emphasis is on the fast execution of efficient search strategies that minimize the number of UAVs and the search time. We present the first approach for computing fast search strategies that utilize additional searchers to speed up the execution time, thereby enabling large-scale UAV search. In order to scale to very large environments when using UAVs, one would either have to overcome the energy limitations of UAVs or pay the cost of utilizing additional UAVs to speed up the search. Our approach is based on coordinating UAVs on sweep lines, covered by the UAV sensors, that move simultaneously through an environment. We present simulation results that show a significant reduction in execution time when using multiple UAVs, and a demonstration of a real system with three AR.Drones.
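The sweep-line intuition above can be reduced to back-of-the-envelope geometry for a rectangular region: place enough UAVs side by side that their sensor footprints cover the full width, then move the line through the region so no evader can slip past. These formulas are simple geometry for the rectangular case only, not the paper's algorithm, which handles arbitrary environments.

```python
import math

def sweep_plan(width, length, sensor_radius, speed):
    """UAVs needed to cover the sweep line, and the time for one clearing pass."""
    n_uavs = math.ceil(width / (2.0 * sensor_radius))  # footprints tile the width
    sweep_time = length / speed                        # the line crosses the region once
    return n_uavs, sweep_time

# A 100 m x 500 m field, 10 m sensor radius, 5 m/s sweep speed.
n, t = sweep_plan(width=100.0, length=500.0, sensor_radius=10.0, speed=5.0)
```

The trade-off the abstract mentions is visible here: halving the search time by doubling `speed` is limited by platform energy, while adding UAVs grows `n_uavs` at fixed time.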

Place, publisher, year, edition, pages
IEEE, 2013
Series
IEEE International Conference on Intelligent Robots and Systems. Proceedings, ISSN 2153-0858
National Category
Computer Systems; Robotics
Identifiers
urn:nbn:se:liu:diva-95888 (URN); 10.1109/IROS.2013.6697229 (DOI); 000331367406013 ()
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), November 3-8, 2013, Tokyo, Japan
Projects
ELLIIT; CUAS; SHERPA
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications, 1025
Available from: 2013-08-07 Created: 2013-08-07 Last updated: 2017-02-13
Conte, G., Kleiner, A., Rudol, P., Korwel, K., Wzorek, M. & Doherty, P. (2013). Performance evaluation of a light weight multi-echo LIDAR for unmanned rotorcraft applications. In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-1/W2: . Paper presented at Conference on Unmanned Aerial Vehicle in Geomatics (UAV-g 2013), 4-6 September 2013, Rostock, Germany.
Performance evaluation of a light weight multi-echo LIDAR for unmanned rotorcraft applications
2013 (English)In: International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-1/W2, 2013Conference paper, Published paper (Refereed)
Abstract [en]

The paper presents a light-weight and low-cost airborne terrain mapping system. The developed Airborne LiDAR Scanner (ALS) system consists of a high-precision GNSS receiver, an inertial measurement unit and a magnetic compass, which are used to complement a LiDAR sensor in order to compute the terrain model. An evaluation of the accuracy of the generated 3D model is presented. Additionally, a comparison is provided between the terrain model generated by the developed ALS system and a model generated using commercial photogrammetric software. Finally, the multi-echo capability of the LiDAR sensor used is evaluated in areas covered with dense vegetation. The ALS and camera systems were mounted on board an industrial unmanned helicopter with a maximum take-off weight of around 100 kilograms. The presented results are based on real flight-test data.

Series
International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, ISSN 2194-9034
National Category
Robotics
Identifiers
urn:nbn:se:liu:diva-95889 (URN); 000358305000016 ()
Conference
Conference on Unmanned Aerial Vehicle in Geomatics (UAV-g 2013), 4-6 September 2013, Rostock, Germany
Projects
Artificial Intelligence & Integrated Computer Systems
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; EU, FP7, Seventh Framework Programme; Linnaeus research environment CADICS
Available from: 2013-08-07 Created: 2013-08-07 Last updated: 2016-08-22
Rudol, P. (2011). Increasing Autonomy of Unmanned Aircraft Systems Through the Use of Imaging Sensors. (Licentiate dissertation). Linköping: Linköping University Electronic Press
Increasing Autonomy of Unmanned Aircraft Systems Through the Use of Imaging Sensors
2011 (English)Licentiate thesis, monograph (Other academic)
Abstract [en]

The range of missions performed by Unmanned Aircraft Systems (UAS) has been steadily growing in the past decades thanks to continued development in several disciplines. The goal of increasing the autonomy of UASs is to widen the range of tasks which can be carried out without, or with minimal, external help. This thesis presents methods for increasing specific aspects of autonomy of UASs operating both in outdoor and indoor environments, where cameras are used as the primary sensors.

First, a method for fusing color and thermal images for object detection, geolocation and tracking for UASs operating primarily outdoors is presented. Specifically, a method for building saliency maps where human body locations are marked as points of interest is described. Such maps can be used in emergency situations to increase the situational awareness of first responders or of a robotic system itself. Additionally, the same method is applied to the problem of vehicle tracking. A generated stream of geographical locations of tracked vehicles increases situational awareness by allowing for qualitative reasoning about, for example, vehicles overtaking, or entering or leaving crossings.
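The colour/thermal fusion idea can be sketched as a confirmation scheme: regions that are warm in the thermal image are cross-checked with a body detector in the corresponding colour-image region, and confirmed hits are marked in a saliency map. Plain 2D lists stand in for images here, and the threshold is made up; this is an illustration of the fusion idea, not the thesis's actual method.

```python
def build_saliency_map(thermal, detect_in_color, warm_threshold=35.0):
    """Mark cells that are both warm in the thermal image and confirmed
    by a body detector in the corresponding colour-image region."""
    rows, cols = len(thermal), len(thermal[0])
    saliency = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if thermal[r][c] >= warm_threshold and detect_in_color(r, c):
                saliency[r][c] = 1.0   # point of interest: likely human body
    return saliency

# Toy 2x2 example: two warm cells, but the colour detector only
# confirms one of them.
thermal = [[36.0, 20.0],
           [40.0, 36.0]]
saliency = build_saliency_map(thermal, lambda r, c: (r, c) != (1, 0))
```

Requiring agreement between the two modalities suppresses warm non-human objects (thermal-only hits) as well as human-shaped but cold false positives (colour-only hits).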

Second, two approaches to the UAS indoor localization problem in the absence of GPS-based positioning are presented. Both use cameras as the main sensors and enable autonomous indoor flight and navigation. The first approach takes advantage of cooperation with a ground robot to provide a UAS with its localization information. The second approach uses marker-based visual pose estimation where all computations are done on board a small-scale aircraft, which additionally increases its autonomy by not relying on external computational power.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2011. p. 96
Series
Linköping Studies in Science and Technology. Thesis, ISSN 0280-7971 ; 1510
Keyword
UAV, UAS, UAV autonomy, human-body detection, color-thermal image fusion, vehicle tracking, geolocation, UAV indoor navigation
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-71295 (URN); LiU-Tek-Lic-2011:49 (Local ID); 978-91-7393-034-5 (ISBN); LiU-Tek-Lic-2011:49 (Archive number); LiU-Tek-Lic-2011:49 (OAI)
Presentation
2011-11-04, Alan Turing, Hus E, Campus Valla, Linköpings universitet, Linköping, 13:15 (English)
Opponent
Supervisors
Available from: 2011-11-28 Created: 2011-10-10 Last updated: 2018-01-12. Bibliographically approved