Andersson, Olov
Publications (5 of 5)
Andersson, O., Sidén, P., Dahlin, J., Doherty, P. & Villani, M. (2019). Real-Time Robotic Search using Structural Spatial Point Processes. Paper presented at the 35th Conference on Uncertainty in Artificial Intelligence (UAI 2019), Tel Aviv, Israel, July 22-25, 2019.
Real-Time Robotic Search using Structural Spatial Point Processes
2019 (English)Conference paper, Published paper (Refereed)
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-159698 (URN)
Conference
Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI 2019), Tel Aviv, Israel, July 22-25, 2019
Available from: 2019-08-19. Created: 2019-08-19. Last updated: 2019-08-27. Bibliographically approved.
Andersson, O., Ljungqvist, O., Tiger, M., Axehill, D. & Heintz, F. (2018). Receding-Horizon Lattice-based Motion Planning with Dynamic Obstacle Avoidance. In: 2018 IEEE Conference on Decision and Control (CDC). Paper presented at the 2018 IEEE 57th Annual Conference on Decision and Control (CDC), 17-19 December, Miami, Florida, USA (pp. 4467-4474). Institute of Electrical and Electronics Engineers (IEEE)
Receding-Horizon Lattice-based Motion Planning with Dynamic Obstacle Avoidance
2018 (English)In: 2018 IEEE Conference on Decision and Control (CDC), Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 4467-4474Conference paper, Published paper (Refereed)
Abstract [en]

A key requirement of autonomous vehicles is the capability to safely navigate in their environment. However, outside of controlled environments, safe navigation is a very difficult problem. In particular, the real world often contains both complex 3D structure and dynamic obstacles such as people or other vehicles. Dynamic obstacles are particularly challenging, as a principled solution requires planning trajectories with regard to both vehicle dynamics and the motion of the obstacles. Additionally, the real-time requirements imposed by obstacle motion, coupled with real-world computational limitations, make classical optimality and completeness guarantees difficult to satisfy. We present a unified optimization-based motion planning and control solution that can navigate in the presence of both static and dynamic obstacles. By combining optimal and receding-horizon control with temporal multi-resolution lattices, we can precompute optimal motion primitives and allow real-time planning of physically feasible trajectories in complex environments with dynamic obstacles. We demonstrate the framework by solving difficult indoor 3D quadcopter navigation scenarios where it is necessary to plan in time, including waiting on, and taking detours around, the motions of other people and quadcopters.
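The time-augmented lattice idea in the abstract can be illustrated with a toy sketch: a Dijkstra-style search over (x, y, t) states, where waiting in place is itself a motion primitive and dynamic obstacles occupy grid cells at specific times. Everything below (the grid world, the primitive set, the unit costs) is an illustrative stand-in, not the paper's implementation:

```python
import heapq

# Toy primitive set on a unit grid: move in x/y, or wait (0, 0), each taking
# one time step. The paper's primitives are precomputed dynamically feasible
# trajectories; these are made-up stand-ins.
PRIMITIVES = [(1, 0, 1), (-1, 0, 1), (0, 1, 1), (0, -1, 1), (0, 0, 1)]

def plan(start, goal_xy, dynamic_obstacles, size=10, horizon=30):
    """Dijkstra-style search over time-augmented states (x, y, t).

    dynamic_obstacles is a set of (x, y, t) cells occupied at time t.
    Returns a list of (x, y, t) states from start to goal_xy, or None.
    """
    sx, sy = start
    open_set = [(0, (sx, sy, 0), [(sx, sy, 0)])]
    best_cost = {}
    while open_set:
        cost, (x, y, t), path = heapq.heappop(open_set)
        if (x, y) == goal_xy:
            return path
        if t >= horizon:
            continue
        for dx, dy, dt in PRIMITIVES:
            nx, ny, nt = x + dx, y + dy, t + dt
            if not (0 <= nx < size and 0 <= ny < size):
                continue
            if (nx, ny, nt) in dynamic_obstacles:  # cell occupied at that time
                continue
            new_cost = cost + 1
            if best_cost.get((nx, ny, nt), float("inf")) <= new_cost:
                continue
            best_cost[(nx, ny, nt)] = new_cost
            heapq.heappush(open_set, (new_cost, (nx, ny, nt), path + [(nx, ny, nt)]))
    return None
```

Because time is a search dimension, the planner can return plans that wait in place until a crossing obstacle has passed, mirroring the "plan in time" behavior described in the abstract.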

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2018
Series
Conference on Decision and Control (CDC), ISSN 2576-2370 ; 2018
Keywords
Motion Planning, Optimal Control, Autonomous System, UAV, Dynamic Obstacle Avoidance, Robotics
National Category
Control Engineering
Identifiers
urn:nbn:se:liu:diva-152131 (URN)
10.1109/CDC.2018.8618964 (DOI)
9781538613955 (ISBN)
9781538613948 (ISBN)
9781538613962 (ISBN)
Conference
2018 IEEE 57th Annual Conference on Decision and Control (CDC), 17-19 December, Miami, Florida, USA
Funder
VINNOVA
Knut and Alice Wallenberg Foundation
Swedish Foundation for Strategic Research
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Swedish Research Council
Linnaeus research environment CADICS
CUGS (National Graduate School in Computer Science)
Note

This work was partially supported by FFI/VINNOVA, the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) funded by Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research (SSF) project Symbicloud, the ELLIIT Excellence Center at Linköping-Lund for Information Technology, Swedish Research Council (VR) Linnaeus Center CADICS, and the National Graduate School in Computer Science, Sweden (CUGS).

Available from: 2018-10-18. Created: 2018-10-18. Last updated: 2019-01-30. Bibliographically approved.
Andersson, O., Wzorek, M. & Doherty, P. (2017). Deep Learning Quadcopter Control via Risk-Aware Active Learning. In: Satinder Singh and Shaul Markovitch (Ed.), Proceedings of The Thirty-first AAAI Conference on Artificial Intelligence (AAAI): . Paper presented at Thirty-First AAAI Conference on Artificial Intelligence (AAAI), 2017, San Francisco, February 4–9. (pp. 3812-3818). AAAI Press, 5
Deep Learning Quadcopter Control via Risk-Aware Active Learning
2017 (English)In: Proceedings of The Thirty-first AAAI Conference on Artificial Intelligence (AAAI) / [ed] Satinder Singh and Shaul Markovitch, AAAI Press, 2017, Vol. 5, p. 3812-3818Conference paper, Published paper (Refereed)
Abstract [en]

Modern optimization-based approaches to control increasingly allow automatic generation of complex behavior from only a model and an objective. Recent years have seen growing interest in fast solvers to also allow real-time operation on robots, but the computational cost of such trajectory optimization remains prohibitive for many applications. In this paper we examine a novel deep neural network approximation and validate it on a safe navigation problem with a real nano-quadcopter. As the risk of costly failures is a major concern with real robots, we propose a risk-aware resampling technique. Contrary to prior work this active learning approach is easy to use with existing solvers for trajectory optimization, as well as deep learning. We demonstrate the efficacy of the approach on a difficult collision avoidance problem with non-cooperative moving obstacles. Our findings indicate that the resulting neural network approximations are at least 50 times faster than the trajectory optimizer while still satisfying the safety requirements. We demonstrate the potential of the approach by implementing a synthesized deep neural network policy on the nano-quadcopter microcontroller.
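The risk-aware resampling idea can be caricatured in a few lines: fit a cheap surrogate to an expensive solver's safety margin, then draw new training inputs preferentially where the predicted margin is small, so labels concentrate near the safety boundary. The "solver", quadratic surrogate, and risk weighting below are made-up stand-ins for the paper's trajectory optimizer and deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_solver(x):
    # Stand-in for the trajectory optimizer: returns a safety margin
    # (e.g. obstacle clearance) that is smallest near x = 3.
    return (x - 3.0) ** 2 / 4.0 + 0.05

def fit_surrogate(X, y):
    # Cheap quadratic surrogate standing in for the neural network.
    return np.polyfit(X, y, 2)

def risk(coeffs, x, margin=0.3):
    # Risk is high where the predicted safety margin falls below `margin`;
    # the small constant keeps some exploration everywhere.
    pred = np.polyval(coeffs, x)
    return np.clip(margin - pred, 0.0, None) + 1e-3

def active_learning_round(X, y, n_new=20):
    coeffs = fit_surrogate(X, y)
    candidates = rng.uniform(0.0, 2.0 * np.pi, 500)
    w = risk(coeffs, candidates)
    idx = rng.choice(len(candidates), size=n_new, p=w / w.sum())
    return (np.concatenate([X, candidates[idx]]),
            np.concatenate([y, expensive_solver(candidates[idx])]))

# Seed with uniform samples, then concentrate new labels in the risky region.
X = rng.uniform(0.0, 2.0 * np.pi, 30)
y = expensive_solver(X)
for _ in range(3):
    X, y = active_learning_round(X, y)
```

After a few rounds, most newly queried inputs fall near the low-margin region around x = 3, which is the behavior the resampling technique is after: spend expensive solver calls where failures are most likely.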

Place, publisher, year, edition, pages
AAAI Press, 2017
Series
Proceedings of the AAAI Conference on Artificial Intelligence, ISSN 2159-5399, E-ISSN 2374-3468 ; 5
National Category
Computer Vision and Robotics (Autonomous Systems) Computer Sciences
Identifiers
urn:nbn:se:liu:diva-132800 (URN)
978-1-57735-784-1 (ISBN)
Conference
Thirty-First AAAI Conference on Artificial Intelligence (AAAI), 2017, San Francisco, February 4–9.
Projects
ELLIIT
CADICS
NFFP6
SYMBICLOUD
CUGS
Funder
Linnaeus research environment CADICS
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
EU, FP7, Seventh Framework Programme
CUGS (National Graduate School in Computer Science)
Swedish Foundation for Strategic Research
Available from: 2016-11-25. Created: 2016-11-25. Last updated: 2018-01-13. Bibliographically approved.
Andersson, O., Wzorek, M., Rudol, P. & Doherty, P. (2016). Model-Predictive Control with Stochastic Collision Avoidance using Bayesian Policy Optimization. In: IEEE International Conference on Robotics and Automation (ICRA), 2016: . Paper presented at IEEE International Conference on Robotics and Automation (ICRA), 2016, Stockholm, May 16-21 (pp. 4597-4604). Institute of Electrical and Electronics Engineers (IEEE)
Model-Predictive Control with Stochastic Collision Avoidance using Bayesian Policy Optimization
2016 (English)In: IEEE International Conference on Robotics and Automation (ICRA), 2016, Institute of Electrical and Electronics Engineers (IEEE), 2016, p. 4597-4604Conference paper, Published paper (Refereed)
Abstract [en]

Robots are increasingly expected to move out of the controlled environment of research labs and into populated streets and workplaces. Collision avoidance in such cluttered and dynamic environments is of increasing importance as robots gain more autonomy. However, efficient avoidance is fundamentally difficult since computing safe trajectories may require considering both dynamics and uncertainty. While heuristics are often used in practice, we take a holistic stochastic trajectory optimization perspective that merges both collision avoidance and control. We examine dynamic obstacles moving without prior coordination, like pedestrians or vehicles. We find that common stochastic simplifications lead to poor approximations when obstacle behavior is difficult to predict. We instead compute efficient approximations by drawing upon techniques from machine learning. We propose to combine policy search with model-predictive control. This allows us to use recent fast constrained model-predictive control solvers, while gaining the stochastic properties of policy-based methods. We exploit recent advances in Bayesian optimization to efficiently solve the resulting probabilistically-constrained policy optimization problems. Finally, we present a real-time implementation of an obstacle avoiding controller for a quadcopter. We demonstrate the results in simulation as well as with real flight experiments.
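The probabilistically-constrained policy optimization described above can be sketched in toy form: evaluate candidate policy parameters by Monte Carlo rollouts, and pick the best objective value among those whose estimated collision probability satisfies the constraint. A grid search stands in for the Bayesian optimization used in the paper, and a Bernoulli simulator stands in for rollouts of an MPC-controlled quadcopter near a moving obstacle; all names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def rollout(speed):
    """One simulated episode: returns (time to goal, collided?)."""
    # Made-up model: flying faster saves time but leaves less margin
    # to a randomly moving obstacle.
    collided = rng.random() < 1.0 - np.exp(-speed / 10.0)
    return 10.0 / speed, collided

def evaluate(speed, n=2000):
    # Monte Carlo estimates of expected time and collision probability.
    times, collisions = zip(*(rollout(speed) for _ in range(n)))
    return np.mean(times), np.mean(collisions)

def constrained_search(candidates, p_max=0.10):
    """Pick the fastest policy whose estimated collision probability
    stays below p_max (the probabilistic safety constraint)."""
    best, best_time = None, np.inf
    for speed in candidates:
        t, p = evaluate(speed)
        if p <= p_max and t < best_time:
            best, best_time = speed, t
    return best

best = constrained_search([0.2, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0])
```

The search trades performance against risk exactly as the abstract describes: the fastest policies are rejected for violating the collision-probability bound, and the optimizer settles on the quickest policy that remains probabilistically safe. Bayesian optimization replaces the grid with a sample-efficient surrogate over the same constrained objective.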

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Series
Proceedings of IEEE International Conference on Robotics and Automation, ISSN 1050-4729
Keywords
Robot Learning, Collision Avoidance, Robotics, Bayesian Optimization, Model Predictive Control
National Category
Robotics Computer Sciences
Identifiers
urn:nbn:se:liu:diva-126769 (URN)
10.1109/ICRA.2016.7487661 (DOI)
000389516203138 ()
Conference
IEEE International Conference on Robotics and Automation (ICRA), 2016, Stockholm, May 16-21
Projects
CADICS
ELLIIT
NFFP6
CUAS
SHERPA
Funder
Linnaeus research environment CADICS
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
EU, FP7, Seventh Framework Programme
Swedish Foundation for Strategic Research
Available from: 2016-04-04. Created: 2016-04-04. Last updated: 2018-01-10. Bibliographically approved.
Andersson, O., Heintz, F. & Doherty, P. (2015). Model-Based Reinforcement Learning in Continuous Environments Using Real-Time Constrained Optimization. In: Blai Bonet and Sven Koenig (Ed.), Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI): . Paper presented at Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI), January 25-30, 2015, Austin, Texas, USA. (pp. 2497-2503). AAAI Press
Model-Based Reinforcement Learning in Continuous Environments Using Real-Time Constrained Optimization
2015 (English)In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI) / [ed] Blai Bonet and Sven Koenig, AAAI Press, 2015, p. 2497-2503Conference paper, Published paper (Refereed)
Abstract [en]

Reinforcement learning for robot control tasks in continuous environments is a challenging problem due to the dimensionality of the state and action spaces, the time and resource costs of learning with a real robot, and the constraints imposed for its safe operation. In this paper we propose a model-based reinforcement learning approach for continuous environments with constraints. The approach combines model-based reinforcement learning with recent advances in approximate optimal control. This results in a bounded-rationality agent that makes decisions in real-time by efficiently solving a sequence of constrained optimization problems on learned sparse Gaussian process models. Such a combination has several advantages. No high-dimensional policy needs to be computed or stored while the learning problem often reduces to a set of lower-dimensional models of the dynamics. In addition, hard constraints can easily be included and objectives can also be changed in real-time to allow for multiple or dynamic tasks. The efficacy of the approach is demonstrated on both an extended cart pole domain and a challenging quadcopter navigation task using real data.
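The learn-a-model, then-optimize loop in the abstract can be sketched in miniature: fit a dynamics model from experience, then at each step solve a small constrained optimization against the model's prediction. Linear least squares stands in for the sparse Gaussian process, and a grid over bounded actions stands in for the constrained solver; the dynamics and goal are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def true_dynamics(x, u):
    # Ground-truth system, unknown to the agent.
    return 0.9 * x + 0.5 * u

# 1) Collect experience and fit the model x_next ~ a*x + b*u.
#    (Least squares stands in for the sparse GP regression of the paper.)
D = rng.uniform(-1, 1, (100, 2))                     # columns: state, action
targets = np.array([true_dynamics(s, u) for s, u in D])
(a, b), *_ = np.linalg.lstsq(D, targets, rcond=None)

# 2) Act by a one-step constrained optimization: pick the bounded action
#    whose predicted next state is closest to the goal (hard constraint
#    |u| <= 1, enforced by the candidate grid).
def act(x, goal=2.0):
    actions = np.linspace(-1.0, 1.0, 201)
    preds = a * x + b * actions
    return actions[np.argmin((preds - goal) ** 2)]

x = 0.0
for _ in range(30):
    x = true_dynamics(x, act(x))
```

The agent never stores a policy: it only keeps the low-dimensional dynamics model, and the objective inside `act` can be swapped at run time, which is the advantage the abstract highlights for multiple or dynamic tasks.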

Place, publisher, year, edition, pages
AAAI Press, 2015
Keywords
Reinforcement Learning, Gaussian Processes, Optimization, Robotics
National Category
Computer Sciences Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-113385 (URN)
978-1-57735-698-1 (ISBN)
Conference
Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI), January 25-30, 2015, Austin, Texas, USA.
Funder
Linnaeus research environment CADICS
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Swedish Foundation for Strategic Research
VINNOVA
EU, FP7, Seventh Framework Programme
Available from: 2015-01-16. Created: 2015-01-16. Last updated: 2018-01-11. Bibliographically approved.