Model-Predictive Control with Stochastic Collision Avoidance using Bayesian Policy Optimization
2016 (English). In: IEEE International Conference on Robotics and Automation (ICRA), 2016. Conference paper (Refereed)
Robots are increasingly expected to move out of the controlled environment of research labs and into populated streets and workplaces. Collision avoidance in such cluttered and dynamic environments is of increasing importance as robots gain more autonomy. However, efficient avoidance is fundamentally difficult, since computing safe trajectories may require considering both dynamics and uncertainty. While heuristics are often used in practice, we take a holistic stochastic trajectory optimization perspective that merges collision avoidance and control. We examine dynamic obstacles moving without prior coordination, such as pedestrians or vehicles. We find that common stochastic simplifications lead to poor approximations when obstacle behavior is difficult to predict. We instead compute efficient approximations by drawing upon techniques from machine learning. We propose to combine policy search with model-predictive control. This allows us to use recent fast constrained model-predictive control solvers, while gaining the stochastic properties of policy-based methods. We exploit recent advances in Bayesian optimization to efficiently solve the resulting probabilistically constrained policy optimization problems. Finally, we present a real-time implementation of an obstacle-avoiding controller for a quadcopter. We demonstrate the results in simulation as well as in real flight experiments.
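The abstract's core idea (Bayesian optimization over policy parameters subject to a probabilistic collision constraint) can be illustrated with a minimal sketch. Everything below is hypothetical: the toy cost function, the collision model, the GP hyperparameters, and the one-dimensional parameter `theta` are illustrative stand-ins, not the paper's actual MPC policy or dynamics. The sketch fits a Gaussian-process surrogate to Monte Carlo rollout costs, rules out candidates whose predicted collision probability exceeds a threshold, and picks the next parameter by a lower confidence bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(theta, n_samples=200):
    """Toy stochastic rollout: expected cost of a hypothetical policy
    parameter theta, plus a Monte Carlo estimate of collision probability
    under random obstacle motion. Purely illustrative."""
    noise = rng.normal(0.0, 0.3, n_samples)
    costs = (theta - 1.2) ** 2 + noise      # tracking cost, optimum near 1.2
    collide = (theta + noise > 2.0)         # collision event per sample
    return costs.mean(), collide.mean()

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=0.05):
    # Standard zero-mean GP regression posterior at test points Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(np.diag(rbf(Xs, Xs) - Ks.T @ sol), 1e-9, None)
    return mu, np.sqrt(var)

delta = 0.05                                 # chance constraint: P(collision) <= delta
cand = np.linspace(0.0, 2.5, 200)            # candidate policy parameters
X = list(rng.uniform(0.0, 2.5, 3))           # random initial designs
obs = [rollout_cost(t) for t in X]
y = [c for c, _ in obs]                      # observed costs
p = [pc for _, pc in obs]                    # observed collision frequencies

for _ in range(15):
    mu, sd = gp_posterior(np.array(X), np.array(y), cand)
    mup, _ = gp_posterior(np.array(X), np.array(p), cand)
    lcb = mu - 2.0 * sd                      # optimistic acquisition value
    lcb[mup > delta] = np.inf                # discard likely-infeasible candidates
    theta_next = cand[np.argmin(lcb)]
    c, pc = rollout_cost(theta_next)
    X.append(theta_next); y.append(c); p.append(pc)

feasible = [i for i, pc in enumerate(p) if pc <= delta]
best = min(feasible, key=lambda i: y[i])
print(f"best theta={X[best]:.2f}, cost={y[best]:.2f}, P(collision)={p[best]:.2f}")
```

In practice each "rollout" would run the constrained MPC controller forward under sampled obstacle trajectories, which is what makes cheap, sample-efficient Bayesian optimization attractive here.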
Keywords
Robot Learning, Collision Avoidance, Robotics, Bayesian Optimization, Model Predictive Control
Identifiers
URN: urn:nbn:se:liu:diva-126769
OAI: oai:DiVA.org:liu-126769
DiVA: diva2:916711
Conference
IEEE International Conference on Robotics and Automation (ICRA), Stockholm, May 16-21, 2016