AIICS

Olov Andersson

Conference and Workshop Publications

2020
[6] Olov Andersson, Per Sidén, Johan Dahlin, Patrick Doherty and Mattias Villani. 2020.
Real-Time Robotic Search using Structural Spatial Point Processes.
In 35th Uncertainty in Artificial Intelligence Conference (UAI 2019), pages 995–1005. In series: Proceedings of Machine Learning Research (PMLR) #115. Association for Uncertainty in Artificial Intelligence (AUAI).
Note: Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP); WASP Autonomous Research Arenas - Knut and Alice Wallenberg Foundation; Swedish Foundation for Strategic Research (SSF); ELLIIT Excellence Center at Linköping-Lund for Information Technology.
Link: http://auai.org/uai2019/proceedings/pape...

Aerial robots hold great potential for aiding Search and Rescue (SAR) efforts over large areas, such as during natural disasters. Traditional approaches typically search an area exhaustively, thereby ignoring that the density of victims varies based on predictable factors, such as the terrain, population density and the type of disaster. We present a probabilistic model to automate SAR planning, with explicit minimization of the expected time to discovery. The proposed model is a spatial point process with three interacting spatial fields for i) the point patterns of persons in the area, ii) the probability of detecting persons and iii) the probability of injury. This structure allows inclusion of informative priors from e.g. geographic or cell phone traffic data, while falling back to latent Gaussian processes when priors are missing or inaccurate. To solve this problem in real-time, we propose a combination of fast approximate inference using Integrated Nested Laplace Approximation (INLA), and a novel Monte Carlo tree search tailored to the problem. Experiments using data simulated from real world Geographic Information System (GIS) maps show that the framework outperforms competing approaches, finding many more injured in the crucial first hours.
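A minimal Python sketch of the planning objective (an illustrative toy, not the paper's code: the three spatial fields are random stand-ins for the INLA-fitted latent Gaussian fields, and a greedy rate rule replaces the paper's Monte Carlo tree search; all names are made up for the example):

    import numpy as np

    # Toy stand-in for the search planner: victims follow a spatial point
    # process, and the drone greedily flies to the cell with the highest
    # remaining expected number of detectable injured persons per unit of
    # travel time, i.e. it explicitly targets expected time to discovery.
    rng = np.random.default_rng(0)
    H = W = 10
    intensity = rng.gamma(2.0, 1.0, size=(H, W))    # expected persons per cell
    p_detect  = rng.uniform(0.5, 1.0, size=(H, W))  # detection probability
    p_injury  = rng.uniform(0.0, 0.3, size=(H, W))  # injury probability

    # Product of the three interacting fields: expected discoverable injured.
    reward = intensity * p_detect * p_injury
    visited = np.zeros_like(reward, dtype=bool)
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    pos, plan = (0, 0), []
    for _ in range(20):
        dist = np.abs(ys - pos[0]) + np.abs(xs - pos[1]) + 1  # travel time
        rate = np.where(visited, 0.0, reward) / dist          # reward per unit time
        nxt = np.unravel_index(np.argmax(rate), rate.shape)
        visited[nxt] = True
        plan.append(nxt)
        pos = nxt
    print(plan)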

2019
[5] Olov Andersson and Patrick Doherty. 2019.
Deep RL for Autonomous Robots: Limitations and Safety Challenges.
In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2019), pages 489–495. ESANN. ISBN: 9782875870650.

With the rise of deep reinforcement learning, there has also been a string of successes on continuous control problems using physics simulators. This has led to some optimism regarding use in autonomous robots and vehicles. However, successfully applying such techniques to the real world requires a firm grasp of their limitations. As recent work has raised questions about how diverse these simulation benchmarks really are, we here instead analyze a popular deep RL approach on toy examples from robot obstacle avoidance. We find that these converge very slowly, if at all, to safe policies. We identify convergence issues in stochastic environments and local minima as problems that warrant more attention for safety-critical control applications.
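To make the failure mode concrete, here is a hedged toy example in Python (not one of the paper's benchmarks; the environment and constants are invented for illustration). Rare collisions dominate the return variance, so the gradient estimates a deep RL learner relies on are extremely noisy, which is one reason convergence to safe policies is slow:

    import numpy as np

    # Toy 1-D obstacle avoidance: an agent moves toward a goal at x = 5
    # while an obstacle drifts stochastically. A rare collision costs -100.
    rng = np.random.default_rng(1)

    def rollout(policy_gain, steps=50):
        agent, obstacle, ret = 0.0, 2.0, 0.0
        for _ in range(steps):
            action = np.clip(policy_gain * (5.0 - agent), -1.0, 1.0)
            agent += 0.1 * action
            obstacle += 0.1 * rng.normal()        # stochastic obstacle motion
            if abs(agent - obstacle) < 0.2:       # rare catastrophic event
                return ret - 100.0
            ret -= 0.01 * (5.0 - agent) ** 2      # progress-to-goal reward
        return ret

    # The collision term dominates the spread of returns across rollouts,
    # so Monte Carlo policy-gradient estimates have very high variance.
    returns = [rollout(0.5) for _ in range(1000)]
    print(np.mean(returns), np.std(returns))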

2018
[4] Olov Andersson, Oskar Ljungqvist, Mattias Tiger, Daniel Axehill and Fredrik Heintz. 2018.
Receding-Horizon Lattice-based Motion Planning with Dynamic Obstacle Avoidance.
In 2018 IEEE Conference on Decision and Control (CDC), pages 4467–4474. In series: Conference on Decision and Control (CDC) #2018. Institute of Electrical and Electronics Engineers (IEEE). ISBN: 9781538613955, 9781538613948, 9781538613962.
DOI: 10.1109/CDC.2018.8618964.
Note: This work was partially supported by FFI/VINNOVA, the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) funded by Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research (SSF) project Symbicloud, the ELLIIT Excellence Center at Linköping-Lund for Information Technology, Swedish Research Council (VR) Linnaeus Center CADICS, and the National Graduate School in Computer Science, Sweden (CUGS).
fulltext:postprint: http://liu.diva-portal.org/smash/get/div...

A key requirement of autonomous vehicles is the capability to safely navigate in their environment. However, outside of controlled environments, safe navigation is a very difficult problem. In particular, the real world often contains both complex 3D structure and dynamic obstacles such as people or other vehicles. Dynamic obstacles are particularly challenging, as a principled solution requires planning trajectories with regard to both vehicle dynamics and the motion of the obstacles. Additionally, the real-time requirements imposed by obstacle motion, coupled with real-world computational limitations, make classical optimality and completeness guarantees difficult to satisfy. We present a unified optimization-based motion planning and control solution that can navigate in the presence of both static and dynamic obstacles. By combining optimal and receding-horizon control with temporal multi-resolution lattices, we can precompute optimal motion primitives and allow real-time planning of physically feasible trajectories in complex environments with dynamic obstacles. We demonstrate the framework by solving difficult indoor 3D quadcopter navigation scenarios where it is necessary to plan in time, including waiting on, and taking detours around, the motions of other people and quadcopters.
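A minimal space-time lattice sketch in Python (illustrative only: the paper precomputes vehicle-feasible motion primitives offline and plans over a temporal multi-resolution lattice, whereas this toy uses unit grid moves plus an explicit wait primitive; the obstacle model is invented for the example):

    import heapq

    MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]  # (0, 0) = wait in place

    def dynamic_obstacle(t):
        # a person walking along the row y = 2, one cell per time step
        return (t % 8, 2)

    def plan(start, goal, horizon=40):
        # A* over space-time states (x, y, t): because time is part of the
        # state, waiting and detours around moving obstacles arise naturally.
        openq = [(0, start, 0, [start])]
        seen = set()
        while openq:
            _, pos, t, path = heapq.heappop(openq)
            if pos == goal:
                return path
            if (pos, t) in seen or t >= horizon:
                continue
            seen.add((pos, t))
            for dx, dy in MOVES:
                nxt = (pos[0] + dx, pos[1] + dy)
                if not (0 <= nxt[0] < 8 and 0 <= nxt[1] < 8):
                    continue
                if nxt == dynamic_obstacle(t + 1):  # time-indexed collision check
                    continue
                h = abs(nxt[0] - goal[0]) + abs(nxt[1] - goal[1])
                heapq.heappush(openq, (t + 1 + h, nxt, t + 1, path + [nxt]))
        return None

    print(plan((0, 2), (7, 2)))  # waits for, then follows behind, the walker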

2017
[3] Olov Andersson, Mariusz Wzorek and Patrick Doherty. 2017.
Deep Learning Quadcopter Control via Risk-Aware Active Learning.
In Satinder Singh and Shaul Markovitch, editors, Proceedings of The Thirty-first AAAI Conference on Artificial Intelligence (AAAI), pages 3812–3818. In series: Proceedings of the AAAI Conference on Artificial Intelligence #5. AAAI Press. ISBN: 978-1-57735-784-1.

Modern optimization-based approaches to control increasingly allow automatic generation of complex behavior from only a model and an objective. Recent years have seen growing interest in fast solvers that also allow real-time operation on robots, but the computational cost of such trajectory optimization remains prohibitive for many applications. In this paper we examine a novel deep neural network approximation and validate it on a safe navigation problem with a real nano-quadcopter. As the risk of costly failures is a major concern with real robots, we propose a risk-aware resampling technique. Contrary to prior work, this active learning approach is easy to use with existing solvers for trajectory optimization, as well as deep learning. We demonstrate the efficacy of the approach on a difficult collision avoidance problem with non-cooperative moving obstacles. Our findings indicate that the resulting neural network approximations are at least 50 times faster than the trajectory optimizer while still satisfying the safety requirements. We demonstrate the potential of the approach by implementing a synthesized deep neural network policy on the nano-quadcopter microcontroller.
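A hedged Python sketch of the imitation loop with risk-aware resampling (the one-dimensional "expert" below is a stand-in for the paper's trajectory optimizer, and the risk weighting is a simplification; all names are illustrative):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    obstacle = 0.0

    def expert(x):
        # stand-in for the trajectory optimizer: steer toward the goal at
        # x = 1 while pushing away from the obstacle at x = 0
        return 0.8 * (1.0 - x) + 1.0 / (x - obstacle + 1e-2)

    X = rng.uniform(0.1, 2.0, size=(200, 1))
    for _ in range(3):                        # active-learning rounds
        y = expert(X[:, 0])
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
        # risk-aware resampling: keep and densify states near the obstacle,
        # where approximation errors are most likely to cause failures
        risk = 1.0 / (np.abs(X[:, 0] - obstacle) + 1e-2)
        keep = rng.random(len(X)) < risk / risk.max()
        X = np.vstack([X[keep], rng.uniform(0.1, 2.0, size=(100, 1))])

    # the cheap network now replaces the expensive optimizer at run time
    print(net.predict([[0.2]]), expert(0.2))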

2016
[2] Olov Andersson, Mariusz Wzorek, Piotr Rudol and Patrick Doherty. 2016.
Model-Predictive Control with Stochastic Collision Avoidance using Bayesian Policy Optimization.
In IEEE International Conference on Robotics and Automation (ICRA), 2016, pages 4597–4604. In series: Proceedings of IEEE International Conference on Robotics and Automation. Institute of Electrical and Electronics Engineers (IEEE).
DOI: 10.1109/ICRA.2016.7487661.

Robots are increasingly expected to move out of the controlled environment of research labs and into populated streets and workplaces. Collision avoidance in such cluttered and dynamic environments is of increasing importance as robots gain more autonomy. However, efficient avoidance is fundamentally difficult since computing safe trajectories may require considering both dynamics and uncertainty. While heuristics are often used in practice, we take a holistic stochastic trajectory optimization perspective that merges both collision avoidance and control. We examine dynamic obstacles moving without prior coordination, like pedestrians or vehicles. We find that common stochastic simplifications lead to poor approximations when obstacle behavior is difficult to predict. We instead compute efficient approximations by drawing upon techniques from machine learning. We propose to combine policy search with model-predictive control. This allows us to use recent fast constrained model-predictive control solvers, while gaining the stochastic properties of policy-based methods. We exploit recent advances in Bayesian optimization to efficiently solve the resulting probabilistically-constrained policy optimization problems. Finally, we present a real-time implementation of an obstacle avoiding controller for a quadcopter. We demonstrate the results in simulation as well as with real flight experiments.
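A Python sketch of the probabilistically-constrained policy search, under stated assumptions (a single obstacle-margin parameter, Monte Carlo collision estimates, and a plain GP surrogate standing in for the paper's Bayesian optimization and constrained MPC solver; all names and constants are illustrative):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(3)

    def simulate(margin, n=200):
        # Monte Carlo rollouts: larger margins are safer but cost detours
        miss = rng.normal(margin, 0.3, size=n)       # closest-approach samples
        p_collision = np.mean(miss < 0.2)
        cost = margin ** 2 + 0.1 * rng.normal()      # noisy detour-cost estimate
        return cost, p_collision

    thetas, costs = [], []
    for theta in rng.uniform(0.0, 2.0, size=15):     # candidate policy parameters
        c, p = simulate(theta)
        if p > 0.05:                                 # chance constraint: P(collision) <= 5%
            c += 10.0                                # penalize unsafe candidates
        thetas.append([theta])
        costs.append(c)

    # GP surrogate over policy parameters, as in Bayesian optimization
    gp = GaussianProcessRegressor().fit(thetas, costs)
    grid = np.linspace(0.0, 2.0, 100).reshape(-1, 1)
    print("best margin:", grid[np.argmin(gp.predict(grid))][0])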

2015
[1] Olov Andersson, Fredrik Heintz and Patrick Doherty. 2015.
Model-Based Reinforcement Learning in Continuous Environments Using Real-Time Constrained Optimization.
In Blai Bonet and Sven Koenig, editors, Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI), pages 2497–2503. AAAI Press. ISBN: 978-1-57735-698-1.
fulltext:print: http://liu.diva-portal.org/smash/get/div...

Reinforcement learning for robot control tasks in continuous environments is a challenging problem due to the dimensionality of the state and action spaces, the time and resource costs of learning with a real robot, and the constraints imposed for its safe operation. In this paper we propose a model-based reinforcement learning approach for continuous environments with constraints. The approach combines model-based reinforcement learning with recent advances in approximate optimal control. This results in a bounded-rationality agent that makes decisions in real-time by efficiently solving a sequence of constrained optimization problems on learned sparse Gaussian process models. Such a combination has several advantages. No high-dimensional policy needs to be computed or stored, while the learning problem often reduces to a set of lower-dimensional models of the dynamics. In addition, hard constraints can easily be included, and objectives can be changed in real-time to allow for multiple or dynamic tasks. The efficacy of the approach is demonstrated on both an extended cart-pole domain and a challenging quadcopter navigation task using real data.
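A minimal sketch of that control loop in Python, under stated assumptions (a dense GP instead of the paper's sparse GPs, a one-dimensional linear system, and scipy's SLSQP instead of a real-time approximate optimal-control solver; all names are illustrative):

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(4)

    def true_dynamics(x, u):
        return 0.9 * x + 0.5 * u + 0.05 * rng.normal()

    # learn a model of the dynamics from random-exploration transitions
    X = rng.uniform(-1.0, 1.0, size=(100, 2))        # columns: state, action
    Y = np.array([true_dynamics(x, u) for x, u in X])
    model = GaussianProcessRegressor().fit(X, Y)

    def plan(x0, horizon=5, goal=1.0):
        # constrained optimization over an action sequence on the learned
        # model; hard action bounds play the role of the paper's constraints
        def cost(us):
            x = x0
            for u in us:
                x = model.predict([[x, u]])[0]
            return (x - goal) ** 2
        res = minimize(cost, np.zeros(horizon), method="SLSQP",
                       bounds=[(-0.5, 0.5)] * horizon)
        return res.x[0]                              # receding horizon: apply first action only

    x = 0.0
    for _ in range(10):                              # re-plan at every step
        x = true_dynamics(x, plan(x))
    print("final state:", x)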