
Representation, Learning and Planning Lab

RLPLAB members: Hector Geffner, Dominik Drexler, Ulf Nilsson, Daniel Gnad, Jendrik Seipp, Paul Höft and Simon Stålberg.


RLPLAB research topics of interest include:

  • Learning representations for planning: the ability to plan, which is crucial in intelligent systems, relies on models that describe how the world and sensors work. These models are usually expressed in declarative languages that make the structure of problems explicit and support reuse and effective computation. A key open question is how these model representations can be learned automatically. The problem ranges from learning symbolic representations from non-symbolic data to learning hierarchies of (learned or symbolic) representations that support planning at different levels of abstraction.
  • Planning models, algorithms, and techniques: planning models come in different forms depending on the assumptions made about actions, states, and sensing. Classical planning assumes deterministic actions, a fully known initial state, and goal states to be reached (a minimal sketch of this model is given after this list). Other forms of planning, such as MDP and POMDP planning, relax some of these assumptions or address other aspects such as continuous state spaces and actions. The challenge is to develop scalable algorithms and techniques for addressing this variety of planning models.
  • Planning and reinforcement learning: reinforcement learning (RL) is a generalization of planning where the planning models are not assumed to be known and goals are replaced by rewards to be maximized. In model-based RL, the RL problem is split into two parts: learning the models and then using them for planning. In model-free RL, a controller is obtained directly by trial and error, without learning a model (the two approaches are contrasted in a second sketch after this list). Some of the biggest AI breakthroughs in recent years have come from deep RL, where the value and policy functions are represented by deep neural networks whose weights are learned by trial and error. The current limitations of these methods are that they require huge amounts of data and that the learned policy and value functions do not generalize well. The use of latent model representations learned from data without supervision aims to address these limitations and is closely connected with the problem of learning planning representations from data.
  • Generalized planning: in the standard planning setting, new problems are solved from scratch. In generalized planning, on the other hand, one looks for general plans or policies that solve many problems from the same domain (a small example follows the list as well). For this, suitable formulations, models, and algorithms are needed. Generalized planning provides another angle from which to study the connection between learning and planning, since in reinforcement learning one is also interested in learning solutions that have some generality and apply to many problem instances.
  • Model-based vs. model-free intelligence: the topics of learning, representation, and planning are also at the center of the big split in AI between model-free approaches based on learners and model-based approaches based on solvers. Truly intelligent systems must involve both, very much like human intelligence, which is often described in terms of a fast, reactive System 1 and a slow, deliberative System 2 that are tightly integrated (see Kahneman 2011). For this integration, the models used by solvers such as planners have to be learned automatically. This integration is a key challenge in AI and a central goal of the research group.
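
The sketch below illustrates the classical planning model from the second topic: deterministic actions, a fully known initial state, and goal states to be reached, solved here by plain breadth-first search. It is a minimal sketch in Python; the STRIPS-style encoding, the two-block domain, and all names in it are assumptions made for illustration, not the lab's code or tools.

    # States are frozensets of true facts; an action is (name, preconditions, add list, delete list).
    from collections import deque

    ACTIONS = [
        ("pick(a)",  {"clear(a)", "handempty"},
                     {"holding(a)"}, {"clear(a)", "ontable(a)", "handempty"}),
        ("put(a,b)", {"holding(a)", "clear(b)"},
                     {"on(a,b)", "clear(a)", "handempty"}, {"holding(a)", "clear(b)"}),
    ]
    INIT = frozenset({"ontable(a)", "ontable(b)", "clear(a)", "clear(b)", "handempty"})
    GOAL = {"on(a,b)"}

    def successors(state):
        for name, pre, add, dele in ACTIONS:
            if pre <= state:                       # action applicable: preconditions hold
                yield name, (state - dele) | add   # deterministic successor state

    def bfs_plan(init, goal):
        # Breadth-first search over the state space; returns a shortest plan.
        queue, seen = deque([(init, [])]), {init}
        while queue:
            state, plan = queue.popleft()
            if goal <= state:
                return plan
            for name, nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
        return None

    print(bfs_plan(INIT, GOAL))   # prints ['pick(a)', 'put(a,b)']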
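
The next sketch contrasts model-based and model-free RL from the third topic on a tiny deterministic chain MDP: value iteration uses the transition and reward model directly, while Q-learning only samples transitions from it. The MDP, the learning rate, and the number of samples are assumptions chosen for illustration.

    import random

    N_STATES, GOAL_STATE, GAMMA = 5, 4, 0.9

    def step(s, a):               # the model: deterministic transition and reward
        nxt = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
        return nxt, (1.0 if nxt == GOAL_STATE else 0.0)

    # Model-based: value iteration sweeps the model's equations directly.
    V = [0.0] * N_STATES
    for _ in range(100):
        V = [max(r + GAMMA * V[nxt] for nxt, r in (step(s, a) for a in (0, 1)))
             for s in range(N_STATES)]

    # Model-free: Q-learning learns from sampled transitions, treating step() only
    # as an environment to interact with, never reading its equations.
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(20000):
        s, a = random.randrange(N_STATES), random.randrange(2)
        nxt, r = step(s, a)
        Q[s][a] += 0.1 * (r + GAMMA * max(Q[nxt]) - Q[s][a])

    print([round(v, 2) for v in V])           # optimal state values from the model
    print([round(max(q), 2) for q in Q])      # estimates learned by trial and error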
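
Finally, a sketch of a generalized plan for the fourth topic: a single policy that solves every instance of the Blocksworld task "put all blocks on the table", regardless of the number of blocks. The representation and the domain are again illustrative assumptions.

    def general_policy(towers):
        # Towers are lists of blocks from bottom to top. One rule solves any instance:
        # while some tower has more than one block, move its top block to the table.
        plan = []
        while any(len(t) > 1 for t in towers):
            tower = next(t for t in towers if len(t) > 1)
            block = tower.pop()          # the top block is clear, so it can be moved
            towers.append([block])       # it becomes a new single-block tower on the table
            plan.append(f"move {block} to the table")
        return plan

    # The same policy works unchanged on instances of any size:
    print(general_policy([["a", "b", "c"]]))
    print(general_policy([["a", "b"], ["c", "d", "e"]]))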

The research of RLPLAB (Representation, Learning and Planning Lab) is primarily supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP). WASP is a major national initiative for strategically motivated basic research, education and faculty recruitment. The funding is generously provided by the Knut and Alice Wallenberg Foundation (KAW).

 


