The main literature for this course is the book Automated Planning: Theory and Practice by Malik Ghallab, Dana Nau and Paolo Traverso. The current printing of the book contains a number of errors – please check the errata list! We may also provide additional references for specific topics.
Much of the course material will also be covered during the lectures. However, some of the essential issues discussed in the book are more suitable for reading at your own pace. Lectures are a complement to the book, not a replacement!
What are you expected to learn? As in any course, it is naturally impossible to give you an exact definition of the relative importance of every issue covered in the book. However, we can still give you some general guidelines through the reading instructions below. We would also like to point out that in general, we focus more on understanding than on formal definitions. Here are a few illustrative examples:
Notation: You must be able to read and understand basic notation such as γ(s,a) for the state transition function and S for the set of states. These symbols are used in many places throughout the book and have also been used during the lectures. You do not necessarily have to remember that Γ̂(s) is the set of states reachable from s, but you should understand the concept of reachable states.
You must understand and be able to explain the general structure of a state transition system and its components (states, actions, transition function). However, you don't necessarily have to remember if the transition system is defined as Σ=(S,A,γ) or Σ=(A,S,γ).
You should understand the various restrictions we place on the state transition system for classical planning (A0-A7), and if asked whether classical planning allows (for example) non-deterministic actions, you must know the answer. However, you don't necessarily have to be able to enumerate all restrictions by heart.
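To make these concepts concrete, here is a minimal Python sketch (our own illustration, not taken from the book) of a deterministic state transition system Σ=(S,A,γ) with a handful of made-up states and actions, together with a computation of the states reachable from s0:

    # A minimal, hypothetical sketch of a state transition system Sigma = (S, A, gamma).
    # States and actions are plain strings; gamma maps (state, action) -> next state.

    S = {"s0", "s1", "s2"}
    A = {"load", "move", "unload"}

    # Deterministic transition function gamma(s, a); missing entries mean
    # the action is not applicable in that state.
    gamma = {
        ("s0", "load"): "s1",
        ("s1", "move"): "s2",
        ("s2", "unload"): "s0",
    }

    def reachable(s0):
        """All states reachable from s0 (the concept behind Gamma-hat(s))."""
        seen, frontier = {s0}, [s0]
        while frontier:
            s = frontier.pop()
            for a in A:
                t = gamma.get((s, a))
                if t is not None and t not in seen:
                    seen.add(t)
                    frontier.append(t)
        return seen

    print(reachable("s0"))   # {'s0', 's1', 's2'}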
You should also understand and be able to explain how algorithms such as TFD and PFD work, and why choosing a single method to expand in PFD can lead to many successor networks (second line from the end in Figure 11.9, page 243). But remembering the order of the arguments to δ(w,u,m,σ) in PFD is irrelevant.
Though we don't focus on formalities, you still need to clearly demonstrate what you have understood. For example, if you are asked to explain a particular method for planning, such as PFD, your explanation should be sufficiently strict, detailed, coherent and comprehensible that (a) someone who is not familiar with the area easily understands the main issues involved, and (b) someone familiar with the area sees that you have understood the critical characteristics of the method. It should be possible to apply the explanation to a simple example and see how it works. As long as you can do this, being able to write down the exact definition of an algorithm in the book from memory in pseudo-code is less important – but using pseudo-code is of course one possible way of being strict and detailed in a description!
The following chapters are part of the course during 2011. The reading instructions have now been extended with additional examples of what we focus on in this course. These are only examples, not an exhaustive list of everything you need to know in each chapter.
Chapter 1: Introduction and Overview.
Appendix A: Search Procedures and Computational Complexity.
Appendix B: First-Order Logic.
The first chapter provides a useful introduction to various
forms of planning. Reading it will help you understand the
remainder of the book.
Also, please take a look at Appendices A and B to ensure that you have the basic prerequisites for taking this course. In particular, sections A.1, A.2, and B.1-B.3 are essential.
Chapter 2: Representations for Classical Planning.
We will mostly be using the classical and
state-variable representations. However, a basic
understanding of the set-theoretic representation (2.2) will
also be useful in order to fully understand the properties
of the other representations.
You must understand these representations to the extent that you can use them to model various planning problems.
Again, notation is less important than understanding concepts. You do have to understand common notation that has been used in the lectures, such as γ(s,a), π, P=(Σ,s0,g), and so on. If we use less common notation, such as Γ(s) [page 22], it will be briefly explained: the exam will remind you that "Γ(s) is the set of all successors of s", or "Γ⁻¹(g) is defined as { γ⁻¹(g,a) | a is relevant for g }". You will still have to understand the concepts "successor", "relevant action for a goal", and so on.
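As a concrete illustration of what such a model can look like, here is a small, hypothetical Python sketch in the spirit of the set-theoretic/classical representation. The domain, the atom names and the helper functions are our own and are only meant to show how γ(s,a) and "relevant action" can be read operationally:

    # A hypothetical mini-example in the spirit of the set-theoretic/classical
    # representation: a state is a set of ground atoms, an action has
    # preconditions and add/delete effects, and gamma(s, a) applies the effects.

    from typing import FrozenSet, NamedTuple

    class Action(NamedTuple):
        name: str
        precond: FrozenSet[str]
        add: FrozenSet[str]
        delete: FrozenSet[str]

    def applicable(s, a):
        return a.precond <= s

    def gamma(s, a):
        """State transition: (s - del(a)) union add(a), if a is applicable."""
        assert applicable(s, a)
        return (s - a.delete) | a.add

    def relevant(a, g):
        """a is relevant for goal g if it adds some goal atom and deletes none of them
        (the idea behind gamma^-1 and Gamma^-1)."""
        return bool(a.add & g) and not (a.delete & g)

    s0 = frozenset({"at(r1,loc1)", "in(c1,loc1)"})
    move = Action("move(r1,loc1,loc2)",
                  frozenset({"at(r1,loc1)"}),
                  frozenset({"at(r1,loc2)"}),
                  frozenset({"at(r1,loc1)"}))

    print(gamma(s0, move))   # frozenset({'at(r1,loc2)', 'in(c1,loc1)'})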
Sections 2.4.5-2.4.8 are not included.
Chapter 3: Complexity of Classical Planning.
Those parts of this chapter that are included in the course
will be presented during one of the lectures. Other than
this, we do not require detailed knowledge of the different
complexity classes resulting from various representations
and limitations on expressivity.
In other words, this is one of the few cases where the content of the course is defined by the lectures rather than by the book.
Chapter 4: State-Space Planning.
Chapter 5: Plan-Space Planning.
These chapters
provide a number of fundamental concepts used in a very
large number of planners, and are consequently very
important for your understanding of the material in the
remainder of the course.
Lifted versions of planning are not part of the course. Section 4.4, on STRIPS, is not included. Sections 5.4.2, on the PoP procedure, and 5.5, on extensions to partial-order planning, are not included.
Chapter 6: Planning-Graph Techniques.
You should understand the basic workings of Graphplan and
the underlying concepts. Section 6.4 is not included.
You will not be expected to provide any of the proofs in this chapter. Neither will you have to remember the exact propositions ("theorems") stated – but you may have to know their intuitive meaning. For example, Proposition 6.5 essentially means that expanding a planning graph to allow plans of a certain length only takes time polynomial in that length, rather than the exponential time it would take to compute exactly which actions are truly reachable in that many steps, and this is an important reason why Graphplan "works". Similarly, Proposition 6.6 essentially means that you cannot expand the graph indefinitely: you will reach a point where no new facts become true and no mutexes can change. Understanding this is far more important than being able to reproduce the exact propositions in their formal form.
You should be able to expand a planning graph for a small example (a couple of levels) and show how a plan can be found in this example through backward search.
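The following Python sketch is our own and deliberately simplified: it ignores delete effects and does not compute mutexes or perform the backward search. It only illustrates how proposition levels grow monotonically and eventually level off, which is the intuition behind Proposition 6.6:

    # A simplified, hypothetical sketch of planning-graph expansion.
    # Mutex bookkeeping and the backward search for a plan are omitted;
    # the point is only to show how proposition levels grow and level off.

    actions = [
        # (name, preconditions, positive effects)
        ("load",   {"at-r", "at-c"},        {"holding"}),
        ("move",   {"at-r"},                {"at-r2"}),
        ("unload", {"holding", "at-r2"},    {"c-at-r2"}),
    ]

    def expand(p0, max_levels=10):
        levels = [set(p0)]
        for _ in range(max_levels):
            props = levels[-1]
            new_props = set(props)
            for name, pre, eff in actions:
                if pre <= props:          # action level: all applicable actions
                    new_props |= eff      # next proposition level
            levels.append(new_props)
            if new_props == props:        # graph has levelled off (cf. Prop. 6.6)
                break
        return levels

    for i, lvl in enumerate(expand({"at-r", "at-c"})):
        print(f"P{i}: {sorted(lvl)}")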
Chapter 7: Propositional Satisfiability Techniques.
You need to understand the basic
concepts and techniques involved in converting classical
planning problems into satisfiability problems (7.2),
including but not limited to action encodings, bounded
planning problems, and frame axioms.
The specific details of the Davis-Putnam algorithm or other satisfiability algorithms are not part of the course (7.3). Section 7.4 is not included.
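To give a feel for what such an encoding looks like, here is a hypothetical sketch of a few clauses for a one-step bounded planning problem, including a classical frame axiom. The fluent and action names are invented, and the clause representation (lists of string literals) is our own simplification:

    # A hypothetical sketch of a few clauses from a bounded planning problem
    # encoded as SAT (in the spirit of Section 7.2). Variables are strings
    # like "at-loc1@0" (fluent at step 0) or "move@0" (action at step 0);
    # a clause is a list of literals, and "-x" means the negation of x.

    def fluent(name, t):  return f"{name}@{t}"
    def action(name, t):  return f"{name}@{t}"

    clauses = []

    # Initial state: at step 0, the robot is at loc1 and not at loc2.
    clauses.append([fluent("at-loc1", 0)])
    clauses.append(["-" + fluent("at-loc2", 0)])

    # Action 'move' at step 0 implies its precondition and effects:
    #   move@0 -> at-loc1@0,  move@0 -> at-loc2@1,  move@0 -> not at-loc1@1
    clauses.append(["-" + action("move", 0), fluent("at-loc1", 0)])
    clauses.append(["-" + action("move", 0), fluent("at-loc2", 1)])
    clauses.append(["-" + action("move", 0), "-" + fluent("at-loc1", 1)])

    # Classical frame axiom: if at-loc1 holds at step 0 and no action that
    # deletes it is executed, it still holds at step 1:
    #   at-loc1@0 and not move@0 -> at-loc1@1
    clauses.append(["-" + fluent("at-loc1", 0), action("move", 0), fluent("at-loc1", 1)])

    # Goal at the horizon (here, one step): the robot is at loc2.
    clauses.append([fluent("at-loc2", 1)])

    for c in clauses:
        print(" v ".join(c))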
Chapter 8: Constraint Satisfaction Techniques.
Not included in the course.
Chapter 9: Heuristics in Planning.
Heuristics are a very important concept that should be
understood in detail, especially for state-space planning.
Section 9.4 is not included. We cover this chapter in close connection to state-space planning.
For example, you need to understand the concept of relaxation. You need to understand how forward and backward search can benefit from the use of heuristics. You need to understand how the hm heuristics work. You do not necessarily have to be able to write them down in the shape of formulas as in equations 9.1 and 9.3, but as usual, you do have to be able to provide clear, concise and comprehensible descriptions of what the heuristics achieve and how they are calculated.
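As an illustration of relaxation, the sketch below (our own simplification, not the book's exact equations 9.1 and 9.3) computes a simple hmax-style estimate by ignoring delete effects and propagating minimal costs for atoms until a fixpoint is reached:

    # A hypothetical sketch of a relaxation-based heuristic in the spirit of h1/hmax:
    # ignore delete effects, compute for each atom the minimum number of steps
    # needed to reach it, and take the maximum over the goal atoms.

    import math

    actions = [
        # (preconditions, add effects) -- delete effects are ignored (relaxation)
        ({"at-r", "at-c"},        {"holding"}),
        ({"at-r"},                {"at-r2"}),
        ({"holding", "at-r2"},    {"c-at-r2"}),
    ]

    def h_max(state, goal):
        cost = {p: 0 for p in state}          # atoms already true cost 0
        changed = True
        while changed:
            changed = False
            for pre, add in actions:
                if all(p in cost for p in pre):
                    c = 1 + max((cost[p] for p in pre), default=0)
                    for q in add:
                        if c < cost.get(q, math.inf):
                            cost[q] = c
                            changed = True
        return max((cost.get(g, math.inf) for g in goal), default=0)

    print(h_max({"at-r", "at-c"}, {"c-at-r2"}))   # 2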
Chapter 10: Control Rules in Planning.
Chapter 11: Hierarchical Task Network Planning.
These chapters show how planning can be guided using two
separate forms of domain knowledge.
Apart from understanding the use of control rules or HTN methods in isolation, you should also understand key issues such as how formulas can be progressed through states, how pruning differs from using heuristics, and how HTNs and control rules lead to different domain models: HTN methods are based on "recipes" specifying good actions, while control rules are formulas that forbid bad actions.
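As a rough illustration of formula progression, here is a hypothetical Python sketch that progresses a tiny class of control formulas (atoms, conjunction, "next" and "always") through a state. The formula encoding and the example rule are our own; a real control-rule planner such as TLPlan handles a much richer language:

    # A hypothetical sketch of progressing simple control formulas through a state.
    # Formulas are nested tuples:
    #   ("atom", p), ("and", f1, f2), ("next", f), ("always", f); True/False are literals.

    def progress(f, state):
        """Return the formula that must hold in the *next* state."""
        if f is True or f is False:
            return f
        kind = f[0]
        if kind == "atom":
            return f[1] in state            # evaluated in the current state
        if kind == "and":
            a, b = progress(f[1], state), progress(f[2], state)
            if a is False or b is False:
                return False
            if a is True:  return b
            if b is True:  return a
            return ("and", a, b)
        if kind == "next":
            return f[1]                     # the obligation moves to the next state
        if kind == "always":
            now = progress(f[1], state)
            return False if now is False else (f if now is True else ("and", now, f))
        raise ValueError(f"unknown formula {f!r}")

    # "Always keep holding the cargo": violating states are pruned.
    rule = ("always", ("atom", "holding"))
    print(progress(rule, {"holding"}))   # ('always', ('atom', 'holding'))
    print(progress(rule, set()))         # False -> this successor is pruned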
Sections 10.5, 10.6, 11.5, 11.6.1, 11.7, and 11.8 are not included. Note that Section 11.6.2 is included.
Chapter 12: Control Strategies in Deductive Planning.
Not included in the course.
Chapter 13: Time for Planning.
Chapter 14: Temporal Planning.
Chapter 15: Planning and Resource Scheduling.
Not included in the course.
Chapter 16: Planning Based on Markov Decision Processes.
Chapter 17: Planning Based on Model Checking.
Appendix C: Model Checking.
Chapter 18: Uncertainty with Neoclassical Techniques.
Sections 16.1 and 16.2 are included in the course. Sections 16.3 and 16.4 are not included.
You should be able to apply policy iteration and value iteration to a simple problem instance.
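As an illustration, here is a small Python sketch of value iteration on an invented two-state MDP; the transition probabilities, rewards and discount factor are made up, and policy iteration would instead alternate policy evaluation and policy improvement:

    # A hypothetical sketch of value iteration on a tiny two-state MDP.
    # P[s][a] is a list of (probability, next_state) pairs, R[s][a] is the reward.

    P = {
        "s1": {"stay": [(1.0, "s1")], "go": [(0.8, "s2"), (0.2, "s1")]},
        "s2": {"stay": [(1.0, "s2")], "go": [(1.0, "s1")]},
    }
    R = {
        "s1": {"stay": 0.0, "go": 0.0},
        "s2": {"stay": 1.0, "go": 0.0},
    }
    gamma = 0.9                          # discount factor
    V = {s: 0.0 for s in P}

    for _ in range(100):                 # iterate until (approximately) converged
        V = {
            s: max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                   for a in P[s])
            for s in P
        }

    policy = {s: max(P[s], key=lambda a: R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a]))
              for s in P}
    print(V, policy)                     # greedy policy: go in s1, stay in s2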
Chapter 17 and the associated Appendix C are not included.
Chapter 18 is not included.
Chapter 19: Space Applications.
Chapter 21: Planning for Manufacturability Analysis.
Chapter 22: Emergency Evacuation Planning.
Chapter 23: Planning in the Game of Bridge.
These chapters are not included in the course, but provide
an interesting view of different ways in which automated
planning can be applied to solve real-world problems.
Chapter 20: Planning in Robotics.
Section 20.2, covering path and motion planning, is included
in the course. The course slides on path planning are also
included.
Chapter 24: Other Approaches to Planning.
This chapter provides a high-level overview of various
important aspects of planning and is part of the course to
the extent that you should be aware of the issues discussed
in the chapter. There will be no exam questions
specifically on this material.