
Electronic Newsletter on
Reasoning about Actions and Change


Issue 97025 Editor: Erik Sandewall 21.11.1997

The ETAI is organized and published under the auspices of the
European Coordinating Committee for Artificial Intelligence (ECCAI).

Today

Today we announce an important FIRST: Paolo Liberatore's article "The Complexity of the Language A" has been accepted to the Electronic Transactions on Artificial Intelligence after due refereeing. The article itself and the discussion session are available as usual from the ETAI home page. We will be back in a few days with a detailed résumé of the process that led up to the acceptance of this article and an explanation of what happens next, in particular how the article will enter into the quarterly issue and the annual volume of the ETAI. Meanwhile, congratulations to Paolo for being the author of the first-ever accepted ETAI article!

The article has been refereed by two members of the editorial committee and one outside referee. One of the referees has a question for the author; this question is listed below and has been included in the article's interaction page. Note also that the discussion about the article need not stop just because the article has been accepted! Additional questions will always be welcome; there is no time limit.

Two days ago, in the discussion about ontologies, Vladimir Lifschitz commented on the translation of action languages to classical logic. This has sparked reactions from several colleagues who take other, mutually different perspectives on the issue, namely Tom Costello, Patrick Doherty, and myself. The present newsletter issue contains these three debate contributions.


ETAI Publications

Discussion about received articles

Additional debate contributions have been received for the following article(s). Please click the title of an article to go to its interaction page, which contains both new and old contributions to the discussion.

Paolo Liberatore
The Complexity of the Language A


Debates

NRAC Panel Discussion on Ontologies for Actions and Change

Erik Sandewall:

Vladimir,

In ENAI 19.11, in the context of the discussion with Tom Costello and Patrick Doherty, you wrote:

  It is impossible, unfortunately, to translate an action language into classical logic in a modular way, because classical logic is monotonic, and action languages are not. The best we can achieve is a modular translation from an action language into a nonmonotonic formalism, such as circumscription, whose semantics can be characterized by a nonmodular translation into classical logic. This is indeed indirect and complex. But we have to pay this price for the convenience of reasoning about actions in classical logic.

Through the Features and Fluents approach, we have demonstrated a much more direct and less complex way of doing these things. However, "convenience of reasoning" is not the only issue, and it's not what Patrick addressed; what he actually wrote and what you quoted in the preceding lines was:

  On the other hand, if provided with well-understood and modular translations into classical logic, it is much easier to evaluate progress and simplify comparisons.

In particular, translating scenario descriptions into first-order logic helps evaluation and comparison in two ways. First, it provides transparency, i.e. it allows us to understand in a precise manner what the scenario descriptions say, which is also what Tom Costello's persistent questions are concerned with. Second, there is the issue of actually carrying out the reasoning about actions, which may well be done (or implemented) more conveniently if one uses first-order theorem provers as inference engines. In any case, let us stick to the question of how we can evaluate progress and simplify comparisons for the work that gets done and published.

With respect to transparency, it seems to me that action languages such as cal-A and cal-E add to the obscurity rather than dispelling it. In the Features and Fluents approach, we achieve the same results in a much less elaborate fashion: we have one single language, namely a straightforward multi-sorted first-order theory; we have a set of syntactic abbreviations or "macros" that sugar the notation for legibility; and we have two different semantics. There is the classical, Tarskian-style semantics, almost straight from the textbook, with routine modifications to accommodate the multitude of types and to enforce a closed-world assumption with respect to objects. There is also the underlying semantics, which specifies what one really means - corresponding to Tom Costello's question to Tony Kakas and Rob Miller, where he wrote:

  The reason I ask for truth conditions for your propositions is that I cannot understand what the intuitive consequences of a set of propositions should be, unless I understand what the propositions say.

As you say, action languages are supposed to be "understandable and intuitive". Languages cannot be understood without semantics.

The underlying semantics does exactly this. To put it another way, here is a recipe for converting an action-language approach to our approach. Take the action-language setup with its two languages, each with its own syntax and semantics. First, remove the separate syntax of the action language. Retain the use of two distinct semantics, so that one and the same scenario description can be mapped to a set of models by two different methods: the set of classical models, and the set of intended models. Also, make sure that the two semantics use the same space of possible interpretations; it is just that the set of intended models is (usually) a subset of the set of classical models. Then you have captured the essentials of our approach.
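
Schematically, writing \Upsilon for a scenario description, and with Mod_cl and Mod_int as ad hoc names for the two model-set mappings (the names are mine for this note, not notation from the book):

   \mathit{Mod}_{int}(\Upsilon) \subseteq \mathit{Mod}_{cl}(\Upsilon)

That is: one syntax, two semantics over the same space of interpretations, with the intended models (usually) forming a subset of the classical ones.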

The definition of the underlying semantics is indeed not truth-functional in the conventional sense; it can rather be described as a kind of simulation of the world in question. However, truth conditions are still used within that 'simulation', namely for defining when each individual observation statement is satisfied. The resulting semantics is concise, intuitively convincing, and formally precise at the same time.

This approach has several important advantages. One of them concerns the generality of the results. A possible objection against defining a quite expressive language from the start is that one may not be able to prove one's results for such a broad language. That is one reason why we focus on "range of applicability" results rather than "validation" results: instead of defining a narrow language and proving that a property holds throughout the language, we identify the part of our broad language where the property holds. This helps avoid the proliferation of languages, and it also helps obtain results that are as general as possible, since the results are not artificially constrained by the choice of language. This is a real difference between the approaches, since the published general results about cal-A type languages are consistently validation results (unless I have missed something).

Then there is the other reason for reducing an action language to a first-order theory, namely implementation. There, again, Doherty et al. have developed efficient computational mechanisms as part of our approach; their implementation has been available for on-line use over the net since May of this year. In their case it is purely an implementation issue; the purpose of their work is not to assign a meaning to the language, since that has already been taken care of. (Patrick writes more about this in his contribution to the discussion, below.)

The bottom line is that it is perfectly possible to achieve legibility (the initial motivation for action languages), transparency (Tom Costello's request), and effective implementation with the approach used in Features and Fluents, and with much less administrative overhead than in the work based on action languages. My question is, therefore: what are the real benefits of all the definitional work in the action-language papers, and what are the results that the rest of us can use?

Tom Costello:

Vladimir writes,

  It seems to me that the semantics of an action language cannot be described by specifying truth conditions for its propositions. The problem is the same as with nonmonotonic languages in general. Take, for instance, the closed-world database {P(1),P(2)}. The negation of P(3) is a consequence of this database, but this fact cannot be justified on the basis of truth conditions for P(1) and P(2).

I do not ask that action-language designers define their semantics in terms of truth conditions. I ask that the designers give truth conditions for each of their assertions. The difference can be seen in your example if you take P to mean "there is a flight", take 1 to denote the pair (Glasgow, London), and take 2 to denote (London, Moscow). Then P(1) is true if there is a flight from Glasgow to London. What puzzles me about Kakas and Miller's language is what their propositions mean, in exactly this sense.
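
To make the contrast concrete: the closed-world reading of Vladimir's database can be captured by the completion of P, sketched here in first-order notation (unique-names axioms for 1, 2 and 3 are assumed):

   \forall x\, (P(x) \leftrightarrow x = 1 \lor x = 2)

from which \neg P(3) follows. The completion axiom is a nonmonotonic artifact of the database as a whole, while the truth condition of each individual assertion P(i) - "there is a flight on route i" - stays fixed. It is the latter kind of condition that I am asking for.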

In particular, what does

   A causes F if G
or
   A initiates F if G
mean? That is, given a model M of a domain description, when is this proposition satisfied in M? I have pointed out that problems arise under certain definitions of model.

The most difficult issue that I am aware of is that it is unclear whether

   A causes F if G
means that
   in every actual state s where G is true, F is true in R(A,s),
or the similar, but different,
   in every possible state s where G is true, F is true in R(A,s).
Similarly, does
   Always F, G
or Kakas and Miller's
   F whenever -G
mean that every actual state satisfies F, G, or that every possible state does?
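
For definiteness, the two candidate truth conditions could be written out as follows (here s ranges over states, R(A,s) is the transition function used above, and Actual and Possible are my names for the two sets of states):

   \forall s \in \mathit{Actual}:\  M \models G[s] \rightarrow M \models F[R(A,s)]
   \forall s \in \mathit{Possible}:\ M \models G[s] \rightarrow M \models F[R(A,s)]

Since the actual states are among the possible ones, the second reading is at least as strong as the first; the two coincide, for instance, when every possible state also occurs as an actual one.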

Tom Costello

Patrick Doherty:

Vladimir's reply to my paragraph about A Languages introduces two interesting and subtle issues:

  1. What is meant by a modular/non-modular translation.
  2. What is meant by a monotonic/nonmonotonic approach.

I am not sure whether these topics belong in the ontology panel, but they are highly relevant when dealing with comparative analyses across formalisms and when trying to understand what on the surface appear to be opposing "ideological stances" in methodology.

To clarify my "current" working view of the role action languages should play, I simply quote the original intuition that Vladimir himself at one time had about their role, from his paper "Two Components of an Action Language":

  Originally, action languages were meant to play an auxiliary role. The primary goal was to represent properties of actions in less specialized formalisms, such as first-order logic and its nonmonotonic extensions, and the idea was to present methods for doing that as translations from action languages.

My group currently uses this approach, and it is also one of the cornerstones of the Features and Fluents methodology. Formally and conceptually, we translate from an action scenario description language into a first-order theory with a circumscription axiom. (I consider second-order theories to be part of classical logic.) From an analysis of the circumscribed theory, we can identify different ways of deriving efficient computational mechanisms for reasoning about, or "querying", a class of action scenarios.
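
For concreteness, the circumscription axiom has the standard second-order shape (the textbook definition, in the simplest case with no varied predicates - nothing TAL-specific here): for a theory T(P) in which the predicate P is to be minimized,

   \mathrm{Circ}[T; P] \;=\; T(P) \land \forall Q\, (T(Q) \land Q \leq P \rightarrow P \leq Q)

where Q \leq P abbreviates \forall \bar{x}\, (Q(\bar{x}) \rightarrow P(\bar{x})). "Reducing" the theory then means finding a first-order formula that is logically equivalent to this second-order one.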

The advantages we have derived from this approach are the following:

  1. A separation of the ontological and epistemological analysis from the generation of computational procedures for querying action scenarios.

  2. A direct means of comparing the ontological and epistemological assumptions we make with those of others, including across paradigms.

  3. A somewhat less direct means of employing part or all of particular computational mechanisms proposed by other groups, such as "extended logic programs", regression techniques, or "explanation closure" approaches.

This brings me to the more specific topics of modularity in translation and of monotonicity versus nonmonotonicity.

As regards modularity:

In our current family of logics (TAL), we find that the circumscription policies used are really quite simple: basically nothing more than predicate completion. Given this, we can either reduce the second-order theory associated with an action scenario using a reduction algorithm, in a "nonmodular" manner, or generate a "modular" translation directly from the action scenario description, which includes one additional formula for each predicate completion. So I would disagree with the coupling between modular/nonmodular and monotonic/nonmonotonic that Vladimir asserts in his reply. Although one can interpret the use of a reduction algorithm as nonmodular, one can also interpret the use of local syntactic translation as modular. Again, it all comes down to what is meant by "modular", and we may have slight disagreements on pinning down a definition.
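
A minimal illustration, with a made-up one-predicate theory rather than actual TAL syntax: circumscribing P in

   T(P) \;=\; \forall x\, ((x = a \lor x = b) \rightarrow P(x))

is equivalent to the predicate completion

   \forall x\, (P(x) \leftrightarrow x = a \lor x = b)

and nothing prevents us from emitting such a completion formula directly, one per minimized predicate, as part of a modular translation of the scenario description.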

As regards the monotonicity/nonmonotonicity issue:

One fascinating insight which the translation into a circumscribed theory provides is that the automated reduction of the circumscription axiom to a logically equivalent first-order formula essentially generates what could be interpreted as "explanation closure axioms". This has been noted and used by other researchers in other contexts, among them Vladimir, Ray Reiter, and Fangzhen Lin, although in those cases one works directly with syntactic translations on an existing first-order theory rather than with a direct translation from a circumscription formula.
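
Schematically, and with hypothetical predicate names in the occlusion style rather than the exact TAL signature, the reduced formula has the flavor of

   \forall t, f\; (\neg \mathit{Occlude}(t{+}1, f) \rightarrow (\mathit{Holds}(t, f) \leftrightarrow \mathit{Holds}(t{+}1, f)))

that is, a fluent may change its value from one timepoint to the next only if some action provides an explanation releasing it from persistence - which is exactly the explanation-closure intuition.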

So this could turn out to be a non-issue, in the sense that meta-assumptions of a nonmonotonic character are sometimes left outside the formalism, merely guiding the way we write axioms or use syntactic translations, and are sometimes part of the actual formal theory, as in the case of circumscription axioms or default inference rules. There should be a straightforward means of providing a formal translation between the two "stances". The discussion would then revolve around which criteria, such as elaboration tolerance or efficient theorem-proving methods, contribute to a particular choice of "stance".

One other related point worth discussing: if one takes a good look at the diverse approaches being proposed and actually applied (in some sense of the word "applied"), analysis at the classical logic level shows that

  1. The minimization policies are very similar to each other, even across ontological choices. Currently, we are not doing much more than a somewhat "enhanced" form of predicate completion.

  2. The language fragment used is generally not more than an "enhanced" Horn fragment.

This is not surprising, because we want to develop efficient query mechanisms, be they logic programs or non-standard procedural mechanisms.

This is why I like approaches which relate in a direct manner to classical logic. We can say useful things like:

  These techniques have informal correlates in the original work by McCarthy and have been refined into formal techniques used by a number of researchers in this area.

The problem is that this type of analysis is rare. A contributing factor to the lack of generic analyses could very well be the diversity of specialized action languages and the often complex and direct translations to procedural or computationally oriented frameworks.