
ETAI Newsletter on
Reasoning about Actions and Change


Issue 97003 Editor: Erik Sandewall 26.9.1997

The ETAI is organized and published under the auspices of the
European Coordinating Committee for Artificial Intelligence (ECCAI).

Debate

Discussion with Wolfgang Bibel about his IJCAI lecture

From: Wolfgang Bibel

Dear Marc Friedman,

Thank you very much for taking the time to listen to my lecture at IJCAI-97, and even giving it further thought afterwards.

0. Linear transition proofs solve the classical planning problem. This is true. The linear backward chainer (LBC), as well as LIF and LIF+ (Fronhoefer's more recent solutions), are correct. LBC is very clean, too.

Thanks for these kind remarks.

1. I think you suggest that deduction has a privileged place as a basis for classical planning. But planning has other theoretical foundations: the modal truth criterion, and the theory of refinement search. What makes deduction superior to these other bases?

Let me take this question as an occasion to start by reminding you of the general gist of my lecture. My lecture ended by generalizing the title to ``Let's plan AI deductively!''. As we all know, the AI endeavor is a very complex one. This complexity has led many of us to specialize in small niches such as planning, nonmonotonic reasoning, scheduling, theorem proving, vision, speech, NL, ... you name one of the hundreds of others. In each of them smart (functional, in a broad sense of the word) solutions are developed in a great variety of different, mostly incompatible languages. I do not see how all this could ever converge towards something coherent which deserves the label ``artificial intelligence'', our common goal. I am not the first to recognize a deplorable splintering of our field. An artificially intelligent agent will have to feature those (and more) smart solutions all at the same time. I do not see how the functional approach could ever achieve this if it does not even get us to the point of hooking a new machine easily to a local network (a problem I faced recently, coming here to UBC with my Voyager).

I am therefore one of those who strongly believe that only through a rather uniform approach to all of these different facets can we ever hope to arrive at systems that are able to do more than ``just'' play chess at world-champion level (but nothing else), or prove open problems like the Robbins problem, as was done recently (but again nothing else), etc. It is an illusion to think we could just combine all these niche systems and get out something like a general intelligence. Rather, the entire approach must be a more universal one from the very beginning, and only through a uniform approach could the enormous complexity then be overcome.

If you buy these arguments, the next question will be ``what uniform approach?''. There are not that many available. In fact I believe that the logical approach initiated by John McCarthy has no real competitor satisfying all the requirements that come up for such a universal task. Of course this is a rather vague statement, since it leaves open what exactly we mean by ``logical approach''. For the time being many believe that core first-order logic would be part of it, but that there might be variations of it not yet found (like the transitions in TL, second-order predicates, etc). Another concern is the lack of efficiency of existing deductive methods, a point I come back to shortly.

After these remarks, which I consider important in view of what we are up to, I now come back to the details of your question. The modal truth criterion or the theory of refinement search could actually be seen as logical theories. But they are very specialized theories, customized to planning and nothing else. While I could well imagine a logical planning environment where they ALSO might play some role (as meta-knowledge), by themselves they are far from sufficient to allow even the simplest extension of the planning task, for instance to include a bit of classical reasoning by the planning agent. By the way, deduction provides a generic problem solving mechanism in a uniform and rather universal logical language, and as such plays a different role than the examples you mention. For instance, the theory of refinement search applies to general deductive search just as it does to the specific search in the space of partial plans.

2. You present LBC as an encoding of transition logic (TL) into logic, in particular the language of the SETHEO theorem prover. If this were true, AND the implementation compared well with other classical planners, this would be a major step -- giving at once a formal AND operational reduction of the problem to deduction!

However, if we look closely at LBC, there is a work-around to make SETHEO into a transition logic engine. TL is not in fact translated into first-order logic. Instead, the available propositions are tracked, to prevent two connections from sharing a single proposition. This approach is not truly a reduction. It is an encoding, much like a program that implements a formally sound algorithm, like UCPOP or graphplan, in a formally sound substrate, like PROLOG, or a functional programming language, or as a satisfiable formula. TL loses its privileged position. Thus it must compete with other approaches on their terms: is it faster or easier to understand, does it do less search, etc.?

I have to start again with a general remark. We (i.e. my group in Darmstadt, my former group in Munich now represented by people like Fronhoefer, Letz, Schumann and others, and the entire deduction community for that matter) see our task as providing the best possible generic problem solving mechanism for this universal language (mostly first-order logic). And this challenge keeps all of us busy enough. As an aside I might mention that we have been quite successful at it: for instance, in 1996 SETHEO won the world competition among all existing theorem provers. Why then should we, on top of all these efforts, do more than offer other specialists (such as those working in planning, but there are many more potential applications) our tools for use in their special field of application? So the little experiment that Fronhoefer did with SETHEO was indeed only a side effort, done in a few days. It is true that beyond SETHEO we need a TL-SETHEO, i.e. a theorem prover customized to the logic TL. We have already made such extensions for other logics, especially for intuitionistic logic, which is relevant for program synthesis (another of our interests), and will, if circumstances permit, eventually do the same for TL. But given that there is so much to be done anyway, no promises are given at this point.

As to the privileged position of TL, I just point to what I said before: it is the universal logic (i.e. language and calculus) which gives it the privileged position, in comparison with UCPOP (a special algorithm with a narrow range of applications) or graphplan (a coding in propositional logic, which does not provide a rich enough logical language to serve the more general purposes). By the way, graphplan is a deductive solution to planning as well, although recent experiments by a Swedish/German group seem to demonstrate that a rule-based encoding is more suitable.
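
To make the linearity point in this exchange concrete: in the resource-sensitive reading of transitions, applying a rule consumes the propositions it connects to, so no proposition can support two connections. The following small sketch illustrates that reading only; the rules, names and search procedure are invented for illustration and are not Fronhoefer's LBC or the actual SETHEO encoding, which enforces the restriction at the level of connections in the proof search rather than by explicit state multisets.

    from collections import Counter

    # Toy forward search over multisets of propositions.  Applying a
    # transition rule consumes its preconditions and adds its effects,
    # so each available proposition is used in at most one connection.
    RULES = [
        # (name, preconditions consumed, effects produced) -- invented example
        ("buy_apple", Counter({"coin": 1}), Counter({"apple": 1})),
        ("buy_bread", Counter({"coin": 2}), Counter({"bread": 1})),
    ]

    def holds(state, props):
        return all(state[p] >= n for p, n in props.items())

    def apply_rule(state, pre, eff):
        new = state.copy()
        new.subtract(pre)      # consume the preconditions ...
        new.update(eff)        # ... and produce the effects
        return +new            # drop zero counts

    def plan(state, goal, depth=6):
        """Depth-bounded search for a transition sequence reaching the goal."""
        if holds(state, goal):
            return []
        if depth == 0:
            return None
        for name, pre, eff in RULES:
            if holds(state, pre):
                rest = plan(apply_rule(state, pre, eff), goal, depth - 1)
                if rest is not None:
                    return [name] + rest
        return None

    print(plan(Counter({"coin": 3}), Counter({"apple": 1, "bread": 1})))
    # -> ['buy_apple', 'buy_bread']; with only two coins the same goal
    #    is unreachable, since each coin can be "connected" only once.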

3. LBC beats UCPOP. But many algorithms have. How does it compare with these?

Sure. But if I take a general tool from the shelf (such as SETHEO), spend a few hours or days customizing it to a general task like planning, and with the result beat a system that was specially developed for the task of planning and only a few years ago was deemed the best of its kind, then this is a very strong experimental hint that deduction technology subsumes what is needed for planning, and that the efficiency problem already mentioned above is less severe than many might think. Therefore, it is my strong conviction that Dan Weld and others would have contributed more to the advancement of AI had they built those special systems ON TOP of the mature technology reached in deduction at the time of implementation, rather than as an independent sideline (which does not at all diminish their remarkable achievements in themselves).

4. Transition logic solves the frame problem. So does TWEAK.

TWEAK is based on STRIPS (sort of) and, as I mentioned explicitly in my lecture, STRIPS is very closely related to TL as far as transitions are concerned. But STRIPS (and TWEAK) is not a logic, and so lacks the generality needed for the purposes outlined above.

Transition logic solves the ramification problem. So does UCPOP, via a theorem-proving subroutine. Perhaps TL's ramification solution is a more uniform mechanism, but it is not truly uniform -- the linearity restriction is removed. Why prefer one solution to the other?

To the best of my knowledge, Michael Thielscher (to whose work I referred in my lecture in this context) was the first to give a solution to the ramification problem which overcomes deficiencies of all previous solutions (including UCPOP's). The lecture as well as the paper (and further references therein) point out examples where no previous solution would model reality correctly. A better solution in this sense must be preferred to a deficient one. In addition there is the uniformity and universality provided by the logic, as already pointed out several times. I do not understand what you mean by the phrase ``the linearity restriction is removed''.

Wolfgang Bibel

From: Marc Friedman

I do not understand what you mean by the phrase ``the linearity restriction is removed''.

Oh. Maybe I said it wrong. I meant that if there are synchronic rules and transition rules, represented in your talk by two different kinds of implication arrows, then there are two different mechanisms -- one which limits each proposition to use in a single connection, and one which does not.

Thanks, Marc
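
To illustrate the distinction Marc is drawing, here is a second toy sketch (again with invented fluents and rules, not TL itself): transition rules consume the propositions they connect to, while synchronic rules behave like ordinary implications and may reuse a proposition any number of times.

    from collections import Counter

    # Transition rules consume their preconditions (the linear reading);
    # synchronic rules are applied as ordinary, reusable implications.
    # All rules and fluents below are invented for illustration.
    TRANSITIONS = [("move_a_onto_b",
                    Counter({"on(a,table)": 1, "clear(b)": 1}),
                    Counter({"on(a,b)": 1}))]

    SYNCHRONIC = [({"on(a,b)"}, {"above(a,b)"}),      # reuse on(a,b) ...
                  ({"on(a,b)"}, {"not_clear(b)"})]    # ... as often as needed

    def step(state, name):
        """Apply one transition: consume preconditions, add effects."""
        for rule, pre, eff in TRANSITIONS:
            if rule == name and all(state[p] >= n for p, n in pre.items()):
                new = state.copy()
                new.subtract(pre)
                new.update(eff)
                return +new
        raise ValueError(f"{name} is not applicable")

    def ramify(state):
        """Close a state under synchronic rules without consuming anything."""
        facts = set(state)
        changed = True
        while changed:
            changed = False
            for pre, post in SYNCHRONIC:
                if pre <= facts and not post <= facts:
                    facts |= post
                    changed = True
        return facts

    s1 = step(Counter({"on(a,table)": 1, "clear(b)": 1}), "move_a_onto_b")
    print(sorted(ramify(s1)))
    # -> ['above(a,b)', 'not_clear(b)', 'on(a,b)']: both synchronic rules
    #    use on(a,b); only the transition was subject to the linearity
    #    restriction.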

Awareness of applications-based work

The discussion about Austin Tate's article, which has been received by ETAI, has raised a question about how our field relates to (or ignores?) contributions that are made in the framework of broad application areas. The two debate contributions follow here, the latter one slightly edited:

From: Austin Tate

The ETAI Colloquium on Actions and Change (see: general debate) is raising issues from a formal representation of action perspective which could usefully be linked with the more practically derived representation that <I-N-OVA> represents. Murray Shanahan's message raises a number of requirements for an action formalism that could usefully be checked against any proposed action, plan or process representation. He also suggests the use of practical scenarios as a way to validate any proposal.

In this context it may be worth noting that <I-N-OVA> is based on 20 years' experience of the use of plan representations for a wide range of domains in AI planners. It also seeks to bring in work from a very wide range of process and activity modelling communities beyond AI.

Analysis of about 20 candidate activity representations against an extensive set of requirements and against a set of engineering, manufacturing and workflow scenarios is being undertaken in recent work at the National Institute of Standards and Technology (NIST) on the Process Specification Language, which is seeking to create a meta-model for activities that has a formal semantics (see http://www.nist.gov/psl/). The OMWG Core Plan Representation work (now at RFC version 2) is also being validated against a range of military planning problems.

<I-N-OVA> has been used as a conceptual framework for input to both these programmes.

From: Erik Sandewall

Austin, I think you are bringing up a very important point when you mention "process and activity modelling communities beyond AI" in the discussion (your comment C1). Besides the work in engineering and manufacturing, there is active work in the healthcare area, where they have an interest in characterizing the medical history of a patient as a process, involving both health events ("rise in temperature", "severe back pain") and medication and other treatment events. The work has progressed so far that there is reportedly a European prestandard, ENV 12831, called "Medical Informatics - Time Standards for Healthcare Problems".

In addition, there is of course the work in the research communities for databases and information systems, where they want to model processes within an enterprise. The recent conference on "Active, Real-Time, and Temporal Database Systems" is one example of research in that area. (See the Actions & Change conference calendar for a link to that conference).

It seems to me that the AI field is not sufficiently aware of these developments. The world doesn't stand still while we try to figure out the best way of dealing with the ramification problem. The present newsletter will be a good forum for exchanging pointers and points of view with regard to contributions from applied areas.