
Electronic Newsletter on
Reasoning about Actions and Change


Issue 97017 Editor: Erik Sandewall 5.11.1997

The ETAI is organized and published under the auspices of the
European Coordinating Committee for Artificial Intelligence (ECCAI).

Today

Today we have the answer by Rob Miller and Tony Kakas to Michael Gelfond's question from the other day. We also have the late appearance of a position statement by Hector Geffner for the panel on theory evaluation.

When you visit our web page, you will notice that it has become more colorful and has been reorganized a bit. Also, if you wish to go directly to the webpage for actions and change, without going via the general ETAI entry page, please try:

for first visits (recommend this one to your friends): http://www.ida.liu.se/ext/etai/actions/

for repeated use: http://www.ida.liu.se/ext/etai/actions/indexframe.html


ETAI Publications

Discussion about received articles

Additional debate contributions have been received for the following article(s). Please click the title of an article to go to its interaction page, which contains both new and old contributions to the discussion.

Antonis Kakas and Rob Miller
Reasoning about Actions, Narratives and Ramification


Debates

NRAC Panel Discussion on Theory Evaluation

The following is a position statement by Hector Geffner, who was also a member of the panel at NRAC. By accident it was not included in the initial set of position statements in this Newsletter debate.

Hector Geffner:

KR/Non-mon is about Modeling

  1. I think the goal in KR/Non-Mon is modeling, not logic. A formalism may be interesting from a logical point of view, and yet useless as a modeling language.

    A "solution" is thus a good modeling language:

    declarative, general, meaningful, concise, that non-experts can understand and use, etc. (I agree with David's remark on teaching the stuff to "cynical" undergrads)

    The analogy with Bayesian networks and logic programs that David makes is very good. We want to develop modeling languages that are like Bayesian networks but that, on the one hand, are more qualitative (assumptions in place of probabilities) and, on the other, more expressive (domain constraints, time, first-order extensions, etc.).

  2. For many years, it was believed that the problem was mathematical (which device to add to FOL to make it non-monotonic). That, however, turned out to be only part of the problem, and a part that has actually been solved: we have a number of formal devices that yield non-mon behavior (model preference, kappa functions, fixed points, etc.); the question is how to use them to define good modeling languages.

  3. The remaining problem, which we can call the semantic problem, involves things like the frame problem, causality, etc.

    To a large extent, I think the most basic of these problems have also been solved:

    Basically, thanks to Michael and Vladimir, Erik, Ray, and others, we know that a rule like:

    if A, then B

    where A is a formula that refers to a time point i or a situation s, and B is a literal that refers to the next time point or situation, is just a constraint on the possible transitions from the states at i or s to the states that follow.

    Or, put another way, temporal rules are nothing but a convenient way of specifying a dynamic system (or transition function).
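    As a concrete reading of this point, here is a minimal sketch in Python (the fluents and rules are invented for illustration; nothing here is from Geffner's posting): a set of "if A, then B" rules determines a transition function on states, with fluents untouched by any rule persisting by inertia.

      from typing import Callable, Dict, List, Tuple

      State = Dict[str, bool]
      # A rule pairs a condition on the state at time i ("if A")
      # with the literal it forces at time i+1 ("then B").
      Rule = Tuple[Callable[[State], bool], Tuple[str, bool]]

      # Illustrative Yale-shooting-style rules; the names are made up.
      rules: List[Rule] = [
          (lambda s: s["loaded"] and s["shoot"], ("alive", False)),
          (lambda s: s["shoot"], ("loaded", False)),
      ]

      def next_state(s: State) -> State:
          """Each rule constrains the transition from the state at i to
          the state at i+1; unaffected fluents persist (inertia)."""
          s2 = dict(s)  # start from the old state: the inertia default
          for cond, (fluent, value) in rules:
              if cond(s):
                  s2[fluent] = value
          return s2

      s0 = {"alive": True, "loaded": True, "shoot": True}
      print(next_state(s0))  # {'alive': False, 'loaded': False, 'shoot': True}

    On this reading, the rules are not premises to deduce from but a specification of the system's dynamics.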

    Actually, for causal rules, the solution (due to Moises, Judea, and others) is very similar: causal default rules are just a convenient way of specifying (qualitative) Bayesian networks.
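    Again as an invented illustration (the propositions are not from the posting): if each causal default rule "C causes E" is read as contributing the edge C -> E, then a rule set fixes the structure of a qualitative Bayesian network, with the rules themselves standing in for the conditional probability entries.

      from collections import defaultdict

      # Hypothetical causal default rules, each read as (cause, effect).
      causal_rules = [
          ("rain", "wet_grass"),
          ("sprinkler", "wet_grass"),
          ("wet_grass", "slippery"),
      ]

      # The rules determine the network's graph: effect -> its parents.
      parents = defaultdict(list)
      for cause, effect in causal_rules:
          parents[effect].append(cause)

      print(dict(parents))
      # {'wet_grass': ['rain', 'sprinkler'], 'slippery': ['wet_grass']}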

  4. These solutions (which appear in different guises) are limited (e.g., in neither case can B be an arbitrary formula) but are meaningful: not only do they work, we can also understand why.

    We also now understand a number of things we didn't understand before.

    E.g.: 1. A formula can have different "meanings" depending on whether it represents a causal rule, an observation, or a domain constraint.

    (This is not surprising from a Bayesian-network or dynamic-systems point of view, but it is somewhat surprising from a logical point of view.)

    2. Reasoning forward (causally or in time) is often, but not always, sound and/or complete; i.e., in many cases forward chaining and sleeping-dog strategies will be OK, while in other cases they won't.

  5. It's not difficult to change the basic solutions to accommodate additional features (e.g., non-deterministic transition functions, unlikely initial conditions, concurrent actions, etc.) in a principled way.
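    For instance (again an invented sketch, not the author's formulation), non-determinism can be handled by letting the transition function return a set of successor states, which forward reasoning must then carry along in full:

      from typing import FrozenSet, Set

      State = FrozenSet[str]  # the set of fluents that hold

      def next_states(s: State) -> Set[State]:
          """Non-deterministic transition: a tossed coin may land heads or not."""
          if "toss" in s:
              rest = s - {"toss", "heads"}
              return {rest | {"heads"}, rest}
          return {s}

      def reachable(s: State, steps: int) -> Set[State]:
          """Forward simulation must track the whole set of successors."""
          frontier = {s}
          for _ in range(steps):
              frontier = {t for u in frontier for t in next_states(u)}
          return frontier

      print(reachable(frozenset({"toss"}), 1))
      # two possible outcomes: frozenset({'heads'}) and frozenset()

    This is also where the caveat in point 4 bites: a forward-chaining, sleeping-dogs strategy that keeps only one successor per step is no longer complete here.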

    So, I think, quite a few problems have been solved, and default languages are in many cases ripe for use by people outside the non-mon community.

  6. We have to do a better job of packaging the available theory for the outside world, and of delineating the solved problems, the unsolved problems, and the non-problems for the inner community and students.

    Actually, I have been doing some of this myself, giving a number of tutorials at various places over the last couple of years (I invite you to look at the slides at http://www.ldc.usb.ve/~hector).

Pat Hayes:

I think this meta-discussion, though at times confused (mea culpa, of course), has been useful in revealing a clear divergence between two methodologies, which give different answers to the original question about how we should evaluate work in the field. ('NRAC PANEL ON THEORY EVALUATION', ETAI 21.10)

One view appeals to our human intuitions, one way or another. In this it is reminiscent of linguistics, where the basic data against which a theory is tested are human judgements of grammaticality. We might call this a 'cognitive' approach to theory testing. Talk of 'common sense' is rife in this methodology. Based on the views expressed in these messages, I would place myself, Erik Sandewall, and Michael Gelfond in this category. The other, exemplified by the responses of Ray Reiter, Mikhail Soutchanski and Murray Shanahan, emphasises instead the ability of the formalism to produce successful behavior in a robot; let me call this the 'behavioral' approach.

This distinction lies orthogonal to the extent to which people find formality more or less congenial. Both Ray and Erik dislike 'vague claims', and Erik's suggested methodology (Newsletter 23.10) meticulously avoids all contact with psychology, as he emphasises; yet he ultimately appeals to capturing our intuition, rather than any successful application in a robot, to tell us which kinds of model-theoretic structures are more acceptable than others. It also lies orthogonal to the extent to which people see their ultimate goal as that of creating a full-blown artificial intelligence (as both Wolfgang Bibel and Mikhail Soutchanski seem to, for example, along with our founder, John McCarthy), or might be satisfied with something less ambitious. This distinction in approaches - start with insects and work 'up', or start with human common sense and work 'down' - is also a methodological split within AI in general, and seems to be largely independent of whether one feels oneself to be really working towards a kind of ultimate HAL.

Do people find this distinction seriously incomplete or oversimplifying? (Why?) Or, on the other hand, if they find it useful, on which side of the division would they place themselves? In a nutshell, is the immediate goal of the field to understand and accurately model human intuitions about actions, or is it to help produce artifacts which behave in useful or plausible ways? I think this is worth getting clear, not in order to see which 'side' wins, but to acknowledge that this difference is real and likely to produce divergent pressures on research.