
Electronic Newsletter on
Reasoning about Actions and Change


Issue 97018 Editor: Erik Sandewall 6.11.1997

The ETAI is organized and published under the auspices of the
European Coordinating Committee for Artificial Intelligence (ECCAI).

Today

The debate continues in both panels. Yesterday, in the theory evaluation panel, Pat Hayes proposed a way of structuring the set of positions that have been taken by different contributors. Today, I agree with some of what Pat wrote there, disagree with some, and also correct a misunderstanding in his earlier contributions.

For the ontologies panel, we have contributions by Rob Miller and by one new combatant, Ernie Davis. However, so far, no one has taken up the challenge to amend the table in Ray Reiter's panel position statement.


Debates

NRAC Panel Discussion on Theory Evaluation

Erik Sandewall:

Pat,

I agree with you that it's time to sort out the different perspectives, goals, and methods for reaching the goals that have confronted each other here. You write of two dimensions; in the first one you make the following distinction:

  One view appeals to our human intuitions, one way or another. In this it is reminiscent of linguistics, where the basic data against which a theory is tested are human judgements of grammaticality. We might call this a 'cognitive' approach to theory testing. Talk of 'common sense' is rife in this methodology. Based on the views expressed in these messages, I would place myself, Erik Sandewall, and Michael Gelfond in this category. The other, exemplified by the responses of Ray Reiter, Mikhail Soutchanski and Murray Shanahan, emphasises instead the ability of the formalism to produce successful behavior in a robot; let me call this the 'behavioral' approach.

I agree with this, except that the term `behavioral' is maybe not the best one, and also you put me in the wrong category; more about that later. Anyway, the distinction you make here seems to coincide with the one that David Poole made in his position statement:

  There are two quite different goals people have in building KR systems; --- These are:

  1. A knowledge representation as a modelling language. If you have a domain in your head you can use the KR to represent that domain. ---

  2. A knowledge representation as a repository of facts for commonsense reasoning. Under this scenario, you assume you are given a knowledge base and you are to make as much sense out of it as possible. ---

If you are going to design a robot in a good engineering sense, you are going to need to model both the robot itself and its environment. That's why what you call the `behavioral' approach coincides with the use of KR for modelling physical systems. Since `modelling' can mean many things, I'll further qualify it with the term `design goal'.

As for the other dimension, you propose

  --- the extent to which people find formality more or less congenial. Both Ray and Erik dislike 'vague claims' ---

This distinction I find less informative, since all the work in this area is formal in one way or another. Even the kludgiest of programs exhibits `formality'. However, different researchers do take different stands with respect to how we choose and motivate our theories. One approach is what you described in your first response to the panel (ENAI Newsletter of 22.10):

  Knowledge-hackers try to formalise an intuition using logic A and find it hard to match formal inference against intuition no matter how ingenious they are with their ontologies and axioms; so they turn to logic B, which enables them to hack the examples to fit intuition rather better.

The key word here is examples. In this example-based methodology, proposed logics are treated like hypotheses in a purely empirical paradigm: they are accepted until a counterexample is found; then one has to find another logic that deals correctly at least with that example. Ernie Davis characterized this approach in his book (Representations of Commonsense Knowledge, 1990). See also the discussion of this approach in my book (Features and Fluents, 1994, p. 63).

The example-based methodology has several problems.

The choice of methodology is indeed orthogonal to your first distinction, since the example-based methodology can be used both in the pursuit of theories of common sense, and in the development of intelligent robots by design iteration (try a design, see how it works, revise the design).

The alternative to this is to use a systematic methodology where, instead of searching for the "right" theory of actions and change, we identify a few plausible theories and investigate their properties. For this, we need to use an underlying semantics and a taxonomy of scenario descriptions; we can then proceed to analyse the range of applicability of proposed theories (entailment methods).
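Schematically, and purely as an illustration (all the names below are invented for this sketch, not pieces of any existing system), such an assessment compares the model set selected by an entailment method with the intended model set given by the underlying semantics:

    # Schematic sketch: assess an entailment method against an
    # underlying semantics over a taxonomy of scenario descriptions.
    # 'intended_models' and 'selected_models' are assumed to map a
    # scenario description to a set of models.

    def assess(scenarios, intended_models, selected_models):
        """For each scenario, the method is sound if it selects only
        intended models, and complete if it selects all of them."""
        report = {}
        for sc in scenarios:
            intended = intended_models(sc)
            selected = selected_models(sc)
            report[sc] = {'sound': selected <= intended,
                          'complete': intended <= selected}
        return report

The range of applicability of a method is then simply the class of scenario descriptions for which it is both sound and complete with respect to the intended models.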

Your answer to this was (31.10):

  Yes, but to what end? The things you characterize as `ill-defined' are the very subject-matter which defines our field. There is no objective account of `action', `state', etc. to be found in physics, or indeed in any other science; intuitions about these things are the only ultimate test we have for the correctness or appropriateness of our formalisms. ---

This would be true if the `cognitive' (in your terms) goal were the only one. From the point of view of modelling and design, on the other hand, these are perfectly valid concepts. The concept of state is used extensively in control engineering (yes, control theory does deal with discrete states, not only with differential equations!), and I am sure our colleagues in that area would be most surprised to hear that our intuitions are "the only ultimate test we have" for the correctness or appropriateness of the formalisms that they share with us.

Now, when you placed me in the cognitive category, you got me wrong. As I wrote in my position statement for this panel, my heart is with the use of knowledge representations as modelling languages. The present major project in our group is concerned with intelligent UAVs (unmanned aircraft), and in this enterprise we need a lot of modelling for design purposes; we currently have no plans to pursue the `cognitive' goal.

However, just as the example-driven methodology can serve both the cognitive goal and the design goal, I do believe that the systematic methodology can also be relevant as one part of a strategy to achieve the `cognitive' goal. More precisely, for the reasons that both you and I have expressed, it is not easy to find any credible methodology for research on understanding the principles of common sense, and in fact I did not see any concrete proposal for such a methodology in your contributions. Still, to the extent that people continue to pursue that goal, my suggestion was to divide the problem into two parts: one where our discipline can say something substantial, and one which is clearly in the domain of the psychologists.

Therefore, the contradiction that you believed you had seen when writing

  ... and Erik's suggested methodology (Newsletter 23.10) meticulously avoids all contact with psychology, as he emphasises; yet he ultimately appeals to capturing our intuition, rather than any successful application in a robot, to tell us which kinds of model-theoretic structures are more acceptable than others.

is not a real one; it arises only because of your perception that

  ... this distinction in approaches - start with insects and work 'up', or start with human common sense and work 'down' - is also a methodological split within AI in general, and seems to be largely independent of whether one feels oneself to be really working towards a kind of ultimate HAL.

a perception which I also do not share. After all, the behavioral/commonsense view and the modelling/design view represent goals, not methodologies, and both choices of methodology (the example-based and the systematic one) can be applied towards both goals.

NRAC Panel Discussion on Ontologies for Actions and Change

Rob Miller:

Hector,

I'd like to express agreement with your first point, that KR is about modelling. But I'd like to take issue with a couple of your other points, (3) and (5). In point (3) you said:

  3. The remaining problem, that we can call the semantic problem, involves things like the frame problem, causality, etc.

  To a large extent, I think the most basic of these problems have also been solved:

  Basically, thanks to Michael and Vladimir, Erik, Ray, and others, we know that a rule like:

  if A, then B

  where A is a formula that refers to a time point i or a situation s, and B is a literal that refers to the next time point or situation, is just a constraint on the possible transitions from the states at i or s to the following states.

  Or, put another way, temporal rules are nothing but a convenient way of specifying a dynamic system (or transition function).

  ...

My problem with this is that, in general, dynamical systems in the everyday world can't be realistically modelled as state transition systems, because they involve things like continuous change, actions or events with duration, partially overlapping events, interruptible events, etc. That's why other communities involved in modelling dynamical systems (e.g. physicists, engineers, the qualitative reasoning community) choose to model time as the real numbers. In this case, there is no "next time point", so it's difficult to read "if A, then B" as a constraint in the way you suggest. The analogy between everyday dynamical systems and state transition systems/database updates only works for a relatively small class of carefully picked domains.
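To make the contrast concrete, here is a minimal sketch of the discrete transition reading (the fluent names and the encoding are invented purely for illustration):

    from itertools import product

    def rule_allows(state, next_state, precondition, effect):
        # "if A at t, then B at t+1" permits a transition s -> s'
        # unless A holds in s and B fails in s'.
        return (not precondition(state)) or effect(next_state)

    # Invented rule: if 'switch_on' holds now, 'light_on' holds next.
    precondition = lambda s: 'switch_on' in s
    effect = lambda s: 'light_on' in s

    fluents = ['switch_on', 'light_on']
    states = [frozenset(f for f, on in zip(fluents, bits) if on)
              for bits in product([False, True], repeat=len(fluents))]

    # The rule carves the legal transitions out of all state pairs;
    # the reading presupposes that a discrete successor state exists.
    legal = [(s, t) for s in states for t in states
             if rule_allows(s, t, precondition, effect)]
    print(len(states), "states,", len(legal), "legal transitions")

With time modelled as the real numbers there is no successor state for such a rule to constrain, which is exactly the difficulty.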

Your point (5) was:

  5. It's not difficult to change the basic solutions to accommodate additional features (e.g., non-deterministic transition functions, unlikely initial conditions, concurrent actions, etc.) in a principled way.

Well again, it seems to me that if this is true, it is simply because researchers tend to pick `additional features' to work on which fit conveniently into the state transition view of the world, as opposed to picking from the rather large collection of issues that won't.

Ernie Davis:

Ray Reiter writes in the ENAI Newsletter of 23.10:

  qualitative physics and planning have no difficulty with the FP because, without exception, they adopt the STRIPS sleeping dogs strategy. Which is to say, they assume they have complete information about world states.

I don't think that this is quite right in the case of qualitative physics. My KR '92 article "Axiomatizing Qualitative Physics" presents a theory which, being in first-order logic, is perfectly able to characterize inferences from partial information, but does not require any special frame axioms for the continuous parameters. The reason is that the behavior of a continuous parameter is governed by a qualitative differential equation of the form ``The derivative of P is the sum of the influences on P''. P remains absolutely constant if the sum of the influences is zero. P retains the same qualitative value into some next mode of the system if it is consistent that some other parameter should change its value before P does. In any case, the behavior of P in staying the same is governed by the same law that governs its behavior in changing. No special law is needed to cover the cases where P stays the same. (For discrete parameters, I did need a frame axiom.)
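Purely as an illustration of this point (the function and the tank example below are invented, not taken from the article), a single law determines the trend of P whether it changes or persists:

    # One qualitative law covers change and persistence alike: P is
    # constant exactly when its influences cancel, so no separate
    # frame axiom is needed for a continuous parameter.

    def qualitative_trend(influences):
        """Sign of dP/dt, given the signed influences acting on P."""
        total = sum(influences)
        if total > 0:
            return '+'   # P is increasing
        if total < 0:
            return '-'   # P is decreasing
        return '0'       # P is constant: persistence falls out of the law

    # Example: inflow +2 and outflow -2 leave a tank level constant,
    # with no axiom saying the level `stays the same by default'.
    assert qualitative_trend([+2, -2]) == '0'
    assert qualitative_trend([+3, -2]) == '+'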

More generally, for those, like Pat and me, whose primary interest is physical reasoning, a temporal ontology whose central category is ``a finite sequence of actions'' seems awkward at best. Physical reasoning is chiefly concerned with continuous, asynchronous, external change, and it is much easier to deal with this by making the continuous time-line primary and adding actions on top of that, rather than vice versa.
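To illustrate the contrast (the types and names below are invented for the sketch, not anything from the literature):

    from dataclasses import dataclass
    from typing import Callable, List

    # Ontology 1: time is implicit in the order of a finite action sequence.
    ActionSequence = List[str]           # e.g. ['pickup', 'move', 'putdown']

    # Ontology 2: the continuous time-line is primary; fluents are
    # functions of real-valued time, and actions are intervals on it.
    Fluent = Callable[[float], float]    # e.g. tank level as a function of time

    @dataclass
    class Action:
        name: str
        start: float                     # real-valued start time
        end: float                       # real-valued end time

    # Partially overlapping, asynchronous events are unproblematic here:
    fill = Action('fill_tank', 0.0, 5.0)
    heat = Action('heat_tank', 2.0, 8.0)  # overlaps 'fill' on [2.0, 5.0]
    level: Fluent = lambda t: min(t, 5.0) # continuous change, no "next point"

In the second ontology, duration, overlap, and external change come for free, whereas a sequence-of-actions ontology must encode each of them specially.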

-- Ernie Davis