
Electronic Newsletter on
Reasoning about Actions and Change


Issue 98012 Editor: Erik Sandewall 29.1.1998

The ETAI is organized and published under the auspices of the
European Coordinating Committee for Artificial Intelligence (ECCAI).

Today

Today, the ontologies discussion continues, with questions by Michael Gelfond and Luís Pereira.


Debates

Ontologies for actions and change

Michael Gelfond:

I would like to better understand the following comment by Hector Geffner:

  I believe they (models of actions in AI) are all monotonic in the set of observations. In other words, if they predict  F  at time  i , nothing that they observe is going to affect that prediction.

If I understood Hector correctly, the following may be a counter example. Consider the following domain description D0 in the language  L  from [j-jlp-31-201].

The language of D0 contains names for two actions,  A  and  B , and two fluents,  F  and  P . D0 consists of two causal laws and two statements describing the initial situation  S0 :
    A causes F if P.   
    B causes neg(P).   
    true_at(P, S0).
    true_at(neg(F), S0).   
The first statement says that  F  will be true after execution of  A  in any situation in which  P  is true. The third one means that  P  is true in the initial situation  S0 .  neg(P)  stands for the negation of  P .

(Domain descriptions in  L  allow two other types of statements:  occurs(A, S) , meaning that action  A  occurred at situation  S , and  S1 < S2 , meaning that situation  S1  precedes situation  S2 . We use them later.)

Here we are interested in queries of the type
    holds(F, [A1, ..., An])
which can be read as ``If the sequence  A1, ..., An  were executed starting in the current situation, then fluent  F  would be true afterwards''. This seems to correspond to Hector's prediction of  F  at time  i . We can also ask about occurrences of actions, the truth of fluents in actual situations, etc.

The entailment relation on  L  between domain descriptions and queries formalizes the informal assumptions underlying the language (see [j-jlp-31-201] for the precise definitions).

As expected, we have that
    D0 entails holds(F, [A])
Now assume that the agent observed (or performed)  B . This will be recorded in his description of the domain. The new domain description, D1, is D0 plus the statement
    occurs(B, S0).
Now we have that D1 entails  neg(holds(F, [A])) . It seems to me that the observation changed the prediction.
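
To see both entailments concretely, here is a minimal Python sketch of the intended transition semantics: a state assigns truth values to the fluents, causal laws fire when their preconditions hold, and all other fluents persist by inertia. The helper names are illustrative only; they are not part of  L  or of [j-jlp-31-201].

    # Minimal sketch of the transition semantics behind D0 and D1.
    CAUSAL_LAWS = [
        # (action, affected fluent, new value, preconditions)
        ("A", "F", True,  [("P", True)]),   # A causes F if P
        ("B", "P", False, []),              # B causes neg(P)
    ]

    def result(state, action):
        """Successor state: fire the applicable laws; other fluents are inert."""
        new = dict(state)
        for act, fluent, value, precond in CAUSAL_LAWS:
            if act == action and all(state[f] == v for f, v in precond):
                new[fluent] = value
        return new

    def holds(fluent, plan, state):
        """Query holds(F, [A1, ..., An]) from the given current situation."""
        for action in plan:
            state = result(state, action)
        return state[fluent]

    s0 = {"P": True, "F": False}     # true_at(P, S0), true_at(neg(F), S0)
    print(holds("F", ["A"], s0))     # True:  D0 entails holds(F, [A])

    s1 = result(s0, "B")             # D1 = D0 plus occurs(B, S0)
    print(holds("F", ["A"], s1))     # False: D1 entails neg(holds(F, [A]))

Under this complete-information reading,  neg(holds(...))  simply means that the query comes out false; the occurrence statement changed which situation counts as current, and with it the prediction.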

The second example shows how observations can change beliefs about the past. Consider the domain description D3:
    A causes neg(F).   
    F at S0.   
This description entails  neg(occurs(A, S0)) . Now suppose the reasoner observes that in some situation  S1 ,  F  is false. This is recorded by adding to D3 the statements
    S0 < S1   
    neg(F) at S1.   
The new description entails  occurs(A, S0) . Again, observations changed the belief (this time about the past).
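
The entailment about the past can be pictured in the same style: enumerate the candidate histories ( A  occurred at  S0 , or it did not), keep those consistent with the recorded observations, and prefer histories without occurrences. This is only a toy sketch of that reasoning, in Python with illustrative names, not the formal semantics of  L .

    # Toy sketch for D3: occurrences are assumed only when observations
    # force them.
    def f_at_s1(a_occurred):
        """Value of F at S1: 'A causes neg(F)', and F is inertial otherwise."""
        f_at_s0 = True                    # F at S0
        return False if a_occurred else f_at_s0

    def preferred_histories(obs_f_at_s1):
        """Histories consistent with an optional observation of F at S1,
        preferring the occurrence-free history whenever it is consistent."""
        consistent = [occ for occ in (False, True)
                      if obs_f_at_s1 is None or f_at_s1(occ) == obs_f_at_s1]
        return [False] if False in consistent else consistent

    print(preferred_histories(None))    # [False]: D3 entails neg(occurs(A, S0))
    print(preferred_histories(False))   # [True]:  with neg(F) at S1 added,
                                        #          occurs(A, S0) is entailed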

Hector, is this really a counterexample, or did you mean something else?


References:

[j-jlp-31-201] Chitta Baral, Michael Gelfond, and Alessandro Provetti.
Representing Actions: Laws, Observations and Hypotheses.
Journal of Logic Programming, vol. 31, no. 1-3 (1997), pp. 201-244.

Luís Moniz Pereira:

Dear Erik,

I noticed in the discussion that you said:
  From the point of view of diagnostic reasoning these are familiar problems, but I can't think of any work in mainstream actions and change that has addressed nonmonotonicity with respect to observations in a serious way.

I have tackled the issue of nonmonotonicity with respect to observations. Cf. my home page and the AAAI-96, ECAI-96, AIMSA-96, LPKR97, JANCL97, and AI&MATH98 papers. Using an LP approach, I perform abduction to explain observations. The abductive explanations may be: non-inertiality of some fluent with respect to some action; the occurrence of some hitherto unsuspected foreign concurrent action along with some action of mine; or opting for a definite initial state of the world, until then given only by a disjunction of possibilities.
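
As a rough illustration of the flavour of this approach (plain Python rather than logic programming, and with entirely hypothetical names): suppose we predicted  F  after action  A , because  F  held initially and  A  was thought not to affect it, and we then observe  neg(F) . Each abducible below, assumed on its own, would explain the observation.

    # Hedged sketch: single-hypothesis abduction over an unexpected neg(F).
    # All names are hypothetical; the cited papers work in logic programming.
    ABDUCIBLES = {
        "non_inertial":   "F is not inertial with respect to A",
        "foreign_action": "an unsuspected concurrent action causing neg(F) occurred",
        "initial_negF":   "the initial state was the neg(F) disjunct all along",
    }

    def predicts_neg_f(hypothesis):
        """Does the domain plus this single hypothesis yield neg(F) after A?"""
        f_initially = hypothesis != "initial_negF"  # F held, unless abduced away
        f_after_a = f_initially                     # A itself does not affect F
        if hypothesis in ("non_inertial", "foreign_action"):
            f_after_a = False                       # lost inertia / foreign effect
        return not f_after_a

    for name, description in ABDUCIBLES.items():
        if predicts_neg_f(name):
            print("explains neg(F):", description)

In the LP setting, such hypotheses play the role of abducibles, and the explanations of an observation are the corresponding abductive solutions.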

You're right: the techniques my co-author, Renwei Li, and I use were first developed by me and others in the context of diagnosis using LP! In fact, we have not yet exhausted them in the actions setting. For a view of LP and diagnosis, as well as of representing actions in LP, see our book [mb-Alferes-96].

Best, Luís

References:

[mb-Alferes-96] José Júlio Alferes and Luís Moniz Pereira.
Reasoning with Logic Programs.
Springer-Verlag, 1996.