Ontologies for actions and change
I would like to better understand the following comment by Hector Geffner:
|
I believe they (models of actions in AI) are all monotonic in the
set of observations. In other words, if they predict F at time i,
nothing that they observe is going to affect that prediction.
|
If I understood Hector correctly, the following may be a
counterexample. Consider the following domain description D0 in the
language L from [j-jlp-31-201].
The language of D0 contains names for two actions, A and B, and two
fluents, F and P. D0 consists of two causal laws and two statements
describing the initial situation S0:
|
A causes F if P.
B causes neg(P).
true_at(P, S0).
true_at(neg(F), S0).
|
The first statement says that F will be true after execution of A in
any situation in which P is true. The third one means that P is true
in the initial situation S0; neg(P) stands for the negation of P.
(Domain descriptions in L allow two other types of statements:
occurs(A, S), meaning that action A occurred in situation S, and
S1 < S2, meaning that situation S1 precedes situation S2. We use them
later.)
Here we are interested in queries of the type
|
holds(F, [A1,...,An])
|
which can be read as ``If the sequence A1,...,An were executed starting
in the current situation, then fluent F would be true afterwards''.
This seems to correspond to Hector's prediction of F at time i.
(We can also ask about occurrences of actions, truth of fluents in
actual situations, etc.)
The entailment relation of L between domain descriptions and queries
formalizes the following informal assumptions:
- changes in the values of fluents can only be caused by execution of
actions;
- there are no actions except those from the language of the
domain description;
- there are no effects of actions except those specified by the
causal laws of the domain;
- the reasoner assumes that if there is no reason to believe that
action A occurred in situation S, then it did not (in other words,
the reasoner normally observes all occurrences of actions).
As expected, we have that
|
D0 entails holds(F, [A]).
|
Now assume that the agent observed (or performed) B. This will be
recorded in his description of the domain. The new domain description
D1 is D0 plus the statement
|
occurs(B, S0).
|
Now we have that D1 entails neg(holds(F, [A])).
It seems to me that the observation changed the prediction.
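For concreteness, here is a small executable sketch of why the answer
flips. This is my own toy illustration in Python, not the semantics of
L from [j-jlp-31-201]: causal laws are (action, effect, preconditions)
triples, the recorded occurrences are applied first to reach the
current situation, and the hypothetical plan of a holds query is then
executed from there.
|
# A toy model of D0/D1 (an illustration, not the L semantics).
CAUSAL_LAWS = [
    ("A", ("F", True),  [("P", True)]),   # A causes F if P.
    ("B", ("P", False), []),              # B causes neg(P).
]

def apply_action(state, action):
    """Inertia: every fluent keeps its value unless a law fires."""
    new_state = dict(state)
    for act, (fluent, value), preconds in CAUSAL_LAWS:
        if act == action and all(state[f] == v for f, v in preconds):
            new_state[fluent] = value
    return new_state

def holds(fluent, plan, initial, occurrences=()):
    """Evaluate the plan from the current situation, i.e. after
    exactly the recorded occurrences (and nothing else) happened."""
    state = dict(initial)
    for action in occurrences:    # the actual, observed history
        state = apply_action(state, action)
    for action in plan:           # the hypothetical plan
        state = apply_action(state, action)
    return state[fluent]

S0 = {"P": True, "F": False}      # true_at(P, S0); true_at(neg(F), S0)

print(holds("F", ["A"], S0))          # D0: True,  holds(F, [A])
print(holds("F", ["A"], S0, ["B"]))   # D1: False, neg(holds(F, [A]))
|
Note that nothing in this sketch retracts an old conclusion as such;
the added occurs statement moves the current situation past B, and the
query is evaluated from there.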
The second example shows how observations can change beliefs
about the past. Consider a domain description D3
|
A causes neg(F).
F at S0.
|
This description entails neg(occurs(A, S0)). Now the reasoner observed
that in some situation S1, F is false. This is recorded by adding to D3
the statements
|
neg(F) at S1.
S0 < S1.
|
The new description entails occurs(A, S0). Again, observations
changed the belief (this time about the past).
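The reasoning about the past admits the same kind of sketch. Again this
is my own toy reading of the occurrence-minimization assumption above,
not the paper's semantics: candidate histories (did A occur at S0 or
not?) are filtered for consistency with the recorded observations, only
the occurrence-minimal ones are kept, and an atom is entailed if it
holds in every survivor.
|
# A toy reading of occurrence-minimization in D3: one actual step
# from S0 to S1, with A the only action; A causes neg(F), F at S0.
def trajectory(a_occurred):
    """States at S0 and S1 for a candidate history."""
    s0 = {"F": True}                          # F at S0.
    s1 = dict(s0, F=False) if a_occurred else dict(s0)
    return [s0, s1]

def minimal_histories(observations):
    """observations: (situation index, fluent, value) triples."""
    def consistent(a_occurred):
        states = trajectory(a_occurred)
        return all(states[s][f] == v for s, f, v in observations)
    survivors = [a for a in (False, True) if consistent(a)]
    fewest = min(int(a) for a in survivors)   # fewest occurrences win
    return [a for a in survivors if int(a) == fewest]

# D3 alone: the empty history survives, hence neg(occurs(A, S0)).
print(minimal_histories([(0, "F", True)]))                   # [False]
# D3 + { neg(F) at S1, S0 < S1 }: only occurs(A, S0) explains it.
print(minimal_histories([(0, "F", True), (1, "F", False)]))  # [True]
|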
Hector, is this really a counterexample, or did you mean something else?
References:
j-jlp-31-201 | Chitta Baral, Michael Gelfond, and Alessandro Provetti.
Representing Actions: Laws, Observations and Hypotheses.
Journal of Logic Programming, vol. 31 (1997), pp. 201-244. |
Dear Erik,
I noticed in the discussion that you said:
|
From the point of view of diagnostic reasoning these are familiar
problems, but I can't think of any work in mainstream actions and change
that has addressed nonmonotonicity with respect to observations in a
serious way.
|
I have tackled the issue of nonmonotonicity with respect to
observations. Cf. my home page and the AAAI-96, ECAI-96, AIMSA-96,
LPKR97, JANCL97, and AI&MATH98 papers. Using an LP approach, I perform
abduction to explain observations.
The abductive explanations may be: non-inertiality of some fluent with
respect to some action; occurrence of some hitherto unsuspected foreign
concurrent action along with some action of mine; or opting for a
definite initial state of the world, until then given only by a
disjunction of possibilities.
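A generate-and-test caricature of this scheme, in Python rather than
the LP/abduction framework of the cited papers, and with a scenario and
names that are entirely my own hypothetical: F is believed true
initially (perhaps only as one disjunct), my action A is not supposed
to affect F, and yet neg(F) is observed afterwards.
|
# A toy generate-and-test version of the abductive scheme (my own
# hypothetical scenario, not the encoding from the cited papers).
from itertools import product

def predicted_f(initial_f, f_non_inertial, foreign_b_occurred):
    """Value of F after my action A, under the abduced hypotheses."""
    if foreign_b_occurred:   # unsuspected concurrent B causes neg(F)
        return False
    if f_non_inertial:       # F failed to persist across A
        return False
    return initial_f         # otherwise plain inertia

OBSERVED_F = False           # the surprising observation: neg(F)

explanations = [
    dict(initial_f=i, f_non_inertial=n, foreign_b_occurred=b)
    for i, n, b in product([True, False], repeat=3)
    if predicted_f(i, n, b) == OBSERVED_F
]
for e in explanations:
    print(e)
# The single-hypothesis explanations among these are exactly the three
# kinds listed above: neg(F) held initially (settling the disjunction),
# F was non-inertial w.r.t. A, or a foreign concurrent B occurred.
|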
You're right: the techniques my co-author, Renwei Li, and I use were
first developed by me and others in the context of diagnosis using LP!
In fact we haven't yet used them all in actions. For a view of LP
and diagnosis, as well as of representing actions in LP, see our book
[mb-Alferes-96].
Best, Luís
References:
mb-Alferes-96 | José Júlio Alferes and Luís Moniz Pereira.
Reasoning with Logic Programs.
Springer-Verlag, 1996. |