********************************************************************
  ELECTRONIC NEWSLETTER ON REASONING ABOUT ACTIONS AND CHANGE

Issue 98012            Editor: Erik Sandewall            29.1.1998

Back issues available at http://www.ida.liu.se/ext/etai/actions/njl/
********************************************************************

********* TODAY *********

Today, the ontologies discussion continues, with questions by Michael
Gelfond and Luis Pereira.

********* DEBATES *********

--- ONTOLOGIES FOR ACTIONS AND CHANGE ---

--------------------------------------------------------
| FROM: Michael Gelfond
--------------------------------------------------------

I would like to better understand the following comment by Hector
Geffner:

> I believe they (models of actions in AI) are all *monotonic* in the
> set of observations. In other words, if they predict F at time i,
> nothing that they observe is going to affect that prediction.

If I understood Hector correctly, the following may be a
counterexample. Consider the following domain description D0 in the
language L from [1]. The language of D0 contains names for two
actions, A and B, and two fluents, F and P. D0 consists of two causal
laws and two statements describing the initial situation S0:

   A causes F if P.
   B causes neg(P).
   true_at(P,S0).
   true_at(neg(F),S0).

The first statement says that F will be true after execution of A in
any situation in which P is true. The third one means that P is true
in the initial situation S0; neg(P) stands for the negation of P.
(Domain descriptions in L allow two other types of statements:
occurs(A,S), meaning that action A occurred in situation S, and
S1 < S2. We use them later.)

Here we are interested in queries of the type

   holds(F,[A1,...,An])

which can be read as "if the sequence A1,...,An were executed starting
in the current situation, then fluent F would be true afterwards".
This seems to correspond to Hector's prediction of F at time i. (We
can also ask about occurrences of actions, the truth of fluents in
actual situations, etc.)

The entailment relation on L between domain descriptions and queries
formalizes the following informal assumptions:

(a) changes in the values of fluents can only be caused by the
    execution of actions;

(b) there are no actions except those from the language of the domain
    description;

(c) there are no effects of actions except those specified by the
    causal laws of the domain;

(d) the reasoner assumes that if there is no reason to believe that
    action A occurred in situation S then it did not (in other words,
    he normally observes all occurrences of actions).

As expected, we have that

   D0 entails holds(F,[A]).

Now assume that the agent observed (or performed) B. This will be
recorded in his description of the domain. The new domain description
D1 is D0 plus the statement

   occurs(B,S0).

Now we have that

   D1 entails neg(holds(F,[A])).

It seems to me that the observation changed the prediction. (A small
executable sketch of this example appears after the reference below.)

The second example shows how observations can change beliefs about
the past. Consider a domain description D3:

   A causes neg(F).
   true_at(F,S0).

This description entails neg(occurs(A,S0)). Now the reasoner observes
that in some situation S1, F is false. This is recorded by adding to
D3 the statements

   S0 < S1.
   true_at(neg(F),S1).

The new description entails occurs(A,S0). Again, observations changed
the belief (this time about the past).

Hector, is this really a counterexample, or did you mean something
else?

Reference.

[1] C. Baral, M. Gelfond, A. Provetti, "Representing Actions: Laws,
    Observations and Hypotheses", Journal of Logic Programming,
    vol. 31, nos. 1-3, pp. 201-245, 1997.
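The prediction reversal in the D0/D1 example can be reproduced in
ordinary code. The following Python sketch is only an illustration of
assumptions (a)-(d) above, not an implementation of the entailment
relation of L as defined in [1]; the representation and the names
CAUSAL_LAWS, apply_action, holds, and so on are invented for the
sketch.

   # Causal laws: action -> list of (effect literal, preconditions).
   # A literal is a (fluent, truth value) pair.
   CAUSAL_LAWS = {
       "A": [(("F", True), [("P", True)])],   # A causes F if P
       "B": [(("P", False), [])],             # B causes neg(P)
   }

   def apply_action(state, action):
       """Apply an action to a state (a dict fluent -> bool).
       Effects whose preconditions hold in `state` fire; every other
       fluent keeps its value by inertia (assumptions (a) and (c))."""
       new_state = dict(state)            # inertia: copy, then override
       for (fluent, value), precond in CAUSAL_LAWS.get(action, []):
           if all(state.get(f) == v for f, v in precond):
               new_state[fluent] = value
       return new_state

   def holds(fluent, plan, initial, occurrences):
       """Would `fluent` be true after executing `plan` in the current
       situation?  The current situation results from the initial one
       by the *recorded* occurrences only (assumption (d))."""
       state = dict(initial)
       for action in occurrences:         # what is known to have happened
           state = apply_action(state, action)
       for action in plan:                # hypothetical future execution
           state = apply_action(state, action)
       return state[fluent]

   initial = {"P": True, "F": False}  # true_at(P,S0), true_at(neg(F),S0)

   print(holds("F", ["A"], initial, occurrences=[]))     # D0: True
   print(holds("F", ["A"], initial, occurrences=["B"]))  # D1: False

Adding the single observation occurs(B,S0) flips the answer from True
to False: once B is recorded, the precondition P of the causal law
for A no longer holds in the current situation, so the prediction
holds(F,[A]) is withdrawn, exactly as in the entailments above.
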
--------------------------------------------------------
| FROM: Luis Moniz Pereira
--------------------------------------------------------

Dear Erik,

I noticed in the discussion that you said:

> From the point of view of diagnostic reasoning these are familiar
> problems, but I can't think of any work in mainstream actions and change
> that has addressed nonmonotonicity with respect to observations in a
> serious way.

I have tackled the issue of nonmonotonicity with respect to
observations. Cf. my home page and the AAAI-96, ECAI-96, AIMSA-96,
LPKR97, JANCL97, and AI&MATH98 papers. Using an LP approach, I
perform abduction to explain observations. The abductive explanations
may be: the non-inertiality of some fluent with respect to some
action; the occurrence of some erstwhile unsuspected foreign
concurrent action along with some action of mine; or opting for a
definite initial state of the world, up till then given only by a
disjunction of possibilities.

You're right, the techniques my co-author, Renwei Li, and I use were
first developed by me and others in the context of diagnosis using
LP! In fact, we haven't yet used them all in actions.

For a view of LP and diagnosis, as well as of representing actions in
LP, see our book:

   José J. Alferes, Luís M. Pereira,
   Reasoning with Logic Programming,
   LNAI vol. 1111, Springer, 1996.

Best,

Luís

********************************************************************
This Newsletter is issued whenever there is new news, and is sent by
automatic E-mail and without charge to a list of subscribers. To
obtain or change a subscription, please send mail to the editor,
erisa@ida.liu.se. Contributions are welcomed to the same address.
Instructions for contributors and other additional information are
found at: http://www.ida.liu.se/ext/etai/actions/njl/
********************************************************************