Antonis Kakas and Rob Miller

Reasoning about Actions, Narratives and Ramification.

 


Overview of interactions

N:o  Question                      Answer(s)            Continued discussion
1    24.10  Michael Thielscher     30.10  The authors
2    28.10  Tom Costello           30.10  The authors    10.11  Alessandro Provetti
                                                         11.11  Tom Costello
                                                         12.11  The authors
                                                         13.11  Tom Costello
                                                         17.11  The authors
                                                         28.11  The authors
3    30.10  Tom Costello           30.10  The authors
4    3.11   Michael Gelfond        5.11   The authors
5    4.3    François Lévy          31.3   The authors
6    23.4   Anonymous Reviewer 2   3.5    The authors
7    23.4   Anonymous Reviewer 3   3.5    The authors
8    23.4   Anonymous Reviewer 3   3.5    The authors
9    23.4   Anonymous Reviewer 3   3.5    The authors
10   23.4   Anonymous Reviewer 3   3.5    The authors
11   23.4   Anonymous Reviewer 3   3.5    The authors

Additional reviewing information: Minor details noticed by reviewers.

Q1. Michael Thielscher (24.10):

Antonis and Rob,

I have a question concerning the notion of initiation and termination points in case ramifications are involved. If my understanding of your Definition 14 is correct, then there seems to be a problem with undesired mutual justification. Take, as an example, the two r-propositions
   dead whenever ¬ alive  
   ¬ alive whenever dead  
Suppose there are no other propositions, in particular no events, then
   H(0) = {alive, ¬dead}  
   H(1) = {¬alive, dead}  
seems to satisfy all conditions for being a model. The two uncaused changes justify each other: 0 is an initiation point for  dead  since 0 is a termination point for  alive , and vice versa.

Finding some least fixpoint, which you mention after the definition, seems therefore vital for the correctness of the definition itself. However, the corresponding operator must not have an interpretation as argument. So I would think that instead of defining the notions of "initiation and termination points for  F  in  H  relative to  D " one should define "initiation and termination points for  F  relative to  D ," that is, without reference to some  H .

A1. Antonis Kakas and Rob Miller (30.10):

Hello Michael, Thanks for your comments about Definition 14 of initiation and termination points. You are of course right to say that the definition requires the least fixed point construction, so perhaps we should have made this explicit within the definition itself. We omitted this from the paper in an attempt not to overload the definition with too much formalism, but perhaps its omission is causing more rather than less confusion. (Hudson Turner emailed us a comment similar to yours a little while ago.)

So yes, the initiation and termination points are defined by a least fixed point construction (along the lines we say after the definition). The version of the definition that makes this explicit is unfortunately a little too full of mathematical notation to write here in plain text or html format; please refer to the latex/postscript version of this message at [j-enrac-1-66].

You'll see that the operator corresponding to the least fixed point does indeed have an interpretation as argument. But there's no problem with this, because the interpretation is already fixed at the beginning of the definition. It's necessary to include this argument in order to deal with preconditions of c-propositions. For example, consider the following domain (with time as the naturals):
   Take initiates Picture when {Loaded}  
   Take happens-at 2  
   ¬ Picture holds-at 1  
We want 2 models, one in which  Loaded  is true at 1, and one in which  Loaded  is false at 1. In the former model, 2 should be an initiation point for  Picture , but in the latter it shouldn't.
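
By way of illustration, the following is a much-simplified sketch of the least fixed point construction (in Python, with illustrative data structures and names of our own, for discrete linear time, and ignoring the refinements of the full definition concerning dense time and conditions that must continue to hold). Applied to Michael's example above it computes no initiation or termination points at all, so the change between times 0 and 1 is left uncaused and the anomalous interpretation is correctly excluded as a model:

    # A much-simplified sketch of the least-fixed-point reading of Definition 14.
    # The data structures and names are illustrative assumptions, not the paper's own.
    def change_points(H, happens, c_props, r_props, times):
        """Return a set of (literal, time) pairs: a positive literal marks an
        initiation point for its fluent, a negative literal a termination point.
        A literal is a pair (fluent, polarity); H maps (fluent, time) to a bool."""
        def holds(lit, t):
            fluent, polarity = lit
            return H[(fluent, t)] == polarity

        points = set()
        while True:
            new = set(points)
            # Base step: c-propositions fired by actual action occurrences whose
            # preconditions hold in the (fixed) interpretation H.
            for action, effect_lit, conds in c_props:
                for t in times:
                    if (action, t) in happens and all(holds(c, t) for c in conds):
                        new.add((effect_lit, t))
            # Ramification step: "L whenever C" contributes a point at t only if
            # some member of C is itself established at t (already a point) and
            # every member of C either holds at t or is established at t.
            for lit, conds in r_props:
                for t in times:
                    if any((c, t) in new for c in conds) and \
                       all(holds(c, t) or (c, t) in new for c in conds):
                        new.add((lit, t))
            if new == points:
                return points          # least fixed point reached
            points = new

    # Michael's example: no action occurrences, just the two r-propositions.
    H = {('alive', 0): True, ('dead', 0): False,
         ('alive', 1): False, ('dead', 1): True}
    r_props = [(('dead', True), {('alive', False)}),
               (('alive', False), {('dead', True)})]
    print(change_points(H, set(), [], r_props, [0, 1]))
    # Prints set(): no point initiates dead or terminates alive, so the change
    # from time 0 to time 1 is uncaused and H is not a model.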


Q2. Tom Costello (28.10):

In your paper you have three types of proposition, h, t and c-propositions. In your definition of an interpretation, you give enough information to establish truth conditions for t-propositions. The following is the obvious truth condition for t-propositions.

A t-proposition,  F holds-at T , is true in an interpretation  E , if  E(F, T) =  true .

However, you do not seem to have enough information to give truth conditions for h or c-propositions.

Consider the domain language with one time-point 0 and one fluent  F  and one action  A . Then the domain description,
   A happens-at 0  
   F holds-at 0  
has one model,  (F, 0) |->  true  

The domain description

   F holds-at 0  
has the same model. However, these two descriptions differ on the h-propositions. Thus from an interpretation you cannot determine the set of true h-propositions.

For a logic to model distinct sets of propositions by the same structure is problematic for many reasons.

As a general point,  A  type languages are not sufficiently formal in defining when a proposition is true in a model. This has led to errors like the above in  A -type languages. Some papers have used a function from sequences of actions to sets of fluents, rather than a labeled transition function/relation from sets of fluents to sets of fluents, to give semantics to action languages. The former collapses domain descriptions that differ on causal propositions, while the latter does not. Giunchiglia, Kartha and Lifschitz are an example of the use of the latter. I know of no paper that explicitly gives truth conditions for all propositions in an  A -type language.

A2. Antonis Kakas and Rob Miller (30.10):

Hello Tom, Thanks for your comments and observations.

Regarding your specific comments about the Language  E , you're right - from a formal point of view there is no concept of truth or falsity as regards h- and c-propositions. So, from the definitions, it doesn't even make sense to talk about "the set of true h-propositions". For your example, the semantics simply "disregards" the h-proposition  A happens-at 0 , because the occurrence of  A  at 0 that this represents at the syntactic level has no effects.

There's no problem with this from a formal point of view, but it does mean that  E , and languages like it, are very restrictive. That's why they're perhaps best regarded as stepping-stones towards formalisations or axiomatisations written in fuller, general-purpose logics. (However, and as we hope we and others have illustrated, they do have a use in discussing and illustrating approaches to particular issues - in our case, to ramifications - in a relatively intuitive and uncluttered way, and also in proving properties of classes of logic programs.) This is where work such as that of Kartha (translating  A  into various versions of the Situation Calculus) is valuable. In the case of the Language  A , Kartha's translations bring out the fact that there is an implicit completion of causal information ( A 's e-propositions) in  A 's semantics. Much the same thing is true of h- and c-propositions in  E . (This is why adding truth functions for h- and c-propositions in  E  models would be trivial but rather superfluous).

We discussed this in more detail in our first paper on  E  (in the Journal of Logic Programming). As we've said in both papers, it's our intention to explore these issues further by developing translations analogous to Kartha's for  E . You might also be interested to look at the papers by Kristof Van Belleghem, Marc Denecker, and Daniele Theseider Dupré, who have developed a language  ER  similar in many respects to  E , but more expressive and with a correspondingly more complex semantics (which includes truth conditions for the equivalent of h- and c-propositions). (We've described this briefly in Section 5 of our paper.)

As regards your general point about " A  type languages", it would be interesting to get some comments from " A  type people" about this. Perhaps "not sufficiently expressive" is a better phrase than "not sufficiently formal". (On this general theme, Mikhail Soutchanski made another good point in the recent ENAI when he pointed out that it's much easier to combine theories of action written in classical logic with other commonsense theories, e.g. of space or shape, than if specialised logics are used.)

C2-1. Alessandro Provetti (10.11):

Dear Antonis and Rob,

I'd like to comment on Tom's example about the role of h-statements in  A -languages. In the language  L  of Baral et al. the theories:
    F at S0   
and
    F at S0   
    A occurs-at S0   
have different models.

Assuming that there are fluents other than  F , the former has models which differ from one another in the interpretation of the initial state (except -of course- for  F ) while agreeing on the fact that nothing happened at all.

The latter yields the same models as far as the initial state is concerned, but all of them sanction that  A  has happened. As a result, the latter theory implies the formula  A occurs-at S0 .

It appears to me that the equivalence of the two theories above under  E -semantics does not mean in general that  A -style semantics cannot account for h-propositions.

You may want to comment on this in the paper or -possibly- proceed to work on the entailment associated to  E .

Hope this helps. Ciao!

Alessandro Provetti

C2-2. Tom Costello (11.11):

Dear Antonis and Rob, (and Alessandro)

While the languages  L0  and  L1  of Baral et al. give truth conditions for propositions stating   happens  ,   precedes   and   holds  , they do not give truth conditions for   causes   propositions. Like Baral and Gelfond, and Kartha and Lifschitz, their models are functions from sequences of actions to (sets of) states. Because of this they conflate domain descriptions that are not conflated by models that are functions from states to sets of states (  Res   etc.).

Consider the following domain description, stated in  A .
    A causes F   
     initially F   
or in  L0 
    A causes F   
    F at S0   
These have the same functions from sequences of actions to sets of states as,
    A causes F   
    A causes G if not F   
     initially F   
or in  L0 
    A causes F   
    A causes G if not F   
    F at S0   
However, if we consider functions from states to sets of states, then these have different models. Thus domain descriptions that were distinguished by  A  are conflated by later languages.

These later approaches conflate models that intuitively differ.

I agree with Alessandro that  E  type languages can give semantics to h-propositions. My complaint is that current approaches fail to give semantics to all their propositions. As  A  and  E  type languages do not have a proof theory, save by being translated into other approaches, it seems strange that they do not even have a model theory for all their propositions. In Antonis and Rob's case they lack truth conditions for some of their propositions, and worse, it seems that it is not even possible to define truth conditions. The same problem arises for causal statements in Baral et al., Baral and Gelfond, and Kartha and Lifschitz. Other models of  A  type languages do not have this problem of collapsing domain descriptions that  A  considered distinct for causal statements; see, for instance, E. Giunchiglia, N. Kartha and V. Lifschitz, "Representing action: indeterminacy and ramifications". Therefore, I argue that action language models should define truth conditions for all their propositions, and further, should ensure that intuitively different models are distinct.

Tom

C2-3. Antonis Kakas and Rob Miller (12.11):

Hi Tom,

You wrote:
  In Antonis and Rob's case they lack truth conditions for some of their propositions, and worse, it seems that it is not even possible to define truth conditions.

As we said in our original answer to your question, it's trivial to extend the semantics of the Language  E  to include truth conditions for h- and c-propositions, but superfluous (to the main themes of the present and previous papers). However, for the record, you can do this by defining an interpretation as a tuple  <H, J, K> .  H  is as before,  J  is a function
     Actions  ×  Time-points  --->  {true, false}
and  K  is a function
     Actions  ×  Fluent-literals  ×  2^Fluent-literals  --->  {true, false}

The definition of a model is exactly as before (Definition 9), with additional conditions tying  J  and  K  directly to the h- and c-propositions in  D .

But this doesn't really add much insight; you just get that  D  entails a given h- or c-proposition iff the proposition is in  D . Of course, for other extensions of  E  it might become worthwhile complicating the structure of an interpretation in this way. (Similarly for r-propositions.) Again, you might find Van Belleghem, Denecker and Dupré interesting in this respect.

Rob and Tony

C2-4. Tom Costello (13.11):

Dear Rob and Tony,

Your definition of truth for c-propositions seems very unintuitive to me. I would think that if  A  terminates  F  if  G , then  A  terminates  F  if  G ,  H .

Your definition does not give this result. The reason I ask for truth conditions for your propositions is that I cannot understand what the intuitive consequences of a set of propositions should be, unless I understand what the propositions say. If the propositions are expressed in a standard logic, then I understand them using the definition of truth in a model. However, your propositions are not in a standard logic, and therefore, to understand what
    A terminates F if G   
means, I have to know when it is true.

Your paper introduces a new type of proposition,
    F whenever G1, ..., Gn   
There are some obvious choices for truth conditions for this type of proposition. In particular, it can be understood that this is obeyed at every time-point, or that this is a property of every "possible" state, not every "actual" state. Without knowing which notion this proposition is trying to express, I cannot understand what the proposition says.

I do not think truth conditions are a side point to the main theme of your paper. As you say, action languages are supposed to be "understandable and intuitive". Languages cannot be understood without semantics.

Yours,

Tom

C2-5. Antonis Kakas and Rob Miller (17.11):

Tom,

We think that perhaps we're in danger of going round in circles in this discussion. As we've said in other answers, we've much sympathy for your stance on the benefits of general purpose logics (and in particular classical logic), and that's why we've stated on numerous occasions that languages such as  E  are perhaps best regarded as intermediate stages in the development of formalisms written in such logics. However, we do feel that they have a use in initially discussing and illustrating approaches to particular issues - in our case, to ramifications - in a relatively intuitive and uncluttered way. But we do recognise that what is intuitive for one person might not be so for another. (In particular, of course, as regards formalising common sense it is possible to supply classical logic axiomatisations which are intuitive to some people but not others).

Again, it would be interesting to get some views from more people who have developed  A  style languages on some of the general issues that you've raised (if not here, then perhaps in a more general ENRAC panel discussion on the advantages and disadvantages of specialised action languages).

Rob and Tony

  Editor's note: continued discussion on the merits and demerits of Action Description Languages will be referred to the panel discussion on ontologies.

C2-6. Antonis Kakas and Rob Miller (28.11):

Tom,

In ENRAC 21.11, in the context of the general discussion on action description languages, you asked:

  Similarly, does
     Always F, G   
or Kakas and Miller's
    F whenever {¬ G}    
mean that every actual state satisfies F,G, or every possible state.

In the light of this remark, it now occurs to us that a possible partial explanation of your difficulty in gaining an intuition about the meaning of  E 's c- and r-propositions is that you're thinking in terms of states and state-transitions (natural enough if one is used to working with the Situation Calculus and related formalisms). But  E 's vocabulary and underlying ontology don't include (global) states - just fluents, actions and time-points. So it's difficult for us to see what you might be referring to by a "possible state" in the context of  E .

To understand our intentions, it's better to think just in terms of local cause and effect, i.e. to think of the r-proposition  L whenever C  as meaning " C  is a minimally sufficient cause for  L ", and the c-proposition  A initiates F when C  as meaning " C  is a minimally sufficient set of conditions for an occurrence of  A  to have an initiating effect on  F ".

We include "minimally" here to express our feeling that it's not intuitive to include completely irrelevant fluents in the set  C . Hence, as we indicated before, if we were to extend the semantics and entailment relation to include h-, c- and r-propositions, we really would want such propositions to be entailed if and only if they were in the domain description, at least for the simple classes of domain descriptions we've defined so far. (Hence, strictly speaking, we might want to forbid pairs of statements within a single domain description such as  L whenever C1  and  L whenever C2  where  C1  was a proper subset of  C2 , because the second proposition is redundant).

However, we retain sympathy for your general arguments about the need, ultimately, for theories in classical logic or similar, and for defining entailment in terms of truth functions (as we've effectively done for t-propositions). It is of course debatable whether such theories need to be centered around the notions of global states and state transitions. One's intuitions and preferences about this are probably coloured by one's experience.

Rob and Tony


Q3. Tom Costello (30.10):

A question on the choice of approach: Why didn't you write everything in classical logic? Personally, I find it much more natural to consider classical logical languages than  A -type languages. The enclosed postscript file is a translation of the proposed  E  language to a classical language, which I feel makes much clearer the advantages and disadvantages of the proposal.

A3. Antonis Kakas and Rob Miller (30.10):

Hello Tom, -- We've no objection to using classical logic. Indeed, in both our  E  papers we've mentioned our intention to translate  E  into classical logic and other general-purpose formalisms, in order to gain the obvious benefits. (An obvious candidate as a target for this translation is something like the classical logic Event Calculus in [Miller & Shanahan 1996].) As you indicate in your question, different researchers will find different approaches more natural. We chose to initially express our ideas on ramification in this form because we found it relatively intuitive and uncluttered, and convenient for proving properties of logic programs that we want to use for various applications. As we've stated in our answer to your previous question and in our first paper on  E , these specialised languages are perhaps best regarded as stepping-stones towards formalisations or axiomatisations written in fuller, general-purpose logics. It's great that you have in fact used  E  in exactly this way. Please publish!

One point about your relations   init   and   term   in your classical logic translation. You say that you should take the "smallest relations ... that satisfy the above [axioms partially defining the relations]". But it turns out that this "smallest relation" idea is still not quite sufficient for eliminating the kind of anomalous models that Michael Thielscher was drawing attention to. So you really do need a least fixed point notion or equivalent somewhere in your axiomatisation, where the associated operator generates the least fixed point starting from a pair of empty sets (see our answer to Michael's question).

Of course, another reason for using the specialised language approach was to illustrate that the Language  A  type methodology could be applied using ontologies other than that of the Situation Calculus. We're not sure if authors of Language  A  type papers would reply to your question in the same way, so it would be interesting to get some other responses from this community.

Rob and Tony


Q4. Michael Gelfond (3.11):

Dear Tony and Rob. I am trying to understand the relationship between your language  E  and language  L  by Baral, Provetti and myself.

To do that I need some good intuitive understanding of the meaning of statements of  E  and I am having some difficulties here. My feeling is that the meaning really depends on what you call "the structure of time". If time is linear then your  happens-at  corresponds exactly to our  occurs-at  and your  F holds-at T  to our  f at T . In both cases we have actual occurrences at moments of time (or actual situations as we call them). If time is branching as in your second example in the paper where  T  corresponds to the sequence of actions then I do not fully understand the meaning of, say,  A occurs-at S0 . If it is still a statement of actual occurrence then I think that  A1 occurs-at S0  and  A2 occurs-at S0  should cause inconsistency. (In the case of linear time we just have concurrent actions).

The meaning of  holds-at  also seems to change. Instead of actual observations it becomes hypothetical. If I am right then I think this property of the language should be somewhat stressed. If not then some explanation will help.

The goal of  L  (as well as of the work by Pinto and Reiter) was to combine situation calculus ontology with actual history of the dynamical system. Since we have both we can combine reasoning about actual occurrences of actions and observations about values of fluents at particular moments of time with hypothetical reasoning of situation calculus useful for planning, counterfactual reasoning, etc.

Can you (and do you want to) use  E  for the same purpose?

My other questions are about your logic program. I do not fully understand your definition of initiation point. Do I understand correctly that it should be changed? If so, what happens with the correctness of logic program?

It may be useful to use some semantics of logic program instead of using SLDNF directly. SLDNF can give some results which are correct w.r.t. your specification even though the program is semantically meaningless (Say, its Clark's completion is too weak or inconsistent, or it does not have stable model, etc.) If you prove that the program is semantically correct one will be able to use this result directly even if your program is run on, say, XDB or SLG (which checks for some loops) and not under Prolog.

Finally, more comments on LP4 will help. I find comments like
" Resolve(ABCD is true iff [some English description]" extremely useful. Similarly for disjunctive_form, partition, etc.

A4. Antonis Kakas and Rob Miller (5.11):

Hello Michael, thanks for your question (several questions in fact!). Here are replies to each of your points in turn.

You wrote:
  I am trying to understand the relationship between your language  E  and language  L  by Baral, Provetti and myself.

This is indeed an interesting question, and one that we tried to address to some extent in our first (JLP) paper on  E  (see Section 3, last three paragraphs).

You wrote:
  My feeling is that the meaning really depends on what you call "the structure of time". If time is linear then your  happens-at  corresponds exactly to our  occurs-at  and your  F holds-at T  to our  f at T .

Yes, that seems correct.

You wrote:
  If time is branching as in your second example in the paper where  T  corresponds to the sequence of actions then I do not fully understand the meaning of, say,  A occurs-at S0 . If it is still a statement of actual occurrence then I think that  A1 occurs-at S0  and  A2 occurs-at S0  should cause inconsistency.

Yes, the meaning of statements such as  A happens-at S0  would indeed be hard to dissect if put in this type of domain description, hence we've avoided doing so in our examples.

Our intuition about Situation Calculus terms such as  S0  and  Result(A, S0)  is that they refer to (hypothetical) periods of time between (hypothetical) action occurrences. In other words, for all actions  A ,  S0  is the period of time immediately before the (hypothetical) occurrence of  A , and  Result(A, S0)  is the period of time immediately afterwards.

Now, in order to simulate Situation-Calculus-like hypothetical reasoning in  E , we need to refer to the exact points at which actions (hypothetically) occur. Hence we include extra points in our structure of hypothetical time, such as  Start(Result(A, S0))  (written  Start( [A] )  in our syntax), and require that
    S0 < Start( [A] ) <  [A]    
We then write  A happens-at Start( [A] )  to assert that there is indeed a hypothetical occurrence of  A  just before the hypothetical time-point  [A]  (i.e.  Result(A, S0) ). Once we've included the complete set of assertions such as this in the domain description, we can use the same general principles of initiation, termination and persistence (encapsulated in our Definitions 9 and 13 of a model) to reason about what holds in this branching structure of (hypothetical) time.

Like the Situation Calculus and the Language  A , with time structures such as this everything is intended to be in hypothetical mode, so that, as you suggest,  F holds-at [A1, A2]  should be read as " F  is true in the hypothetical situation  [A1, A2] ".
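
To spell this structure out concretely (purely as an illustration of our own, generalising the pattern above to longer sequences - it is not code from the paper), a small Python sketch can generate, for every already-constructed sequence s and every action A, the ordering  s < Start(s+[A]) < s+[A]  together with the h-proposition  A happens-at Start(s+[A]) :

    # A small illustrative sketch of the branching "hypothetical time" structure
    # described above; the function name and representation are assumptions,
    # not definitions from the paper.
    from itertools import product

    def hypothetical_time(actions, depth):
        order = []                        # generating pairs (p, q) meaning p < q
        happens = []                      # pairs (A, T) meaning "A happens-at T"
        frontier = [()]                   # the empty sequence plays the role of S0
        for _ in range(depth):
            next_frontier = []
            for seq, a in product(frontier, actions):
                new_seq = seq + (a,)
                start = ('Start', new_seq)
                order.append((seq, start))      # seq < Start(new_seq)
                order.append((start, new_seq))  # Start(new_seq) < new_seq
                happens.append((a, start))      # "a happens-at Start(new_seq)"
                next_frontier.append(new_seq)
            frontier = next_frontier
        return order, happens

    order, happens = hypothetical_time(['A1', 'A2'], depth=2)
    # happens contains, for example, ('A2', ('Start', ('A1', 'A2'))), i.e. the
    # h-proposition "A2 happens-at Start([A1, A2])".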

It is straightforward to extend this approach to partially deal with hypothetical reasoning about concurrent actions, by adapting Chitta Baral's and your ideas. Our structure of time would include sequences of sets of action symbols, e.g.  [C1, C2] , and, for example, h-propositions of the form  A happens-at Start( [C1, C2] )  for each  A  in  C2 .

You wrote:
  The goal of  L  (as well as of the work by Pinto and Reiter) was to combine situation calculus ontology with actual history of the dynamical system.
Yes. A plug for Miller and Shanahan (JLC 1994) is irresistible here! That work had the same aim (as you point out in your papers), and there is perhaps more similarity between  L  and [Miller and Shanahan] than with [Pinto and Reiter]. [Miller and Shanahan] also has the advantage that it deals with concurrent, divisible and overlapping actions.

You wrote:
  Since we have both [situation calculus ontology and an actual history] we can combine reasoning about actual occurrences of actions and observations about values of fluents at particular moments of time with hypothetical reasoning of situation calculus useful for planning, counterfactual reasoning, etc.

Can you (and do you want to) use  E  for the same purpose?

We haven't thought about this a great deal, although it seems possible that hypothetical and "actual" reasoning (for want of a better term) could be combined in  E  by an appropriately rich structure of time. (A simple solution might be to index hypothetical time-points such as  [A1, A2]  with the actual time-point - typically a natural or real number - from which they were being hypothetically projected, and extend the ordering between all time-points appropriately.)

But (at the risk of re-opening an old and seemingly unstoppable debate), at least for planning our first choice would be to use abduction with a linear time structure rather than deduction with a hypothetical branching time structure. Again, there are some remarks about this in the original (JLP) paper on the Language  E .

You wrote:
  My other questions are about your logic program. I do not fully understand your definition of initiation point. Do I understand correctly that it should be changed? If so, what happens with the correctness of logic program?
The definition doesn't need to be changed. The reply to Michael Thielscher simply fills in the details that we did not include in the paper. So the proof of correctness of the logic programs is unchanged. Also note that the notions of initiation and termination points are implemented in the logic programs using Proposition 2.

You wrote:
  It may be useful to use some semantics of logic program instead of using SLDNF directly. SLDNF can give some results which are correct w.r.t. your specification even though the program is semantically meaningless (Say, its Clark's completion is too weak or inconsistent, or it does not have stable model, etc.) If you prove that the program is semantically correct one will be able to use this result directly even if your program is run on, say, XDB or SLG (which checks for some loops) and not under Prolog.
We agree, and we are in fact working on these lines, as we say in the paper towards the end of Section 5. The point is that the present approach gives us a baseline translation that would be accepted by any semantics of logic programs, at least for those cases (as you say) where the corresponding logic program has a meaning under any semantics. Of course, there is also the debate as to whether every logic program should have a meaning, but this is probably not the place to discuss this issue.

You wrote:
  Finally, more comments on LP4 will help. I find comments like " Resolve(A, B, C, D)  is true iff [some English description]" extremely useful. Similarly for disjunctive_form, partition, etc.
Yes, sorry.  Resolve  is just a simple implementation of a propositional resolution based prover for positive or negative literals.  Resolve(l1, c, l, t)  means that we can show that  l  holds by resolution starting from the clause corresponding to  Whenever(l1, c)  applied at the time instant  t . (The details are really not that important, and in fact  Resolve  can be replaced by any sound propositional theorem prover). It first transforms the "implication" of the r-proposition into normal disjunctive form, using the predicate  DisjunctiveForm , then the  Partition  predicate picks out the literal  l  that we are interested in proving, and finally we try to show through the predicate  NothingHoldsIn  that the rest of the disjunction is false, by showing that for each of its literals its negation holds. So, as we say above, it is just a simple and naive implementation of resolution.
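
As a rough paraphrase of this English description (in Python, with names of our own choosing; it is not the actual LP4 program), the procedure can be sketched as follows:

    # A rough paraphrase of the naive resolution step described above. The
    # r-proposition "l1 whenever C" is read as the clause  l1 or not(c1) or
    # ... or not(cn);  to show a literal l of that clause at time t, every
    # other disjunct must be false at t, i.e. its negation must hold at t.
    def negate(lit):
        fluent, positive = lit
        return (fluent, not positive)

    def resolve(l1, conds, l, t, holds_at):
        """holds_at(lit, t) is assumed to be supplied by the rest of the
        program (in LP4 it is computed by the event-calculus-style clauses)."""
        clause = [l1] + [negate(c) for c in conds]        # "DisjunctiveForm"
        rest = [d for d in clause if d != l]              # "Partition": pick out l
        return l in clause and \
               all(holds_at(negate(d), t) for d in rest)  # "NothingHoldsIn"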

Rob and Tony.


Q5. François Lévy (4.3):

Dear Antonis and Rob

Here are two late questions about your paper.

First, according to your view of ramifications, fluents can be initiated/terminated in two ways: either when an event occurs, or due to changing fluents in a constraint. The formal difference is that a fluent changing its value is not by itself an event. Do you consider it to rely on an ontological difference -- i.e. in the process of modeling the real world, two kinds of objects of different nature have to be considered: events on the one side, (instantly) changing fluents on the other one. Or do you consider both similar, and make a difference on a purely technical ground (triggered events don't work to render this if the time line is dense)?

Second, as far as I understand, your predicate `Whenever' embeds both a domain constraint and a notion of influence, in Michael Thielscher's sense in his AI97 paper. The domain constraint is what you call the static view -- i.e. `Whenever' being replaced by a material implication. The influence information is: in the formula  L Whenever C , only  L  can be initiated, so one domain constraint yields as many `Whenever' formulas as fluents can be influenced in it. But Thielscher's Influence predicate is binary, and of course his cause --> effect propagation is a different technique. I briefly tried some examples, and couldn't find a difference in the flow of causality. Do you agree with these remarks? And do you believe that some formal correspondence could be established between your two formalisms?

Best Regards

François

A5. Antonis Kakas and Rob Miller (31.3):

Dear Francois,

Thanks again for your questions.

  First, according to your view of ramifications, fluents can be initiated/terminated in two ways: either when an event occurs, or due to changing fluents in a constraint. The formal difference is that a fluent changing its value is not by itself an event. Do you consider it to rely on an ontological difference -- i.e. in the process of modeling the real world, two kinds of objects of different nature have to be considered: events on the one side, (instantly) changing fluents on the other one. Or do you consider both similar, and make a difference on a purely technical ground (triggered events don't work to render this if the time line is dense)?

In answer to your first question: we do have an ontological difference between actions and fluents, but we don't make a distinction between different types of fluent. For example, a fluent can be terminated both 'directly' via a c-proposition and 'indirectly' via an r-proposition (e.g. 'Switch2' in the electric circuit example.) The essential point is that all fluents are only initiated via initiation points and only terminated via termination points, and that all initiation and termination points are characterised by a (relevant) action occurrence (i.e. an event). In other words, in our framework all changes in fluent values have as their root cause an event (or a set of concurrent events). Both the types of change identified above are 'direct' in the sense that the effect of the corresponding event is 'instantaneous' (where 'instantaneous' has a slightly different interpretation for discrete time than for dense or continuous time).

So the choice of whether to use r-propositions as well as c-propositions when modelling a particular domain is partly pragmatic. "-Switch2 whenever {Relay}" can be read as "all the events which initiate Relay also terminate Switch2". The use of this r-proposition thus enables us to avoid writing a whole series of terminates propositions for Switch2 corresponding to each of the initiates propositions for Relay.

But there are other advantages in using r-propositions, as identified in the paper. Not least, it helps with succinctly and correctly capturing the effects of concurrent events. For example, the concurrent 'stuffy room' example in the paper is difficult to describe in  E  without r-propositions, and difficult to describe in conventional Event Calculus (at least without making some sort of distinction between 'frame' and 'non-frame' fluents).

  Second, as far as I understand, your predicate 'Whenever' embeds both a domain constraint and a notion of influence, in Michael Thielscher's sense in his AI97 paper. The domain constraint is what you call the static view -- i.e. 'Whenever' being replaced by a material implication. The influence information is: in the formula L Whenever C, only L can be initiated, so one domain constraint yields as many Whenever formulas as fluents can be influenced in it. But Thielscher's  Influence  predicate is binary, and of course his cause --> effect propagation is a different technique. I briefly tried some examples, and couldn't find a difference in the flow of causality. Do you agree with these remarks? And do you believe that some formal correspondence could be established between your two formalisms?

In answer to your second question: yes, we broadly agree with these remarks, in that r-propositions act both as static domain constraints and as unidirectional propagators of change. You're right as well with your observation that one domain constraint can yield a number of r-propositions. For example, a definitional domain constraint such as

    Alive <-> ¬ Dead   

would be represented with the r-propositions

           Alive whenever {-Dead} 
           -Alive whenever {Dead}
           Dead whenever {-Alive}
           -Dead whenever {Alive} 

However, we feel that it would be difficult to establish a formal correspondence between our approach to ramifications and Michael Thielscher's, for the reasons outlined in our discussion section. There seems to be a difference in the approaches in that Michael's effect propagation is 'approximately' instantaneous, whereas ours is 'truly' instantaneous (this is not to say that either is right or wrong - just that they're modelling slightly different concepts). This difference manifests itself in domains such as Michael's 'light detector' example (see Section 5 of his AI97 paper). We'd model the introduction of the detector with the single r-proposition

           Detect whenever {Light}

But we wouldn't get the same 'non-deterministic' behaviour of the detector that Michael gets - i.e. we wouldn't get the model in which the detector is activated when Switch1 is connected. Indeed, this model wouldn't make sense in a narrative-based formalism with explicit time - the detector would have been activated even though there was no time-point at which the light was on. (Michael expands on the theme of 'approximately' versus 'truly' instantaneous effects in his related paper in the proceedings of Common Sense '98.)

Tony and Rob.


Q6. Anonymous Reviewer 2 (23.4):

The paper by Baral, Gelfond and Provetti published recently in JLP describes an  A -like language,  L , which, like  E , attempts to combine ontologies of situation and event calculus. It is done in a manner substantially different from that in  E  and so a reference to this paper may be appropriate.

A6. Antonis Kakas and Rob Miller (3.5):

Yes, this paper is clearly related to the themes of both the present article and our previous paper on the Language  E  (in the same special issue of the JLP as Baral et al.). We've referenced it in the revised version of our paper now available via the ETAI web pages, and had discussed the relationship between these two approaches in some detail in our JLP paper. (See also Question 4 from Michael Gelfond on our ETAI interactions page, and our reply.)


Q7. Anonymous Reviewer 3 (23.4):

Your paper makes the following contributions:

  1. Extending the declarative temporal language  E  to deal with ramifications

  2. Furnishing a translation between  E  and logic programs

Both these contributions are welcome. The ramification problem is an important problem in temporal reasoning which is still not well understood. Studying the problem in the context of a unified temporal language has the potential to shed light on the connection between the ramification problem and other problems in temporal reasoning, though see below for further comments. The translation between  E  and logic programs is very welcome as well, as it grounds theoretical and formal results on theories of action to implementable programs. Although the paper is in general well written and well organized, and I consider it acceptable for publication as it is, I also suggest that it could be improved in the following ways. [Suggestions in the present discussion item and the four following ones].

First, and most saliently, the paper does not explain why your approach solves the ramification problem. (Indeed, you don't explain why the approach solves the frame problem either, though that presumably was the job of the 1997 JLP paper.) It would be helpful to give some intuition of why this central problem in temporal reasoning arises, what other approaches have been suggested, how these approaches succeed and fail, what this approach provides, intuitively, in the way of a solution to the ramification problem, and how this approach compares to other approaches.

You do the last (comparing your approach to other approaches) briefly, in the beginning of section 5, but this treatment is too cursory and raises almost more questions than it answers. For example, in comparing your approach to those of Thielscher, McCain and Turner, and Lin, you argue that their approach is essentially a causal-based approach, because the effect of action occurrences cannot be propagated backward through r(amification)-propositions. To this reviewer, this fact hardly seems to be the characteristic fact of causal theories. A deeper analysis of what makes a causal theory, whether sets of axioms in E can be considered causal theories, and how causal approaches can be used to solve the ramification problem, would be helpful here.

Also very desirable would be a discussion of how solutions to the ramification problem interact with solutions to the frame problem. In particular, there is often a duality between the two problems, in that the frame problem is often seen as a mainly representational problem, whose solutions may worsen things from the computational point of view, and the ramification problem is often seen as mainly a computational problem, whose solutions may worsen things from the representational point of view. How do your two solutions interact? A discussion would be useful.

A7. Antonis Kakas and Rob Miller (3.5):

We're not sure if we would go as far as to state that we have "solved the ramification problem." Like the frame problem, not everyone agrees exactly what this problem is. The analysis in our paper is that
  the ramification problem arises in domains whose description most naturally includes permanent constraints or relationships between fluents. In formalisms which allow for such statements, the effects of actions may sometimes be propagated via groups of these constraints. The problem is to adequately describe these propagations of effects, whilst retaining a solution to the frame problem - that is, the problem of succinctly expressing that most actions leave most fluents unchanged.
Viewed like this, the ramification problem is intimately related to (or is an aspect of) the frame problem. Our solution to the frame problem is by introducing the notion of initiation points and termination points, and ensuring that these are the only mechanisms for change along the time-structure. Our approach to ramifications is to (slightly) widen the set of initiation and termination points in a given model via a fixed point definition. This extended definition takes into account any r-propositions (i.e. ramification statements) in the domain.

The current state of A.I. doesn't unfortunately permit a definitive statement of what makes a causal theory -- it seems to mean different things to different sub-communities (as witnessed in the recent AAAI Spring Symposium on Causality in Reasoning About Actions). Thielscher and others merely make a technical distinction between "causal-based" and "categorisation-based" contributions to the ramification problem. Our contribution is "causal-based" in this limited technical sense in that it doesn't categorise fluents, but does have a unidirectional ("whenever") "connective". But we accept that perhaps it's not so healthy to hijack the word "causal" for a rather specialised technical use in this way.

We reject the view that the frame problem is a mainly representational problem and the ramification problem is mainly a computational problem. We see both problems as having representational and computational aspects.

We accept that more in-depth analyses are needed of the relationships between formalisms for reasoning about actions in general, and approaches to ramifications in particular. Ultimately, the best way to do this is by providing translation methods and showing that these are "sound" and/or "complete" for well defined classes of domains. We haven't had time to do this yet, but it's on our agenda of future work on the Language  E .


Q8. Anonymous Reviewer 3 (23.4):

The examples in the paper would be more helpful if they were expanded more. Examples:

A8. Antonis Kakas and Rob Miller (3.5):

The domain description on page 6 was:

     CloseWindow initiates WindowClosed
     CloseVent initiates VentClosed
     OpenWindow terminates WindowClosed
     OpenVent terminates VentClosed
     CloseWindow initiates Stuffy when {VentClosed}
     CloseVent initiates Stuffy when {WindowClosed}
     OpenWindow terminates Stuffy
     OpenVent terminates Stuffy
Let H be the interpretation for this domain defined as follows:
          H(WindowClosed,t) = true,   for all t
          H(VentClosed,t) = true,     for all t
          H(Stuffy,t) = false,        for all t
Since there are no h-propositions in this domain, by Definition 8 there are no initiation points or termination points w.r.t. H. Hence H satisfies conditions 1-4 of Definition 9, and so is a model of the domain.
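
As a quick sanity check of this reasoning (a sketch of our own, restricted to a finite prefix of the time line rather than Definition 9 verbatim): with no initiation or termination points, persistence simply requires each fluent to keep a constant truth value, which H clearly does:

    # A minimal illustrative check that H respects persistence: with no
    # initiation or termination points, every fluent must keep a constant value.
    times = range(5)                      # a finite prefix of the time line
    H = {}
    for t in times:
        H[('WindowClosed', t)] = True
        H[('VentClosed', t)] = True
        H[('Stuffy', t)] = False

    fluents = ['WindowClosed', 'VentClosed', 'Stuffy']
    print(all(len({H[(f, t)] for t in times}) == 1 for f in fluents))   # True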

Re the "stuffy room" example on page 8, this is of course the classic illustration of why naive minimisation of change doesn't work when domain constraints are included in a domain. For example, if a situation calculus theory includes the constraint
    Holds(Stuffy, s) <- Holds(VentClosed, s) ^ Holds(WindowClosed, s)  
approaches to the frame problem such as Baker's will give rise to the kind of anomalous model that the paper refers to. This is related to the fact that the above can be written in several equivalent ways, e.g.
    ¬ Holds(VentClosed, s) <- ¬ Holds(Stuffy, s) ^ Holds(WindowClosed, s)  
i.e. the classical "  <-  " connective allows for contrapositive re-writings. Hence several approaches to the ramification problem, including ours, advocate the use of a "unidirectional" connective or predicate which does not facilitate the construction of such contrapositive statements.

The reviewer also wrote:
  In the same vein, it's not clear why Thielscher's approach has trouble with the last variation of the switch example that you discuss in section 3. A more detailed discussion would help.
The point that we wanted to make is that it's not clear to us how Thielscher's approach (and related approaches) can be extended to include explicit time. But see our answer to Francois Lévy's recent question (question 5) on our ETAI interactions page.


Q9. Anonymous Reviewer 3 (23.4):

The unique contributions of this paper over the JLP paper are not so explicitly stated, namely, the introduction of the "whenever" construct into  E , and the resulting modifications in the definitions of the language, the translation into logic programs, etc. It would be helpful to be more explicit about them.

A9. Antonis Kakas and Rob Miller (3.5):

You summarised the contributions very well at the top of your report:

  1. Extending the declarative temporal language  E  to deal with ramifications
  2. Furnishing a translation between the extended  E  and logic programs


Q10. Anonymous Reviewer 3 (23.4):

The writing is in general clear, understandable, and straightforward, but there are several places which were unclear, or in which an additional English gloss would be helpful. Specifically:

A10. Antonis Kakas and Rob Miller (3.5):

  p. 4: It is not clear what the partial order is supposed to range over. In the 10th line from the bottom on this page, is the relation on points (1st, 2nd, and 4th items in that line) or on sequences (3rd item in the line)?

The partial order ranges over all items in the set of time-points  Π_Δ . This includes both finite sequences of action constants (items 1 and 3 in the expression to which you refer), and "Starts" of such sequences (items 2 and 4). They're all just (syntactic) objects in the set of time-points.

  "p. 7, clause 2 of Def. 14, and p. 12, clause 2 of Proposition 2: In both cases, an English gloss would be helpful. (That is, an intuitive explanation of when a ramification statement is true. This is, after all, the heart of the paper, and extra effort and space to make this well understood would be well worth it.)"

As we stated in our discussion with Tom Costello (interactions C2-6), the r-proposition "L whenever C" can be read as "C is a minimally sufficient cause for L". So, to quote from the paper, "at every time-point that C holds, L holds, and hence every action occurrence that brings about C also brings about L". So, "in order to find time-points at which the fluent literal L is established via the r-proposition `L whenever C', we need to look for time-points at which one or more of the conditions in C become established, and at which the remaining conditions are already and continue to be satisfied (up to some time-point beyond the point in question)." Clauses 2 of both Definition 14 and of Proposition 2 are mathematical articulations of this last statement.


Q11. Anonymous Reviewer 3 (23.4):

The online ETAI discussions highlighted a number of interesting points, including the issue of using a special purpose language  E  instead of standard first-order logic, whether truth conditions can really be given for all the predicates in  E , as well as more basic philosophical (ontological) questions on how you divide changes into causations and ramifications. It would be nice to see the paper deal with these to some extent. You can't, of course, give a whole dissertation defending the use of action-type languages, but integrating short versions of your statements on these positions into the paper would be useful.

A11. Antonis Kakas and Rob Miller (3.5):

The revised version of our paper (now available via the ETAI web pages) includes some extra remarks relating to various points raised in the ETAI interactions. We also very much hope that the paper will be read in conjunction with the online discussion.


Additional questions and comments, as well as the answers from the author(s), will be added into this structure. They will also be circulated by email to the area subscribers. To contribute, send your question or comment as an e-mail message to the moderator.

For additional details, please refer to the debate procedure.

This debate is moderated by Erik Sandewall. The present protocol page was generated automatically from components containing the successive debate contributions.