Antonis Kakas and Rob Miller: Reasoning about Actions, Narratives and Ramification.
Additional reviewing information: Minor details noticed by reviewers.
Q1. Michael Thielscher (24.10):
Antonis and Rob,
I have a question concerning the notion of initiation and termination points in case ramifications are involved. If my understanding of your Definition 14 is correct, then there seems to be a problem with undesired mutual justification. Take, as an example, the two rpropositions
    dead whenever ¬ alive
    ¬ alive whenever dead
and the interpretation H with
    H(0) = {alive, ¬ dead}
    H(1) = {¬ alive, dead}
Finding some least fixpoint, which you mention after the definition, seems therefore vital for the correctness of the definition itself. However, the corresponding operator must not have an interpretation as argument. So I would think that instead of defining the notions of "initiation and termination points for F in H relative to D" one should define "initiation and termination points for F relative to D," that is, without reference to some H.
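Michael's worry can be made concrete with a small executable sketch. The encoding below (literal pairs, rule lists, the function name least_fixpoint) is our own illustrative assumption and not the paper's Definition 14; it only shows why starting the closure from the direct effects matters.

```python
# A hedged sketch, not the paper's Definition 14: justified conclusions are
# those derivable by closing under the r-propositions, STARTING from the
# direct effects of action occurrences (i.e. the least fixpoint).

def least_fixpoint(direct_effects, rprops):
    """Close `direct_effects` under rules of the form (conditions, head)."""
    established = set(direct_effects)
    changed = True
    while changed:
        changed = False
        for conditions, head in rprops:
            if conditions <= established and head not in established:
                established.add(head)
                changed = True
    return established

# Michael's example: dead whenever ¬alive, and ¬alive whenever dead.
rprops = [
    (frozenset({("alive", False)}), ("dead", True)),
    (frozenset({("dead", True)}), ("alive", False)),
]

# No action occurs, so there are no direct effects: nothing is justified.
print(least_fixpoint(set(), rprops))                     # set()

# The pair {dead, ¬alive} IS closed under the rules (a fixpoint), which is
# why a mere "closed under the rules" condition would wrongly admit it:
bad = {("dead", True), ("alive", False)}
print(least_fixpoint(bad, rprops) == bad)                # True
```

Only the least fixpoint (here, the empty set) corresponds to justified change; the larger fixpoint is exactly the undesired mutual justification.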
A1. Antonis Kakas and Rob Miller (30.10):
Hello Michael, Thanks for your comments about Definition 14 of initiation and termination points. You are of course right to say that the definition requires the least fixed point construction, so perhaps we should have made this explicit within the definition itself. We omitted this from the paper in an attempt not to overload the definition with too much formalism, but perhaps its omission is causing more rather than less confusion. (Hudson Turner emailed us a comment similar to yours a little while ago.)
So yes, the initiation and termination points are defined by a least fixed point construction (along the lines we say after the definition). The version of the definition that makes this explicit is unfortunately a little too full of mathematical notation to write here in plain text or html format; please refer to the latex/postscript version of this message at [jenrac166].
You'll see that the operator corresponding to the least fixed point does indeed have an interpretation as argument. But there's no problem with this, because the interpretation is already fixed at the beginning of the definition. It's necessary to include this argument in order to deal with preconditions of cpropositions. For example, consider the following domain (with time as the naturals):
    Take initiates Picture when {Loaded}
    Take happens-at 2
    ¬ Picture holds-at 1
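A minimal sketch (our own illustrative encoding, not the paper's formal definition) of why the operator keeps the fixed interpretation H as an argument: the when-preconditions of c-propositions must be evaluated against H itself.

```python
# Hedged sketch: whether an action occurrence yields an initiation point for
# a fluent depends on whether the c-proposition's preconditions hold in the
# (already fixed) interpretation H at the timepoint of the occurrence.

def initiation_points(H, cprops, hprops, fluent):
    """Timepoints at which some occurring action initiates `fluent`,
    with preconditions checked against the fixed interpretation H."""
    points = set()
    for action, t in hprops:                      # "action happens-at t"
        for a, f, conds in cprops:                # "a initiates f when conds"
            if a == action and f == fluent and all(H(c, t) for c in conds):
                points.add(t)
    return points

cprops = [("Take", "Picture", ["Loaded"])]        # Take initiates Picture when {Loaded}
hprops = [("Take", 2)]                            # Take happens-at 2

# Whether 2 is an initiation point for Picture depends on H(Loaded, 2):
H_loaded = lambda f, t: True                      # Loaded holds everywhere
H_unloaded = lambda f, t: False                   # Loaded holds nowhere
print(initiation_points(H_loaded, cprops, hprops, "Picture"))    # {2}
print(initiation_points(H_unloaded, cprops, hprops, "Picture"))  # set()
```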
Q2. Tom Costello (28.10):
In your paper you have three types of proposition: h-, t- and c-propositions. In your definition of an interpretation, you give enough information to establish truth conditions for t-propositions. The following is the obvious truth condition for t-propositions.
A t-proposition, F holds-at T, is true in an interpretation E, if E(F, T) = true.
However, you do not seem to have enough information to give truth conditions for h- or c-propositions.
Consider the domain language with one timepoint 0, one fluent F, and one action A. Then the domain description
    A happens-at 0
    F holds-at 0
has exactly the same models as the domain description
    F holds-at 0
alone. For a logic to model distinct sets of propositions by the same structure is problematic for many reasons.
As a general point, A-type languages are not sufficiently formal in defining when a proposition is true in a model. This has led to errors like the above in A-type languages. Some papers have used a function from sequences of actions to sets of fluents, rather than a labeled transition function/relation from sets of fluents to sets of fluents, to give semantics to action languages. The former collapses domain descriptions that differ on causal propositions, while the latter does not. Giunchiglia, Kartha and Lifschitz are an example of the use of the latter. I know of no paper that explicitly gives truth conditions for all propositions in an A-type language.
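Tom's former/latter distinction can be illustrated with a toy executable sketch; the transition functions and fluent names below are our own invention, not from any of the cited papers.

```python
# Hedged illustration: a transition function on STATES distinguishes two
# domains that a function on action SEQUENCES from one fixed initial state
# collapses together.

# Domain 1: action A causes F if G holds.  Domain 2: A has no effects.
def trans1(state, action):
    return state | {"F"} if action == "A" and "G" in state else state

def trans2(state, action):
    return state

def run(trans, init, seq):
    """Interpret an action sequence starting from a fixed initial state."""
    state = frozenset(init)
    for a in seq:
        state = frozenset(trans(state, a))
    return state

init = set()   # G fails initially, so A's precondition never fires
for seq in [(), ("A",), ("A", "A")]:
    print(run(trans1, init, seq) == run(trans2, init, seq))   # True each time

# Yet the transition functions themselves differ on the state {G}:
print(trans1({"G"}, "A") == trans2({"G"}, "A"))               # False
```

The sequence-based semantics sees no difference between the two domains, even though their causal content differs; the state-based transition semantics keeps them apart.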
A2. Antonis Kakas and Rob Miller (30.10):
Hello Tom, Thanks for your comments and observations.
Regarding your specific comments about the Language E: you're right that from a formal point of view there is no concept of truth or falsity as regards h- and c-propositions. So, from the definitions, it doesn't even make sense to talk about "the set of true h-propositions". For your example, the semantics simply "disregards" the h-proposition A happens-at 0, because the occurrence of A at 0 that this represents at the syntactic level has no effects.
There's no problem with this from a formal point of view, but it does mean that E, and languages like it, are very restrictive. That's why they're perhaps best regarded as stepping stones towards formalisations or axiomatisations written in fuller, general-purpose logics. (However, and as we hope we and others have illustrated, they do have a use in discussing and illustrating approaches to particular issues, in our case to ramifications, in a relatively intuitive and uncluttered way, and also in proving properties of classes of logic programs.) This is where work such as that of Kartha (translating A into various versions of the Situation Calculus) is valuable. In the case of the Language A, Kartha's translations bring out the fact that there is an implicit completion of causal information (A's e-propositions) in A's semantics. Much the same thing is true of h- and c-propositions in E. (This is why adding truth functions for h- and c-propositions in E models would be trivial but rather superfluous.)
We discussed this in more detail in our first paper on E (in the Journal of Logic Programming). As we've said in both papers, it's our intention to explore these issues further by developing translations analogous to Kartha's for E. You might also be interested to look at the papers by Kristof Van Belleghem, Marc Denecker, and Daniele Theseider Dupré, who have developed a language ER similar in many respects to E, but more expressive and with a correspondingly more complex semantics (which includes truth conditions for the equivalents of h- and c-propositions). (We've described this briefly in Section 5 of our paper.)
As regards your general point about "A-type languages", it would be interesting to get some comments from "A-type people" about this. Perhaps "not sufficiently expressive" is a better phrase than "not sufficiently formal". (On this general theme, Mikhail Soutchanski made another good point in the recent ENAI when he pointed out that it's much easier to combine theories of action written in classical logic with other commonsense theories, e.g. of space or shape, than if specialised logics are used.)
C21. Alessandro Provetti (10.11):
Dear Antonis and Rob,
I'd like to comment on Tom's example about the role of h-statements in
    F
    F
    A
Assume that there are other fluents than F.
The latter yields the same models as far as the initial state is concerned, but all of them sanction that
It appears to me that the equivalence of the two theories above under
You may want to comment on this in the paper or possibly proceed to work on the entailment associated to
Hope this helps. Ciao!
Alessandro Provetti
C22. Tom Costello (11.11):
Dear Antonis and Rob, (and Alessandro)
While the languages
Consider the following domain description, stated in
    A
    A
    F
    A
    A
    A
    A
    F
These later approaches conflate models that intuitively differ.
I agree with Alessandro that
Tom
C23. Antonis Kakas and Rob Miller (12.11):
Hi Tom,
You wrote:
In Antonis and Rob's case they lack truth conditions for some of their propositions, and worse, it seems that it is not even possible to define truth conditions. 
As we said in our original answer to your question, it's trivial to extend the semantics of the Language E with truth conditions for h- and c-propositions, e.g. by adding to each interpretation a truth function of the form
    2^{Fluent-literals} → {true, false}
The definition of a model is exactly as before (Definition 9), with the additional conditions:
But this doesn't really add much insight; you just get that
Rob and Tony
C24. Tom Costello (13.11):
Dear Rob and Tony,
Your definition of truth for c-propositions seems very unintuitive to me. I would think that if
Your definition does not give this result. The reason I ask for truth conditions for your propositions is that I cannot understand what the intuitive consequences of a set of propositions should be, unless I understand what the propositions say. If the propositions are expressed in a standard logic, then I understand them using the definition of truth in a model. However, your propositions are not in a standard logic, and therefore, to understand what
    A
Your paper introduces a new type of proposition,
    F
I do not think truth conditions are a side point to the main theme of your paper. As you say, action languages are supposed to be "understandable and intuitive". Languages cannot be understood without semantics.
Yours,
Tom
C25. Antonis Kakas and Rob Miller (17.11):
Tom,
We think that perhaps we're in danger of going round in circles in this discussion. As we've said in other answers, we've much sympathy for your stance on the benefits of general-purpose logics (and in particular classical logic), and that's why we've stated on numerous occasions that languages such as E are perhaps best regarded as stepping stones towards formalisations or axiomatisations written in fuller, general-purpose logics.
Again, it would be interesting to get some views from more people who have developed A-type languages.
Rob and Tony
Editor's note: continued discussion on the merits and demerits of Action Description Languages will be referred to the panel discussion on ontologies. 
C26. Antonis Kakas and Rob Miller (28.11):
Tom,
In ENRAC 21.11, in the context of the general discussion on action description languages, you asked:
Similarly, does

In the light of this remark, it now occurs to us that a possible partial explanation of your difficulty in gaining an intuition about the meaning of r-propositions is as follows.
To understand our intentions, it's better to think just in terms of local cause and effect, i.e. to think of the r-proposition "L whenever C" as stating that C is a minimally sufficient cause for L. We include "minimally" here to express our feeling that it's not intuitive to include completely irrelevant fluents in the set C.
However, we retain sympathy for your general arguments about the need, ultimately, for theories in classical logic or similar, and for defining entailment in terms of truth functions (as we've effectively done for tpropositions). It is of course debatable whether such theories need to be centered around the notions of global states and state transitions. One's intuitions and preferences about this are probably coloured by one's experience.
Rob and Tony
Q3. Tom Costello (30.10):
A question on the choice of approach: why didn't you write everything in classical logic? Personally, I find it much more natural to consider classical logical languages than A-type languages. The enclosed postscript file is a translation of the proposed E language to a classical language, which I feel makes much clearer the advantages and disadvantages of the proposal.
A3. Antonis Kakas and Rob Miller (30.10):
Hello Tom, we've no objection to using classical logic. Indeed, in both our E papers we've mentioned our intention to translate E into classical logic and other general-purpose formalisms, in order to gain the obvious benefits. (An obvious candidate as a target for this translation is something like the classical logic Event Calculus in [Miller & Shanahan 1996].) As you indicate in your question, different researchers will find different approaches more natural. We chose to initially express our ideas on ramification in this form because we found it relatively intuitive and uncluttered, and convenient for proving properties of logic programs that we want to use for various applications. As we've stated in our answer to your previous question and in our first paper on E, these specialised languages are perhaps best regarded as stepping stones towards formalisations or axiomatisations written in fuller, general-purpose logics. It's great that you have in fact used E in exactly this way. Please publish!
One point about your relations init and term in your classical logic translation. You say that you should take the "smallest relations ... that satisfy the above [axioms partially defining the relations]". But it turns out that this "smallest relation" idea is still not quite sufficient for eliminating the kind of anomalous models that Michael Thielscher was drawing attention to. So you really do need a least fixed point notion or equivalent somewhere in your axiomatisation, where the associated operator generates the least fixed point starting from a pair of empty sets (see our answer to Michael's question).
Of course, another reason for using the specialised language approach was to illustrate that the Language A type methodology could be applied using ontologies other than that of the Situation Calculus. We're not sure if authors of Language A type papers would reply to your question in the same way, so it would be interesting to get some other responses from this community.
Rob and Tony
Q4. Michael Gelfond (3.11):
Dear Tony and Rob. I am trying to understand the relationship between your language E
To do that I need some good intuitive understanding of the meaning of statements of E.
The meaning of
The goal of
Can you (and do you want to) use
My other questions are about your logic program. I do not fully understand your definition of initiation point. Do I understand correctly that it should be changed? If so, what happens to the correctness of the logic program?
It may be useful to use some semantics of the logic program instead of using SLDNF directly. SLDNF can give some results which are correct w.r.t. your specification even though the program is semantically meaningless (say, its Clark's completion is too weak or inconsistent, or it does not have a stable model, etc.). If you prove that the program is semantically correct, one will be able to use this result directly even if your program is run on, say, XDB or SLG (which checks for some loops) and not under Prolog.
Finally, more comments on LP4 will help. I find comments like "
A4. Antonis Kakas and Rob Miller (5.11):
Hello Michael, thanks for your question (several questions in fact!). Here are replies to each of your points in turn.
You wrote:
I am trying to understand the relationship between your language 
This is indeed an interesting question, and one that we tried to address to some extent in our first (JLP) paper on E.
You wrote:
My feeling is that the meaning really depends on what you call "the structure of time". If time is linear then your
Yes, that seems correct.
You wrote:
If time is branching as in your second example in the paper where
Yes, the meaning of statements such as
Our intuition about
Situation Calculus terms such as
Now, in order to simulate Situation-Calculus-like hypothetical reasoning in E, we can use a structure of time in which, for example,
    S0 < Start([A]) < [A]
Like the Situation Calculus and the Language
It is straightforward to extend this approach to partially deal with hypothetical reasoning about concurrent actions, by adapting Chitta Baral's and your ideas. Our structure of time would include sequences of sets of action symbols, e.g.
You wrote:
The goal of 
You wrote:
Since we have both [situation calculus ontology and an actual history] we can combine reasoning about actual occurrences of actions and observations about values of fluents at particular moments of time with hypothetical reasoning of situation calculus useful for planning, counterfactual reasoning, etc. Can you (and do you want to) use
But (at the risk of reopening an old and seemingly unstoppable debate), at least for planning our first choice would be to use abduction with a linear time structure rather than deduction with a hypothetical branching time structure. Again, there are some remarks about this in the original (JLP) paper on the Language E.
You wrote:
My other questions are about your logic program. I do not fully understand your definition of initiation point. Do I understand correctly that it should be changed? If so, what happens with the correctness of logic program? 
You wrote:
It may be useful to use some semantics of logic program instead of using SLDNF directly. SLDNF can give some results which are correct w.r.t. your specification even though the program is semantically meaningless (Say, its Clark's completion is too weak or inconsistent, or it does not have stable model, etc.) If you prove that the program is semantically correct one will be able to use this result directly even if your program is run on, say, XDB or SLG (which checks for some loops) and not under Prolog. 
You wrote:
Finally, more comments on LP4 will help. I find comments like "
Rob and Tony.
Q5. François Lévy (4.3):
Dear Antonis and Rob
Here are two late questions about your paper.
First, according to your view of ramifications, fluents can be initiated/terminated in two ways: either when an event occurs, or due to changing fluents in a constraint. The formal difference is that a fluent changing its value is not by itself an event. Do you consider this to rely on an ontological difference, i.e. that in the process of modelling the real world two kinds of objects of different nature have to be considered: events on the one side, (instantly) changing fluents on the other? Or do you consider both similar, and make a difference on purely technical grounds (triggered events don't work to render this if the time line is dense)?
Second, as far as I understand, your predicate 'Whenever' embeds both a domain constraint and a notion of influence, in Michael Thielscher's sense in his AI97 paper. The domain constraint is what you call the static view, i.e. 'Whenever' being replaced by a material implication. The influence information is: in the formula L Whenever C, only L can be initiated, so one domain constraint yields as many Whenever formulas as fluents can be influenced in it.
Best Regards
François
A5. Antonis Kakas and Rob Miller (31.3):
Dear Francois,
Thanks again for your questions.
First, according to your view of ramifications, fluents can be initiated/terminated in two ways: either when an event occurs, or due to changing fluents in a constraint. The formal difference is that a fluent changing its value is not by itself an event. Do you consider this to rely on an ontological difference, i.e. that in the process of modelling the real world two kinds of objects of different nature have to be considered: events on the one side, (instantly) changing fluents on the other? Or do you consider both similar, and make a difference on purely technical grounds (triggered events don't work to render this if the time line is dense)?

So the choice of whether to use r-propositions as well as c-propositions when modelling a particular domain is partly pragmatic. "¬ Switch2 whenever {Relay}" can be read as "all the events which initiate Relay also terminate Switch2". The use of this r-proposition thus enables us to avoid writing a whole series of terminates propositions for Switch2 corresponding to each of the initiates propositions for Relay.
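This pragmatic reading can be sketched executably; the expansion pattern is from the discussion above, but the concrete action names and preconditions below are invented for illustration only.

```python
# Hedged sketch: an r-proposition of the form "¬X whenever {R}" abbreviates
# one "terminates" proposition for X per "initiates" proposition for R.

def derived_terminates(cprops, trigger, terminated):
    """For each 'A initiates trigger when C', emit 'A terminates terminated when C'."""
    return [(a, terminated, conds)
            for (a, f, conds) in cprops if f == trigger]

cprops = [
    ("CloseSwitch1", "Relay", ["Switch1OK"]),   # illustrative, not from the paper
    ("RepairRelay", "Relay", []),               # illustrative, not from the paper
]
print(derived_terminates(cprops, "Relay", "Switch2"))
# [('CloseSwitch1', 'Switch2', ['Switch1OK']), ('RepairRelay', 'Switch2', [])]
```

One r-proposition stands in for the whole generated series, and it keeps doing so as further initiates propositions for Relay are added.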
But there are other advantages in using r-propositions, as identified in the paper. Not least, it helps with succinctly and correctly capturing the effects of concurrent events. For example, the concurrent 'stuffy room' example in the paper is difficult to describe in
Second, as far as I understand, your predicate 'Whenever' embeds both a domain constraint and a notion of influence, in Michael Thielscher's sense in his AI97 paper. The domain constraint is what you call the static view, i.e. 'Whenever' being replaced by a material implication. The influence information is: in the formula L Whenever C, only L can be initiated, so one domain constraint yields as many Whenever formulas as fluents can be influenced in it. But Thielscher's

    Alive ↔ ¬ Dead
would be represented with the r-propositions
    Alive whenever {¬ Dead}
    ¬ Alive whenever {Dead}
    Dead whenever {¬ Alive}
    ¬ Dead whenever {Alive}
However, we feel that it would be difficult to establish a formal correspondence between our approach to ramifications and Michael Thielscher's, for the reasons outlined in our discussion section. There seems to be a difference in the approaches in that Michael's effect propagation is 'approximately' instantaneous, whereas ours is 'truly' instantaneous (this is not to say that either is right or wrong, just that they're modelling slightly different concepts). This difference manifests itself in domains such as Michael's 'light detector' example (see Section 5 of his AI97 paper). We'd model the introduction of the detector with the single r-proposition
Detect whenever {Light}
But we wouldn't get the same 'nondeterministic' behaviour of the detector that Michael gets, i.e. we wouldn't get the model in which the detector is activated when Switch1 is connected. Indeed, this model wouldn't make sense in a narrative-based formalism with explicit time: the detector would have been activated even though there was no timepoint at which the light was on. (Michael expands on the theme of 'approximately' versus 'truly' instantaneous effects in his related paper in the proceedings of Common Sense '98.)
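The explicit-time argument can be sketched with a toy check (function names and the integer timeline 0-9 are our own illustrative assumptions):

```python
# Hedged sketch: with "Detect whenever {Light}", Detect can only be
# established at a timepoint at which Light actually holds in the history H.
# If Light holds at NO timepoint, no model can activate the detector.

def whenever_points(H, condition, timepoints):
    """Timepoints at which the r-proposition's condition holds in H."""
    return {t for t in timepoints if H(condition, t)}

timeline = range(10)

# A history in which the light is never on (the 'approximately instantaneous'
# connect-and-disconnect leaves no timepoint with Light true):
H_never_on = lambda f, t: False
print(whenever_points(H_never_on, "Light", timeline))   # set(): Detect never triggered

# A history in which the light is on at timepoint 5:
H_on_at_5 = lambda f, t: t == 5
print(whenever_points(H_on_at_5, "Light", timeline))    # {5}
```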
Tony and Rob.
Q6. Anonymous Reviewer 2 (23.4):
The paper by Baral, Gelfond and Provetti published recently in JLP describes an
A6. Antonis Kakas and Rob Miller (3.5):
Yes, this paper is clearly related to the themes of both the present article and our previous paper on the Language E.
Q7. Anonymous Reviewer 3 (23.4):
Your paper makes the following contributions:
Both these contributions are welcome. The ramification problem
is an important problem in temporal reasoning which is
still not well understood. Studying the problem in the context
of a unified temporal language has the potential to shed light
on the connection between the ramification problem and other problems
in temporal reasoning, though see below for further comments.
The translation between
First, and most saliently, the paper does not explain why your approach solves the ramification problem. (Indeed, you don't explain why the approach solves the frame problem either, though that presumably was the job of the 1997 JLP paper.) It would be helpful to give some intuition of why this central problem in temporal reasoning arises, what other approaches have been suggested, how these approaches succeed and fail, what this approach provides, intuitively, in the way of a solution to the ramification problem, and how this approach compares to other approaches.
You do the last (comparing your approach to other approaches) briefly, in the beginning of section 5, but this treatment is too cursory and raises almost more questions than it answers. For example, in comparing your approach to those of Thielscher, McCain and Turner, and Lin, you argue that their approach is essentially a causal-based approach, because the effect of action occurrences cannot be propagated backward through r(amification)-propositions. To this reviewer, this fact hardly seems to be the characteristic fact of causal theories. A deeper analysis of what makes a causal theory, whether sets of axioms in E can be considered causal theories, and how causal approaches can be used to solve the ramification problem, would be helpful here.
Also very desirable would be a discussion of how solutions to the ramification problem interact with solutions to the frame problem. In particular, there is often a duality between the two problems, in that the frame problem is often seen as a mainly representational problem, whose solutions may worsen things from the computational point of view, and the ramification problem is often seen as mainly a computational problem, whose solutions may worsen things from the representational point of view. How do your two solutions interact? A discussion would be useful.
A7. Antonis Kakas and Rob Miller (3.5):
We're not sure if we would go as far as to state that we have "solved the ramification problem." Like the frame problem, not everyone agrees exactly what this problem is. The analysis in our paper is that
the ramification problem arises in domains whose description most naturally includes permanent constraints or relationships between fluents. In formalisms which allow for such statements, the effects of actions may sometimes be propagated via groups of these constraints. The problem is to adequately describe these propagations of effects, whilst retaining a solution to the frame problem  that is, the problem of succinctly expressing that most actions leave most fluents unchanged. 
The current state of A.I. unfortunately doesn't permit a definitive statement of what makes a causal theory; it seems to mean different things to different subcommunities (as witnessed in the recent AAAI Spring Symposium on Causality in Reasoning About Actions). Thielscher and others merely make a technical distinction between "causal-based" and "categorisation-based" contributions to the ramification problem. Our contribution is "causal-based" in this limited technical sense, in that it doesn't categorise fluents, but does have a unidirectional ("whenever") "connective". But we accept that perhaps it's not so healthy to hijack the word "causal" for a rather specialised technical use in this way.
We reject the view that the frame problem is a mainly representational problem and the ramification problem is mainly a computational problem. We see both problems as having representational and computational aspects.
We accept that more in-depth analyses are needed of the relationships between formalisms for reasoning about actions in general, and approaches to ramifications in particular. Ultimately, the best way to do this is by providing translation methods and showing that these are "sound" and/or "complete" for well-defined classes of domains. We haven't had time to do this yet, but it's on our agenda of future work on the Language E.
Q8. Anonymous Reviewer 3 (23.4):
The examples in the paper would be more helpful if they were expanded more. Examples:
In a domain description with no h-propositions or t-propositions at all, it would be possible to construct a model where ... WindowClosed and VentClosed were true at all timepoints, but Stuffy was false.
In particular, if we replace (sr9) with "CloseVent happens-at 3" our semantics does not give rise to the type of anomalous model that is problematic for some other approaches ... in which a change at 3 from not Stuffy to Stuffy is avoided by incorporating an unjustified change from WindowClosed to not WindowClosed.
A8. Antonis Kakas and Rob Miller (3.5):
The domain description on page 6 was:
    CloseWindow initiates WindowClosed
    CloseVent initiates VentClosed
    OpenWindow terminates WindowClosed
    OpenVent terminates VentClosed
    CloseWindow initiates Stuffy when {VentClosed}
    CloseVent initiates Stuffy when {WindowClosed}
    OpenWindow terminates Stuffy
    OpenVent terminates Stuffy
Let H be the interpretation for this domain defined as follows:
    H(WindowClosed, t) = true, for all t
    H(VentClosed, t) = true, for all t
    H(Stuffy, t) = false, for all t
Since there are no h-propositions in this domain, by Definition 8 there are no initiation points or termination points w.r.t. H. Hence H satisfies conditions 1-4 of Definition 9, and so is a model of the domain.
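For this h-proposition-free case, the check can be sketched executably. The helper below is our own encoding, under the assumption (from the argument above) that with no h-propositions a model simply never changes any fluent's truth value.

```python
# Hedged sketch: with no h-propositions there are no initiation or
# termination points, so an interpretation is a model exactly when it is
# constant over time in every fluent, whatever values it happens to assign.

def is_persistent_model(H, fluents, hprops, timepoints):
    """Model check for the h-proposition-free case only."""
    if hprops:
        raise NotImplementedError("sketch covers the h-proposition-free case only")
    return all(len({H(f, t) for t in timepoints}) == 1 for f in fluents)

# The interpretation H defined above: windows and vent closed, never stuffy.
H = lambda f, t: {"WindowClosed": True, "VentClosed": True, "Stuffy": False}[f]
print(is_persistent_model(H, ["WindowClosed", "VentClosed", "Stuffy"], [], range(5)))
# True: H is a model, even though it violates the intuitive "stuffiness" link.
```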
Re the "stuffy room" example on page 8, this is of course the classic illustration of why naive minimisation of change doesn't work when domain constraints are included in a domain. For example, if a situation calculus theory includes the constraint
    Holds(Stuffy, s) ← Holds(VentClosed, s) ∧ Holds(WindowClosed, s)
    ¬ Holds(VentClosed, s) ← ¬ Holds(Stuffy, s) ∧ Holds(WindowClosed, s)
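The failure of naive minimisation of change can be sketched concretely (the state encoding is our own; it assumes the constraint Stuffy holds iff both VentClosed and WindowClosed hold, and that CloseVent directly makes VentClosed true):

```python
# Hedged sketch: after CloseVent, two constraint-satisfying successor states
# have incomparable sets of changed fluents, so minimising change alone
# cannot rule out the anomalous one.
from itertools import product

FLUENTS = ["WindowClosed", "VentClosed", "Stuffy"]

def constraint(s):
    return s["Stuffy"] == (s["VentClosed"] and s["WindowClosed"])

before = {"WindowClosed": True, "VentClosed": False, "Stuffy": False}
direct = {"VentClosed": True}                      # direct effect of CloseVent

candidates = []
for values in product([False, True], repeat=len(FLUENTS)):
    s = dict(zip(FLUENTS, values))
    if constraint(s) and all(s[f] == v for f, v in direct.items()):
        changed = {f for f in FLUENTS if s[f] != before[f]}
        candidates.append((s, changed))

# Keep states whose set of changed fluents is minimal under set inclusion:
minimal = [s for s, c in candidates
           if not any(c2 < c for _, c2 in candidates)]
for s in minimal:
    print(s)
# Both the intended state (Stuffy becomes true) and the anomalous state
# (WindowClosed becomes false) survive minimisation.
```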
The reviewer also wrote:
"In the same vein, it's not clear why Thielscher's approach has trouble with the last variation of the switch example that you discuss in section 3. A more detailed discussion would help."
Q9. Anonymous Reviewer 3 (23.4):
The unique contributions of this paper over the JLP paper are not so explicitly stated, namely, the introduction of the "whenever" construct into E.
A9. Antonis Kakas and Rob Miller (3.5):
You summarised the contributions very well at the top of your report:
Q10. Anonymous Reviewer 3 (23.4):
The writing is in general clear, understandable, and straightforward, but there are several places which were unclear, or in which an additional English gloss would be helpful. Specifically:
A10. Antonis Kakas and Rob Miller (3.5):
"p. 4: It is not clear what the partial order is supposed to range over. In the 10th line from the bottom on this page, is the relation on points (1st, 2nd, and 4th items in that line) or on sequences (3rd item in the line)?"
The partial order ranges over all items in the set of timepoints
"p. 7, clause 2 of Def. 14, and p. 12, clause 2 of Proposition 2: In both cases, an English gloss would be helpful. (That is, an intuitive explanation of when a ramification statement is true. This is, after all, the heart of the paper, and extra effort and space to make this well understood would be well worth it.)" 
As we stated in our discussion with Tom Costello (interactions C26), the r-proposition "L whenever C" can be read as "C is a minimally sufficient cause for L". So, to quote from the paper, "at every timepoint that C holds, L holds, and hence every action occurrence that brings about C also brings about L". So, "in order to find timepoints at which the fluent literal L is established via the r-proposition `L whenever C', we need to look for timepoints at which one or more of the conditions in C become established, and at which the remaining conditions are already and continue to be satisfied (up to some timepoint beyond the point in question)." Clauses 2 of both Definition 14 and of Proposition 2 are mathematical articulations of this last statement.
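This gloss can be sketched over integer time. The check below is a rough illustration, not the paper's exact clause 2; the function name and the explicit window parameter t_after are our own assumptions.

```python
# Hedged sketch of the gloss: t establishes L via "L whenever C" if at least
# one condition in C becomes established at t (false at t, true afterwards),
# while the remaining conditions already hold at t and continue to hold up
# to some later timepoint t_after.

def establishes_via_whenever(H, C, t, t_after):
    """Check the gloss on clause 2 over the window [t, t_after]."""
    newly = [c for c in C
             if not H(c, t) and all(H(c, u) for u in range(t + 1, t_after + 1))]
    rest = [c for c in C if c not in newly]
    return bool(newly) and all(
        H(c, u) for c in rest for u in range(t, t_after + 1))

# WindowClosed holds throughout; VentClosed becomes established at 3:
H = lambda f, t: True if f == "WindowClosed" else t >= 4
print(establishes_via_whenever(H, ["WindowClosed", "VentClosed"], 3, 5))  # True
print(establishes_via_whenever(H, ["WindowClosed", "VentClosed"], 1, 2))  # False
```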
Q11. Anonymous Reviewer 3 (23.4):
The online ETAI discussions highlighted a number of interesting points, including the issue of using a special-purpose language
A11. Antonis Kakas and Rob Miller (3.5):
The revised version of our paper (now available via the ETAI web pages) includes some extra remarks relating to various points raised in the ETAI interactions. We also very much hope that the paper will be read in conjunction with the online discussion.
Additional questions and comments, as well as the answers from the author(s), will be added into this structure. They will also be circulated by email to the area subscribers.
This debate is moderated by Erik Sandewall. The present protocol page was generated automatically from components containing the successive debate contributions.