Moderated by Erik Sandewall.
 

Iliano Cervesato, Massimo Franceschet, and Angelo Montanari

The Complexity of Model Checking in Modal Event Calculi with Quantifiers

The article mentioned above has been submitted to the Electronic Transactions on Artificial Intelligence, and the present page contains the review discussion. Click for more explanations and for the webpage of the authors: Iliano Cervesato, Massimo Franceschet, and Angelo Montanari.

Overview of interactions

N:o   Question                 Answer(s)                 Continued discussion
1a    17.5  Paolo Liberatore   24.5  Angelo Montanari
1b    17.5  Paolo Liberatore   24.5  Angelo Montanari
1c    17.5  Paolo Liberatore   24.5  Angelo Montanari
1d    17.5  Paolo Liberatore   24.5  Angelo Montanari
2     27.5  Peter Jonsson      14.6  Angelo Montanari
3     17.7  Rob Miller         5.8   The authors
 

Q1a. Paolo Liberatore (17.5):

Questions 1a and 1b are comments on the presentation; questions 1c and 1d are requests for clarification.

Page 2, col 2, second-last para: "We represent an MVI for p as p(ei, ej) ...". I found this sentence a little misleading. Indeed, the notation p(ei, ej) refers to any 3-tuple <property, event, event>. The aim of queries is to decide whether such a 3-tuple is an MVI. From the sentence written in the paper, it seems that p(ei, ej) is used to state that (ei, ej) is an MVI for p. Perhaps this sentence should be rewritten in some way.

A1a. Angelo Montanari (24.5):

In the following, we quote a larger excerpt from the paper, including the sentence "We represent an MVI for p as p(ei, ej) ...". We slightly changed the notation to make it easier to read.

  Given an EC-structure H = (E, P, [.>, <.], ].,.[) and a knowledge state w, the Event Calculus (EC for short) permits inferring the maximal validity intervals (MVIs) over which a property p holds uninterruptedly. We represent an MVI for p as p(ei, et), where ei and et are the events that respectively initiate and terminate the interval over which p holds maximally. Consequently, we adopt as the query language of EC the set

     { p(e1, e2)  |  p belongs to P and e1, e2 belong to E }
of all such property-labeled intervals over H. We interpret the elements of the language as propositional letters, and the task performed by EC reduces to deciding which of these formulas are MVIs in the current knowledge state w and which are not.

We believe that reading the sentence in its context should allow one to conclude that every MVI is a 3-tuple <property, event, event>, but not every 3-tuple <property, event, event> is an MVI.
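
As a purely illustrative aside (ours, not taken from the paper or its lambda-Prolog implementation), the distinction can be phrased in a few lines of Python; the property and event names are invented for the example.

    # Toy EC-structure fragment: invented property and event names.
    properties = {"on"}
    events = {"e1", "e2", "e3"}

    # The query language is the set of ALL property-labeled pairs of events ...
    query_language = {(p, ei, ej) for p in properties for ei in events for ej in events}

    # ... whereas the MVIs are the subset of those triples validated by EC in the
    # current knowledge state w (here simply assumed to have been computed already).
    mvis = {("on", "e1", "e2")}

    print(("on", "e1", "e2") in query_language, ("on", "e1", "e2") in mvis)  # True True
    print(("on", "e1", "e3") in query_language, ("on", "e1", "e3") in mvis)  # True False

Every query is a well-formed formula of the language; only some of them are MVIs.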


Q1b. Paolo Liberatore (17.5):

The definition of MVI states that between e1 and e2 there is no event that initiates p. I would like you to spend a few words explaining why.

A1b. Angelo Montanari (24.5):

We discussed this point in some detail in previous publications (cf. reference [4] as well as [j-ci-12-359]). Your comment indicates that it is probably useful to add a brief explanation immediately after Definition 2.2.

In the following, we report part of the discussion provided in [4].

Definition 2.2 states that an event e interrupts the validity of a property p if it initiates or terminates p itself or a property q which is incompatible with p. This rule adopts the so-called strong interpretation of the initiate and terminate relations: a given property p ceases to hold over a time interval if an event e which initiates or terminates p, or a property incompatible with p, occurs during this interval. The strong interpretation is needed when dealing with incomplete sequences of events or, as in our case, incomplete information about their ordering. For example, consider a switch that can take two different positions: on and off. Its behavior can be described by means of two types of event: one that changes the position from off to on (turn-on), the other from on to off (turn-off). While two turn-on events cannot occur consecutively in the real world, it may happen that an incomplete sequence consisting of two consecutive turn-on events, followed by a turn-off event, is recorded in the database. The strong interpretation of the initiate relation allows EC to recognize that a missing turn-off event must have occurred between the two turn-on events. However, since it is not possible to temporally locate such an event, EC only concludes that the switch is on between the second turn-on event and the turn-off event, and it considers the first turn-on event as a pending initiating event. It is worth mentioning that an alternative interpretation of the initiate and terminate relations, called weak interpretation, is also possible. According to such an interpretation, a property p is initiated by an initiating event unless it has been already initiated and not yet terminated (and dually for terminating events). The weak interpretation is needed to aggregate homogeneous states.

Further details about the strong/weak distinction, including formalization and other examples, can be found in [4].
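
To make the strong interpretation concrete, here is a small sketch of our own (in Python, not the paper's lambda-Prolog code) that derives the MVIs of the switch example for a totally ordered record turn-on, turn-on, turn-off; event names are invented.

    # Recorded (incomplete) course of events, in temporal order.
    events = ["e1", "e2", "e3"]                 # e1, e2: turn-on; e3: turn-off
    initiates = {"e1": {"on"}, "e2": {"on"}}    # turn-on events initiate "on"
    terminates = {"e3": {"on"}}                 # the turn-off event terminates "on"

    def affects(e, p):
        # Strong interpretation: any initiating or terminating event for p interrupts p.
        return p in initiates.get(e, set()) or p in terminates.get(e, set())

    def mvis(p):
        result = []
        for i, ei in enumerate(events):
            for j in range(i + 1, len(events)):
                ej = events[j]
                if p in initiates.get(ei, set()) and p in terminates.get(ej, set()) \
                   and not any(affects(e, p) for e in events[i + 1:j]):
                    result.append((p, ei, ej))
        return result

    print(mvis("on"))   # [('on', 'e2', 'e3')]: e1 remains a pending initiating event

The pair (e1, e3) is discarded because the intervening turn-on event e2 interrupts the property, exactly as described above.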

References:

[j-ci-12-359] L. Chittaro and A. Montanari. Efficient Temporal Reasoning in the Cached Event Calculus. Computational Intelligence, vol. 12, pp. 359-382.


Q1c. Paolo Liberatore (17.5):

Page 9, col 1, para 3: the bound O(n^2) for reachability is due to the fact that you consider the number of events as the size of the input. Indeed, reachability in a graph is linear in the total size of the graph, that is, O(n+m), where n is the number of vertices and m is the number of edges. This implies for example that the cost of EC is O(s^2) if s is the total size of the input and not only the number of events. Is there any reason for choosing the number of events to represent the size of H?

A1c. Angelo Montanari (24.5):

An instance of the model checking problem we analyzed in the paper is a triple (H, w, varphi), where H is an EC-structure, w is a (strict) partial order on the events in E, and varphi is a well-formed formula.

Given an EC-structure H, E is the set of events (we can assume that they are instances of a finite set of domain-specific event types), which can be arbitrarily large, while P is the set of relevant properties, which are fixed once and for all and characterize the considered domain. For instance, in the light example, we may have two events e1 and e2, both of type turn-on, which initiate the property on. Since the cardinality of P does not change from one problem instance to another (unless we change the application domain), while the cardinality of E may grow arbitrarily, we chose the cardinality of E, that is, the number n of events, as the size of H (this is usually called data complexity).

The cardinality of w, here denoted by m, is polynomially bounded in n (more precisely, it is at most n(n-1)/2). We did not choose m as a complexity parameter, because we are interested in explicitly characterizing the amount of available ordering information with respect to the given number of events. As an example, we are interested in distinguishing between situations in which m = O(n) (sparse graphs) and situations in which m = O(n^2) (dense graphs).

Finally, the size k of varphi is not bounded by n, and it can grow arbitrarily, independently of n (clearly, this is not the case for basic EC). Hence, we took k as another relevant complexity parameter (which is usually called query complexity).
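
As an illustration of why a single ordering query costs O(n+m), and hence O(n^2) when measured in the number n of events alone, here is a small reachability sketch of our own (Python, invented names; the paper's implementation is in lambda-Prolog).

    # Does event e1 precede event e2 in the knowledge state w?  w is viewed as a
    # directed graph over the n events with m recorded ordering pairs; one
    # depth-first search costs O(n + m), and m <= n(n-1)/2 gives O(n^2) in n.
    def precedes(e1, e2, w, events):
        graph = {e: set() for e in events}
        for a, b in w:
            graph[a].add(b)
        stack, seen = [e1], {e1}
        while stack:
            e = stack.pop()
            for succ in graph[e]:
                if succ == e2:
                    return True
                if succ not in seen:
                    seen.add(succ)
                    stack.append(succ)
        return False

    events = {"e1", "e2", "e3"}
    w = {("e1", "e2"), ("e2", "e3")}             # invented ordering information
    print(precedes("e1", "e3", w, events))       # True, by transitivity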


Q1d. Paolo Liberatore (17.5):

Page 10, col 2, comment after Theorem 5.5: if k is just a bit high (for instance, 4), the cost of solving the problem becomes something on the order of O(n^7), which is usually considered not very useful. In the conclusions, you mention that the cost of the implemented version of QCMEC is very close to the theoretical value. Do you have empirical evidence that the efficiency of EQCMEC is also similar to the upper bound you proved? If so, the problem should be considered not actually feasible.

A1d. Angelo Montanari (24.5):

This last question is concerned with the following sentence, at the end of the complexity section:

  "The directness of the implementation allows checking easily that the complexity of the implemented algorithms coincide with the bounds proved in this section for the problems they implement."

Well, you are right, this sentence is probably a little misleading. Its intended meaning is that, given the directness of the proposed implementation, we expect that its worst-case complexity coincides with the proved bounds. We can possibly reformulate the above sentence as follows:

"The directness of the implementation allows checking easily that the complexity of the implemented algorithms coincide, in the worst case, with the bounds proved in this section for the problems they implement."

However, we are convinced that the actual costs of the implemented versions of EC, ECMEC and EQCMEC are NOT close to the theoretical upper bounds, even though we have no empirical evidence of this fact.

Take, for example, the case of EQCMEC. Here, the critical factor is the level of nesting of quantifiers. The worst case holds when we have an EQCMEC-formula involving q nested quantifiers at its top level. Let Q.F be such a formula, where Q is the quantification part, and let d be the dimension of F, q the number of quantifiers in Q, and n the number of events. Exploiting the unfolding strategy, we are led to check a formula of size d · n^q. Since both d and q are less than or equal to the size k of Q.F, we get the upper bound k · n^k for the size of the unfolded formula (without loss of generality, we can interpret the size k of Q.F as the number of propositional connectives, quantifiers and modal operators occurring in Q.F plus one).
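
The following sketch (ours; the formula representation is invented for the example and is not the paper's) shows the unfolding strategy behind the d · n^q bound: each event quantifier is expanded into an explicit disjunction or conjunction over the n events, so q nested quantifiers copy the matrix n^q times.

    events = ["e1", "e2", "e3"]

    def substitute(formula, var, event):
        # Replace every occurrence of the bound variable by a concrete event name.
        if formula == var:
            return event
        if isinstance(formula, tuple):
            return tuple(substitute(f, var, event) for f in formula)
        return formula

    def unfold(formula):
        # ('exists', x, body) becomes an 'or' over the events, ('forall', x, body) an 'and'.
        if isinstance(formula, tuple) and formula[0] in ("exists", "forall"):
            op, var, body = formula
            connective = "or" if op == "exists" else "and"
            return (connective,) + tuple(substitute(unfold(body), var, e) for e in events)
        return formula

    # Two nested quantifiers over 3 events: the matrix is copied 3**2 = 9 times.
    print(unfold(("forall", "x", ("exists", "y", ("mvi", "p", "x", "y")))))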

Furthermore, as mentioned in the paper, practical applications using modal event calculi with quantifiers are expected to model situations involving a large number of events, while the size of the queries, and in particular the nesting of quantifiers, will in general be limited. The hardware fault diagnosis example falls into this category (the nesting level of quantifiers is 2).

Finally, it is possible to show that the upper bound of O(n^3) for EC can actually be lowered by one degree by exploiting the notions of transitive reduction and transitive closure of an ordering relation. Preliminary results in this direction can be found in [2].

From the above elements, it follows that the hardware fault diagnosis problem can actually be solved in O(n^4).


Q2. Peter Jonsson (27.5):

Comments and questions to the authors:

Page 4, col 1, para 2: Please find a better name than "condition (*)".

Page 6, col 1, para 3: You write "The best approximation of /formula/ we can achieve...". In what sense is it the "best" approximation?

Chapter 4 "Implementation". I agree that the implementation of QCMEC in  lambda -prolog is very elegant. However, due to my own experience of prolog-like languages, I wonder how efficient the implementation is? I suggest that you include a small empirical investigations as follows: First, implement a QCMEC algorithm in some fast, imperative language such as C. This is straightforward since you have already proved that QCMEC is in PSPACE and your way of showing this (by exhibiting a polynomial-time alternating TM) immediately suggests a recursive algorithm for the QCMEC problem. (Even though such a naive algorithm may not be the fastest algorithm, it is probably efficient enough.) Then, compare your implementation in  lambda -prolog with the C-implementation on a number of test examples (such as the diagnosis example). Even though this experiment does not prove anything, it can very well give us an idea of the (in)efficiency of the implementations. I will not be surprised if the C-implementation completely outperforms the  lambda -prolog-implementation. I will be extremely surprised if the  lambda -prolog-implementation outperforms the C-implementation.

Page 8, before Th. 4.1: You write "We only show the statement of our soundness and completeness result since a fully worked out proof would require a very detailed account of the semantics of  lambda -prolog". So why implement QCMEC in  lambda -prolog at all? By implementing a straightforward QCMEC algorithm using a language with a more intuitive semantics, the soundness/completeness proof would be more or less trivial (once again, compare with your PSPACE-membership proof of QCMEC).

Chapter 5 "Complexity Analysis" Why not rearrange the section as follows: Begin the section by proving that QCMEC is in PSPACE. Then there is no need for first showing that CMEC is in PSPACE, extending the result to EQCMEC and finally realize that the proof for EQCMEC also works for QCMEC.

Page 8, col 2, proof of Th. 5.3.: You write "...the algorithm enters in an AND (resp. OR) state. It nondeterministically choose one among alpha and beta and evaluates it in w." This should be rephrased. If you are in an OR state, the sentence is correct, but if you are in an AND state, no nondeterministic choice takes place. Instead, the ATM checks that all paths starting in the AND state are accepting paths.

Page 10, col 1, end of proof of Th. 5.3. Please give the proof of why w |- F_1 iff G is true. Or, at least, give the reader a hint...

Page 11, col 1: You write "In Section 4 we have transliterated the definition of QCMEC and its subcalculi in the higher-order logic programming  lambda -prolog. The directness of the implementation allows checking easily that the complexity of the implemented algorithms coincide with the bounds proved in this section for the problems they implement". One should be extremely careful when justifying complexity results with implementations in logic programming languages. Such languages often introduce large overheads and it is difficult to know the exact complexity of the different operations (unification,...) that are performed by the systems. Your assertion may very well be correct, but it needs further justification.

A2. Angelo Montanari (14.6):

  Page 4, col 1, para 2: Please find a better name than "condition (*)".

Ok. Let us call it "singleton condition" since both the initiate and terminate maps associate each property with a singleton set (or an empty set).

  Page 6, col 1, para 3: You write "The best approximation of /formula/ we can achieve...". In what sense is it the "best" approximation?

Ok. Since we are not interested in providing a formal measure of the goodness of an approximation, we will rewrite the sentence as follows: "Let us consider the following approximation of Diamond varphi_i: ... which in general is not equivalent to, but only implied by, Diamond varphi_i."

  Chapter 4 "Implementation". I agree that the implementation of QCMEC in  lambda -prolog is very elegant. However, due to my own experience of prolog-like languages, I wonder how efficient the implementation is? I suggest that you include a small empirical investigations as follows: First, implement a QCMEC algorithm in some fast, imperative language such as C. This is straightforward since you have already proved that QCMEC is in PSPACE and your way of showing this (by exhibiting a polynomial-time alternating TM) immediately suggests a recursive algorithm for the QCMEC problem. (Even though such a naive algorithm may not be the fastest algorithm, it is probably efficient enough.) Then, compare your implementation in  lambda -prolog with the C-implementation on a number of test examples (such as the diagnosis example). Even though this experiment does not prove anything, it can very well give us an idea of the (in)efficiency of the implementations. I will not be surprised if the C-implementation completely outperforms the  lambda -prolog-implementation. I will be extremely surprised if the  lambda -prolog-implementation outperforms the C-implementation.

We presented a lambda-Prolog implementation of QCMEC for two reasons: first to show that this formalism is implementable, and quite easily so; second to promote in the AI community languages and a style of programming that permit direct encodings of the problem at hand, to the point that it is feasible to provide rigorous mathematical proofs of correctness (see below).

We did not consider efficiency issues. We would ourselves be very surprised if our lambda-Prolog implementation of QCMEC were to do better than even a naive C implementation. However, it is not our goal to compare the efficiency achievable with different programming languages.

  Page 8, before Th. 4.1: You write "We only show the statement of our soundness and completeness result since a fully worked out proof would require a very detailed account of the semantics of  lambda -prolog". So why implement QCMEC in  lambda -prolog at all? By implementing a straightforward QCMEC algorithm using a language with a more intuitive semantics, the soundness/completeness proof would be more or less trivial (once again, compare with your PSPACE-membership proof of QCMEC).

We strongly disagree. A correctness proof requires showing that the computation induced by the execution of an algorithm achieves the effects dictated by its specification, and nothing else. If we want to be serious about this, we must in particular relate the individual steps in the computation to aspects of the specification. In the case we consider, this is quite immediate since the representation language (lambda-Prolog) is a logic, and the specification of QCMEC is given by means of logical definitions. Relating lambda-Prolog derivations (i.e. execution traces) to these specifications is almost trivial; however, a fully formal proof requires examining all possible cases and all possible rules, making it long and tedious. A proof of this kind can be found in [4] (to be published in the Journal of Logic Programming). Had we relied on an implementation in an imperative language such as C, we believe that a similar proof, although theoretically possible, would simply not be feasible in practice (at least in a reasonable amount of time).

This was the logician's answer. The vast majority of the people who are likely to cast their eyes on our work will be interested in the more intuitive aspect of a correctness proof that we call immediacy: how far apart is the implementation from the formalism it implements? Here, our lambda-Prolog implementation is a direct transliteration of the specification of the semantics of QCMEC. It would not be so had we used C.

  Chapter 5 "Complexity Analysis" Why not rearrange the section as follows: Begin the section by proving that QCMEC is in PSPACE. Then there is no need for first showing that CMEC is in PSPACE, extending the result to EQCMEC and finally realize that the proof for EQCMEC also works for QCMEC.

You propose a top-down organization of the chapter; we arranged it in a bottom-up fashion, which gives us for free some of the hardness results (e.g., QCMEC-hardness immediately follows from EQCMEC-hardness). We prefer to keep things as they stand.

  Page 8, col 2, proof of Th. 5.3.: You write "...the algorithm enters in an AND (resp. OR) state. It nondeterministically choose one among alpha and beta and evaluates it in w." This should be rephrased. If you are in an OR state, the sentence is correct, but if you are in an AND state, no nondeterministic choice takes place. Instead, the ATM checks that all paths starting in the AND state are accepting paths.

In the paper, we describe an alternating polynomial time algorithm. The algorithm neither checks that all paths starting in an AND state are accepting paths, nor searches for an accepting path starting from an OR state. If it did, it would not be polynomial time-bounded. In both cases (AND and OR states), the algorithm simply guesses a path and follows it.

By definition, the alternating machine accepts an AND-state configuration (resp., an OR-state configuration) if and only if all (resp., at least one) of its successor configurations are accepting configurations. Thus, the distinction between AND states and OR states comes into play in the definition of acceptance.

It is worth noting that a nondeterministic algorithm for a given problem does not, by itself, solve it. Given a "yes" instance of the problem and a succinct certificate for it, the nondeterministic algorithm only checks that the instance is indeed a "yes" instance of the problem by exploiting the certificate. For example, in the case of SAT, given a satisfiable propositional formula F and a truth assignment for the variables in F that makes F true, a nondeterministic algorithm for SAT checks that the assignment satisfies F.

Notice also that AP is equal to PSPACE, that is, finding an alternating polynomial time algorithm for a given problem allows us to conclude that the problem can be solved in polynomial space. Thus, it is not surprising that the behaviour of the algorithm in an AND and in an OR state is the same, since the space requirements in both states are the same.

See [12] (chapter 16) for further details.
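
To make the AND/OR evaluation concrete, here is a deterministic counterpart of the alternating algorithm, sketched by us in Python (the formula representation and the atomic checker are invented; the paper works with an ATM, and the modal operators, which would be handled analogously by ranging over the extensions of w, are abstracted away here). Since AP = PSPACE, the recursion below runs in space polynomial in the formula size.

    def holds(formula, w, check_atom):
        # check_atom(atom, w) decides an atomic MVI query in knowledge state w.
        op = formula[0]
        if op == "and":          # AND state: all branches must accept
            return all(holds(f, w, check_atom) for f in formula[1:])
        if op == "or":           # OR state: at least one branch must accept
            return any(holds(f, w, check_atom) for f in formula[1:])
        if op == "not":
            return not holds(formula[1], w, check_atom)
        return check_atom(formula, w)

    # Toy usage with an invented atomic checker.
    atoms_true_in_w = {("mvi", "p", "e1", "e2")}
    query = ("and", ("mvi", "p", "e1", "e2"),
                    ("or", ("mvi", "q", "e1", "e3"),
                           ("not", ("mvi", "q", "e2", "e3"))))
    print(holds(query, None, lambda atom, w: atom in atoms_true_in_w))   # True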

  Page 10, col 1, end of proof of Th. 5.3. Please give the proof of why w |- F_1 iff G is true. Or, at least, give the reader a hint...

While answering this comment, we noticed a minor imprecision in the definition of the formula F_k. The correct formulation is as follows:

    F_k = \begin{cases}
            \Diamond \hat{F}                                                    & \text{if } k = n   \\
            \Diamond (\psi_{k+1} \wedge \Box \hat{F})                           & \text{if } k = n-1 \\
            \Diamond (\psi_{k+1} \wedge \Box (\psi_{k+2} \rightarrow F_{k+2}))  & \text{otherwise}
          \end{cases}

Coming back to your question, we will now provide further details in order to help the reader to understand the proof.

Let G be a QBF with variables in V = {x1, x2, ..., xn} and H be an EC-structure as defined in the proof.

A knowledge state w in W_H induces an assignment t_w of the variables in V: for every variable x in V, let t_w(x) be true if and only if w |= p_x(e_x, e_¬x). Otherwise, that is, if w |= ¬p_x(e_x, e_¬x), then t_w(x) is false. Notice that different knowledge states may induce the same assignment. Indeed, only the relative order of the events e_x and e_¬x for every variable x in V is relevant. This many-to-one correspondence between knowledge states and assignments establishes an equivalence relation on the set of all the knowledge states for H such that the truth of a propositional CMEC-formula on H (i.e. a CMEC-formula on H without modalities) is invariant within each equivalence class.

If F is a propositional formula (the matrix of some QBF G), the corresponding (propositional) CMEC-formula hat(F) is obtained by replacing every occurrence of the variable x in F with p_x(e_x, e_¬x). Given a knowledge state w in W_H, w |= hat(F) iff t_w satisfies F. Hence, w |= Diamond hat(F) (resp. w |= Box hat(F)) iff F is satisfiable (resp. valid).

In the proof, we exploit this correspondence in order to map the propositional quantifiers of a QBF into the modal operators of a CMEC-formula. However, there is a technical problem with this solution: the scope of a quantifier is limited to the variable that it quantifies, whereas the scope of a modal operator in the formula Box F or Diamond F extends to the whole formula F. In the proof, we take advantage of the formulas psi_k in order to restore the correct context for the evaluation of hat(F).

We invite the skeptical reader to work out some small examples with two or three nested quantifiers.
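
As one such small example (ours, not taken from the paper), the induced assignment and the evaluation of the matrix can be sketched in Python; the knowledge state is abstracted here as a total order of the events e_x, e_¬x, since only their relative order matters.

    variables = ["x1", "x2"]

    def induced_assignment(order):
        # t_w(x) is true iff e_x precedes e_notx in the given order.
        position = {e: i for i, e in enumerate(order)}
        return {x: position["e_" + x] < position["e_not" + x] for x in variables}

    def matrix(assignment):
        # Matrix of an example QBF, say F = (x1 or not x2).
        return assignment["x1"] or not assignment["x2"]

    w1 = ["e_x1", "e_notx1", "e_notx2", "e_x2"]   # induces {x1: True, x2: False}
    w2 = ["e_notx1", "e_x1", "e_x2", "e_notx2"]   # induces {x1: False, x2: True}
    print(matrix(induced_assignment(w1)))   # True:  hat(F) holds in this knowledge state
    print(matrix(induced_assignment(w2)))   # False: hat(F) fails in this knowledge state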

  Page 11, col 1: You write "In Section 4 we have transliterated the definition of QCMEC and its subcalculi in the higher-order logic programming  lambda -prolog. The directness of the implementation allows checking easily that the complexity of the implemented algorithms coincide with the bounds proved in this section for the problems they implement". One should be extremely careful when justifying complexity results with implementations in logic programming languages. Such languages often introduce large overheads and it is difficult to know the exact complexity of the different operations (unification,...) that are performed by the systems. Your assertion may very well be correct, but it needs further justification.

This observation is true in general of the implementation of any programming language. We do not have enough insight into the internals of the implementation we used to answer it one way or the other. However, because of the relatively simple structure of our programs, we do not expect the overhead introduced by the interpreter we used to alter the bounds we obtained.


Q3. Rob Miller (17.7):

Dear Iliano, Massimo and Angelo,

I've several questions about your paper "The Complexity of Model Checking in Modal Event Calculi with Quantifiers" submitted to the ETAI.

Generally, I would like to understand the relationship between the modal logic event calculi you describe and refer to in your paper and the classical logic versions of the event calculus that people such as myself and Murray Shanahan use. So any remarks you have in this respect would be helpful. Here are some more specific questions related to this:

  • Do you see your formalisms as what we describe as "narrative" formalisms - i.e. formalisms in which the flow of time is described independently of any actions that occur?

  • Is the underlying concept of time linear? By "partial order" of events, do you mean "incomplete information about a total order", or are your techniques applicable to genuinely partially ordered time structures (such as the forward branching time structure employed by many versions of the Situation Calculus)?

  • I'm confused about the syntax of your language(s). Are there any meta-predicates in your language other than "br"? Perhaps some examples would help. How roughly might you express information such as:
             "Pushing causes the door to open 
                               if it's not locked"
             "The door was pushed at 9 o'clock"
    
    (In the classical logic variants of the event calculus, you'd write formulas such as
        (FORALL t).[Initiates(Push,Open,t)  <-  
                              NOT Holds(Locked,t)].
        Happens(Push,9).
    

Apologies if answers to these questions are already to be found in the other papers to which you refer. As a proponent of "the event calculus" I regret not knowing more about your previous work.

Rob

C3-1. Iliano Cervesato, Massimo Franceschet, and Angelo Montanari (5.8):

Dear Rob,

Thank you very much for your questions. In the following, we provide some remarks about our modal event calculi that will hopefully help clarify the issues you raised.

  Generally, I would like to understand the relationship between the modal logic event calculi you describe and refer to in your paper and the classical logic versions of the event calculus that people such as myself and Murray Shanahan use. So any remarks you have in this respect would be helpful. Here are some more specific questions related to this:

Let us briefly run again along the path that led us to develop our set of modal event calculi.

Roughly speaking, the original Event Calculus (EC) is a simple temporal formalism designed to model situations characterized by a set of events, whose occurrences have the effect of starting or terminating the validity of certain properties. Given a description of when these events take place and of the properties they affect, EC is able to determine the maximal validity intervals (MVIs) over which a property holds uninterruptedly.

We focused our attention on situations where only partial information about the ordering of event occurrences is at our disposal. More precisely, we limited our investigation to problems consisting of a fixed set of events that are known to have happened, but with incomplete information about the relative order of their occurrences. In these situations, the MVIs derived by EC bear little relevance, since the acquisition of additional knowledge about the actual event ordering might both dismiss current MVIs and validate new MVIs. It is instead critical to compute precise variability bounds for the MVIs of the (currently underspecified) actual ordering of events. In [7,8], optimal bounds have been identified in the set of necessary MVIs and in the set of possible MVIs. They are, respectively, the subset of the current MVIs that are not invalidated by the acquisition of new ordering information and the set of intervals that are MVIs in at least one extension of the current ordering of events. In [1], we proposed a Modal Event Calculus (MEC), which, given a possibly incomplete specification of the ordering of event occurrences, extends EC with the possibility of inquiring about MVIs (MVIs computed by the original calculus), necessary MVIs (MVIs which hold in all extensions of the given ordering), and possible MVIs (MVIs which hold in at least one extension). In [3,4,5,6], we generalized such a modal framework to effectively model significant situations (we considered case studies taken from the domain of hardware fault diagnosis). We defined a number of modal event calculi that allow a free mix of modal operators, boolean connectives and preconditions, focusing on the interactions between modal operators and boolean connectives in [3,4], and between modal operators and preconditions in [6]. In [5], the expressiveness and complexity of the resulting hierarchy of modal event calculi are systematically investigated. In the ETAI paper, we further extend our modal event calculi by admitting arbitrary event quantifications.

Other differences between our modal event calculi and the classical logic versions of EC that are worth mentioning are the following:

  • we provided EC with a model-theoretic formalization by means of the notions of EC-structure (which specifies the set of event occurrences, the set of relevant properties, the initiating and terminating maps that respectively associate each property with the events that initiate and terminate it, the exclusivity relation that identifies incompatible properties), knowledge state (a strict partial order over the set of occurred events), EC query language (the set of all property-labeled intervals over the EC-structure), and EC intended model (the set of MVIs computed by EC);

  • we generalized such a formalization to all modal event calculi we proposed; the generalization actually consists in the extension of the EC-structure to deal with preconditions, of the (modal) query language to encompass modal operators and boolean connectives, and of the intended model to cope with the additional structural and/or linguistic features;

  • we gave sound and complete implementations of the various modal event calculi in higher-order logic programming languages (hereditary Harrop formulas, lambda-Prolog).

 
  • Do you see your formalisms as what we describe as "narrative" formalisms - i.e. formalisms in which the flow of time is described independently of any actions that occur?

  • Is the underlying concept of time linear? By "partial order" of events, do you mean "incomplete information about a total order", or are your techniques applicable to genuinely partially ordered time structures (such as the forward branching time structure employed by many versions of the Situation Calculus)?

As described above, at the beginning of our research work (cf. [7,8]), we restricted our attention to complete actual courses of events, whose temporal ordering, specified by means of a set of ordered pairs of events, is only partially known. The considered scenarios are thus characterized by a set of occurred events, which is fixed once and for all, and does not include concurrent or divisible events, and by incomplete (but consistent) information about an actual total order, given in terms of the relative order of pairs of event occurrences. Modal EC (cf. [1]) allows one to determine the sets of MVIs, necessary MVIs, and possible MVIs, and to constrain their evolution as further (consistent) information about the ordering (that is, further pairs of ordered events) is added.

However, most of the restrictions of the original Modal EC (which, by inertia, we do not explicitly remove from its successive generalizations [3,4,5,6]) can actually be relaxed:

  • completeness of the set of event occurrences (we can have incomplete knowledge about both the event occurrences and their temporal ordering);

  • occurred events only (the events can be either actual events or hypothetical ones).

Indeed, none of the proved results rely on such restrictions.

 
  • I'm confused about the syntax of your language(s). Are there any meta-predicates in your language other than "br"? Perhaps some examples

According to the proposed approach, the meta-predicate "br" does not belong to any modal query language (syntax); it comes into play in the definition of the intended models of our modal languages (semantics) as well as in their implementations.

Obviously, nothing prevents us from extending the query languages to allow one to retrieve information about, for instance, the initiating and terminating maps. Our original (and current) goal, however, was to support queries about (any combination of) MVIs only.

  would help. How roughly might you express information such as:
  "Pushing causes the door to open if it's not locked"

"The door was pushed at 9 o'clock"

  (In the classical logic variants of the event calculus, you'd write formulas such as
    (FORALL t).[Initiates(Push,Open,t) <- NOT Holds(Locked,t)].
    Happens(Push,9).)

Such an example can easily be expressed in the Event Calculus with preconditions (PEC) described in [6]. A structure for PEC (PEC-structure) differs from an EC-structure in the addition of a context argument to the initiating and terminating maps (which specifies the set of preconditions that must hold for an event to actually initiate or terminate a given property) and in the removal of the exclusivity relation (the presence of preconditions in PEC permits an easy emulation of the exclusivity relation).

We express the condition that

  "pushing causes the door to open if it's not locked"
by using the initiating and terminating maps of the PEC-structure as follows.

Let OPEN and UNLOCK be two properties, and PUSH, SHUT, KEY_L, and KEY_R be four events. The event PUSH initiates the property OPEN with precondition UNLOCK, that is, a push event opens the door if it is unlocked. Moreover, the event SHUT terminates the property OPEN with no preconditions. Finally, the event KEY_L initiates the property UNLOCK, while the event KEY_R terminates it.
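
Purely as an illustration of this structure (our own sketch in Python, not the paper's formal definition of PEC-structures), the maps just described can be written down as data:

    properties = {"OPEN", "UNLOCK"}
    events = {"PUSH", "SHUT", "KEY_L", "KEY_R"}

    # initiates[e] maps each property initiated by e to its set of preconditions.
    initiates = {
        "PUSH":  {"OPEN": {"UNLOCK"}},   # pushing opens the door if it is unlocked
        "KEY_L": {"UNLOCK": set()},      # turning the key one way unlocks the door
    }

    # terminates[e] maps each property terminated by e to its set of preconditions.
    terminates = {
        "SHUT":  {"OPEN": set()},        # shutting closes the door unconditionally
        "KEY_R": {"UNLOCK": set()},      # turning the key the other way locks it again
    }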

It is worth noting that, for simplicity (as in Kowalski and Sergot's original EC paper), we usually define the initiating and terminating maps in terms of event occurrences and property types. In such a case, the initiating map only includes information stating that the occurred PUSH event initiates the property OPEN if the property UNLOCK holds at its occurrence time, while the terminating map provides no information.

However, it is not difficult to redefine the initiating and terminating maps in terms of event and property types, and to properly instantiate them when an event of the given type occurs (we detailed such an approach in some of our early papers on EC). In such a case, the initiating and terminating maps can be used to model all the above described event and property relationships; moreover, the relationship between the event PUSH, the property OPEN, and the precondition UNLOCK is instantiated for the given occurrence of an event of type PUSH.

Furthermore, we express the fact that
  "the door was pushed at 9 o'clock"
by stating that the set of occurred events of the PEC-structure only includes an occurrence of a PUSH event.

We do not associate explicit times with event occurrences (we model temporal information by a strict partial order over the set of occurred events), but we exploit temporal knowledge provided by explicit timestamps to obtain information about the relative ordering of event occurrences.
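
For instance (a sketch of ours with invented timestamps, not a feature of the calculi themselves), explicit timestamps can be turned into the ordered pairs that make up the knowledge state:

    # Invented event occurrences with explicit timestamps.
    occurrences = {"push1": 9, "key_l1": 8, "shut1": 11}

    # Only the induced relative ordering is handed to the calculus.
    ordering_pairs = {(a, b) for a in occurrences for b in occurrences
                      if occurrences[a] < occurrences[b]}
    print(sorted(ordering_pairs))
    # [('key_l1', 'push1'), ('key_l1', 'shut1'), ('push1', 'shut1')]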

Aside. We only consider positive preconditions. For this reason, we introduce the property UNLOCK. However, as long as we restrict ourselves to acyclic dependencies among properties (as we did in [6]), it is not difficult to generalize our notion of PEC-structure in order to support both positive and negative preconditions.


 

Background: Review Protocol Pages and the ETAI

This Review Protocol Page (RPP) is a part of the webpage structure for the Electronic Transactions on Artificial Intelligence, or ETAI. The ETAI is an electronic journal that uses the Internet medium not merely for distributing the articles, but also for a novel, two-stage review procedure. The first review phase is open and allows the peer community to ask questions to the author and to create a discussion about the contribution. The second phase - called refereeing in the ETAI - is like conventional journal refereeing except that the major part of the required feedback is supposed to have occurred already in the first, review phase.

The referees make a recommendation whether the article is to be accepted or declined, as usual. The article and the discussion remain on-line regardless of whether the article was accepted or not. Additional questions and discussion after the acceptance decision are welcomed.

The Review Protocol Page is used as a working structure for the entire reviewing process. During the first (review) phase it accumulates the successive debate contributions. If the referees make specific comments about the article in the refereeing phase, then those comments are posted on the RPP as well, but without indicating the identity of the referee. (In many cases the referees may return simply an " accept" or " decline" recommendation, namely if sufficient feedback has been obtained already in the review phase).