Issue 98054 Editor: Erik Sandewall 11.7.1998

Today

Eugenia Ternovskaia has contributed comments and questions on the article by Marc Denecker et al. that has been submitted to the ETAI. Also in this Newsletter, Erik Sandewall answers Pat Hayes and Jixin Ma in the resumed discussion about the ontology of time.


ETAI Publications

Discussion about received articles

Additional debate contributions have been received for the following article(s). Please click the title of the article to link to the interaction page, containing both new and old contributions to the discussion.

Marc Denecker, Daniele Theseider Dupré, and Kristof Van Belleghem
An Inductive Definition Approach to Ramifications


Debates

Ontologies for time

Erik Sandewall:

Pat Hayes wrote:

  Erik wants to know why we should bother to consider punctuated times... But more concretely, I disagree with Erik's way of thinking. I start with axioms and ask what models they have. Erik starts with models (the real line, for example) and assumes that we somehow have access to the 'standard' ones. But of course we don't: there is no complete computationally enumerable theory of the standard real line. All we have are axiomatic theories.

  ...

  Erik asks for examples of what can be done with punctuated-time models. I don't know what he means. It is theories that do things, not their models.

I have no problem with theories that allow both punctuated and non-punctuated models of real time. However, since punctuated time is proposed as a solution to the "Dividing Instant Problem", and since accordingly some or all non-punctuated models suffer from such a DIP, I'd like to understand what concrete difficulties arise in a computational system that admits non-punctuated time, and even relies on it.

This question of mine arises from a cognitive robotics perspective, which may in fact be distinct from the perspectives of commonsense reasoning or of natural language understanding. In cognitive robotics, it's of paramount importance to be able to deal with hybrid scenario descriptions, combining qualitative and quantitative estimates of durations, distances, consumption of resources such as fuel and battery power, and so on.

Consequently, we are led to use state-of-the-art algorithms for dealing with these kinds of constraints, including, in particular, temporal constraint solving in the tradition of Dechter, Meiri, and Pearl, as well as the more recent developments based on linear programming. All of these methods make use of standard arithmetic operations such as addition and multiplication, and they make no allowance for these operations being undefined anywhere. It would therefore seem adventurous to use them in a context where the time line is assumed to be punctuated, so that for some numbers in the (e.g. real) domain there is no corresponding point in time.
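
To make this point concrete, here is a minimal sketch (in Python) of the kind of computation such constraint solvers perform: a simple temporal problem in the Dechter-Meiri-Pearl style, solved by all-pairs shortest paths. The function name and the toy scenario at the end are my own, for illustration only, and are not taken from any of the systems mentioned above. The relaxation step adds and compares arbitrary real-valued bounds, with no provision for time points that might be missing from the domain.

    INF = float("inf")

    def solve_stp(n, constraints):
        """n time points; each constraint (i, j, w) means  t_j - t_i <= w.
        Returns the matrix of tightest bounds, or None if inconsistent."""
        d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
        for i, j, w in constraints:
            d[i][j] = min(d[i][j], w)
        # Floyd-Warshall relaxation: presupposes that + and < are defined
        # for every pair of real-valued bounds.
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        # A negative cycle means the quantitative constraints cannot all be met.
        if any(d[i][i] < 0 for i in range(n)):
            return None
        return d

    # Toy scenario: the action starts 2..5 time units after the reference
    # point t0, and ends 1..3 units after it starts.
    bounds = solve_stp(3, [(0, 1, 5), (1, 0, -2), (1, 2, 3), (2, 1, -1)])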

Now, back to Pat's comments.

  Erik asks for examples of what can be done with punctuated-time models. I don't know what he means. It is theories that do things, not their models.

Isn't it more exact to say that software does things? From the point of view of the algorithms mentioned above, it just doesn't matter whether and how the (e.g. real) numbers are axiomatized. However, it does matter that addition is always defined, which is why it is convenient to assume the use of "standard" models of time.

If software is identified only with theorem provers, then of course theories are of primary interest. But would you seriously propose using a theorem prover plus an axiomatization of the time domain as a computational instrument?

Some axioms and some deduction are needed, for sure, but suppose I develop and use a hybrid system that combines algorithms with Pat's core theory. Is there something that can then go wrong? Can I obtain some unwarranted conclusions? Will I miss some warranted conclusions? Or will I get into inordinate difficulties when writing my remaining scenario description axioms? (At least one concrete example of such problems would be useful.)

On the other hand, if no such problems have been reported and none can be found, then why all this fuss about the so-called DIP? Why can't we just take Pat's core theory, observe that it allows both the integers and the reals as timepoint domains, and that in addition it allows a whole lot of other domains that no one needs to care about from a practical point of view?

Or, to reverse the question: concretely, why do people care about those other models? On this topic, Pat has referred to others:

  Erik wants to know why we should bother to consider punctuated times. Well, I guess my first answer would be: ask the people who use the Allen theory. I think it has something to do with natural language understanding.

and in another context to the temporal database community. Fine, but what is it that they can do using non-"standard" time that can't be done using "standard" time?

Jixin's answer to that question was:

  The Dividing Instant Problem is a typical problem with the approach of simply constructing time intervals from points (such as reals, rationals or integers), e.g., by means of defining an interval as a set of points... The fundamental reason is that in a system where time intervals are all taken as semi-open, it will be difficult to represent time points in an appropriate structure so that they can stand between intervals conveniently.

I had actually asked for something more concrete: a scenario that can't be expressed, or a scenario query that requires an inordinately long completion time, for example. Is it possible to be more specific about the sense in which it "will be difficult" to represent time points that stand between intervals?

As an aside: Jixin also wrote:
  However, it seems that by some careful and proper treatments, we may also reach the same results by defining the concepts of intervals based on points. The key point here is, in addition to the concept of lower and upper bounds for point-based intervals, the concept of left type and right type for intervals needs to be addressed as well. What follows is the skeleton of the structure:

  1. P is a partially-ordered set of points;
  2. Type is a two-member set {open, closed};
  3. An interval i is defined as a quadruple (p1, p2, l, r) such that:
    • p1 ≤ p2
    • l and r belong to Type
    • if p1 = p2 then l = r = closed
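
In concrete terms, and just to check that I have understood the proposal, the skeleton might be encoded roughly as follows (a minimal sketch in Python; the class and field names are mine, and I use the reals in place of an arbitrary partially ordered set P):

    from dataclasses import dataclass
    from enum import Enum

    class Type(Enum):                      # the two-member set {open, closed}
        OPEN = "open"
        CLOSED = "closed"

    @dataclass(frozen=True)
    class Interval:
        """The quadruple (p1, p2, l, r), with floats standing in for
        the partially ordered set P of points."""
        p1: float
        p2: float
        l: Type
        r: Type

        def __post_init__(self):
            if not self.p1 <= self.p2:
                raise ValueError("requires p1 <= p2")
            if self.p1 == self.p2 and not (self.l is Type.CLOSED and self.r is Type.CLOSED):
                raise ValueError("a point interval must be closed on both sides")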

Yes, but how does this differ from the standard notions of open and closed intervals? It would appear that

   (a, b, open, open)   =  (a, b)
   (a, b, open, closed) =  (a, b]
and so on. Are there some models where your four-tuple intervals cannot be reduced to standard definitions, such as
   (a, b) = {x | a < x < b}
and so on? Do those "nonstandard" models have some interesting properties?
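
The reduction I have in mind is simply the obvious membership test (again only a sketch, and the helper name is mine):

    def contains(a, b, l, r, x):
        """Membership in the four-tuple interval (a, b, l, r) over the reals,
        i.e. in the set {x | a <* x <* b} where each <* is < or <=
        according to the left and right types."""
        lower_ok = a < x if l == "open" else a <= x
        upper_ok = x < b if r == "open" else x <= b
        return lower_ok and upper_ok

    # (a, b, open, open)   reduces to (a, b) = {x | a < x <  b}
    # (a, b, open, closed) reduces to (a, b] = {x | a < x <= b}
    assert contains(0.0, 1.0, "open", "closed", 1.0)
    assert not contains(0.0, 1.0, "open", "closed", 0.0)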

Returning to the use of algorithms that are incompatible with punctuated time lines: I realize, of course, that computation with quantitative estimates of durations and resources is rarely used in commonsense reasoning, at least as studied in A.I. Therefore, researchers in CSR may not find the algorithms mentioned above very useful. However, it would certainly be an advantage, from a general scientific point of view, if a common framework could be found for reasoning about actions in CSR, natural language understanding, and cognitive robotics alike. Doing this would also facilitate the design of combined systems covering all of those aspects. At the same time, one constraint in such a search for common ground is that cognitive robotics needs the full set of reals (or some other dense domain) for the time axis. My question is therefore: is there something in those other areas that cannot be rendered using full real time? In other words, is it the case that these different areas require intrinsically different and mutually inconsistent extensions of a core theory such as the one proposed by Pat in the previous Newsletter?