Issue 98054 | Editor: Erik Sandewall
11.7.1998
Today
Eugenia Ternovskaia has contributed comments and questions on the article submitted to the ETAI by Marc Denecker et al. Also in this Newsletter, Erik Sandewall answers Pat Hayes and Jixin Ma in the resumed discussion about the ontology of time.
ETAI Publications
Discussion about received articles

Additional debate contributions have been received for the following article(s). Please click the title of the article to link to the interaction page, containing both new and old contributions to the discussion.
Marc Denecker, Daniele Theseider Dupré, and Kristof Van Belleghem
Debates
Ontologies for time

Erik Sandewall:

Pat Hayes wrote:
I have no problems with theories that allow both punctuated and non-punctuated models of real time. However, since punctuated time is proposed as a solution to the "Dividing Instant Problem", and since accordingly some or all non-punctuated models suffer from such a DIP, I would like to understand what concrete difficulties arise in a computational system that admits non-punctuated time, and even relies on it.

This question of mine arises from a cognitive-robotics perspective, which may in fact be distinct from the perspectives of commonsense reasoning or of natural language understanding. In cognitive robotics, it is of paramount importance to be able to deal with hybrid scenario descriptions, combining qualitative and quantitative estimates of duration, distances, consumption of resources such as fuel and battery power, and so on. Consequently, we are led to use state-of-the-art algorithms for dealing with these kinds of constraints, including, in particular, temporal constraint solving in the tradition of Dechter, Meiri, and Pearl, and the more recent developments that build on linear programming. All of these methods make use of standard arithmetic operations such as addition and multiplication, and make no allowance for those operations being undefined anywhere. It would therefore seem adventurous to use them in a context where the time line is assumed to be punctuated, so that for some numbers in the (e.g. real) domain there does not exist any corresponding point in time.

Now, back to Pat's comments.
If software is identified only with theorem provers, then of course theories are of primary interest. But would you seriously propose using a theorem prover plus an axiomatization of the time domain as a computational instrument? Some axioms and some deduction are needed, for sure, but suppose I develop and use a hybrid system that combines algorithms with Pat's core theory. Is there something that can then go wrong? Can I obtain some unwarranted conclusions? Will I miss some warranted conclusions? Or will I get into inordinate difficulties when writing my remaining scenario-description axioms? (At least one concrete example of such problems would be useful.)

On the other hand, if no such problems have been reported and none can be found, then why all this fuss about the so-called DIP? Why can't we just take Pat's core theory, observe that it allows both the integers and the reals as timepoint domains, and that in addition it allows a whole lot of other domains that no one needs to care about from a practical point of view? Or, to reverse the question: concretely, why do people care about those other models?

On this topic, Pat has referred to others:
Jixin's answer to that question was:
As an aside: Jixin also wrote:
Yes, but how does this differ from the standard notions of open and closed intervals? It would appear that

  (a, b, open, open) = (a, b)
  (a, b, open, close) = (a, b]

and so on. Are there some models where your four-tuple intervals cannot be reduced to the standard definitions, such as

  (a, b) = {x | a < x < b}

and so on? Do those "nonstandard" models have some interesting properties?

Returning to the use of algorithms that are incompatible with punctuated time lines: I realize, of course, that computation on quantitative estimates of durations and resources is rarely used in commonsense reasoning, at least as studied in A.I. Therefore, researchers in CSR may not find the algorithms mentioned above very useful. However, it would certainly be an advantage, from a general scientific point of view, if a common framework could be found for reasoning about actions in CSR, natural language understanding, and cognitive robotics alike. Doing this would also facilitate the design of combined systems covering all of those aspects. However, one constraint in such a search for a common ground is that cognitive robotics needs the full set of reals (or some other dense domain) for the time axis. My question is therefore: is there something in those other areas that cannot be rendered using full real time? In other words, is it the case that these different areas require intrinsically different and mutually inconsistent extensions of a core theory such as the one proposed by Pat in the previous Newsletter?
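The reduction of four-tuple intervals asked about above can be sketched directly. This is an illustrative model only, assuming a dense ordered domain such as the reals; the class and method names are invented for the example and are not drawn from Jixin Ma's definitions.

```python
# Hedged sketch: a four-tuple interval (a, b, left_open, right_open)
# over the reals, reduced to the standard open/closed notation and to
# the standard membership condition {x | a < x < b}, etc.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    a: float
    b: float
    left_open: bool
    right_open: bool

    def contains(self, x):
        # Standard set-theoretic reading of the four-tuple.
        left_ok = self.a < x if self.left_open else self.a <= x
        right_ok = x < self.b if self.right_open else x <= self.b
        return left_ok and right_ok

    def notation(self):
        # Render in the usual (a, b), (a, b], [a, b), [a, b] notation.
        return (("(" if self.left_open else "[")
                + f"{self.a}, {self.b}"
                + (")" if self.right_open else "]"))

i1 = Interval(0.0, 1.0, True, True)    # (0, 1) = {x | 0 < x < 1}
i2 = Interval(0.0, 1.0, True, False)   # (0, 1]
```

Over a dense domain the reduction is evidently direct, which is exactly why the question asks whether the four-tuples earn their keep only in nonstandard models.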