Issue 98053 | Editor: Erik Sandewall | 7.7.1998

Today
Today's Newsletter contains a question by Murray Shanahan about the article by Marc Denecker, Daniele Theseider Dupré, and Kristof Van Belleghem that is under discussion for the ETAI. We also have a summary of that article: a specification of its contents which is longer and more concrete than an abstract, and which we encourage all ETAI authors to submit in addition to the article itself. The summary makes it easier for everyone to get a quick grasp of the contents of the paper.

The discussion about ontologies for time was very lively in April and May, but by the end of May it seemed that several of the contributions must be due to some technical misunderstanding. I therefore asked the participants at that time to take a "timeout" and discuss the matter among themselves before coming back to the Newsletter audience. Now (and after some editorial delays due to the season) we can present the bottom line: it turns out that Sergio Brandano had defined relations on intervals so that not only
Debate contributions that are due to this misunderstanding will be edited separately. Today's Newsletter contains two contributions (or, actually, parts of contributions) which were already made at the end of May, but which are unrelated to the said misunderstanding. Instead, they are answers to earlier questions by the present editor as to what is lost if one only uses "standard" metric time, for example the set of real numbers without the "punctuation" mentioned by Pat Hayes. With this, it will hopefully be possible to return the debate to its proper topics.
ETAI Publications

Received research articles

The following is a summary of an article that has previously been received by the present ETAI area.

Marc Denecker, Daniele Theseider Dupré, and Kristof Van Belleghem
Summary: This paper describes a new approach to the ramification problem. The approach consists of several contributions. First of all we show that the standard view on causal laws, as rules serving to restore the integrity of state constraints, is too restrictive. We argue that causal laws essentially represent the propagation of physical or logical forces through a dynamic system, and that state constraints may be a by-product of this propagation process, but that this is not necessarily the case. Therefore we introduce causal laws as pure effect propagation rules independent of state constraints, and define a transition function, depending only on the causal rules, which maps each state and set of actions to a resulting state.

Second, we argue that in order to obtain a natural and modular representation of the effect propagation process, the language of causal rules needs to allow for recursion, to model mutually dependent effects, as well as for negation, to model effects which propagate in the absence of other effects. To define a semantics for complex effect theories involving both negative and cyclic dependencies between effects, we observe that a fundamental property of the process of effect propagation is that it is constructive: effects do not spontaneously appear without an external cause; each effect must be caused by a preceding effect, and in practice effects are generated in a bottom-up way. To adequately model this constructive nature of physical change propagation, we base the semantics of the formalism on the principle of inductive definition, the main mathematical constructive principle. We use a generalised inductive definition principle, which in fact generalises several different approaches. For theories without recursion, the semantics we propose coincides with completion (and can as such be seen as a generalisation of explanation closure approaches).
For theories without negation, the semantics coincides with positive inductive definition semantics and standard circumscription (which has also received a lot of attention in research on the frame problem). For stratified theories the semantics coincides with iterated inductive definitions and with the perfect model semantics for stratified logic programs. In the most general case the proposed operator used in the fixpoint calculation coincides with the well-founded operator in logic programming. The proposed semantics makes it possible to deal with any effect theory, without syntactic restrictions on cycles or negations. As a consequence, syntactically correct effect theories exist which do not have a constructively well-defined model. In particular this is the case for effect theories containing actual cycles over negation, e.g. when the existence of each of two effects depends on the other effect's absence. The physical systems corresponding to such effect theories are rare, and those that exist (those deliberately constructed to yield such an effect theory) exhibit unstable behaviour. But the distinction between good and bad effect theories cannot be made on the basis of syntactical properties. Our semantics is such that it detects and marks bad effect theories, while it assigns a unique transition function to good, constructive, effect theories, including some that appear syntactically problematic.

A third contribution is the introduction of so-called complex effect rules, which provide a higher-level description of effect propagations. These complex effect rules make it possible to state that it is the change in truth value of a complex fluent formula, rather than the change of a single fluent, which triggers certain effect propagations. Complex effect rules map to sets of low-level definition rules.
We have observed that one form of complex effect rule is sufficiently expressive to model all kinds of temporal reasoning examples known to us, which also suggests that the corresponding low-level rules only occur in particular combinations in practice. The high-level effect rules offer a much more natural and compact way of describing ramifications than the low-level rules. In particular they prove to be very effective for modeling cumulative or canceling effects of simultaneous actions. An essential property of the semantics, which distinguishes it from several other approaches, is that it is deterministic: for a given effect theory, any combination of a state and set of actions yields a unique set of causations and hence a unique resulting state (sometimes an inconsistent one, in which case the set of actions is not allowed in the given starting state). In our view nondeterminism should be explicitly represented if it is intended, hence we do not allow nondeterminism to arise from special combinations of otherwise deterministic rules. We do describe an extension of effect rules which allows for a limited form of nondeterminism, in that the rules can state that some effects are allowed, but not forced, to occur. This approach has been further extended to more expressive nondeterministic rules, but this issue is beyond the scope of the current paper.
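The constructive, bottom-up reading of effect propagation described in this summary can be sketched in code. The following is a minimal illustration for the negation-free case only, with invented rule and fluent names; the paper's actual semantics, which handles negation via the well-founded operator, is considerably more general.

```python
# Minimal sketch of bottom-up effect propagation for negation-free effect
# rules. Rule and fluent names are illustrative, not from the paper.

def propagate(initiated, rules):
    """Compute the least fixpoint of positive effect rules.

    `initiated`: set of effects directly caused by the actions.
    `rules`: list of (body, head) pairs meaning "if all effects in `body`
    have occurred, then effect `head` is caused as well".
    """
    caused = set(initiated)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            # each new effect is caused by preceding effects (constructive)
            if head not in caused and body <= caused:
                caused.add(head)
                changed = True
    return caused

# Example: closing switch s1 propagates to relay r, which propagates to light l.
rules = [({"on(s1)"}, "closed(r)"),
         ({"closed(r)"}, "lit(l)")]
print(propagate({"on(s1)"}, rules))
```

Because the rules are positive, the iteration is monotone and reaches a unique fixpoint, mirroring the determinism the summary emphasises: one state and one set of actions yield one set of causations.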
Our approach is presented independently of a specific time structure or formalism such as the Situation Calculus. We compare our approach to related work in the literature. In particular we show a correspondence between our nondeterministic effect rules and the causal rules proposed by Thielscher. The main differences, apart from the nondeterministic aspect, concern the different views on delays in effect propagations.
Discussion about received articles

Additional debate contributions have been received for the following article(s). Please click the title of the article to link to the interaction page, containing both new and old contributions to the discussion.
Marc Denecker, Daniele Theseider Dupré, and Kristof Van Belleghem
Debates

Ontologies for time

Pat Hayes:

Erik wants to know why we should bother to consider punctuated times. Well, I guess my first answer would be: ask the people who use the Allen theory. I think it has something to do with natural language understanding. But more concretely, I disagree with Erik's way of thinking. I start with axioms and ask what models they have. Erik starts with models (the real line, for example) and assumes that we somehow have access to the 'standard' ones. But of course we don't: there is no complete computationally enumerable theory of the standard real line. All we have are axiomatic theories.
(In any case, why is the real line 'standard' for time? If anything, quantum physics suggests that time may be more like a kind of everywhere-punctuated continuum (imagine two copies

Erik asks for examples of what can be done with punctuated-time models. I don't know what he means. It is theories that do things, not their models.

Pat Hayes:
Here are the axioms of my proposal for a core theory. (The theory can be reformulated without explicitly mentioning intervals, but more awkwardly; and also without explicitly mentioning points, although even more awkwardly and using a little elementary set theory. It's much easier to allow both: but notice that an interval is defined completely by its endpoints.)

1. Points are totally ordered by Before.
2. Every proposition is true over a set of reference intervals.
3. There is a function from pairs of points
4. Intervals Meet only if they share an endpoint, and this point is the unique point When they meet.
5. If
6. or
6a
6b
7. Points have unique Clocktimes.
8. The Clocktime at point

This theory allows pointlike intervals
Now, this theory has models on the reals, naturally. It also has models on the rationals, the integers, and many other more exotic structures, including various 'punctuated' models. (Same theory, different models.) If you relax the total ordering to a partial one it even has models consistent with special relativity. It can be extended to Allenian thinking (by changing 4 and 6 to iff), or it can be extended to Sergioian continuity (the intervals are closed: open intervals are the sets of points In an interval), or to analyse temporal databases (every point has a unique 'next' point, with no points between). Take your pick; but I think you will always find at least this minimal amount of structure in any theory. (Suggested counterexamples welcome!)
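For readers who want to experiment, the "same theory, different models" point is easy to play with in a toy finite model over the integers (the temporal-database reading above). The encoding below is my own illustrative assumption, not Hayes's formalisation, and it checks only the endpoint-sharing direction of axiom 4.

```python
# A toy finite model over the integers of part of the core theory above.
# This encoding is an illustrative assumption, not Hayes's own formalisation.

def before(p, q):
    """Axiom 1: points are totally ordered by Before (here: integer <)."""
    return p < q

def interval(p, q):
    """An interval is defined completely by its endpoints; p == q gives
    a pointlike interval, which the theory allows."""
    assert p == q or before(p, q)
    return (p, q)

def meets(i, j):
    """Axiom 4, one direction: intervals Meet only if they share an
    endpoint, and that endpoint is the unique point When they meet."""
    return i[1] == j[0]

i, j = interval(0, 2), interval(2, 4)
print(meets(i, j), meets(j, i))   # meets is not symmetric
```

Swapping the integer points for rationals or reals leaves the two definitions unchanged, which is the sense in which one axiomatic theory has many models.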
Jixin Ma:

To Erik,

In [ENRAC 98047, 24.5.1998], you put the following questions:

In your message to Pat, which appeared in the same issue, you wrote:
It is important to note that the treatment here for case 3) is different from the approach which characterises intervals as sets of points. Here, we are not forced to say that there is a point between the Light-Off interval and the Light-On interval. However, it seems that with some careful and proper treatment we may also reach the same results by defining the concept of intervals based on points. The key point here is that, in addition to the concept of lower and upper bounds for point-based intervals, the concept of left type and right type for intervals needs to be addressed as well. What follows is the skeleton of the structure:
Therefore, if we take point,
The relation between intervals,
It is easy to see that such a Meets relation defined here represents the relationship that an interval is immediately before another interval (no matter whether the set of points is further characterised as dense or discrete), without any other intervals standing between them. All other temporal relations, such as

Additional issues such as dense/discrete, linear/nonlinear structure, and so on, may be characterised by means of extra axioms. Also, some extra constraints about the internal structure of intervals may be imposed. It has been checked that such a point-based time model has the expressive power of one based on both intervals and points as primitives. For instance, it can also express all the above three cases successfully. In fact, it seems that the two are equivalent!

Jixin
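The left/right-type idea sketched above can be made concrete. The following is a reconstruction under my own assumptions (the encoding and the Meets condition are illustrative, not Jixin Ma's exact definitions): an interval carries its bounds plus a left and right type, and two intervals Meet exactly when the first's upper bound equals the second's lower bound and exactly one of the two adjacent sides is closed, so that no point lies between them and no point is shared.

```python
# Sketch of point-based intervals with explicit left/right types.
# This is an illustrative reconstruction, not the author's exact definitions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lower: float
    upper: float
    left_closed: bool   # left type: is the lower bound included?
    right_closed: bool  # right type: is the upper bound included?

def meets(i: Interval, j: Interval) -> bool:
    """i Meets j: i is immediately before j, with no point between them
    and no point shared (exactly one adjacent side owns the endpoint)."""
    return i.upper == j.lower and (i.right_closed != j.left_closed)

# Light-Off over [0, 2) meets Light-On over [2, 5): no gap, no overlap,
# and no point is forced to lie between the two intervals.
off = Interval(0, 2, left_closed=True, right_closed=False)
on = Interval(2, 5, left_closed=True, right_closed=False)
print(meets(off, on))

# If both adjacent sides are closed, the intervals share the point 2
# and therefore overlap rather than meet:
off_closed = Interval(0, 2, left_closed=True, right_closed=True)
print(meets(off_closed, on))
```

This matches the text's claim for the Light-Off/Light-On example: the dense/discrete character of the underlying point set never enters the Meets condition.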