********************************************************************
ELECTRONIC NEWSLETTER ON REASONING ABOUT ACTIONS AND CHANGE
Issue 98054 Editor: Erik Sandewall 11.7.1998
Back issues available at http://www.ida.liu.se/ext/etai/actions/njl/
********************************************************************
********* TODAY *********
Eugenia Ternovskaia has contributed comments and questions on the ETAI
submitted article by Marc Denecker et al. Also in this Newsletter, Erik
Sandewall answers Pat Hayes and Jixin Ma in the resumed discussion
about the ontology of time.
********* ETAI PUBLICATIONS *********
--- DISCUSSION ABOUT RECEIVED ARTICLES ---
The following debate contributions (questions, answers, or comments)
have been received for articles submitted to the ETAI which are
presently under discussion. To see the full context,
for example, to see the question that a given answer refers to, or to
see the article itself or its summary, please use the web-page version
of this Newsletter.
========================================================
| AUTHOR: Marc Denecker, Daniele Theseider Dupre, and
| Kristof Van Belleghem
| TITLE: An Inductive Definition Approach to Ramifications
========================================================
--------------------------------------------------------
| FROM: Eugenia Ternovskaia
--------------------------------------------------------
Dear Marc, Daniele and Kristof,
It was interesting to read your paper, especially because I am interested
in inductive definability myself. I wrote a paper, "Inductive Definability
and the Situation Calculus", presented at Dynamics'97 last October and later
published in an LNCS volume. You can get it from
http://www.cs.utoronto.ca/~eugenia/papers/indef.ps
Among other things, I describe an inductive solution to the frame and
ramification problems. I treat causal rules specifying direct and indirect
effects of actions as rules of an inductive definition. Causal (inductive)
rules specifying direct effects of actions may contain both cycles and
negations. Causal rules describing indirect effects may not contain cycles.
This condition can be relaxed, but in this case the correct form of
successor state axioms will not be obtained. I would be grateful if
you could read this paper and provide some comments.
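The idea of treating causal rules as rules of an inductive definition can be sketched in a few lines. The following is a minimal illustration (all fluent names and the rule encoding are invented for this sketch, not taken from either paper): acyclic rules for indirect effects are applied repeatedly until a fixpoint is reached.

```python
# Minimal sketch (hypothetical encoding): causal rules for indirect
# effects read as an inductive definition, iterated to a fixpoint.
# Each rule is a pair (body_literals, head_literal); since the rules
# for indirect effects are acyclic, the iteration terminates.

def close_under_rules(state, rules):
    """Apply causal rules until no new literal is derived."""
    derived = set(state)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if all(l in derived for l in body) and head not in derived:
                derived.add(head)
                changed = True
    return derived

# Illustrative rules: closing a relay opens the circuit, which in
# turn switches the light off (names are made up).
rules = [({"relay_on"}, "circuit_open"),
         ({"circuit_open"}, "light_off")]
closure = close_under_rules({"relay_on"}, rules)
```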
Here are some comments on your paper. First of all, it's a good paper.
The main contribution, I think, is that you further elaborated the generalized
inductive definition principle and applied it to the ramification problem.
I consider Marc's result about the equivalence between this principle and
the well-founded semantics significant, and it was good to see how it works
in the context of the ramification problem. It was nice to see examples
for the syntactic subclasses of the generalized inductive definition.
I think that introducing the third truth value helps you to explain the subtle
details of the iterative process, which were quite difficult for me to handle
since I was working in classical first-order logic with fixpoints.
From my experience, it is a long way from understanding how causal (inductive)
rules work to capturing that understanding in a classical logic-based formalism
such as the situation calculus. In your paper you say that your approach can be embedded
in the situation calculus. Perhaps the main question to you is how the third
truth value would be represented in the two-valued setting?
I hope the following will help you to make a few small improvements.
----------------------
p. 28, after Th.1.
"Unless nondeterminism is explicitly introduced, our theory leaves
no room for ambiguity."
I do not think this is an advantage of your theory. I think the truth
value "u" should be obtained in two cases.
First, it may be a result of a "bad axiomatization". Second, it may appear
due to lack of complete information about the current state of the world.
(It seems that you apply the Closed World Assumption in your definition of
a state as a set of (positive) fluent literals, if I understood correctly.)
I believe your definitions might be easily changed to represent the second
case as well. For example, a state could be associated with a set of "known"
positive and negative literals.
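The suggested change can be illustrated concretely. Below is a hypothetical sketch (fluent names invented, not from the paper) of a state represented as a pair of sets of known-true and known-false fluents, so that the third value "u" marks incomplete information rather than a bad axiomatization:

```python
# Hypothetical sketch of the suggestion above: a state is a pair
# (known-positive literals, known-negative literals); any fluent in
# neither set gets the third truth value "u" (unknown).

def holds(state, fluent):
    pos, neg = state
    if fluent in pos:
        return True
    if fluent in neg:
        return False
    return "u"   # incomplete information, not inconsistency

state = ({"door_open"}, {"light_on"})
print(holds(state, "door_open"))   # True
print(holds(state, "light_on"))    # False
print(holds(state, "alarm_set"))   # "u"
```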
----------------------------------------------------------
It also seems strange that you cannot derive anything about "holds",
although I understand your motivation. Would it be difficult to introduce
this feature?
-------------------
p. 0 (abstract)
bad definitions should be "bad definitions", I guess.
---------------------------------------------
p. 4 Please do not start sentences with a formula, e.g. caus(l), init(l)!!!
This is hard to read.
This is especially confusing on p. 16:
I am reading: "action start1.cause(turn1) has two ...."
--------------------------------------
p. 10
"In sections 4, 3....." Should be "In sections 3, 4....."
--------------------------------------
p. 10 "..., where $\top$ denotes the inconsistent state."
This is a confusing notation. Usually $\bot$ is used to denote false or
contradiction (as well as the bottom of a lattice) and $\top$ is used to
denote true (or the top of a lattice), as you perhaps know.
I noticed that you already use $\bot$ later in the text, but maybe another
solution is possible. For example, in automata theory $\emptyset$
is used to denote a "junk" state without outgoing transitions.
--------------------------------------
p. 10 ...rule $caus(l) <--..caus(l')$
Perhaps you should say what the dots are for.
--------------------------------------
p. 10 Footnote 7: "The possibility of bad definitions exists only in the most
general case and will be discussed there."
Where? Perhaps you should say in what section.
--------------------------------------
p. 10 Footnote 8: I did not understand the second sentence at first reading.
"Initial state", "presence and absence" are confusing.
--------------------------------------
p. 16 Third and fourth line after Def. 4. Typo in math environment
or too many symbols in one place?
----------------------------------------------------------
I think all footnotes should be sentences.
----------------------------------------------------------
p. 23, footnote 17: A more or less meaningful "real-world" example could be
something like this: an object cannot be at locations p and q at the
same time (but must be somewhere), and actions move_to_p and move_to_q
are always applicable.
----------------------------------------------------------
p. 28, top. "is" is used twice.
"completely orthogonal to successor state calculation" is too strong,
see comments for p.38-39.
----------------------------------------------------------
p.38-39 "This approach contains several contributions:
- A complete uncoupling of ramifications from state constraints..."
I do not think they can be uncoupled _completely_.
Even in your theory, ramifications impose some restriction on your function
Trans, I believe. Trans maps states and actions to states. In this sense
reachable states are restricted and state constraints are implied.
Can you clarify?
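The point can be made concrete with a small sketch (all names and the encoding are invented for illustration): even when no state constraint is written down, the transition function restricts which states are reachable, and the unreachable states witness an implied constraint.

```python
# Illustrative sketch: states are sets of fluents, and Trans is a
# partial function from (state, action) to states. No state
# constraint is stated anywhere, yet the set of reachable states
# satisfies one: no state contains both "p" and "q".

def reachable(initial, actions, trans):
    """Enumerate all states reachable from initial via trans."""
    seen, frontier = set(), [initial]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        for a in actions:
            t = trans(s, a)
            if t is not None:
                frontier.append(t)
    return seen

def trans(state, action):
    # moving to p removes q and vice versa, so {p, q} never arises
    if action == "move_to_p":
        return frozenset((state - {"q"}) | {"p"})
    if action == "move_to_q":
        return frozenset((state - {"p"}) | {"q"})
    return None

states = reachable(frozenset(), ["move_to_p", "move_to_q"], trans)
# every reachable state satisfies the implied constraint not(p and q)
```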
The other direction, from state constraints to ramification, as described
by Thielscher, is a nice idea. As far as I know, no one claims that this
is *the only way* to obtain causal rules specifying indirect effects.
It is a useful method to obtain some of these rules, even though it does
not always work.
Also, in your summary you say "the standard view on causal laws as rules
serving to restore the integrity of state constraints is too restrictive."
From talking to different people, I have never had the impression that this is
the only, standard view of the role of causal rules in the community.
Overall, even if you think your statement is absolutely right in its
current form, it is strange to see it as contribution #1.
I think you have done more interesting things. Again, it's a good paper.
--------------------------------
I hope my comments will be helpful.
Regards,
Eugenia
********* DEBATES *********
--- ONTOLOGIES FOR TIME ---
--------------------------------------------------------
| FROM: Erik Sandewall
--------------------------------------------------------
Pat Hayes wrote:
> Erik wants to know why we should bother to consider punctuated times...
> But more concretely, I disagree with Erik's way of thinking. I start with
> axioms and ask what models they have. Erik starts with models (the real
> line, for example) and assumes that we somehow have access to the
> 'standard' ones. But of course we don't: there is no complete
> computationally enumerable theory of the standard real line. All we have
> are axiomatic theories.
...
> Erik asks for examples of what can be done with punctuated-time models. I
> don't know what he means. It is theories that do things, not their models.
I have no problems with theories that allow both punctuated and
non-punctuated real time models. However, since punctuated time is proposed
as a solution to the "Dividing Instant Problem", and since accordingly
some or all non-punctuated models suffer from such a DIP, I'd like to
understand what concrete difficulties arise in a computational
system that admits non-punctuated time, and even relies on it.
This question of mine arises in a cognitive robotics perspective, which
may in fact be distinct from the perspectives of commonsense reasoning
or of natural language understanding. In cognitive robotics, it's of
paramount importance to be able to deal with hybrid scenario descriptions,
combining qualitative and quantitative estimates of duration, distances,
consumption of resources such as fuel and battery power, and so on.
Consequently, we are led to use state-of-the-art algorithms for dealing
with these kinds of constraints, including, in particular, temporal
constraint solving in the tradition of Dechter-Meiri-Pearl and the more
recent developments that are based on linear programming (see e.g. Jonsson
and Backstrom, AAAI-96). All of these methods make use of standard
arithmetic operations such as addition and multiplication, and assume
throughout that these operations are everywhere defined. Therefore, it would
seem adventurous to use them in a context where the time line is assumed
to be punctuated, so that for some numbers in the (e.g. real) domain
there does not exist any corresponding point in time.
Now, back to Pat's comments.
> Erik asks for examples of what can be done with punctuated-time models.
> I don't know what he means. It is theories that do things, not their
> models.
Isn't it more exact to say that software does things? From the point of
view of the algorithms mentioned above, it just doesn't matter whether and
how the (e.g. real) numbers are axiomatized. However, it does matter that
addition is always defined, which is why it is convenient to assume the
use of "standard" models of time.
If software is identified only with theorem provers, then of course theories
are of primary interest. But would you seriously propose using a theorem
prover plus an axiomatization of the time domain as a computational
instrument?
Some axioms and some deduction is needed, for sure, but suppose I develop
and use a hybrid system that combines algorithms with Pat's core theory.
Is there something that can then go wrong? Can I obtain some unwarranted
conclusions? Will I miss some warranted conclusions? Or, will I get into
inordinate difficulties when writing my remaining scenario description
axioms? (At least *one concrete example* of such problems would be
useful).
On the other hand, if no such problems have been reported and none can be
found, then why all this fuss about the so-called DIP? Why can't we
just take Pat's core theory, observe that it allows both the integers and
the reals as timepoint domains, and that in addition it allows a whole lot
of other domains that no one needs to care about from a practical point
of view? (Assuming, of course, that the core theory is correct).
Or, to reverse the question - *concretely why* do people care about
those other models? On this topic, Pat has referred to others:
> Erik wants to know why we should bother to consider punctuated times. Well,
> I guess my first answer would be: ask the people who use the Allen theory.
> I think it has something to do with natural language understanding.
and in another context to the temporal database community. Fine, but what
is it that they can do using non-"standard" time that can't be done
using "standard" time?
Jixin's answer to that question was:
> The Dividing Instant Problem is a typical problem with the approach
> of simply constructing time intervals from points (such as reals,
> rationals or integers), e.g., by means of defining an intervals as a
> set of points...
>
> The fundamental reason is that
> in a system where time intervals are all taken as semi-open, it will
> be difficult to represent time points in an appropriate structure so
> that they can stand between intervals conveniently.
I had actually asked for something more concrete: a scenario that can't
be expressed, or a scenario query that requires inordinately long
completion time, for example. Is it possible to be more specific about
the sense in which it "will be difficult" to represent time points that
stand between intervals?
As an aside: Jixin also wrote:
> However, it seems that by some careful and proper treatments, we may
> also reach the same results by defining the concepts of intervals
> based on points. The key point here is, in addition to the concept of
> lower and upper bounds for point-based intervals, the concept of left
> type and right type for intervals needs to be addressed as well. What
> follows is the skeleton of the structure:
>
> 1) P is a partially-ordered set of points;
> 2) Type is a two-member set {open, closed};
> 3) An interval i is defined as a quaternion (p1, p2, l, r) such that:
> 3.1) p1 <= p2
> 3.2) l and r belong to Type
> 3.4) if p1 = p2 then l = r =closed
Yes, but how does this differ from the standard notions of open and
closed intervals? It would appear that
(a,b,open,open) = (a,b)
(a,b,open,closed) = (a,b]
and so on. Are there some models where your four-tuple intervals cannot
be reduced to standard definitions, such as
(a,b) = {x | a < x < b}
and so on? Do those "nonstandard" models have some interesting properties?
Returning to the use of algorithms that are incompatible with punctuated
time lines: I realize, of course, that computation on quantitative estimates
of durations and resources is rarely used in commonsense reasoning, at
least as studied in A.I. Therefore, researchers in CSR may not find the
algorithms mentioned above very useful. However, it would certainly be
an advantage, from a general scientific point of view, if a common framework
could be found for reasoning about actions in CSR, natural language
understanding, and cognitive robotics. Doing this would also facilitate
the design of combined systems covering all of those aspects. However,
one constraint in such a search for a common ground is that cognitive
robotics needs the full set of reals (or some other dense domain) for the
time axis. My question is therefore: **is there something in those other
areas that *can not* be rendered using full real time?** In other words,
is it the case that these different areas require intrinsically different
and mutually inconsistent extensions of a core theory such as the one
proposed by Pat in the previous Newsletter?
********************************************************************
This Newsletter is issued whenever there is new news, and is sent
by automatic E-mail and without charge to a list of subscribers.
To obtain or change a subscription, please send mail to the editor,
erisa@ida.liu.se. Contributions are welcomed to the same address.
Instructions for contributors and other additional information are
found at: http://www.ida.liu.se/ext/etai/actions/njl/
********************************************************************