Issue 98005 | Editor: Erik Sandewall | [postscript]
21.1.1998
Today
Today's issue contains answers by Hector Geffner and by Erik Sandewall to Pat Hayes's contribution yesterday.
Debates
Research methodology

Hector Geffner:

Pat says:
I'm not sure I understand Pat's point well, but I think I understand the YSP (the Yale Shooting Problem). Here is the way I see it. In system/control theory there is a principle normally called the "causality principle" that basically says that "actions cannot affect the past". If a model of a dynamic system does not comply with this principle, it is considered "faulty". In AI the same principle makes perfect sense when actions are exogenous; such actions, I think we can agree, should never affect your beliefs about the past (indeed, as long as you cannot predict exogenous actions from your past beliefs, you shouldn't change your past beliefs when such actions occur). What Hanks and McDermott show is that certain models of action in AI (like simple minimization of abnormality) violate the causality principle. In particular, they show that your beliefs at time 2, say, after LOAD and WAIT (where you believe the gun is loaded) are different from your beliefs at time 2 after LOAD, WAIT and SHOOT. Namely, SHOOT at t=3 had an effect on your past beliefs (LOADED at t=2).
Most recent models of action comply with the causality principle.
In some it comes for free (e.g., language ...).

Regards. - Hector Geffner
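As a concrete illustration of the violation Geffner describes, here is a small self-contained sketch under simple minimization of abnormality, where "abnormality" is taken, for illustration only, to be the set of steps at which a fluent changes value; the fluent and action names and the enumeration harness are likewise made up for this note rather than drawn from any particular published formalization.

```
# Yale Shooting scenario under simple (inclusion-)minimization of
# "abnormalities", here taken to be the set of steps at which a fluent
# changes value.  All names and the encoding are illustrative assumptions.

from itertools import product

FLUENTS = ["loaded", "alive"]

def minimal_models(actions):
    """Trajectories consistent with the initial state and the effect axioms
    whose set of fluent changes is minimal under set inclusion."""
    horizon = len(actions) + 1
    candidates = []
    for values in product([True, False], repeat=len(FLUENTS) * horizon):
        traj = {(f, t): values[i * horizon + t]
                for i, f in enumerate(FLUENTS) for t in range(horizon)}
        # Initial state: gun unloaded, victim alive.
        if traj[("loaded", 0)] or not traj[("alive", 0)]:
            continue
        # Effect axioms: LOAD loads the gun; SHOOT kills if the gun is loaded.
        consistent = True
        for t, act in enumerate(actions):
            if act == "load" and not traj[("loaded", t + 1)]:
                consistent = False
            if act == "shoot" and traj[("loaded", t)] and traj[("alive", t + 1)]:
                consistent = False
        if not consistent:
            continue
        changes = frozenset((f, t) for f in FLUENTS for t in range(horizon - 1)
                            if traj[(f, t)] != traj[(f, t + 1)])
        candidates.append((changes, traj))
    # Keep the inclusion-minimal abnormality sets (simple minimization).
    return [(c, m) for c, m in candidates
            if not any(other < c for other, _ in candidates)]

def believed(actions, fluent, t):
    """'yes'/'no' if the fluent has the same value at t in every minimal
    model, 'undetermined' otherwise."""
    vals = {m[(fluent, t)] for _, m in minimal_models(actions)}
    return {frozenset([True]): "yes", frozenset([False]): "no"}.get(
        frozenset(vals), "undetermined")

print(believed(["load", "wait"], "loaded", 2))           # yes
print(believed(["load", "wait", "shoot"], "loaded", 2))  # undetermined
```

With LOAD and WAIT alone there is a unique minimal model, in which the gun is still loaded at t=2. Appending SHOOT admits a further minimal model in which the gun spontaneously becomes unloaded during WAIT, so the query about LOADED at t=2 becomes undetermined: the later action has changed the beliefs about the earlier time point.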
Erik Sandewall:

Pat, citing Ray Reiter's earlier contribution, you wrote:
Well, Ray and you are bringing up two different issues here. Ray's objection concerned classification: he argued that the frame assumption (when one uses it) ought to be considered epistemological rather than ontological. (In the position statement that he referred to, I had proposed a definition of ontology and suggested that the situation calculus does not represent one, since the frame assumption is represented by separate axioms rather than being built into the underlying ontology.) The question that you bring up, on the other hand, is what kind or kinds of persistence we ought to prefer: temporal forward in time, temporal backward in time, geometrical, and so on.

Let me address your letter first. I certainly agree with the analysis in the second paragraph of your message: the world is not entirely chaotic; some of its regularities can be characterized in terms of persistence (= restrictions on change, or on discontinuities in the case of piecewise continuous change), together with the exceptions to persistence that are now well known: ramifications, interactions due to concurrency, causality with delays, surprises, and so on.

For quite some time now, research in our field has used a direct method in trying to find a logic capable of dealing correctly with all these phenomena: one considers a number of "typical" examples of common-sense reasoning and looks for a logic that gets those examples right. My concern is that this is a misguided approach, for two reasons:
What I proposed, therefore (in particular in the book "Features and Fluents"), was to subdivide this complex problem into the following loop of manageable parts (the "systematic methodology"):
This agenda certainly aims to address all the intricacies that you mention in the first paragraph of your message, but only in due time. We cannot do everything at once; if we try, we'll just run around in circles. In the Features and Fluents approach we have iterated this loop a few times, starting with strict inertia and then adding concurrency and ramification, doing assessments in each case.

What about the other major current approaches? Early action languages, in particular A, fit nicely into this paradigm, except that whereas above we use one single language and two semantics (classical models and intended models), A uses two different languages, each with its own semantics. However, later action languages, such as AR, do not qualify, since they define the models of the action language (intended models, in the above terms) using a minimization rule. To me, minimization techniques belong among the entailment methods that are to be assessed according to the paradigm, but the gold standard that we assess them against should not use such an obscure concept as minimization.

On similar grounds, I argued that a situation-calculus approach where a frame assumption is realized by a recipe for adding more axioms to a given axiomatization does not really define an ontology. It can be measured against an ontology, of course, but it does not constitute one. Ray's argument against that was that the frame assumption is inherently epistemological, or maybe metaphysical. Since most people would probably interpret "metaphysical" as "unreal" rather than in the technical sense used by philosophers, we couldn't really use that term.

With respect to the term epistemological, I just note that some entailment methods have been observed to have problems with, for example, postdiction: prediction works fine but postdiction doesn't (this asymmetry is illustrated in a sketch further below). This means that when we specify the range of applicability of an entailment method, we cannot restrict ourselves to ontological restrictions, such as "does this method work if the world behaves nondeterministically?"; we must also take into account those restrictions that refer to the properties of what is known and what is asked, and to their relationship. The restriction to work only for prediction is then, for me, an epistemological restriction.

Against this background, Ray then questioned whether the frame assumption itself is ontological or epistemological in nature. I'd say that in a systematic methodology (as in items 1-6 above), the ontology that is defined in step 1 and revised in step 6 must specify the persistence properties of the world; otherwise there isn't much one can say with respect to assessments. This technical argument is, I think, more useful than the purely philosophical question of what the frame assumption "really is".

You then address the following question:
Yes, exactly! There are two different ontologies at work here; my argument would be that each of them should be articulated in terms that are not only precise but also concise, and which facilitate comparison with other approaches both within and outside KR.

But your question at the beginning of this quotation is a fundamental one: how do we choose the additional ontological structures as we iterate over the systematic methodology loop, and how do we motivate our choices? In some cases the choice is fairly obvious, at least if you have decided to base the ontology on a combination of structured states and linear metric time (integers or reals). Concurrency, chains of transitions, immediate (delay-free) dependencies, and surprise changes can then be formalized in a straightforward manner. Also, we can and should borrow structures from neighboring fields, such as automata theory, the theory of real-time systems, and Markov chain theory.

However, there are also cases where the choice is less than obvious. What about the representation of actions by an invocation event and a termination event, which is what R-sitcalc is about? What about the recent proposal by Karlsson and Gustafsson [f-cis.linep.se-97-014] to use a concept of "influences" (vaguely similar to what is used in qualitative reasoning), so that if you try to light a fire and I drip water on the firewood at the same time, then your action has a light-fire-influence and my action has an extinguish-fire-influence, where the latter dominates? (If there is only a light-fire-influence for a sufficient period of time, then a fire results.) These are nontrivial choices of ontology; how can we motivate them, assess them, and put them to use?

To my mind, this ties in with what Bob Kowalski said in the panel discussion at the recent workshop on Formalization of Common Sense: these are pre-logical issues. It is not meaningful to begin writing formulae in logic at once and to ask what variant of circumscription is going to be needed. Instead, one ought to work out an application area of non-trivial size with the proposed ontology, probably also using a tentative syntax that matches the ontology, but without committing to anything else. Only then, once one knows what ontology is needed, is it meaningful to look for entailment methods, and implementations of them, that may be appropriate for the ontology one needs.

The bottom line is: let's use the ontology, or the underlying semantics, as an intermediate step on the way from application to implemented system. Going from application to ontology requires one kind of activity; going from ontology to implementation requires another. Such a decomposition has all the obvious advantages: it allows one to address simpler problems before proceeding to more difficult ones, it provides a way of characterizing and comparing results, and it facilitates reuse of earlier results.
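To make the prediction/postdiction asymmetry mentioned above concrete, here is a deliberately simplified sketch of one well-known entailment method with exactly this profile, chronological minimization (prefer trajectories whose changes occur as late as possible). The scenario names, the observation mechanism, and the count-based simplification are assumptions for illustration, not a reproduction of any published assessment.

```
# Simplified chronological minimization: among consistent trajectories,
# minimize the number of fluent changes at each step, earlier steps first
# (lexicographic order over the change profile).

from itertools import product

FLUENTS = ["loaded", "alive"]

def chron_minimal(actions, initial, observations):
    """Trajectories consistent with a partial initial state, the effect
    axioms and the observations, keeping those whose change profile
    (changes at step 0 first, then step 1, ...) is lexicographically least."""
    horizon = len(actions) + 1
    best_key, best = None, []
    for values in product([True, False], repeat=len(FLUENTS) * horizon):
        traj = {(f, t): values[i * horizon + t]
                for i, f in enumerate(FLUENTS) for t in range(horizon)}
        if any(traj[(f, 0)] != v for f, v in initial.items()):
            continue
        if any(traj[(f, t)] != v for (f, t), v in observations.items()):
            continue
        ok = True
        for t, act in enumerate(actions):
            if act == "load" and not traj[("loaded", t + 1)]:
                ok = False
            if act == "shoot" and traj[("loaded", t)] and traj[("alive", t + 1)]:
                ok = False
        if not ok:
            continue
        key = tuple(sum(traj[(f, t)] != traj[(f, t + 1)] for f in FLUENTS)
                    for t in range(horizon - 1))
        if best_key is None or key < best_key:
            best_key, best = key, [traj]
        elif key == best_key:
            best.append(traj)
    return best

def believed(models, fluent, t):
    vals = {m[(fluent, t)] for m in models}
    return {frozenset([True]): "yes", frozenset([False]): "no"}.get(
        frozenset(vals), "undetermined")

# Prediction (Yale Shooting): is Fred still alive at t=3?
prediction = chron_minimal(["load", "wait", "shoot"],
                           {"loaded": False, "alive": True}, {})
print("alive at t=3:", believed(prediction, "alive", 3))     # "no": intended

# Postdiction: Fred is observed dead at t=2 after SHOOT, WAIT.
# Was the gun loaded at t=0?  Intended answer: yes.
postdiction = chron_minimal(["shoot", "wait"],
                            {"alive": True}, {("alive", 2): False})
print("loaded at t=0:", believed(postdiction, "loaded", 0))  # "no": unintended
```

On the Yale Shooting prediction this criterion yields the intended conclusion that Fred is dead at t=3, but on the postdiction variant it prefers the model in which the gun was never loaded and Fred dies inexplicably during WAIT, rather than the intended conclusion that the gun must have been loaded at t=0.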
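The influence idea can also be given a toy operational reading. The interval representation, the names, and the "sufficient period" threshold below are assumptions invented for this sketch and are not taken from [f-cis.linep.se-97-014]; the sketch only mirrors the informal description above: an extinguish-fire-influence dominates a light-fire-influence, and a fire results only if an undominated light-fire-influence holds for a sufficient period.

```
# Toy rendering of "influences": concurrent actions contribute influences
# over time intervals; an extinguish-fire influence dominates a light-fire
# influence, and a fire starts only after an undominated light-fire
# influence has held for a sufficient number of consecutive time points.

from dataclasses import dataclass

@dataclass
class Influence:
    kind: str    # "light-fire" or "extinguish-fire"
    start: int   # first time point of the interval (inclusive)
    end: int     # last time point of the interval (inclusive)

def fire_results(influences, horizon, sufficient=3):
    """True iff some run of `sufficient` consecutive time points carries a
    light-fire influence and no dominating extinguish-fire influence."""
    streak = 0
    for t in range(horizon):
        active = {i.kind for i in influences if i.start <= t <= i.end}
        if "light-fire" in active and "extinguish-fire" not in active:
            streak += 1
            if streak >= sufficient:
                return True
        else:
            streak = 0
    return False

# You try to light the fire while I drip water over the same interval:
# the extinguish influence dominates at every point, so no fire results.
print(fire_results([Influence("light-fire", 0, 5),
                    Influence("extinguish-fire", 0, 5)], horizon=6))  # False

# Lighting alone for a sufficient period does produce a fire.
print(fire_results([Influence("light-fire", 0, 5)], horizon=6))       # True
```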
References: