********************************************************************
   ELECTRONIC NEWSLETTER ON REASONING ABOUT ACTIONS AND CHANGE

Issue 00003      Editor: Erik Sandewall      22.6.2000

Back issues available at http://www.etaij.org/rac/
********************************************************************

********* TODAY *********

Today we present John McCarthy's answer to Joe Halpern in the ENRAC
debate on the use of modal logic in knowledge representation. In order
to see the earlier contributions to this discussion, please refer to
its new discussion page. In the webpage for the ETAI section on
actions and change,

    http://www.etaij.org/rac/

click 'General discussions' in the menu, and follow the link to "The
use of Modal Logic in Knowledge Representation".

Since the previous Newsletter, Joe Halpern has slightly modified the
text in the last two paragraphs of his contribution there, based on an
off-line interaction with McCarthy. McCarthy's response here is
relative to the revised text, which is now posted on our webpage.

********* ENRAC DEBATES *********

--- MODALITY FOR ROBOTS II -- RESPONSE TO HALPERN ---

John McCarthy
Computer Science Department
Stanford University
Stanford, CA 94305
jmc@cs.stanford.edu
http://www-formal.stanford.edu/jmc/

In (McCarthy 1997) I criticized modal logic on the grounds that it
could not represent modal statements sufficiently expressively for
human level logical AI. Joe Halpern defended modal logic in (Halpern
1999), I replied in (McCarthy 1999), he replied again in (Halpern
2000), and here are my further comments about a few of Halpern's
points.

Halpern writes:

  2. McCarthy wants robots to be able to reason with ZFC (again, see
  point 4 in [McCarthy 1999]). Of course, common knowledge is
  expressible in ZFC (and much more besides). However, this comes at
  a cost, since ZFC is highly undecidable.

Human level logical AI requires a logical system with three
properties.

(1) Common sense knowledge and scientific knowledge are compactly and
    understandably expressible.

(2) Meta-knowledge about the common sense knowledge is also
    expressible. If we can reason about the common sense knowledge,
    then the AI system may also need to reason about it.

(3) The inferences, including the nonmonotonic inferences, that
    validate a strategy for achieving a goal are also readily
    expressible.

ZF (with or without the axiom of choice) has these properties to an
outstanding degree. Actually we need a heavy-duty ZF that has many of
the theorems and definitions built in.

What about the fact that ZF, or any other system capable of expressing
common sense knowledge and reasoning, is undecidable, although systems
of lesser expressive power, like some forms of propositional modal
logic, are decidable? Sentences in these lesser systems map into
subsets of sentences of ZF, and their inference methods map into
subsets of ZF inferences. For example, STRIPS can be mapped into a
subset of situation calculus. The solution is to make ZF reasoners
that can be restricted to subsets of the inferences. Unfortunately,
present day theorem provers are insufficiently controllable. Of
course, we also need good heuristics for undecidable domains.
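To make the mapping claim concrete, here is a minimal illustrative
sketch: one STRIPS operator and one way of rendering it as situation
calculus axioms. The blocks-world operator pickup and the fluents
ontable, handempty and holding are illustrative choices, not taken
from the papers under discussion.

\[
\begin{array}{l}
\text{STRIPS operator: } \mathit{pickup}(x) \\
\quad \text{Pre: } \{\mathit{ontable}(x),\ \mathit{handempty}\}, \quad
      \text{Del: } \{\mathit{ontable}(x),\ \mathit{handempty}\}, \quad
      \text{Add: } \{\mathit{holding}(x)\} \\[6pt]
\text{Situation calculus rendering:} \\
\quad \mathit{Poss}(\mathit{pickup}(x), s) \equiv
      \mathit{ontable}(x, s) \land \mathit{handempty}(s) \\
\quad \mathit{Poss}(\mathit{pickup}(x), s) \supset
      \mathit{holding}(x, \mathit{do}(\mathit{pickup}(x), s))
      \land \lnot \mathit{ontable}(x, \mathit{do}(\mathit{pickup}(x), s))
      \land \lnot \mathit{handempty}(\mathit{do}(\mathit{pickup}(x), s))
\end{array}
\]

On this rendering each operator becomes a precondition axiom plus
effect axioms; what STRIPS handles by its implicit "everything else is
unchanged" convention must be supplied in the situation calculus by
frame axioms or a nonmonotonic frame assumption.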
Halpern goes on to write:

  This point brings out a much deeper philosophical difference between
  McCarthy and me. McCarthy wants robots to reason with a formal
  logic. I'm not sure that's either necessary or desirable. Humans
  certainly don't do much reasoning in a formal logic to help them in
  deciding how to act; it's not clear to me that robots need to
  either. Robots should use whatever formalisms help them make
  decisions. I do think logic is a useful tool for clarifying
  subtleties and for systems designers to reason about systems (e.g.,
  for robot designers to reason about robots), but it's not clear to
  me that it's as useful for the robots themselves.

The sentence "Robots should use whatever . . . " is ambiguous between
a person deciding what the robot is to use and the robot deciding for
itself. If logic is useful for system designers and robots are to have
human level intelligence, then they must also be able to use logic. I
fear that Halpern has abandoned the goal of human level AI. I'm
curious what Halpern considers the appropriate upper limit of
intellectual ambition for robots.

I agree that ordinary human reasoning about the effects of actions is
not done in mathematical logic. Nevertheless, AI systems based on
mathematical logic have been the most successful, e.g., Reiter's
Golog.

Halpern sees no use for number concepts as distinct from numbers. Most
sentences involving fixed numbers allow the abuse of notation of not
making the distinction. However, omitting them is like omitting 0 from
the number system or omitting the empty set from set theory. It was
long argued that you don't need to count if you have nothing to count.

  ". . . it seems that hardly anybody proposes to use different
  variables for propositions and for truth-values, or different
  variables for individuals and individual concepts."
  -- (Carnap 1956, p. 113)

In (McCarthy 1979) I did what Carnap says hardly anyone proposes to
do. I also introduced functions from objects to "standard concepts" of
them. These are analogous to the "standard names" used by some
logicians. Numbers have particularly simple standard concepts, and
maybe one could get by with identifying the standard concept of a
number with the number itself. However, just as logicians find it
useful to distinguish between numbers and the numerals denoting them,
having standard concepts of numbers is also likely to be useful.

Formalisms that can treat concepts as first-class objects are needed
to give computer systems the same power that humans have. (McCarthy
1979) gives many examples. However, the challenges I offered in my
previous note were met by Halpern using modal predicate logic, a
highly undecidable domain. I could offer still more challenges, of
which I mention only one: the problem of Mr. S and Mr. P treated in
(McCarthy 1978). A key problem is to represent the fact that Mr. S and
Mr. P initially know only what they are explicitly told and know that
that's all they know. Otherwise, Mr. P would not be able to infer that
Mr. S did not know the numbers.
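To see what the "that's all they know" condition does, here is a small
brute-force sketch of the puzzle under one common formulation: two
integers m and n with 1 < m < n and m + n <= 100, where Mr. P is told
the product and Mr. S is told the sum. The exact bounds and dialogue
in (McCarthy 1978) may differ. Each filtering step below relies on the
assumption that each man knows only what he has been told plus the
earlier announcements, and knows that this is all he knows.

from collections import Counter

# One common formulation (the bounds and dialogue are assumptions, not
# necessarily McCarthy's): 1 < m < n and m + n <= 100.
pairs = [(m, n) for m in range(2, 100)
                for n in range(m + 1, 100) if m + n <= 100]

prod_count = Counter(m * n for (m, n) in pairs)  # pairs sharing each product

def splits(s):
    """All candidate pairs consistent with the sum s."""
    return [(a, b) for (a, b) in pairs if a + b == s]

# P: "I don't know the numbers."  The product does not determine the pair.
step1 = [(m, n) for (m, n) in pairs if prod_count[m * n] > 1]

# S: "I knew you didn't know."  Every split of the sum has an ambiguous
# product, so S could predict P's ignorance from the sum alone.
step2 = [(m, n) for (m, n) in step1
         if all(prod_count[a * b] > 1 for (a, b) in splits(m + n))]

# P: "Now I know the numbers."  The product is unique among the survivors.
prod_count2 = Counter(a * b for (a, b) in step2)
step3 = [(m, n) for (m, n) in step2 if prod_count2[m * n] == 1]

# S: "Now I know them too."  The sum is unique among the remaining pairs.
sum_count3 = Counter(a + b for (a, b) in step3)
step4 = [(m, n) for (m, n) in step3 if sum_count3[m + n] == 1]

print(step4)    # [(4, 13)] under these assumptions

Under these assumptions the search leaves the single pair (4, 13);
with other bounds or a different dialogue the outcome can change,
which is exactly why the only-knowing condition must be represented
explicitly.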
References

Carnap, R. 1956. Meaning and Necessity. University of Chicago Press.

Halpern, J. Y. 1999. On the adequacy of modal logic. In ETAI
discussion.

Halpern, J. Y. 2000. On the adequacy of modal logic II, a response to
McCarthy. News Journal on Reasoning about Actions and Change. See
http://www.etaij.org/rac.

McCarthy, J. 1978. Formalization of two puzzles involving knowledge.
Reprinted in (McCarthy 1990). Available as
http://www-formal.stanford.edu/jmc/puzzles.html

McCarthy, J. 1979. First order theories of individual concepts and
propositions. In D. Michie (Ed.), Machine Intelligence, Vol. 9.
Edinburgh: Edinburgh University Press. Reprinted in (McCarthy 1990).
Available as http://www-formal.stanford.edu/jmc/concepts.html

McCarthy, J. 1990. Formalizing Common Sense: Papers by John McCarthy.
Ablex Publishing Corporation.

McCarthy, J. 1997. Modality, si! Modal logic, no! Studia Logica
59:29-32.

McCarthy, J. 1999. Modality for robots -- responses to Halpern and
Wansing. News Journal on Reasoning about Actions and Change 3. See
http://www.etaij.org/rac.

********************************************************************
This Newsletter is issued whenever there is new news, and is sent by
automatic E-mail and without charge to a list of subscribers. To
obtain or change a subscription, please send mail to the editor,
erisa@ida.liu.se. Contributions are welcomed to the same address.
Instructions for contributors and other additional information are
found at:

    http://www.etaij.org/rac/
********************************************************************