AIICS Publications: Journal Publications
2017  
[106] 
2017. Repeatability and accuracy of an MRI-based, semi-automated analysis method for quantifying abdominal adipose tissue and thigh muscle volumes, and hepatic proton density fat fraction. Radiology, ??(??):??–??. Publication status: Accepted <strong>Purpose</strong> The purpose of this study was to determine the repeatability and accuracy of a commercially available (Advanced MR Analytics [AMRA<sup>®</sup>]; Linköping, Sweden) magnetic resonance imaging (MRI)-based, semi-automated method to quantify abdominal adipose tissue and thigh muscle volume as well as hepatic proton density fat fraction (PDFF). <strong>Materials and Methods</strong> This prospective study was approved by an institutional review board (IRB) and was Health Insurance Portability and Accountability Act (HIPAA) compliant. All subjects provided written informed consent. Inclusion criteria were age ≥ 18 years and willingness to participate. The exclusion criterion was contraindication to MRI. Three-dimensional, T1-weighted, dual-echo body-coil images were acquired from the base of the skull to the knees at 3T, twice before and once after taking subjects off the scanner table (a total of three acquisitions). Source images were reconstructed offline to generate water images and calibrated fat images in which pure adipose tissue has unit value and absence of adipose tissue has zero value. Abdominal adipose tissues and thigh muscles were segmented and their volumes estimated using AMRA, a semi-automated analysis method, and, as a reference standard, manually. Hepatic PDFF was estimated using a confounder-corrected chemical-shift-encoded MRI method with hybrid complex-magnitude reconstruction and, as a reference standard, with magnetic resonance spectroscopy (MRS). 
Tissue volume and hepatic PDFF intra- and inter-examination repeatability was assessed by intraclass correlation (ICC) and coefficient of variation (CV) analysis. Tissue volume and hepatic PDFF accuracies were assessed by linear regression against their respective reference standards. <strong>Results</strong> Twenty adult subjects were enrolled (18 female, age range 25–76 yrs, body mass index range 19.3 to 43.9 kg/m<sup>2</sup>). Adipose and thigh muscle tissue volumes estimated using the semi-automated analysis method had intra- and inter-examination ICCs between 0.996 and 0.998, and CVs between 1.5 and 3.6%. For hepatic MRI PDFF, intra- and inter-examination ICCs were ≥ 0.994 and CVs ≤ 7.3%. Agreement between semi-automated and manual volume estimates, and between MRI and MRS hepatic PDFF estimates, was high, with regression slopes and intercepts not significantly different from the identity line (all p's > 0.05) and R<sup>2</sup>'s between 0.744 and 0.994. <strong>Conclusions</strong> This MRI-based, semi-automated method provides high repeatability and high accuracy for estimating abdominal adipose tissue and thigh muscle volumes, and hepatic PDFF. 
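The within-subject coefficient of variation used above is a standard repeatability statistic. As a minimal illustrative sketch (not the authors' analysis pipeline; the measurement values are hypothetical), the CV across repeated volume estimates for one subject can be computed as:

```python
import statistics

def coefficient_of_variation(measurements):
    """Within-subject CV: standard deviation as a percentage of the mean."""
    mean = statistics.mean(measurements)
    return 100.0 * statistics.stdev(measurements) / mean

# Three repeated volume estimates (litres) for one hypothetical subject.
repeats = [4.10, 4.05, 4.18]
cv = coefficient_of_variation(repeats)  # a small CV indicates high repeatability
```

A CV in the low single digits, as reported in the abstract, means the repeated acquisitions differ by only a few percent of the mean value.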
.

2016  
[105] 
2016. Cognitive robotics. Journal of Experimental and Theoretical Artificial Intelligence (Print), 28(5):779–780. Taylor & Francis Ltd. DOI: 10.1080/0952813X.2016.1218649. 
.

[104] 
2016. Editorial: Special Issue on Safety, Security, and Rescue Robotics (SSRR), Part 2. Journal of Field Robotics, 33(4):409–410. 
.

[103] 
2016. Editorial: Special Issue on Safety, Security, and Rescue Robotics (SSRR), Part 1. Journal of Field Robotics, 33(3):263–264. 
.

[102] 
2016. Optimal scheduling for replacing perimeter guarding unmanned aerial vehicles. Annals of Operations Research, ??(??):1–12. Springer. DOI: 10.1007/s10479-016-2169-5. Publication status: Epub ahead of print Guarding the perimeter of an area in order to detect potential intruders is an important task in a variety of security-related applications. This task can in many circumstances be performed by a set of camera-equipped unmanned aerial vehicles (UAVs). Such UAVs will occasionally require refueling or recharging, in which case they must temporarily be replaced by other UAVs in order to maintain complete surveillance of the perimeter. In this paper we consider the problem of scheduling such replacements. We present optimal replacement strategies and justify their optimality. 
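The abstract does not reproduce the strategies themselves, but the timing constraint behind any such replacement can be sketched. The following toy calculation (illustrative only, with made-up endurance and transit-time figures, not values from the paper) shows when a replacement UAV must launch so that perimeter coverage is never interrupted:

```python
def replacement_launch_time(guard_takeoff, endurance, transit_time, reserve=0.0):
    """Latest time the guarding UAV can stay on station, and when its
    replacement must launch so coverage is never interrupted.
    All times in minutes; the replacement needs `transit_time` to reach
    the perimeter from the base, and `reserve` is a safety fuel margin."""
    # The guard must head home early enough to land with `reserve` remaining.
    depart = guard_takeoff + endurance - transit_time - reserve
    # The replacement arrives exactly at the handover if it launches this early.
    launch = depart - transit_time
    return launch, depart

launch, handover = replacement_launch_time(guard_takeoff=0, endurance=60,
                                           transit_time=10, reserve=5)
```

With these numbers the guard must leave its station at minute 45, so the replacement launches at minute 35 to arrive just in time.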
.

[101] 
2016. Efficient Processing of Simple Temporal Networks with Uncertainty: Algorithms for Dynamic Controllability Verification. Temporal formalisms are essential for reasoning about actions that are carried out over time. The exact durations of such actions are generally hard to predict. In temporal planning, the resulting uncertainty is often worked around by only considering upper bounds on durations, with the assumption that when an action happens to be executed more quickly, the plan will still succeed. However, this assumption is often false: If we finish cooking too early, the dinner will be cold before everyone is ready to eat. Using <em>Simple Temporal Networks with Uncertainty (STNU)</em>, a planner can correctly take both lower and upper duration bounds into account. It must then verify that the plans it generates are executable regardless of the actual outcomes of the uncertain durations. This is captured by the property of <em>dynamic controllability</em> (DC), which should be verified incrementally during plan generation. Recently, a new incremental algorithm for verifying dynamic controllability was proposed: <em>EfficientIDC</em>, which can verify whether an STNU that is DC remains DC after the addition or tightening of a constraint (corresponding to a new action being added to a plan). The algorithm was shown to have a worst-case complexity of <em>O</em>(n<sup>4</sup>) for each addition or tightening. This can be amortized over the construction of a whole STNU for an amortized complexity in <em>O</em>(n<sup>3</sup>). In this paper we improve the <em>EfficientIDC</em> algorithm in a way that prevents it from having to reprocess nodes. This improvement leads to a lower worst-case complexity in <em>O</em>(n<sup>3</sup>). 
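The dinner example can be made concrete. The following toy sketch (not the EfficientIDC algorithm, and with made-up numbers) shows why verifying a plan against only the upper duration bound is unsound when a duration is uncertain:

```python
# Uncertain cooking duration: anywhere in [20, 40] minutes (contingent link).
cook_lower, cook_upper = 20, 40
guests_ready = 40   # guests sit down 40 minutes after cooking starts
max_wait = 10       # food goes cold if it waits more than 10 minutes

def plan_ok(duration):
    """The food must be ready, but not more than `max_wait` minutes early."""
    wait = guests_ready - duration
    return 0 <= wait <= max_wait

# Checking only the upper bound suggests the fixed schedule works ...
assert plan_ok(cook_upper)
# ... but the lower-bound outcome violates the constraint: dinner goes cold.
assert not plan_ok(cook_lower)
```

Dynamic controllability asks for a stronger guarantee than this fixed schedule can give: an execution strategy that satisfies all constraints for every outcome in [20, 40], reacting to observed durations as they occur.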
.

[100] 
2016. A Modeling Framework for Troubleshooting Automotive Systems. Applied Artificial Intelligence, 30(3):257–296. Taylor & Francis. DOI: 10.1080/08839514.2016.1156955. Note: The published article is a shorter version of the version in manuscript form; the status of this article was earlier Manuscript. Funding agencies: Scania CV AB; FFI Strategic Vehicle Research and Innovation; Excellence Center at Linköping and Lund in Information Technology (ELLIIT); Research Council (VR) Linnaeus Center CADICS. This article presents a novel framework for modeling the troubleshooting process for automotive systems such as trucks and buses. We describe how a diagnostic model of the troubleshooting process can be created using event-driven, non-stationary, dynamic Bayesian networks. Exact inference in such a model is in general not practically possible. Therefore, we evaluate different approximate methods for inference based on the Boyen–Koller algorithm. We identify relevant model classes that have particular structure such that inference can be made with linear time complexity. We also show how models created using expert knowledge can be tuned using statistical data. The proposed learning mechanism can use data collected from a heterogeneous fleet of modular vehicles that can consist of different components. The proposed framework is evaluated both theoretically and experimentally on an application example of a fuel injection system. 
.

2015  
[99] 
2015. Querying Flying Robots and Other Things: Ontology-supported stream reasoning. XRDS: Crossroads, The ACM Magazine for Students, 22(2):44–47. Association for Computing Machinery (ACM). DOI: 10.1145/2845155. Link to publication: http://xrds.acm.org/article.cfm?aid=2845... 
.

[98] 
2015. Towards richer rule languages with polynomial data complexity for the Semantic Web. Data & Knowledge Engineering, 96-97(??):57–77. Elsevier Science BV. DOI: 10.1016/j.datak.2015.04.005. We introduce a Horn description logic called Horn-DL, which is strictly and essentially richer than Horn-RegI, Horn-SHIQ and Horn-SROIQ, while still having PTime data complexity. In comparison with Horn-SROIQ, Horn-DL additionally allows the universal role and assertions of the form irreflexive(s), ¬s(a, b), a ≠ b. More importantly, in contrast to all the well-known Horn fragments EL, DL-Lite, DLP, Horn-SHIQ, and Horn-SROIQ of description logics, Horn-DL allows a form of the concept constructor "universal restriction" to appear at the left-hand side of terminological inclusion axioms. Namely, a universal restriction can be used in such places in conjunction with the corresponding existential restriction. We develop the first algorithm with PTime data complexity for checking satisfiability of Horn-DL knowledge bases. 
.

[97] 
2015. Paraconsistent semantics of speech acts. This paper discusses an implementation of four speech acts: assert, concede, request and challenge in a paraconsistent framework. A natural four-valued model of interaction yields multiple new cognitive situations. They are analyzed in the context of communicative relations, which partially replace the concept of trust. These assumptions naturally lead to six types of situations, which often require performing conflict resolution and belief revision. The particular choice of a rule-based, DATALOG<sup>¬¬</sup>-like query language 4QL as a four-valued implementation framework ensures that, in contrast to the standard two-valued approaches, tractability of the model is achieved. 
.

2014  
[96] 
2014. Guided Integrated Remote and Workshop Troubleshooting of Heavy Trucks. International Journal of Commercial Vehicles, 7(1):25–36. SAE International. DOI: 10.4271/2014-01-0284. When a truck or bus suffers from a breakdown it is important that the vehicle comes back on the road as soon as possible. In this paper we present a prototype diagnostic decision support system capable of automatically identifying possible causes of a failure and proposing recommended actions on how to get the vehicle back on the road as cost-efficiently as possible. This troubleshooting system is novel in the way it integrates remote diagnosis with workshop diagnosis when providing recommendations. To achieve this integration, a novel planning algorithm has been developed that enables the troubleshooting system to guide the different users (driver, helpdesk operator, and mechanic) through the entire troubleshooting process. In this paper we formulate the problem of integrated remote and workshop troubleshooting and present a working prototype that has been implemented to demonstrate all parts of the troubleshooting system. 
.

[95] 
2014. A Horn Fragment with PTime Data Complexity of Regular Description Logic with Inverse. VNU Journal of Computer Science and Communication Engineering, 30(4):14–28. Link to publication: http://www.jcsce.vnu.edu.vn/index.php/jc... We study a Horn fragment called Horn-RegI of the regular description logic with inverse RegI, which extends the description logic ALC with inverse roles and regular role inclusion axioms characterized by finite automata. In contrast to the well-known Horn fragments EL, DL-Lite, DLP, Horn-SHIQ and Horn-SROIQ of description logics, Horn-RegI allows a form of the concept constructor "universal restriction" to appear at the left-hand side of terminological inclusion axioms, while still having PTime data complexity. Namely, a universal restriction can be used in such places in conjunction with the corresponding existential restriction. We provide an algorithm with PTime data complexity for checking satisfiability of Horn-RegI knowledge bases. 
.

[94] 
2014. Evaluation of a Lightweight Lidar and a Photogrammetric System for Unmanned Airborne Mapping Applications: [Bewertung eines Lidarsystems mit geringem Gewicht und eines photogrammetrischen Systems für Anwendungen auf einem UAV]. Photogrammetrie – Fernerkundung – Geoinformation, ??(4):287–298. E. Schweizerbart'sche Verlagsbuchhandlung. Link to article: http://www.ingentaconnect.com/content/sc... This paper presents a comparison of two lightweight and low-cost airborne mapping systems. One is based on lidar technology and the other on a video camera. The airborne lidar system consists of a high-precision global navigation satellite system (GNSS) receiver, a micro-electromechanical system (MEMS) inertial measurement unit, a magnetic compass and a low-cost lidar scanner. The vision system is based on a consumer-grade video camera. A commercial photogrammetric software package is used to process the acquired images and generate a digital surface model. The two systems are described and compared in terms of hardware requirements and data processing. The systems are also tested and compared with respect to their application on board an unmanned aerial vehicle (UAV). An evaluation of the accuracy of the two systems is presented. Additionally, the multi-echo capability of the lidar sensor is evaluated in a test site covered with dense vegetation. The lidar and camera systems were mounted and tested onboard an industrial unmanned helicopter with a maximum takeoff weight of around 100 kilograms. The presented results are based on real flight-test data. 
.

[93] 
2014. WORL: a nonmonotonic rule language for the semantic web. Vietnam Journal of Computer Science, 1(1):57–69. Springer Berlin/Heidelberg. DOI: 10.1007/s40595-013-0009-y. We develop a new Web ontology rule language, called WORL, which combines a variant of OWL 2 RL with eDatalog<sup>¬</sup>. We allow additional features like negation, the minimal number restriction and unary external checkable predicates to occur at the left-hand side of concept inclusion axioms. Some restrictions are adopted to guarantee a translation into eDatalog<sup>¬</sup>. We also develop the well-founded semantics and the stable model semantics for WORL as well as the standard semantics for stratified WORL (SWORL) via translation into eDatalog<sup>¬</sup>. Both WORL with respect to the well-founded semantics and SWORL with respect to the standard semantics have PTime data complexity. In contrast to the existing combined formalisms, in WORL and SWORL negation in concept inclusion axioms is interpreted using nonmonotonic semantics. 
.

[92] 
2014. Editorial Material: A perspective on the early history of artificial intelligence in Europe. 
.

[91] 
2014. Unmanned aerial vehicle-aided communications system for disaster recovery. Journal of Network and Computer Applications, 41(??):27–36. Elsevier. DOI: 10.1016/j.jnca.2013.10.002. After natural disasters such as earthquakes, floods, hurricanes, tornados and fires, providing emergency management schemes, which mainly rely on communications systems, is essential for rescue operations. To establish an emergency communications system during unforeseen events such as natural disasters, we propose the use of a team of unmanned aerial vehicles (UAVs). The proposed system is a post-disaster solution and can be used whenever and wherever required. Each UAV in the team has an onboard computer which runs three main subsystems responsible for end-to-end communication, formation control and autonomous navigation. The onboard computer and the low-level controller of the UAV cooperate to accomplish the objective of providing a local communications infrastructure. In this study, the subsystems running on each UAV are explained and evaluated by simulation studies and field tests using an autonomous helicopter. While the simulation studies address the efficiency of the end-to-end communication subsystem, the field tests evaluate the accuracy of the navigation subsystem. The results of the field tests and the simulation studies show that the proposed system can be successfully used in case of disasters to establish an emergency communications system. 
.

2013  
[90] 
2013. Automated Generation of Logical Constraints on Approximation Spaces Using Quantifier Elimination. Fundamenta Informaticae, 127(1-4):135–149. IOS Press. DOI: 10.3233/FI-2013-900. Note: Funding agencies: Swedish Research Council (VR) Linnaeus Center CADICS; ELLIIT Excellence Center at Linköping-Lund in Information Technology; CUAS project; SSF, the Swedish Foundation for Strategic Research. This paper focuses on approximate reasoning based on the use of approximation spaces. Approximation spaces and the approximated relations induced by them are a generalization of the rough set-based approximations of Pawlak. Approximation spaces are used to define neighborhoods around individuals and rough inclusion functions. These in turn are used to define approximate sets and relations. In any of the approaches, one would like to embed such relations in an appropriate logical theory which can be used as a reasoning engine for specific applications with specific constraints. We propose a framework which permits a formal study of the relationship between properties of approximations and properties of approximation spaces. Using ideas from correspondence theory, we develop an analogous framework for approximation spaces. We also show that this framework can be strongly supported by automated techniques for quantifier elimination. 
.

[89] 
2013. How an agent might think. Logic Journal of the IGPL (Print), 21(3):515–535. Oxford University Press (OUP). DOI: 10.1093/jigpal/jzs051. The current article is devoted to extensions of the rule query language 4QL proposed by Małuszyński and Szałas. 4QL is a Datalog<sup>¬¬</sup>-like language, allowing one to use rules with negation in heads and bodies of rules. It is based on a simple and intuitive semantics and provides uniform tools for lightweight versions of well-known forms of nonmonotonic reasoning. In addition, 4QL is tractable w.r.t. data complexity and captures PTime queries. In the current article we relax most of the restrictions of 4QL, obtaining a powerful but still tractable query language 4QL<sup>+</sup>. In its development we mainly focused on its pragmatic aspects: simplicity, tractability and generality. In the article we discuss our approach and the choices made, define a new, more general semantics and investigate properties of 4QL<sup>+</sup>. 
.

[88] 
2013. Toward rich geometric map for SLAM: online detection of planes in 2D LIDAR. Journal of Automation, Mobile Robotics & Intelligent Systems, 7(1):35–41. Link to article: http://www.jamris.org/archive.php Rich geometric models of the environment are needed for robots to carry out their missions. However, a robot operating in a large environment would require a compact representation. In this article, we present a method that relies on the idea that a plane appears as a line segment in a 2D scan, and that by tracking those lines frame after frame it is possible to estimate the parameters of that plane. The method is divided into three steps: fitting line segments to the points of the 2D scan, tracking those line segments in consecutive scans, and estimating the plane parameters with a graph-based SLAM (Simultaneous Localisation And Mapping) algorithm. 
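The first of the three steps, fitting a line to a cluster of 2D scan points, is a standard operation. A minimal total-least-squares sketch (illustrative only, not the authors' implementation) takes the line through the centroid along the principal axis of the points:

```python
import math

def fit_line(points):
    """Total-least-squares line fit: returns a point on the line (the
    centroid) and a unit direction vector, obtained from the principal
    axis of the 2x2 covariance matrix of the scan points."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    # Angle of the direction of maximum variance (standard 2D PCA result).
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (cx, cy), (math.cos(theta), math.sin(theta))

# Collinear points on y = x: the fitted direction is the diagonal.
centroid, direction = fit_line([(0, 0), (1, 1), (2, 2), (3, 3)])
```

Total least squares is preferred over ordinary least squares here because a scanned wall may appear at any orientation, including near-vertical, where regressing y on x fails.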
.

[87] 
2013. High-level Mission Specification and Planning for Collaborative Unmanned Aircraft Systems using Delegation. Unmanned Systems, 1(1):75–119. World Scientific. Automated specification, generation and execution of high-level missions involving one or more heterogeneous unmanned aircraft systems is in its infancy. Much previous effort has been focused on the development of the air vehicle platforms themselves, together with the avionics and sensor subsystems that implement basic navigational skills. In order to increase the degree of autonomy in such systems so they can successfully participate in more complex mission scenarios, such as those considered in emergency rescue that also include ongoing interactions with human operators, new architectural components and functionalities will be required to aid not only human operators in mission planning, but also the unmanned aircraft systems themselves in the automatic generation, execution and partial verification of mission plans to achieve mission goals. This article proposes a formal framework and architecture based on the unifying concept of delegation that can be used for the automated specification, generation and execution of high-level collaborative missions involving one or more air vehicle platforms and human operators. We describe an agent-based software architecture, a temporal logic based mission specification language, a distributed temporal planner and a task specification language that, when integrated, provide a basis for the generation, instantiation and execution of complex collaborative missions on heterogeneous air vehicle systems. A prototype of the framework is operational in a number of autonomous unmanned aircraft systems developed in our research lab. 
.

[86] 
2013. Stream-Based Hierarchical Anchoring. Autonomous systems situated in the real world often need to recognize, track, and reason about various types of physical objects. In order to allow reasoning at a symbolic level, one must create and continuously maintain a correlation between symbols denoting physical objects and sensor data being collected about them, a process called anchoring. In this paper we present a stream-based hierarchical anchoring framework. A classification hierarchy is associated with expressive conditions for hypothesizing the type and identity of an object given streams of temporally tagged sensor data. The anchoring process constructs and maintains a set of object linkage structures representing the best possible hypotheses at any time. Each hypothesis can be incrementally generalized or narrowed down as new sensor data arrives. Symbols can be associated with an object at any level of classification, permitting symbolic reasoning on different levels of abstraction. The approach is integrated in the DyKnow knowledge processing middleware and has been applied to an unmanned aerial vehicle traffic monitoring application. 
.

[85] 
2013. RoboCup Rescue Robot and Simulation Leagues. The AI Magazine, 34(1):??–??. AAAI Press. Link to journal: http://www.aaai.org/ojs/index.php/aimaga... The RoboCup Rescue Robot and Simulation competitions have been held since 2000. The experience gained during these competitions has increased the maturity level of the field, which has allowed deploying robots after real disasters (e.g. the Fukushima Daiichi nuclear disaster). This article provides an overview of these competitions and highlights the state of the art and the lessons learned. 
.

[84] 
2013. Study of efficiency of USAR operations with assistive technologies. Advanced Robotics, 27(5):337–350. DOI: 10.1080/01691864.2013.763723. Note: Funding agencies: German Federal Ministry of Education and Research (13N9759); German Federal Agency for Technical Relief (THW); RIF e.V.; JT-electronic GmbH; carat robotic innovation GmbH; Berlin Oberspree Sondermaschinenbau GmbH (BOS); SEEBA. This paper presents a study on the efficiency of Urban Search and Rescue (USAR) missions that has been carried out within the framework of the German research project I-LOV. After three years of development, first field tests were carried out in 2011 by professionals such as the Rapid Deployment Unit for Salvage Operations Abroad (SEEBA). We present results from evaluating search teams in simulated USAR scenarios equipped with newly developed technical search means and digital data input terminals developed in the I-LOV project. In particular, USAR missions assisted by the "bioradar", a ground-penetrating radar system for the detection of humanoid movements, a semi-active video probe of more than 10 m length for rubble pile exploration, a snake-like rescue robot, and the decision support system FRIEDAA were evaluated and compared with conventional USAR missions. Results of this evaluation indicate that the developed technologies represent an advantage for USAR missions, which is discussed in this paper. 
.

[83] 
2013. A Frontier-Void-Based Approach for Autonomous Exploration in 3D. Advanced Robotics, 27(6):459–468. Taylor and Francis. DOI: 10.1080/01691864.2013.763720. Note: Funding agencies: Deutsche Forschungsgemeinschaft in the Transregional Collaborative Research Center SFB/TR8. We consider the problem of an autonomous robot searching for objects in unknown 3D space. Similar to the well-known frontier-based exploration in 2D, the problem is to determine a minimal sequence of sensor viewpoints until the entire search space has been explored. We introduce a novel approach that combines the two concepts of voids, which are unexplored volumes in 3D, and frontiers, which are regions on the boundary between voids and explored space. Our approach has been evaluated on a mobile platform equipped with a manipulator searching for victims in a simulated USAR setup. First results indicate the real-world capability and search efficiency of the proposed method. 
.

2012  
[82] 
2012. Modeling and inference for troubleshooting with interventions applied to a heavy truck auxiliary braking system. Engineering Applications of Artificial Intelligence, 25(4):705–719. Elsevier. DOI: 10.1016/j.engappai.2011.02.018. Computer-assisted troubleshooting with external interventions is considered. The work is motivated by the task of repairing an automotive vehicle at the lowest possible expected cost. The main contribution is a decision-theoretic troubleshooting system that is developed to handle external interventions. In particular, practical issues in modeling for troubleshooting are discussed, the troubleshooting system is described, and a method for efficient probability computations is developed. The troubleshooting system consists of two parts: a planner that relies on AO* search and a diagnoser that utilizes Bayesian networks (BN). The work is based on a case study of an auxiliary braking system of a modern truck. Two main challenges in troubleshooting automotive vehicles are the need to disassemble the vehicle during troubleshooting to access parts to repair, and the difficulty of verifying that the vehicle is fault-free. As a consequence, probabilities for faults and for future observations must be computed for a system that has been subject to external interventions that cause changes in the dependency structure. The probability computations are further complicated by the mixture of instantaneous and non-instantaneous dependencies. To compute the probabilities, we develop a method based on an algorithm, updateBN, that updates a static BN to account for the external interventions. 
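The expected-cost objective that such a troubleshooting planner minimizes can be illustrated with a toy single-fault model (a simplification for illustration; the paper's method uses AO* search over a Bayesian network, not this closed form). Each candidate action fixes the fault with some probability, and an action's cost is only paid if no earlier action has already fixed the vehicle:

```python
def expected_cost(actions):
    """Expected total cost of trying repair actions in the given order.
    `actions` is a list of (probability, cost) pairs in a single-fault
    model: `probability` is the chance this action resolves the fault."""
    total, p_remaining = 0.0, 1.0
    for p, cost in actions:
        total += p_remaining * cost  # paid only if not yet fixed
        p_remaining -= p             # this action fixed it with probability p
    return total

# Trying the cheap, reasonably likely fix first yields a lower expected cost.
cheap_first = expected_cost([(0.3, 2.0), (0.6, 10.0)])   # 2 + 0.7 * 10 = 9.0
expensive_first = expected_cost([(0.6, 10.0), (0.3, 2.0)])  # 10 + 0.4 * 2 = 10.8
```

Ordering actions by probability-to-cost ratio is the classic greedy rule for models this simple; the interventions and disassembly constraints described in the abstract are exactly what break such simple orderings in practice.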
.

2011  
[81] 
2011. Large scale multiple robot visual mapping with heterogeneous landmarks in semi-structured terrain. This paper addresses the cooperative localization and visual mapping problem with multiple heterogeneous robots. The approach is designed to deal with the challenging large semi-structured outdoor environments in which aerial/ground ensembles are to evolve. We propose the use of heterogeneous visual landmarks, points and line segments, to achieve effective cooperation in such environments. A large-scale SLAM algorithm is generalized to handle multiple robots, in which a global graph maintains the relative relationships between a series of local sub-maps built by the different robots. The key issue when dealing with multiple robots is to find the links between them, and to integrate these relations to maintain the overall geometric consistency; the events that introduce these links on the global graph are described in detail. Monocular cameras are considered as the primary exteroceptive sensor. In order to achieve the undelayed initialization required by the bearing-only observations, the well-known inverse-depth parametrization is adopted to estimate 3D points. Similarly, to estimate 3D line segments, we present a novel parametrization based on anchored Plücker coordinates, to which extensible endpoints are added. Extensive simulations show the proposed developments, and the overall approach is illustrated using real data taken with a helicopter and a ground rover. 
.

[80] 
2011. Converse-PDL with Regular Inclusion Axioms: A Framework for MAS Logics. <em>In this paper we study automated reasoning in the modal logic CPDLreg, which is a combination of CPDL (Propositional Dynamic Logic with Converse) and REGc (Regular Grammar Logic with Converse). The logic CPDL is widely used in many areas, including program verification, theory of action and change, and knowledge representation. On the other hand, the logic REGc is applicable in reasoning about epistemic states and ontologies (via Description Logics). The modal logic CPDLreg can serve as a technical foundation for reasoning about agency. Even very rich multi-agent logics can be embedded into CPDLreg via a suitable translation. Therefore, CPDLreg can serve as a test bed to implement and possibly verify new ideas without providing specific automated reasoning techniques for the logic in question. This process can to a large extent be automated. The idea of embedding various logics into CPDLreg is illustrated on a rather advanced logic TEAMLOGK designed to specify teamwork in multi-agent systems. Apart from defining informational and motivational attitudes of groups of agents, TEAMLOGK allows for grading agents' beliefs, goals and intentions. The current paper is a companion to our paper (Dunin-Kęplicz et al., 2010a). The main contributions are proofs of soundness and completeness of the tableau calculus for CPDLreg provided in (Dunin-Kęplicz et al., 2010a).</em> 
.

[79] 
2011. ExpTime Tableau Decision Procedures for Regular Grammar Logics with Converse. Studia Logica: An International Journal for Symbolic Logic, 98(3):387–428. Springer Berlin/Heidelberg. DOI: 10.1007/s11225-011-9341-3. Grammar logics were introduced by Fariñas del Cerro and Penttonen in 1988 and have been widely studied. In this paper we consider regular grammar logics with converse (<em>REG</em><sup><em>c</em></sup> logics) and present sound and complete tableau calculi for the general satisfiability problem of <em>REG</em><sup><em>c</em></sup> logics and the problem of checking consistency of an ABox w.r.t. a TBox in a <em>REG</em><sup><em>c</em></sup> logic. Using our calculi we develop ExpTime (optimal) tableau decision procedures for the mentioned problems, to which various optimization techniques can be applied. We also prove a new result that the data complexity of the instance checking problem in <em>REG</em><sup><em>c</em></sup> logics is coNP-complete. 
.

[78] 
2011. Logical Foundations and Complexity of 4QL, a Query Language with Unrestricted Negation. The paper discusses properties of 4QL, a DATALOG<sup>¬¬</sup>-like query language, originally outlined by Małuszyński and Szałas (Małuszyński & Szałas, 2011). 4QL allows one to use rules with negation in heads and bodies of rules. It is based on a simple and intuitive semantics and provides uniform tools for "lightweight" versions of known forms of nonmonotonic reasoning. Negated literals in heads of rules may naturally lead to inconsistencies. On the other hand, rules do not have to attach meaning to some literals. Therefore 4QL is founded on a four-valued semantics, employing the logic introduced in (Małuszyński et al., 2008; Vitória et al., 2009) with truth values 'true', 'false', 'inconsistent' and 'unknown'. In addition, 4QL is tractable w.r.t. data complexity and captures PTIME queries. Even though DATALOG<sup>¬¬</sup> has been known as a concept for the last 30 years, to our best knowledge no existing approach enjoys these properties. In the current paper we:<ul><li>investigate properties of well-supported models of 4QL</li><li>prove the correctness of the algorithm for computing well-supported models</li><li>show that 4QL has PTIME data complexity and captures PTIME.</li></ul> 
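The role of the four truth values can be sketched in a few lines (an illustrative fragment under a simplified reading of the abstract, not the 4QL implementation): a literal is 'inconsistent' when both it and its negation are derivable by some rule, and 'unknown' when neither is:

```python
def truth_value(derived_pos, derived_neg):
    """Four-valued status of a literal, given whether the positive form
    and the negated form were each derived by some rule."""
    if derived_pos and derived_neg:
        return "inconsistent"   # negation in rule heads can produce this
    if derived_pos:
        return "true"
    if derived_neg:
        return "false"
    return "unknown"            # no rule attaches meaning to the literal
```

This is why unrestricted negation in rule heads forces a move beyond two truth values: both 'inconsistent' and 'unknown' arise naturally and must be first-class outcomes of query evaluation.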
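As a rough illustration of the four-valued machinery described in the abstract above, the following sketch combines evidence about a single literal over the values 'true', 'false', 'inconsistent' and 'unknown'. This is a generic Belnap-style join, not a transcript of 4QL's well-supported-model semantics, which is considerably more involved; all names here are invented for the example.

```python
# Generic four-valued evidence combination: 'unknown' is neutral, and
# conflicting evidence about the same literal yields 'inconsistent'.
U, F, T, I = "unknown", "false", "true", "inconsistent"

def join(a, b):
    """Combine two pieces of evidence about the same literal."""
    if a == b:
        return a
    if a == U:
        return b
    if b == U:
        return a
    return I  # any remaining disagreement (t vs f, or anything vs i)

def evaluate(evidence):
    """Fold a sequence of observed truth values for one literal."""
    result = U
    for v in evidence:
        result = join(result, v)
    return result
```

For instance, a literal that some rules derive as true and others as false evaluates to 'inconsistent', while a literal with no applicable rules stays 'unknown'.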

[77] 
2011. Reports of the AAAI 2011 Spring Symposia. The AI Magazine, 32(3):119–127. AAAI Press. The Association for the Advancement of Artificial Intelligence presented the 2011 Spring Symposium Series Monday through Wednesday, March 21–23, 2011, at Stanford University. This report summarizes the eight symposia. 

[76] 
2011. From systems to logic in the early development of nonmonotonic reasoning. This note describes how the notion of nonmonotonic reasoning emerged in Artificial Intelligence from the mid-1960s to 1980. It gives particular attention to the interplay between three kinds of activities: design of high-level programming systems for AI, design of truth-maintenance systems, and the development of nonmonotonic logics. This was not merely a development from logic to implementation: in several cases there was a development from a system design to a corresponding logic. The article concludes with some reflections on the roles of and relationships between logicist theory and system design in AI, and in particular in Knowledge Representation. 

2010  
[75] 
2010. Exercising Moral Copyright for Evolving Publications. ScieCom Info, 6(3):??–??. Svenskt Resurscentrum för Vetenskaplig Kommunikation. Link: http://www.sciecom.org/ojs/index.php/sci... 

[74] 
2010. Tableaux with Global Caching for Checking Satisfiability of a Knowledge Base in the Description Logic SH. Transactions on Computational Collective Intelligence, 1(1):21–38. Springer. ISBN: 978-3-642-15033-3. DOI: 10.1007/978-3-642-15034-0_2. Description logics (DLs) are a family of knowledge representation languages which can be used to represent the terminological knowledge of an application domain in a structured and formally well-understood way. DLs can be used, for example, for conceptual modeling or as ontology languages. In fact, OWL (Web Ontology Language), recommended by the W3C, is based on description logics. In the current paper we give the first direct ExpTime (optimal) tableau decision procedure, not based on transformation or on the precompletion technique, for checking satisfiability of a knowledge base in the description logic <em>SH</em>. Our procedure uses sound global caching and can be implemented as an extension of the highly optimized tableau prover TGC to obtain an efficient program for the mentioned satisfiability problem. 

[73] 
2010. Defeasible inheritance with doubt index and its axiomatic characterization. This article introduces and uses a representation of defeasible inheritance networks where links in the network are viewed as propositions, and where defeasible links are tagged with a quantitative indication of the proportion of exceptions, called the doubt index. This doubt index is used for restricting the length of the chains of inference. The representation also introduces the use of defeater literals that disable the chaining of subsumption links. The use of defeater literals replaces the use of negative defeasible inheritance links, expressing "most A are not B". The new representation improves the expressivity significantly. Inference in inheritance networks is defined by a combination of axioms that constrain the contents of network extensions, a heuristic restriction that also has that effect, and a nonmonotonic operation of minimizing the set of defeater literals while retaining consistency. We introduce an underlying semantics that defines the meaning of literals in a network, and prove that the axioms are sound with respect to this semantics. We also discuss the conditions for obtaining completeness. Traditional concepts, assumptions and issues in research on nonmonotonic or defeasible inheritance are reviewed in the perspective of this approach. 

[72] 
2010. Checking Consistency of an ABox w.r.t. Global Assumptions in PDL. We reformulate Pratt's tableau decision procedure for checking satisfiability of a set of formulas in PDL. Our formulation is simpler and its implementation is more direct. Extending the method, we give the first ExpTime (optimal) tableau decision procedure not based on transformation for checking consistency of an ABox w.r.t. a TBox in PDL (here, PDL is treated as a description logic). We also prove a new result that the data complexity of the instance checking problem in PDL is coNP-complete. 

[71] 
2010. Optimal placement of UV-based communications relay nodes. Journal of Global Optimization, 48(4):511–531. Springer. DOI: 10.1007/s10898-010-9526-8. Note: The original publication is available at www.springerlink.com: Oleg Burdakov, Patrick Doherty, Kaj Holmberg and Per-Magnus Olsson, Optimal placement of UV-based communications relay nodes, 2010, Journal of Global Optimization, (48), 4, 511–531. http://dx.doi.org/10.1007/s10898-010-9526-8. Copyright: Springer Science+Business Media, http://www.springerlink.com/ We consider a constrained optimization problem with mixed integer and real variables. It models optimal placement of communications relay nodes in the presence of obstacles. This problem is widely encountered, for instance, in robotics, where it is required to survey some target located in one point and convey the gathered information back to a base station located in another point. One or more unmanned aerial or ground vehicles (UAVs or UGVs) can be used for this purpose as communications relays. The decision variables are the number of unmanned vehicles (UVs) and the UV positions. The objective function is assumed to assess the placement quality. We suggest one instance of such a function which is particularly suitable for assessing UAV placement. The constraints are determined by, firstly, a free line-of-sight requirement for every consecutive pair in the chain and, secondly, a limited communication range. Because of these requirements, our constrained optimization problem is a difficult multiextremal problem for any fixed number of UVs. Moreover, the feasible set of real variables is typically disjoint. We present an approach that allows us to efficiently find a practically acceptable approximation to a global minimum in the problem of optimal placement of communications relay nodes. It is based on a spatial discretization with a subsequent reduction to a shortest path problem. The case of a restricted number of available UVs is also considered here. 
We introduce two label-correcting algorithms which take advantage of certain peculiarities of the resulting restricted shortest path problem. The algorithms produce a Pareto solution to the two-objective problem of minimizing the path cost and the number of hops. We justify their correctness. The presented results of numerical 3D experiments show that our algorithms are superior to the conventional Bellman-Ford algorithm tailored to solving this problem. 
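The two-objective label-correcting idea from the abstract above can be sketched as follows: each node keeps a set of non-dominated (cost, hops) labels, and labels are propagated in queue order until a fixpoint is reached. This is a minimal generic sketch of bi-objective label correcting, not the authors' algorithms; the graph encoding and function name are invented for the example.

```python
from collections import defaultdict, deque

def pareto_shortest_paths(edges, source):
    """edges: iterable of (u, v, cost); returns {node: set of (cost, hops)}."""
    adj = defaultdict(list)
    for u, v, c in edges:
        adj[u].append((v, c))

    labels = defaultdict(set)          # node -> non-dominated (cost, hops) labels
    labels[source] = {(0.0, 0)}
    queue = deque([(source, 0.0, 0)])

    def dominated(node, cost, hops):
        # A label is dominated if some existing label is at least as good in both
        # objectives (equality counts, which also filters duplicates).
        return any(c <= cost and h <= hops for c, h in labels[node])

    while queue:
        u, cost, hops = queue.popleft()
        if (cost, hops) not in labels[u]:      # label was pruned meanwhile
            continue
        for v, c in adj[u]:
            nc, nh = cost + c, hops + 1
            if dominated(v, nc, nh):
                continue
            # Drop labels at v that the new label dominates, then add it.
            labels[v] = {(c2, h2) for c2, h2 in labels[v]
                         if not (nc <= c2 and nh <= h2)}
            labels[v].add((nc, nh))
            queue.append((v, nc, nh))
    return dict(labels)
```

On a small graph with a cheap two-hop route and an expensive direct edge, both routes survive as incomparable Pareto labels at the target.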

[70] 
2010. FlexDx: A Reconfigurable Diagnosis Framework. Engineering Applications of Artificial Intelligence, 23(8):1303–1313. Elsevier. DOI: 10.1016/j.engappai.2010.01.004. Detecting and isolating multiple faults is a computationally expensive task. It typically consists of computing a set of tests and then computing the diagnoses based on the test results. This paper describes FlexDx, a reconfigurable diagnosis framework which reduces the computational burden while retaining the isolation performance by only running a subset of all tests that is sufficient to find new conflicts. Tests in FlexDx are thresholded residuals used to indicate conflicts in the monitored system. Special attention is given to the issues introduced by a reconfigurable diagnosis framework. For example, tests are added and removed dynamically, tests are partially performed on historic data, and synchronous and asynchronous processing are combined. To handle these issues FlexDx has been implemented using DyKnow, a stream-based knowledge processing middleware framework. Concrete methods for each component in the FlexDx framework are presented. The complete approach is exemplified on a dynamic system which clearly illustrates the complexity of the problem and the computational gain of the proposed approach. 

[69] 
2010. A Framework for Graded Beliefs, Goals and Intentions. In natural language we often use graded concepts, reflecting different intensity degrees of certain features. Whenever such concepts appear in a given real-life context, they need to be appropriately expressed in its models. In this paper, we provide a framework which allows for extending the BGI model of agency by grading beliefs, goals and intentions. We concentrate on TEAMLOG [6, 7, 8, 9, 12] and provide a complexity-optimal decision method for its graded version TEAMLOG(K) by translating it into CPDLreg (propositional dynamic logic with converse and "inclusion axioms" characterized by regular languages). We also develop a tableau calculus which leads to the first ExpTime (optimal) tableau decision procedure for CPDLreg. As CPDLreg is suitable for expressing complex properties of graded operators, the procedure can also be used as a decision tool for other multiagent formalisms. 

[68] 
2010. A Layered Rule-Based Architecture for Approximate Knowledge Fusion. Computer Science and Information Systems, 7(3):617–642. ComSIS Consortium. DOI: 10.2298/CSIS100209015D. In this paper we present a framework for fusing approximate knowledge obtained from various distributed, heterogeneous knowledge sources. This issue is substantial in modeling multiagent systems, where a group of loosely coupled heterogeneous agents cooperate in achieving a common goal. In paper [5] we focused on defining a general mechanism for knowledge fusion. Next, techniques ensuring tractability of fusing knowledge expressed as a Horn subset of propositional dynamic logic were developed in [13, 16]. Propositional logics may seem too weak to be useful in real-world applications. On the other hand, propositional languages may be viewed as sublanguages of first-order logics which serve as a natural tool to define concepts in the spirit of description logics [2]. These notions may further be used to define various ontologies, like e.g. those applicable in the Semantic Web. Taking this step, we propose a framework in which our Horn subset of dynamic logic is combined with deductive database technology. This synthesis is formally implemented in the framework of the HSPDL architecture. The resulting knowledge fusion rules are naturally applicable to real-world data. 

[67] 
2010. Relay Positioning for Unmanned Aerial Vehicle Surveillance. The International Journal of Robotics Research, 29(8):1069–1087. Sage Publications. DOI: 10.1177/0278364910369463. When unmanned aerial vehicles (UAVs) are used for surveillance, information must often be transmitted to a base station in real time. However, limited communication ranges and the common requirement of free line of sight may make direct transmissions from distant targets impossible. This problem can be solved using relay chains consisting of one or more intermediate relay UAVs. This leads to the problem of positioning such relays given known obstacles, while taking into account a possibly mission-specific quality measure. The maximum quality of a chain may depend strongly on the number of UAVs allocated. Therefore, it is desirable to either generate a chain of maximum quality given the available UAVs or allow a choice from a spectrum of Pareto-optimal chains corresponding to different trade-offs between the number of UAVs used and the resulting quality. In this article, we define several problem variations in a continuous three-dimensional setting. We show how sets of Pareto-optimal chains can be generated using graph search and present a new label-correcting algorithm generating such chains significantly more efficiently than the best-known algorithms in the literature. Finally, we present a new dual ascent algorithm with better performance for certain tasks and situations. 

[66] 
2010. Tractable approximate knowledge fusion using the Horn fragment of serial propositional dynamic logic. International Journal of Approximate Reasoning, 51(3):346–362. Elsevier. DOI: 10.1016/j.ijar.2009.11.002. In this paper we investigate a technique for fusing approximate knowledge obtained from distributed, heterogeneous information sources. This issue is substantial, e.g., in modeling multiagent systems, where a group of loosely coupled heterogeneous agents cooperate in achieving a common goal. Information exchange, leading ultimately to knowledge fusion, is a natural and vital ingredient of this process. We use a generalization of rough sets and relations [30], which allows arbitrary similarity relations. The starting point of this research is [6], where a framework for knowledge fusion in multiagent systems is introduced. Agents' individual perceptual capabilities are represented by similarity relations, further aggregated to express joint capabilities of teams. This aggregation, expressing a shift from the individual to the social level of agents' activity, has been formalized by means of dynamic logic. The approach of Doherty et al. (2007) [6] uses full propositional dynamic logic, which does not guarantee tractability of reasoning. Our idea is to adapt the techniques of Nguyen [26–28] to provide an engine for tractable approximate database querying restricted to a Horn fragment of serial dynamic logic. We also show that the obtained formalism is quite powerful in applications. 

[65] 
2010. A case-based approach to dialogue systems. Journal of Experimental and Theoretical Artificial Intelligence, 22(1):23–51. Taylor & Francis. DOI: 10.1080/09528130902723708. We describe an approach to integrating dialogue management, machine learning and action planning in a system for dialogue between a human and a robot. Case-based techniques are used because they permit lifelong learning from experience and demand little prior knowledge and few static handwritten structures. This approach has been developed through work on an experimental dialogue system, called CEDERIC, that is connected to an unmanned aerial vehicle (UAV). A single case base and case-based reasoning engine is used both for understanding and for planning actions by the UAV. Dialogue experiments with both experienced and novice users, in which the users solved tasks by dialogue with this system, showed very adequate success rates. 

[64] 
2010. Bridging the sense-reasoning gap: DyKnow – Stream-based middleware for knowledge processing. Engineering autonomous agents that display rational and goal-directed behavior in dynamic physical environments requires a steady flow of information from sensors to high-level reasoning components. However, while sensors tend to generate noisy and incomplete quantitative data, reasoning often requires crisp symbolic knowledge. The gap between sensing and reasoning is quite wide, and cannot in general be bridged in a single step. Instead, this task requires a more general approach to integrating and organizing multiple forms of information and knowledge processing on different levels of abstraction in a structured and principled manner. We propose knowledge processing middleware as a systematic approach to organizing such processing. Desirable properties are presented and motivated. We argue that a declarative stream-based system is appropriate for the required functionality and present DyKnow, a concrete implemented instantiation of stream-based knowledge processing middleware with a formal semantics. Several types of knowledge processes are defined and motivated in the context of a UAV traffic monitoring application. In the implemented application, DyKnow is used to incrementally bridge the sense-reasoning gap and generate partial logical models of the environment over which metric temporal logical formulas are evaluated. Using such formulas, hypotheses are formed and validated about the type of vehicles being observed. DyKnow is also used to generate event streams representing, for example, changes in qualitative spatial relations, which are used to detect traffic violations expressed as declarative chronicles. 

2009  
[63] 
2009. Annotation Theories over Finite Graphs. Studia Logica: An International Journal for Symbolic Logic, 93(2-3):147–180. Springer. DOI: 10.1007/s11225-009-9220-3. In the current paper we consider theories with a vocabulary containing a number of binary and unary relation symbols. Binary relation symbols represent labeled edges of a graph and unary relations represent unique annotations of the graph's nodes. Such theories, which we call <em>annotation theories</em>, can be used in many applications, including the formalization of argumentation, approximate reasoning, semantics of logic programs, graph coloring, etc. We address a number of problems related to annotation theories over finite models, including satisfiability, the querying problem, specification of preferred models and the model checking problem. We show that most of the considered problems are NPTime- or coNPTime-complete. In order to reduce the complexity for particular theories, we use second-order quantifier elimination. To the best of our knowledge none of the existing methods works in the case of annotation theories. We then provide a new second-order quantifier elimination method for stratified theories, which is successful in the considered cases. The new result subsumes many other results, including those of [2, 28, 21]. 

[62] 
2009. Modelling and Reasoning with Paraconsistent Rough Sets. We present a language for defining paraconsistent rough sets and reasoning about them. Our framework relates and brings together two major fields: rough sets [23] and paraconsistent logic programming [9]. To model inconsistent and incomplete information we use a four-valued logic. The language discussed in this paper is based on ideas of our previous work [21, 32, 22] developing a four-valued framework for rough sets. In this approach the membership function, set containment and set operations are four-valued, where the logical values are t (true), f (false), i (inconsistent) and u (unknown). We investigate properties of paraconsistent rough sets as well as develop a paraconsistent rule language, providing basic computational machinery for our approach. 

[61] 
2009. A Temporal Logic-based Planning and Execution Monitoring Framework for Unmanned Aircraft Systems. Research with autonomous unmanned aircraft systems is reaching a new degree of sophistication where targeted missions require complex types of deliberative capability integrated in a practical manner in such systems. Due to these pragmatic constraints, integration is just as important as theoretical and applied work in developing the actual deliberative functionalities. In this article, we present a temporal logic-based task planning and execution monitoring framework and its integration into a fully deployed rotor-based unmanned aircraft system developed in our laboratory. We use a very challenging emergency services application involving body identification and supply delivery as a vehicle for showing the potential use of such a framework in real-world applications. TALplanner, a temporal logic-based task planner, is used to generate mission plans. Building further on the use of TAL (Temporal Action Logic), we show how knowledge gathered from the appropriate sensors during plan execution can be used to create state structures, incrementally building a partial logical model representing the actual development of the system and its environment over time. We then show how formulas in the same logic can be used to specify the desired behavior of the system and its environment and how violations of such formulas can be detected in a timely manner in an execution monitor subsystem. The pervasive use of logic throughout the higher-level deliberative layers of the system architecture provides a solid shared declarative semantics that facilitates the transfer of knowledge between different modules. 

[60] 
2009. Voting by Eliminating Quantifiers. Studia Logica: An International Journal for Symbolic Logic, 92(3):365–379. Springer. DOI: 10.1007/s11225-009-9200-7. The mathematical theory of voting and social choice has attracted much attention. In the general setting one can view social choice as a method of aggregating individual, often conflicting preferences and making a choice that is the best compromise. How preferences are expressed and what the "best compromise" is varies and heavily depends on the particular situation. The method we propose in this paper depends on expressing individual preferences of voters and specifying properties of the resulting ranking by means of first-order formulas. Then, as a technical tool, we use methods of second-order quantifier elimination to analyze and compute results of voting. We show how to specify voting, how to compute resulting rankings and how to verify voting protocols. 

[59] 
2009. Vision-Based Unmanned Aerial Vehicle Navigation Using Geo-Referenced Information. EURASIP Journal on Advances in Signal Processing, 2009(387308):1–18. Hindawi Publishing Corporation. DOI: 10.1155/2009/387308. This paper investigates the possibility of augmenting an Unmanned Aerial Vehicle (UAV) navigation system with a passive video camera in order to cope with long-term GPS outages. The paper proposes a vision-based navigation architecture which combines inertial sensors, visual odometry, and registration of the onboard video to a geo-referenced aerial image. The vision-aided navigation system developed is capable of providing high-rate and drift-free state estimation for UAV autonomous navigation without the GPS system. Due to the use of image-to-map registration for absolute position calculation, drift-free position performance depends on the structural characteristics of the terrain. Experimental evaluation of the approach based on offline flight data is provided. In addition, the architecture proposed has been implemented onboard an experimental UAV helicopter platform and tested during vision-based autonomous flights. 

2008  
[58] 
2008. Extending the concept of publication: Factbases and knowledgebases. Learned Publishing, 21(2):123–131. Association of Learned and Professional Society Publishers. DOI: 10.1087/095315108X288893. The concept of a 'publication' no longer applies only to printed works; information technology has extended its application to several other types of works. This article describes a facility called the Common Knowledge Library that publishes modules of formally structured information representing facts and knowledge of various kinds. Publications of this new type have some characteristics in common with databases, and others in common with software modules; however, they also share some important characteristics with traditional publications. A framework for citation of previous work is important in order to provide an incentive for contributors of such modules. Peer review, the traditional method of quality assurance for scientific articles, must also be applied, although in a modified form, to fact and knowledge modules. The construction of the Common Knowledge Library is a cumulative process: new contributions are obtained by interpreting the contents of existing knowledge sources on the Internet, and the existing contents of the Library are an important resource for that interpretation process. 

[57] 
2008. Towards Incorporating Background Theories into Quantifier Elimination. Journal of Applied Non-Classical Logics, 18(2-3):325–340. Éditions Hermès-Lavoisier. DOI: 10.3166/jancl.18.325-340. In the paper we present a technique for eliminating quantifiers of arbitrary order, in particular of first order. Such a uniform treatment of the elimination problem has been problematic up to now, since techniques for eliminating first-order quantifiers do not scale up to higher-order contexts, and those for eliminating higher-order quantifiers are usually based on a form of monotonicity w.r.t. implication (set inclusion) and are not applicable to the first-order case. We make a shift to arbitrary relations "ordering" the underlying universe. This allows us to incorporate background theories into higher-order quantifier elimination methods, which up to now has not been achieved. The technique we propose subsumes many other results, including Ackermann's lemma and various forms of fixpoint approaches when the "ordering" relations are interpreted as implication, and reveals the common principle behind these approaches. 
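For reference, the classical second-order result mentioned in the abstract above, Ackermann's lemma, can be stated as follows. This is the standard textbook formulation, not a quotation from the paper:

```latex
% Ackermann's lemma (standard formulation): if the predicate variable P
% does not occur in A, and P occurs only negatively in B(P), then
\exists P \,\Big[\, \forall \bar{x}\,\big(A(\bar{x}) \rightarrow P(\bar{x})\big) \;\wedge\; B(P) \,\Big]
\;\equiv\; B(P := A)
% where B(P := A) denotes B with every occurrence P(\bar{t})
% replaced by A(\bar{t}).
```

A dual version holds when P occurs only positively and the implication is reversed; both directions are the building blocks of the elimination methods the paper generalizes.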

[56] 
2008. Automatic fitting procedures for EPR spectra of disordered systems: matrix diagonalization and perturbation methods applied to fluorocarbon radicals. Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, 69(5):1294–1300. Elsevier. DOI: 10.1016/j.saa.2007.09.040. Note: Original publication: A. Lund, P. Andersson, J. Eriksson, J. Hallin, T. Johansson, R. Jonsson, H. Löfgren, C. Paulin and A. Tell, Automatic fitting procedures for EPR spectra of disordered systems: matrix diagonalization and perturbation methods applied to fluorocarbon radicals, 2008, Spectrochimica Acta Part A, (69), 5, 1294–1300. http://dx.doi.org/10.1016/j.saa.2007.09.040. Copyright: Elsevier B.V., http://www.elsevier.com/ Two types of automatic fitting procedures for EPR spectra of disordered systems have been developed, one based on matrix diagonalisation of a general spin Hamiltonian, the other on 2<sup>nd</sup>-order perturbation theory. The first program is based on a previous Fortran code complemented with a newly written interface in Java to provide user-friendly input and output. The second is intended for the special case of free radicals with several relatively weakly interacting nuclei, in which case the general method becomes slow. A least-squares fitting procedure utilizing analytical or numerical derivatives of the theoretically calculated spectrum with respect to the g- and hyperfine structure (hfs) tensors was used to refine those parameters in both cases. 'Rigid limit' ESR spectra from radicals in organic matrices and in polymers, previously studied experimentally at low temperature, were analysed by both methods. Fluorocarbon anion radicals could be simulated quite accurately with the exact method, whereas automatic fitting on e.g. the c-C<sub>4</sub>F<sub>8</sub><sup>−</sup> anion radical is only feasible with the 2<sup>nd</sup>-order approximative treatment. 
Initial values for the <sup>19</sup>F hfs tensors estimated by DFT calculations were quite close to the final ones. For neutral radicals of the type XCF<sub>2</sub>CF<sub>2</sub>• the refinement of the hfs tensors by the exact method worked better than by the approximate one. The reasons are discussed. The ability of the fitting procedures to recover the correct magnetic parameters of disordered systems was investigated by fittings to synthetic spectra with known hfs tensors. The exact and the approximate methods are concluded to be complementary, one being general but limited to relatively small systems, the other being a special treatment suited for S = ½ systems with several moderately large hfs. 

2007  
[55] 
2007. Vision-Based SLAM: Stereo and Monocular Approaches. International Journal of Computer Vision, 74(3):343–364. Kluwer Academic Publishers. DOI: 10.1007/s11263-007-0042-3. Building a spatially consistent model is a key functionality to endow a mobile robot with autonomy. Without an initial map or an absolute localization means, this requires concurrently solving the localization and mapping problems. For this purpose, vision is a powerful sensor, because it provides data from which stable features can be extracted and matched as the robot moves. But it does not directly provide 3D information, which is a difficulty for estimating the geometry of the environment. This article presents two approaches to the SLAM problem using vision: one with stereovision, and one with monocular images. Both approaches rely on a robust interest point matching algorithm that works in very diverse environments. The stereovision-based approach is a classic SLAM implementation, whereas the monocular approach introduces a new way to initialize landmarks. Both approaches are analyzed and compared with extensive experimental results, with a rover and a blimp. 

[54] 
2007. Communication between agents with heterogeneous perceptual capabilities. In real-world applications, robots and software agents often have to be equipped with higher-level cognitive functions that enable them to reason, act and perceive in changing, incompletely known and unpredictable environments. One of the major tasks in such circumstances is to fuse information from various data sources. There are many levels of information fusion, ranging from the fusing of low-level sensor signals to the fusing of high-level, complex knowledge structures. In a dynamically changing environment even a single agent may have varying abilities to perceive its environment which are dependent on particular conditions. The situation becomes even more complex when different agents have different perceptual capabilities and need to communicate with each other. In this paper, we propose a framework that provides agents with the ability to fuse both low- and high-level approximate knowledge in the context of dynamically changing environments while taking account of heterogeneous and contextually limited perceptual capabilities. To model limitations on an agent's perceptual capabilities we introduce the idea of partial tolerance spaces. We assume that each agent has one or more approximate databases where approximate relations are represented using lower and upper approximations on sets. Approximate relations are generalizations of rough sets. It is shown how sensory and other limitations can be taken into account when constructing and querying approximate databases for each respective agent. Complex relations inherit the approximativeness of primitive relations used in their definitions. Agents then query these databases and receive answers through the filters of their perceptual limitations as represented by (partial) tolerance spaces and approximate queries. The techniques used are all tractable. 

[53] 
2007. Second-order quantifier elimination in higher-order contexts with applications to the semantical analysis of conditionals. Studia Logica: An International Journal for Symbolic Logic, 87(1):37–50. Springer. DOI: 10.1007/s11225-007-9075-4. Second-order quantifier elimination in the context of classical logic has emerged as a powerful technique in many applications, including correspondence theory, relational databases, deductive and knowledge databases, knowledge representation, commonsense reasoning and approximate reasoning. In the current paper we first generalize the result of Nonnengart and Szalas [17] by allowing second-order variables to appear within higher-order contexts. Then we focus on a semantical analysis of conditionals, using the introduced technique and Gabbay's semantics provided in [10], substantially using a third-order accessibility relation. The analysis is done via finding correspondences between axioms involving conditionals and properties of the underlying third-order relation. 

[52] 
2007. A correspondence framework between three-valued logics and similarity-based approximate reasoning. Fundamenta Informaticae, 75(1-4):179–193. IOS Press. This paper focuses on approximate reasoning based on the use of similarity spaces. Similarity spaces and the approximate relations induced by them are a generalization of the rough set-based approximations of Pawlak [17, 18]. Similarity spaces are used to define neighborhoods around individuals and these in turn are used to define approximate sets and relations. In any of the approaches, one would like to embed such relations in an appropriate logic which can be used as a reasoning engine for specific applications with specific constraints. We propose a framework which permits a formal study of the relationship between approximate relations, similarity spaces and three-valued logics. Using ideas from correspondence theory for modal logics and constraints on an accessibility relation, we develop an analogous framework for three-valued logics and constraints on similarity relations. In this manner, we can provide a tool which helps in determining the proper three-valued logical reasoning engine to use for different classes of approximate relations generated via specific types of similarity spaces. Additionally, by choosing a three-valued logic first, the framework determines what constraints would be required on a similarity relation and the approximate relations induced by it. Such information would guide the generation of approximate relations for specific applications. 
.
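The lower and upper approximations that a similarity space induces can be shown in a few lines. The sketch below is a generic illustration of the rough-set idea underlying the paper, not code from it; the function names `neighborhood`, `lower_approximation` and `upper_approximation` are hypothetical.

```python
# Illustrative sketch (assumed names, not from the paper): approximations of a
# set induced by a similarity relation, in the spirit of Pawlak's rough sets
# generalized to similarity spaces.

def neighborhood(x, universe, similar):
    """All elements of the universe similar to x (including x itself)."""
    return {y for y in universe if similar(x, y)}

def lower_approximation(target, universe, similar):
    """Elements whose whole neighborhood lies inside the target set."""
    return {x for x in universe if neighborhood(x, universe, similar) <= target}

def upper_approximation(target, universe, similar):
    """Elements whose neighborhood overlaps the target set."""
    return {x for x in universe if neighborhood(x, universe, similar) & target}

if __name__ == "__main__":
    universe = {0, 1, 2, 3, 4, 5}
    target = {1, 2, 3}
    # Toy similarity: numbers within distance 1 of each other.
    similar = lambda x, y: abs(x - y) <= 1
    print(sorted(lower_approximation(target, universe, similar)))  # [2]
    print(sorted(upper_approximation(target, universe, similar)))  # [0, 1, 2, 3, 4]
```

The three regions this produces are the link to three-valued logics: elements in the lower approximation are certainly in the set, elements outside the upper approximation are certainly not, and the boundary in between corresponds to the third truth value.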

2006  
[51] 
2006. Systems – Opening up the process. n/a 
.

[50] 
2006. Improving heuristics through relaxed search – An analysis of TP4 and HSP*a in the 2004 planning competition. Journal of Artificial Intelligence Research, 25(??):233–267. AAAI Press. DOI: 10.1613/jair.1885. The h(m) admissible heuristics for (sequential and temporal) regression planning are defined by a parameterized relaxation of the optimal cost function in the regression search space, where the parameter m offers a trade-off between the accuracy and computational cost of the heuristic. Existing methods for computing the h(m) heuristic require time exponential in m, limiting them to small values (m <= 2). The h(m) heuristic can also be viewed as the optimal cost function in a relaxation of the search space: this paper presents relaxed search, a method for computing this function partially by searching in the relaxed space. The relaxed search method, because it computes h(m) only partially, is computationally cheaper and therefore usable for higher values of m. The (complete) h(2) heuristic is combined with partial h(m) heuristics, for m = 3, ..., computed by relaxed search, resulting in a more accurate heuristic. This use of the relaxed search method to improve on the h(2) heuristic is evaluated by comparing two optimal temporal planners: TP4, which does not use it, and HSP*(a), which uses it but is otherwise identical to TP4. The comparison is made on the domains used in the 2004 International Planning Competition, in which both planners participated. Relaxed search is found to be cost-effective in some of these domains, but not all. Analysis reveals a characterization of the domains in which relaxed search can be expected to be cost-effective, in terms of two measures on the original and relaxed search spaces. In the domains where relaxed search is cost-effective, expanding small states is computationally cheaper than expanding large states, and small states tend to have small successor states. 
.

[49] 
2006. A flexible runtime system for image processing in a distributed computational environment for an unmanned aerial vehicle. International Journal of Pattern Recognition and Artificial Intelligence, 20(5):763–780. DOI: 10.1142/S0218001406004867. A runtime system for implementation of image processing operations is presented. It is designed for working in a flexible and distributed environment related to the software architecture of a newly developed UAV system. The software architecture can be characterized at a coarse scale as a layered system, with a deliberative layer at the top, a reactive layer in the middle, and a processing layer at the bottom. At a finer scale each of the three levels is decomposed into sets of modules which communicate using CORBA, allowing system development and deployment on the UAV to be made in a highly flexible way. Image processing takes place in a dedicated module located in the processing layer, and is the main focus of the paper. This module has been designed as a runtime system for data flow graphs, allowing various processing operations to be created online and on demand by the higher levels of the system. The runtime system is implemented in Java, which allows development and deployment to be made on a wide range of hardware/software configurations. Optimizations for particular hardware platforms have been made using Java's native interface. 
.

[48] 
2006. Probabilistic roadmap based path planning for an autonomous unmanned helicopter. Journal of Intelligent & Fuzzy Systems, 17(4):395–405. IOS Press. The emerging area of intelligent unmanned aerial vehicle (UAV) research has shown rapid development in recent years and offers a great number of research challenges for artificial intelligence. For both military and civil applications, there is a desire to develop more sophisticated UAV platforms where the emphasis is placed on development of intelligent capabilities. Imagine a mission scenario where a UAV is supplied with a 3D model of a region containing buildings and road structures and is instructed to fly to an arbitrary number of building structures and collect video streams of each building's respective facades. In this article, we describe a fully operational UAV platform which can achieve such missions autonomously. We focus on the path planner integrated with the platform, which can generate collision-free paths autonomously during such missions. Both probabilistic roadmap-based (PRM) and rapidly exploring random tree-based (RRT) algorithms have been used with the platform. The PRM-based path planner has been tested together with the UAV platform in an urban environment used for UAV experimentation. 
.
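As background, the core of a generic PRM planner fits in a few dozen lines: sample collision-free configurations, connect nearby pairs whose straight segments are free, then search the resulting graph. The sketch below is a textbook 2D version with invented toy geometry, not the planner described in the article (which operates over 3D city models); `prm_path` and all its parameters are illustrative, and start/goal are assumed collision-free.

```python
# Minimal generic PRM sketch (illustrative only) in a 10x10 2D world.
import math
import random
from collections import deque

def collision_free(p, q, blocked, steps=20):
    """Check a straight segment by sampling points along it."""
    return all(not blocked((p[0] + (q[0] - p[0]) * t / steps,
                            p[1] + (q[1] - p[1]) * t / steps))
               for t in range(steps + 1))

def prm_path(start, goal, blocked, n_samples=200, radius=2.0, seed=0):
    rng = random.Random(seed)
    nodes = [start, goal]                # assumed to be collision-free
    while len(nodes) < n_samples + 2:
        p = (rng.uniform(0, 10), rng.uniform(0, 10))
        if not blocked(p):
            nodes.append(p)
    # Roadmap: connect nearby node pairs whose straight segment is free.
    adj = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) <= radius and \
               collision_free(nodes[i], nodes[j], blocked):
                adj[i].append(j)
                adj[j].append(i)
    # Query: BFS from start (index 0) to goal (index 1).
    parent = {0: None}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        if u == 1:
            path = []
            while u is not None:
                path.append(nodes[u])
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None  # roadmap did not connect start and goal

if __name__ == "__main__":
    # Toy world: a vertical wall at x in [4, 5] with a gap above y = 7.
    blocked = lambda p: 4.0 <= p[0] <= 5.0 and p[1] <= 7.0
    path = prm_path((1.0, 1.0), (9.0, 1.0), blocked)
    print("path found:", path is not None)
```

Because the roadmap is built once, repeated queries in the same map reuse it; this multi-query property is one reason PRM suits missions visiting many structures.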

[47] 
2006. A knowledge processing middleware framework and its relation to the JDL data fusion model. Journal of Intelligent & Fuzzy Systems, 17(4):335–351. IOS Press. Any autonomous system embedded in a dynamic and changing environment must be able to create qualitative knowledge and object structures representing aspects of its environment on the fly, from raw or preprocessed sensor data, in order to reason qualitatively about the environment and to supply such state information to other nodes in the distributed network in which it is embedded. These structures must be managed and made accessible to deliberative and reactive functionalities whose successful operation is dependent on being situationally aware of the changes in both the robotic agent's embedding and internal environments. DyKnow is a knowledge processing middleware framework which provides a set of functionalities for contextually creating, storing, accessing and processing such structures. The framework is implemented and has been deployed as part of a deliberative/reactive architecture for an autonomous unmanned aerial vehicle. The architecture itself is distributed and uses real-time CORBA as a communications infrastructure. We describe the system and show how it can be used to create more abstract entity and state representations of the world which can then be used for situation awareness by an unmanned aerial vehicle in achieving mission goals. We also show that the framework is a working instantiation of many aspects of the JDL data fusion model. 
.

[46] 
2006. Second-order Reasoning in Description Logics. Journal of Applied Non-Classical Logics, 16(3-4):517–530. Éditions Hermès-Lavoisier. DOI: 10.3166/jancl.16.517-530. Description logics refer to a family of formalisms concentrated around concepts, roles and individuals. They belong to the most frequently used knowledge representation formalisms and provide a logical basis to a variety of well-known paradigms. The main reasoning tasks considered in the area of description logics are those reducible to subsumption. On the other hand, any knowledge representation system should be equipped with a more advanced reasoning machinery. Therefore in the current paper we make a step towards integrating description logics with second-order reasoning. One of the important motivations behind introducing a second-order formalism follows from the fact that many forms of commonsense and nonmonotonic reasoning used in AI can be modelled within second-order logic. To achieve our goal we first extend description logics with the possibility to quantify over concepts. Since one of the main criticisms against the use of second-order formalisms is their complexity, we next propose second-order quantifier elimination techniques applicable to a large class of description logic formulas. Finally we show applications of the techniques, in particular in reasoning with circumscribed concepts and approximated terminological formulas. 
.

[45] 
2006. Approximate Databases: A support tool for approximate reasoning. Journal of Applied Non-Classical Logics, 16(1-2):87–118. Éditions Hermès-Lavoisier. DOI: 10.3166/jancl.16.87-117. Note: Special issue on implementation of logics. This paper describes an experimental platform for approximate knowledge databases called the Approximate Knowledge Database (AKDB), based on a semantics inspired by rough sets. The implementation is based upon the use of a standard SQL database to store logical facts, augmented with several query interface layers implemented in Java through which extensional, intensional and local closed-world nonmonotonic queries in the form of crisp or approximate logical formulas can be evaluated tractably. A graphical database design user interface is also provided which simplifies the design of databases, the entering of data and the construction of queries. The theory and semantics for AKDBs are presented in addition to application examples and details concerning the database implementation. 
.

2004  
[44] 
2004. DyKnow: An approach to middleware for knowledge processing. Journal of Intelligent & Fuzzy Systems, 15(1):3–13. IOS Press. Any autonomous system embedded in a dynamic and changing environment must be able to create qualitative knowledge and object structures representing aspects of its environment on the fly, from raw or preprocessed sensor data, in order to reason qualitatively about the environment. These structures must be managed and made accessible to deliberative and reactive functionalities which are dependent on being situationally aware of the changes in both the robotic agent's embedding and internal environment. DyKnow is a software framework which provides a set of functionalities for contextually accessing, storing, creating and processing such structures. The system is implemented and has been deployed in a deliberative/reactive architecture for an autonomous unmanned aerial vehicle. The architecture itself is distributed and uses real-time CORBA as a communications infrastructure. We describe the system and show how it can be used in execution monitoring and chronicle recognition scenarios for UAV applications. 
.

[43] 
2004. Elaboration tolerance through object-orientation. Although many formalisms for reasoning about action and change have been proposed in the literature, any concrete examples provided in such articles have primarily consisted of tiny domains that highlight some particular aspect or problem. However, since some of the classical problems are now completely or partially solved and since powerful tools are becoming available, it is now necessary to start modeling more complex domains. This article presents a methodology for handling such domains in a systematic manner using an object-oriented framework and provides several examples of the elaboration tolerance exhibited by the resulting models. © 2003 Elsevier B.V. All rights reserved. 
.

[42] 
2004. A fuzzy gain-scheduler for the attitude control of an unmanned helicopter. IEEE Transactions on Fuzzy Systems, 12(4):502–515. IEEE Computer Society. DOI: 10.1109/TFUZZ.2004.832539. In this paper, we address the design of an attitude controller that achieves stable and robust aggressive maneuverability for an unmanned helicopter. The controller proposed is in the form of a fuzzy gain-scheduler, and is used for stable and robust altitude, roll, pitch, and yaw control. The controller is obtained from a realistic nonlinear multiple-input-multiple-output model of a real unmanned helicopter platform, the APID-MK3. The results of this work are illustrated by extensive simulation, showing that the objective of aggressive and robust maneuverability has been achieved. © 2004 IEEE. 
.

[41] 
2004. A Fuzzy Flight Controller Combining Linguistic and Model-based Fuzzy Control. In this paper we address the design of a fuzzy flight controller that achieves stable and robust aggressive manoeuvrability for an unmanned helicopter. The fuzzy flight controller proposed consists of a combination of a fuzzy gain scheduler and a linguistic (Mamdani-type) controller. The fuzzy gain scheduler is used for stable and robust altitude, roll, pitch, and yaw control. The linguistic controller is used to compute the inputs to the fuzzy gain scheduler, i.e., desired values for roll, pitch, and yaw at given desired altitude and horizontal velocities. The flight controller is obtained and tested in simulation using a realistic nonlinear MIMO model of a real unmanned helicopter platform, the APID-MK 
.

[40] 
2004. Issues in Designing Physical Agents for Dynamic Real-Time Environments: World Modeling, Planning, Learning, and Communicating. The AI Magazine, 25(2):137–138. AAAI Press. This article discusses a workshop held in conjunction with the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-03), held in Acapulco, Mexico, on 11 August 2003. 
.

2003  
[39] 
2003. TALplanner in the Third International Planning Competition: Extensions and control rules. Journal of Artificial Intelligence Research, 20(??):343–377. AAAI Press. DOI: 10.1613/jair.1189. TALplanner is a forward-chaining planner that relies on domain knowledge in the shape of temporal logic formulas in order to prune irrelevant parts of the search space. TALplanner recently participated in the third International Planning Competition, which had a clear emphasis on increasing the complexity of the problem domains being used as benchmark tests and the expressivity required to represent these domains in a planning system. Like many other planners, TALplanner had support for some but not all aspects of this increase in expressivity, and a number of changes to the planner were required. After a short introduction to TALplanner, this article describes some of the changes that were made before and during the competition. We also describe the process of introducing suitable domain knowledge for several of the competition domains. 
.

[38] 
2003. Formalizing defeasible logic in CAKE. Fundamenta Informaticae, 57(2-4):193–213. IOS Press. Due to its efficiency, defeasible logic is one of the most interesting nonmonotonic formalisms. Unfortunately, the logic has one major limitation: it does not properly deal with cyclic defeasible rules. In this paper, we provide a new variant of defeasible logic, using the CAKE method. The resulting formalism is tractable and properly deals with circular defeasible rules. 
.

[37] 
2003. Towards a framework for approximate ontologies. Fundamenta Informaticae, 57(2-4):147–165. IOS Press. Currently, there is a great deal of interest in developing tools for the generation and use of ontologies on the WWW. These knowledge structures are considered essential to the success of the semantic web, the next phase in the evolution of the WWW. Much recent work with ontologies assumes that the concepts used as building blocks are crisp as opposed to approximate. It is a premise of this paper that approximate concepts and ontologies will become increasingly more important as the semantic web becomes a reality. We propose a framework for specifying, generating and using approximate ontologies. More specifically, (1) a formal framework for defining approximate concepts, ontologies and operations on approximate concepts and ontologies is presented. The framework is based on intuitions from rough set theory. (2) Algorithms for automatically generating approximate ontologies from traditional crisp ontologies or from large data sets together with additional knowledge are presented. The knowledge will generally be related to similarity measurements between individual objects in the data sets, or constraints of a logical nature which rule out particular constellations of concepts and dependencies in generated ontologies. The techniques for generating approximate ontologies are parameterizable. The paper provides specific instantiations and examples. 
.

[36] 
2003. 2003 AAAI Spring Symposium Series. The AI Magazine, 24(3):131–140. AAAI Press. The American Association for Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2003 Spring Symposium Series, Monday through Wednesday, 24–26 March 2003, at Stanford University. The titles of the eight symposia were Agent-Mediated Knowledge Management; Computational Synthesis: From Basic Building Blocks to High Level Functions; Foundations and Applications of Spatiotemporal Reasoning (FASTR); Human Interaction with Autonomous Systems in Complex Environments; Intelligent Multimedia Knowledge Management; Logical Formalization of Commonsense Reasoning; Natural Language Generation in Spoken and Written Dialogue; and New Directions in Question-Answering. 
.

2001  
[35] 
2001. On the Design of Software Individuals. Electronic Transactions on Artificial Intelligence, 5(??):??–??. Linköpings Universitet. 
.

[34] 
2001. TALPLANNER – A temporal logic-based planner. The AI Magazine, 22(3):95–102. AAAI Press. TALPLANNER is a forward-chaining planner that utilizes domain-dependent knowledge to control search in the state space generated by action invocation. The domain-dependent control knowledge, background knowledge, plans, and goals are all represented using formulas in a temporal logic called TAL, which has been developed independently as a formalism for specifying agent narratives and reasoning about them. In the Fifth International Artificial Intelligence Planning and Scheduling Conference planning competition, TALPLANNER exhibited impressive performance, winning the Outstanding Performance Award in the Domain-Dependent Planning Competition. In this article, we provide an overview of TALPLANNER. 
.

2000  
[33] 
2000. Defining and Certifying Electronic Publication in Science. Learned Publishing, 13(4):251–258. Association of Learned and Professional Society Publishers. Link to article: http://www.ida.liu.se/ext/caisor/archive... 
.

[32] 
2000. Towards efficient universal planning: A randomized approach. One of the most widespread approaches to reactive planning is Schoppers' universal plans. We propose a stricter definition of universal plans which guarantees a weak notion of soundness, not present in the original definition, and isolate three different types of completeness that capture different behaviors exhibited by universal plans. We show that universal plans which run in polynomial time and are of polynomial size cannot satisfy even the weakest type of completeness unless the polynomial hierarchy collapses. By relaxing either the polynomial time or the polynomial space requirement, the construction of universal plans satisfying the strongest type of completeness becomes trivial. As an alternative approach, we study randomized universal planning. By considering a randomized version of completeness and a restricted (but nontrivial) class of problems, we show that there exist randomized universal plans running in polynomial time and using polynomial space which are sound and complete for the restricted class of problems. We also report experimental results on this approach to planning, showing that the performance of a randomized planner is not easily compared to that of a deterministic planner. 
.

[31] 
2000. TALplanner: A temporal logic based forward chaining planner. Annals of Mathematics and Artificial Intelligence, 30(1-4):119–169. Springer. DOI: 10.1023/A:1016619613658. We present TALplanner, a forward-chaining planner based on the use of domain-dependent search control knowledge represented as formulas in the Temporal Action Logic (TAL). TAL is a narrative-based linear metric time logic used for reasoning about action and change in incompletely specified dynamic environments. TAL is used as the formal semantic basis for TALplanner, where a TAL goal narrative with control formulas is input to TALplanner, which then generates a TAL narrative that entails the goal and control formulas. The sequential version of TALplanner is presented. The expressivity of plan operators is then extended to deal with an interesting class of resource types. An algorithm for generating concurrent plans, where operators have varying durations and internal state, is also presented. All versions of TALplanner have been implemented. The potential of these techniques is demonstrated by applying TALplanner to a number of standard planning benchmarks in the literature. 
.
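The idea of pruning a forward-chaining search with declarative control knowledge can be shown in miniature. The sketch below is a toy analogue only: TAL control formulas are temporal logic formulas over narratives, far more expressive than the simple state predicate used here, and the one-package logistics domain and every name in it are invented for illustration.

```python
# Miniature forward-chaining planner with a declarative control rule that
# prunes undesirable successor states (toy analogue of search control).
from collections import deque

def plan(start, goal, actions, control=lambda s: True):
    """BFS over states; successors violating `control` are pruned."""
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        state, steps = queue.popleft()
        if goal <= state:          # all goal facts hold
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:       # action applicable
                succ = frozenset((state - delete) | add)
                if succ not in seen and control(succ):
                    seen.add(succ)
                    queue.append((succ, steps + [name]))
    return None

# Invented domain: (name, preconditions, adds, deletes) as fact sets.
ACTIONS = [
    ("drive_AB", frozenset({"truck_at_A"}), frozenset({"truck_at_B"}), frozenset({"truck_at_A"})),
    ("drive_BA", frozenset({"truck_at_B"}), frozenset({"truck_at_A"}), frozenset({"truck_at_B"})),
    ("load_A",   frozenset({"truck_at_A", "pkg_at_A"}), frozenset({"pkg_in_truck"}), frozenset({"pkg_at_A"})),
    ("unload_A", frozenset({"truck_at_A", "pkg_in_truck"}), frozenset({"pkg_at_A"}), frozenset({"pkg_in_truck"})),
    ("unload_B", frozenset({"truck_at_B", "pkg_in_truck"}), frozenset({"pkg_at_B"}), frozenset({"pkg_in_truck"})),
]

# Control rule: never drive to B while the package is still waiting at A.
control = lambda s: not ({"truck_at_B", "pkg_at_A"} <= s)

if __name__ == "__main__":
    start = frozenset({"truck_at_A", "pkg_at_A"})
    print(plan(start, frozenset({"pkg_at_B"}), ACTIONS, control))
    # ['load_A', 'drive_AB', 'unload_B']
```

The control rule does not change which plans exist; it only removes branches (here, driving away before loading) that can never lead to a good plan, which is what makes this style of pruning attractive for forward-chaining search.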

[30] 
2000. The PMA and relativizing minimal change for action update. Fundamenta Informaticae, 44(1-2):95–131. IOS Press. Recently, a great deal of progress has been made using nonmonotonic temporal logics to formalize reasoning about action and change. In particular, much focus has been placed on the proper representation of nondeterministic actions and the indirect effects of actions. For the latter, the use of causal or fluent dependency rule approaches has been dominant. Although much recent effort has also been spent applying the belief revision/update (BR/U) approach to the action and change domain, there has been less progress in dealing with nondeterministic update and indirect effects represented as integrity constraints. We demonstrate that much is to be gained by cross-fertilization between the two paradigms, and we show this in the following manner. We first propose a generalization of the PMA, called the modified PMA (MPMA), which uses intuitions from the TL paradigm to permit representation of nondeterministic update and the use of integrity constraints interpreted as causal or fluent dependency rules. We provide several syntactic characterizations of the MPMA, one of which is in terms of a simple temporal logic, and provide a representation theorem showing equivalence between the two. In constructing the MPMA, we discovered a syntactic anomaly, which we call the redundant atom anomaly, that many TL approaches suffer from. We provide a method for avoiding the problem which is equally applicable across paradigms. We also describe a syntactic characterization of the MPMA in terms of Dijkstra semantics. We set up a framework for future generalization of the BR/U approach and conclude with a formal comparison of related approaches. 
.

[29] 
2000. Tackling the qualification problem using fluent dependency constraints. In the area of formal reasoning about action and change, one of the fundamental representation problems is providing concise, modular and incremental specifications of action types and world models, where instantiations of action types are invoked by agents such as mobile robots. Provided the preconditions to the action are true, their invocation results in changes to the world model concomitant with the goal-directed behavior of the agent. One particularly difficult class of related problems, collectively called the qualification problem, deals with the need to find a concise, incremental and modular means of characterizing the plethora of exceptional conditions that might qualify an action, but generally do not, without having to explicitly enumerate them in the preconditions to an action. We show how fluent dependency constraints together with the use of durational fluents can be used to deal with problems associated with action qualification, using a temporal logic for action and change called TAL-Q. We demonstrate the approach using action scenarios that combine solutions to the frame, ramification, and qualification problems in the context of actions with duration, concurrent actions, nondeterministic actions, and the use of both Boolean and non-Boolean fluents. The circumscription policy used for the combined problems is reducible to the first-order case. 
.

1999  
[28] 
1999. Reasoning about Concurrent Interaction. Journal of Logic and Computation, 9(5):623–650. Oxford University Press. DOI: 10.1093/logcom/9.5.623. In this paper we present TAL-C, a logic of action and change for worlds with action concurrency. TAL-C has a first-order semantics and proof theory. It builds on an existing logic, TAL, which includes the use of dependency laws for dealing with ramification. It is demonstrated how TAL-C can represent a number of phenomena related to action concurrency: action duration, how the effects of one action interfere with or enable another action, synergistic effects of concurrent actions, conflicting and cumulative effect interactions, and resource conflicts. A central idea is that actions are not described as having effects that directly alter the world state. Instead, actions produce influences, and the way these influences alter the world state is described in specialized influence laws. Finally, we address how TAL-C narratives can be written to support modularity. 
.

[27] 
1999. Reasoning about action in polynomial time. Although many formalisms for reasoning about action exist, surprisingly few approaches have taken computational complexity into consideration. The contributions of this article are the following: a temporal logic with a restriction for which deciding satisfiability is tractable, a tractable extension for reasoning about action, and NP-completeness results for the unrestricted problems. Many interesting reasoning problems can be modelled, involving nondeterminism, concurrency and memory of actions. The reasoning process is proved to be sound and complete. © 1999 Published by Elsevier Science B.V. All rights reserved. 
.

[26] 
1999. Metaqueries on deductive databases. We introduce the notion of a metaquery on relational databases and a technique which can be used to represent and solve a number of interesting problems from the area of knowledge representation using logic. The technique is based on the use of quantifier elimination and may also be used to query relational databases using a declarative query language called SHQL (Semi-Horn Query Language), introduced in [6]. SHQL is a fragment of classical first-order predicate logic and allows us to define a query without supplying its explicit definition. All SHQL queries to the database can be processed in polynomial time (both in the size of the input query and the size of the database). We demonstrate the use of the technique in problem solving by structuring logical puzzles from the Knights and Knaves domain as SHQL metaqueries on relational databases. We also provide additional examples demonstrating the flexibility of the technique. We conclude with a description of a newly developed software tool, The Logic Engineer, which aids in the description of algorithms using transformation and reduction techniques such as those applied in the metaquerying approach. 
.

[25] 
1999. Declarative PTIME queries for relational databases using quantifier elimination. Journal of Logic and Computation, 9(5):737–758. Oxford University Press. DOI: 10.1093/logcom/9.5.737. In this paper, we consider the problem of expressing and computing queries on relational deductive databases in a purely declarative query language, called SHQL (Semi-Horn Query Language). Assuming the relational databases in question are ordered, we show that all SHQL queries are computable in PTIME (polynomial time) and that the whole class of PTIME queries is expressible in SHQL. Although similar results have been proven for fixpoint languages and extensions to datalog, the claim is that SHQL has the advantage of being purely declarative, where the negation operator is interpreted as classical negation, mixed quantifiers may be used, and a query is simply a restricted first-order theory not limited by the rule-based syntactic restrictions associated with logic programs in general. We describe the PTIME algorithm used to compute queries in SHQL, which is based in part on quantifier elimination techniques, and also consider extending the method to incomplete relational databases using intuitions related to circumscription techniques. 
.

1998  
[24] 
1998. (TAL) Temporal Action Logics: Language specification and tutorial. Electronic Transactions on Artificial Intelligence, 2(3-4):273–306. Link: http://www.ep.liu.se/ej/etai/1998/009/ The purpose of this article is to provide a uniform, lightweight language specification and tutorial for a class of temporal logics for reasoning about action and change that has been developed by our group during the period 1994-1998. The class of logics is collected under the name TAL, an acronym for Temporal Action Logics. TAL has its origins and inspiration in the work with Features and Fluents (FF) by Sandewall, but has diverged from that methodology and approach through the years. We first discuss distinctions and compatibility with FF, move on to the lightweight language specification, and then present a tutorial in terms of an excursion through the different parts of a relatively complex narrative defined using TAL. We conclude with an annotated list of published work from our group. The article tries to strike a reasonable balance between detail and readability, making a number of simplifications regarding narrative syntax and translation to a base logical language. Full details are available in numerous technical reports and articles which are listed in the final section of this article. 
.

[23] 
1998. General domain circumscription and its effective reductions. We first define general domain circumscription (GDC) and provide it with a semantics. GDC subsumes existing domain circumscription proposals in that it allows varying of arbitrary predicates, functions, or constants to maximize the minimization of the domain of a theory. We then show that, for the class of semi-universal theories without function symbols, the domain circumscription of such theories can be constructively reduced to logically equivalent first-order theories by using an extension of the DLS algorithm, previously proposed by the authors for reducing second-order formulas. We also show that, for a certain class of domain-circumscribed theories, any arbitrary second-order circumscription policy applied to these theories is guaranteed to be reducible to a logically equivalent first-order theory. In the case of semi-universal theories with functions and arbitrary theories which are not separated, we provide additional results, which although not guaranteed to provide reductions in all cases, do provide reductions in some cases. These results are based on the use of fixpoint reductions. 
.

1997  
[22] 
1997. Rolling Bearing Simulation on MIMD Computers. The International Journal of High Performance Computing Applications, 11(4):299–313. Sage Publications. DOI: 10.1177/109434209701100404. Fulltext: http://dx.doi.org/10.1177/10943420970110... Rolling bearing simulations are very computationally intensive. Serial simulations may take weeks to execute, and there is a need to use the potential of parallel computing. The specific structure of the rolling bearing problem is used to develop suitable scheduling strategies. The authors discuss the system of stiff ordinary differential equations arising from a bearing model and show how to numerically solve these ordinary differential equations on parallel computers. Benchmarking results are presented for two test cases on three platforms. 
.

[21] 
1997. Computing circumscription revisited: A reduction algorithm. Journal of Automated Reasoning, 18(3):297–336. Kluwer Academic Publishers. DOI: 10.1023/A:1005722130532. In recent years, a great deal of attention has been devoted to logics of commonsense reasoning. Among the candidates proposed, circumscription has been perceived as an elegant mathematical technique for modeling nonmonotonic reasoning, but difficult to apply in practice. The major reason for this is the second-order nature of circumscription axioms and the difficulty in finding proper substitutions of predicate expressions for predicate variables. One solution to this problem is to compile, where possible, second-order formulas into equivalent first-order formulas. Although some progress has been made using this approach, the results are not as strong as one might desire and they are isolated in nature. In this article, we provide a general method that can be used in an algorithmic manner to reduce certain circumscription axioms to first-order formulas. The algorithm takes as input an arbitrary second-order formula and either returns as output an equivalent first-order formula, or terminates with failure. The class of second-order formulas, and analogously the class of circumscriptive theories, that can be reduced provably subsumes those covered by existing results. We demonstrate the generality of the algorithm using circumscriptive theories with mixed quantifiers (some involving Skolemization), variable constants, non-separated formulas, and formulas with n-ary predicate variables. In addition, we analyze the strength of the algorithm, compare it with existing approaches, and provide formal subsumption results. 
.

1996  
[20] 
1996. On Natural Deduction in First-Order Fixpoint Logics. In the current paper we present a powerful technique of obtaining natural deduction proof systems for first-order fixpoint logics. The term fixpoint logics refers collectively to a class of logics consisting of modal logics with modalities definable at the metalevel by fixpoint equations on formulas. The class has attracted considerable interest, as it contains most logics of programs, with, e.g., dynamic logic, temporal logic and the μ-calculus among them. In this paper we present a technique that allows us to derive automatically natural deduction systems for modal logics from the fixpoint equations defining the modalities.

[19] 
1996. A reduction result for circumscribed semi-Horn formulas. Circumscription has been perceived as an elegant mathematical technique for modeling nonmonotonic and commonsense reasoning, but difficult to apply in practice due to the use of second-order formulas. One proposal for dealing with the computational problems is to identify classes of first-order formulas whose circumscription can be shown to be equivalent to a first-order formula. In previous work, we presented an algorithm which reduces certain classes of second-order circumscription axioms to logically equivalent first-order formulas. The basis for the algorithm is an elimination lemma due to Ackermann. In this paper, we capitalize on the use of a generalization of Ackermann's Lemma in order to deal with a subclass of universal formulas called <em>semi-Horn formulas</em>. Our results subsume previous results by Kolaitis and Papadimitriou regarding a characterization of circumscribed definite logic programs which are first-order expressible. The method for distinguishing which formulas are reducible is based on a boundedness criterion. The approach we use is to first reduce a circumscribed semi-Horn formula to a fixpoint formula, which is reducible if the formula is bounded, otherwise not. In addition to a number of other extensions, we also present a fixpoint calculus which is shown to be sound and complete for bounded fixpoint formulas.
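For context on the entry above: the elimination lemma due to Ackermann, which underlies the reduction algorithm, admits the following standard formulation (a sketch added here for reference; the cited paper itself works with a generalization of it).

```latex
% Ackermann's Lemma (1935), one standard formulation.
% A(\bar{x}) is a first-order formula containing no occurrences of the
% predicate variable P; B(P) is positive w.r.t. P, i.e. P occurs in B
% only under an even number of negations.
\exists P\,\bigl[\forall\bar{x}\,\bigl(P(\bar{x}) \rightarrow A(\bar{x})\bigr) \wedge B(P)\bigr]
\;\equiv\; B\bigl(P := A\bigr)
% Dually, when B(P) is negative w.r.t. P:
\exists P\,\bigl[\forall\bar{x}\,\bigl(A(\bar{x}) \rightarrow P(\bar{x})\bigr) \wedge B(P)\bigr]
\;\equiv\; B\bigl(P := A\bigr)
```

Intuitively, the constraint clause bounds P by A from one side, and monotonicity of B in P makes the extreme choice P := A optimal, so the second-order quantifier can be eliminated by substitution.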

1994  
[18] 
1994. On an Automated Translation of Modal Proof Rules into Formulas of the Classical Logic. Journal of Applied Non-Classical Logics, 4(2):119–127. Éditions Hermès-Lavoisier. In this paper we study correspondences between modal proof rules and classical logic. The method we apply is based on Ackermann's technique of eliminating second-order quantifiers from formulas. We show that the process of finding suitable correspondences can be reduced to a few simple steps. Moreover, the whole technique can be fully mechanized. We thus provide the reader with a powerful tool, useful in automated translations between modal logics and classical logic.

1993  
[17] 
1993. On the Correspondence between Modal and Classical Logic: An Automated Approach. Journal of logic and computation (Print), 3(6):605–620. Oxford University Press. DOI: 10.1093/logcom/3.6.605. The current paper is devoted to automated techniques in correspondence theory. The theory we deal with concerns the problem of finding classical first-order axioms corresponding to propositional modal schemas. Given a modal schema and a semantics-based method of translating propositional modal formulae into classical first-order ones, we try to derive automatically classical first-order formulae characterizing precisely the class of frames validating the schema. The technique we consider can, in many cases, be easily applied even without computer support. Although we mainly concentrate on Kripke semantics, the technique we apply is much more general, as it is based on elimination of second-order quantifiers from formulae. We show many examples of application of the method. These can also serve as new, automated proofs of the considered correspondences.

[16] 
1993. Fuzzy if-then-unless rules and their implementation. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 1(2):167–182. World Scientific. DOI: 10.1142/S0218488593000097. We consider the possibility of generalizing the notion of a fuzzy If-Then rule to take into account its context-dependent nature. We interpret fuzzy rules as modeling a forward-directed causal relationship between the antecedent and the conclusion, which applies in most contexts but on occasion breaks down in exceptional contexts. The default nature of the rule is modeled by augmenting the original If-Then rule with an exception part. We then consider the proper semantic correlate to such an addition and propose a ternary relation which satisfies a number of intuitive constraints described in terms of a number of inference rules. In the rest of the paper, we consider implementational issues arising from the unless extension and propose the use of reason maintenance systems, in particular TMSs, where a fuzzy If-Then-Unless rule is encoded into a dependency net. We verify that the net satisfies the constraints stated in the inference schemes and conclude with a discussion concerning the integration of qualitative IN/OUT labelings of the TMS with quantitative degree-of-membership labelings for the variables in question.

1992  
[15] 
1992. Axiomatizing Fixpoint Logics. 

[14] 
1992. NML3 – A nonmonotonic logic with explicit defaults. Journal of applied non-classical logics, 2(1):9–48. Éditions Hermès-Lavoisier.

1991  
[13] 
1991. On Strictly Arithmetical Completeness in Logics of Programs. We introduce and discuss a notion of strictly arithmetical completeness related to the relative completeness of Cook (1978) and the arithmetical completeness of Harel (1978). We present a powerful technique for obtaining strictly arithmetical axiomatizations of logics of programs. Given a model-theoretic semantics of a logic and a set of formulae defining (in a metalanguage) its non-classical connectives, we automatically derive strictly arithmetically complete and sound proof systems for this logic. As examples of application of the technique we obtain new axiomatizations of algorithmic logic, (concurrent) dynamic logic and temporal logic.

1989  
[12] 
1989. On Temporal Logic for Distributed Systems and its Application to Processes Communicating by Interrupts. Fundamenta Informaticae, 12(2):191–204. IOS Press. 

1988  
[11] 
1988. Towards the Temporal Approach to Abstract Data Types. Fundamenta Informaticae, 11(1):49–64. IOS Press. 

[10] 
1988. An Incompleteness Result in Process Algebra. 

[9] 
1988. Propositional Description of Finite Cause-Effect Structures. An alternative method of describing the semantics of cause-effect structures is presented. It is based on a model of discrete dynamic systems. The model is similar to a condition-event Petri net, differing in the way restrictions on the alterability of actions are imposed.

[8] 
1988. Incompleteness of First-Order Temporal Logic with Until. The results presented in this paper concern the axiomatizability problem of first-order temporal logic with linear and discrete time. We show that the logic is incomplete, i.e., it cannot be provided with a finitistic and complete proof system. We show two incompleteness theorems. Although the first one is weaker (it assumes some first-order signature), we present it because its proof is much simpler and contains an interesting fact: finite sets are characterizable by means of temporal formulas. The second theorem shows that the logic is incomplete independently of any particular signature.

1987  
[7] 
1987. A Complete Axiomatic Characterization of First-Order Temporal Logic of Linear Time. As shown in (Szalas, 1986, 1986, 1987) there is no finitistic and complete axiomatization of first-order temporal logic of linear and discrete time. In this paper we give an infinitary proof system for the logic. We prove that the proof system is sound and complete. We also show that any syntactically consistent temporal theory has a model. As a corollary we obtain that the downward theorem of Skolem, Löwenheim and Tarski holds for the logic considered.

[6] 
1987. Arithmetical Axiomatization of First-Order Temporal Logic.

1986  
[5] 
1986. Concerning the Semantic Consequence Relation in First-Order Temporal Logic. In this paper we consider first-order temporal logic with linear and discrete time. We prove that the set of tautologies of this logic is not arithmetical (i.e., it is neither <em>Σ</em><sup>0</sup><sub><em>n</em></sub> nor <em>Π</em><sup>0</sup><sub><em>n</em></sub> for any natural number <em>n</em>). Thus we show that there is no finitistic and complete axiomatization of the considered logic.

1985  
[4] 
1985. A Note on PCI: Distributed Processes Communicating by Interrupts. 

[3] 
1985. Exception Handling in Parallel Computations. 

1984  
[2] 
1984. On an Application of Algorithmic Theory of Stacks. Fundamenta Informaticae, 7(3):378–388. IOS Press. 

1981  
[1] 
1981. Algorithmic Logic with Recursive Functions. Fundamenta Informaticae, 4(4):975–995. IOS Press. 
Page responsible: Patrick Doherty
Last updated: 2014-04-30