AIICS Publications: Journal Publications

2024
[163] Ayal Taitler, Ron Alford, Joan Espasa, Gregor Behnke, Daniel Fiser, Michael Gimelfarb, Florian Pommerening, Scott Sanner, Enrico Scala, Dominik Schreiber, Javier Segovia-Aguas and Jendrik Seipp. 2024.
The 2023 International Planning Competition.
The AI Magazine, ??(??):????. AMER ASSOC ARTIFICIAL INTELL.
DOI: 10.1002/aaai.12169.
Publication status: Epub ahead of print
Note: Funding Agencies|Swedish Research Council [2022-06725]; Some of the computations for the Learning track - Swedish Research Council; University Of Edinburgh [EP/P020267/1]; EPSRC [882500]; European Research Council (ERC) under the European Union [FA8702-19-C-0001, W56KGU-18-D-0004, DFARS 252.227-7013]; MITRE Independent Research and Development Program; Natural Sciences and Engineering Research Council of Canada (NSERC)

In this article, we present an overview of the 2023 International Planning Competition. It featured five distinct tracks designed to assess cutting-edge methods and explore the frontiers of planning within these settings: the classical (deterministic) track, the numeric track, the Hierarchical Task Networks (HTN) track, the learning track, and the probabilistic and reinforcement learning track. Each of these tracks evaluated planning methodologies through one or more subtracks, with the goal of pushing the boundaries of current planner performance. To achieve this objective, the competition introduced a combination of well-established challenges and entirely novel ones. Within this article, each track offers an exploration of its historical context, justifies its relevance within the planning landscape, discusses emerging domains and trends, elucidates the evaluation methodology, and ultimately presents the results.

[162] Johannes Klaus Fichte, Sarah Alice Gaggl, Markus Hecher and Dominik Rusovac. 2024.
IASCAR: Incremental Answer Set Counting by Anytime Refinement.
Theory and Practice of Logic Programming, ??(??):????. CAMBRIDGE UNIV PRESS.
DOI: 10.1017/S1471068424000036.
Publication status: Epub ahead of print
Note: Funding Agencies|BMBF [01IS20056NAVAS]; ELLIIT - Swedish government; Austrian Science Fund (FWF) [J4656, P32830, Y1329]; GWK; Swedish Research Council [2022-06725]

Answer set programming (ASP) is a popular declarative programming paradigm with various applications. Programs can easily have many answer sets that cannot be enumerated in practice, but counting still allows quantifying solution spaces. If one counts under assumptions on literals, one obtains a tool to comprehend parts of the solution space, so-called answer set navigation. However, navigating through parts of the solution space requires counting many times, which is expensive in theory. Knowledge compilation compiles instances into representations on which counting works in polynomial time. However, these techniques exist only for conjunctive normal form (CNF) formulas, and compiling ASP programs into CNF formulas can introduce an exponential overhead. This paper introduces a technique to iteratively count answer sets under assumptions on knowledge compilations of CNFs that encode supported models. Our anytime technique uses the inclusion-exclusion principle to improve bounds by over- and undercounting systematically. In a preliminary empirical analysis, we demonstrate promising results. After compiling the input (offline phase), our approach quickly (re)counts.
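
The alternating over- and undercounting mentioned above is the classical Bonferroni behaviour of truncated inclusion-exclusion sums: stopping after an odd number of terms overestimates the size of a union, stopping after an even number underestimates it, and the full sum is exact. The sketch below illustrates only this general principle with plain Python sets; it is not the IASCAR algorithm, which operates on compiled representations of supported models.

    # Bonferroni bounds for |A1 ∪ ... ∪ An| via truncated inclusion-exclusion.
    # Illustrates the over-/undercounting principle only, not IASCAR itself.
    from itertools import combinations

    def truncated_inclusion_exclusion(sets):
        """Return the truncated sums S_1, S_2, ...; odd truncations are upper
        bounds on |union of sets|, even truncations are lower bounds."""
        partial, sums = 0, []
        for k in range(1, len(sets) + 1):
            term = sum(len(set.intersection(*combo))
                       for combo in combinations(sets, k))
            partial += (-1) ** (k + 1) * term
            sums.append(partial)
        return sums

    if __name__ == "__main__":
        A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}
        print(truncated_inclusion_exclusion([A, B, C]))  # [11, 6, 7]
        print(len(A | B | C))                            # 7 (exact)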

[161] Henrik Carlsen, Bjorn Nykvist, Somya Joshi and Fredrik Heintz. 2024.
Chasing artificial intelligence in shared socioeconomic pathways.
One Earth, 7(1):18–22. CELL PRESS.
DOI: 10.1016/j.oneear.2023.12.015.
Note: Funding Agencies|Mistra Geopolitics research program [2016/11]

The development of artificial intelligence has likely reached an inflection point, with significant implications for how research needs to address emerging technologies and how they drive long-term socioeconomic development of importance for climate change scenarios.

[160] Patrick Doherty and Andrzej Szalas. 2024.
Dual forgetting operators in the context of weakest sufficient and strongest necessary conditions.
Artificial Intelligence, 326(??):????. ELSEVIER.
DOI: 10.1016/j.artint.2023.104036.
Note: Funding Agencies|ELLIIT Network Organization for Information and Communication Technology, Sweden; Mahasarakham Development Fund, Mahasarakham University, Thailand; National Science Centre Poland [2017/27/B/ST6/02018]

Forgetting is an important concept in knowledge representation and automated reasoning with widespread applications across a number of disciplines. A standard forgetting operator, characterized in [26] in terms of model-theoretic semantics and primarily focusing on the propositional case, opened up a new research subarea. In this paper, a new operator called weak forgetting, dual to standard forgetting, is introduced and both together are shown to offer a new more uniform perspective on forgetting operators in general. Both the weak and standard forgetting operators are characterized in terms of entailment and inference, rather than a model theoretic semantics. This naturally leads to a useful algorithmic perspective based on quantifier elimination and the use of Ackermann's Lemma and its fixpoint generalization. The strong formal relationship between standard forgetting and strongest necessary conditions and weak forgetting and weakest sufficient conditions is also characterized quite naturally through the entailment-based, inferential perspective used. The framework used to characterize the dual forgetting operators is also generalized to the first-order case and includes useful algorithms for computing first-order forgetting operators in special cases. Practical examples are also included to show the importance of both weak and standard forgetting in modeling and representation.
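
In the propositional case, the two operators can be illustrated with the usual quantifier-style definitions, where standard forgetting corresponds to existential quantification over the forgotten symbol (yielding the strongest necessary condition on the remaining vocabulary) and weak forgetting to universal quantification (yielding the weakest sufficient condition). The worked example below uses this textbook formulation; the paper's own notation and first-order generalization may differ.

    % Standard vs. weak forgetting of p, textbook propositional formulation.
    \[
      \mathit{forget}(\varphi; p) \equiv \varphi[p/\top] \lor \varphi[p/\bot],
      \qquad
      \mathit{forget}^{w}(\varphi; p) \equiv \varphi[p/\top] \land \varphi[p/\bot].
    \]
    % Worked example for \varphi = (p \land q) \lor r:
    \[
      \mathit{forget}(\varphi; p) \equiv (q \lor r) \lor r \equiv q \lor r,
      \qquad
      \mathit{forget}^{w}(\varphi; p) \equiv (q \lor r) \land r \equiv r,
    \]
    % so q \lor r is the strongest necessary condition and r the weakest
    % sufficient condition of \varphi over the vocabulary {q, r}.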

2023
[159] Cyrille Berger, Patrick Doherty, Piotr Rudol and Mariusz Wzorek. 2023.
RGS⊕: RDF graph synchronization for collaborative robotics.
Autonomous Agents and Multi-Agent Systems, 37(2):????. SPRINGER.
DOI: 10.1007/s10458-023-09629-2.
Note: Funding Agencies|ELLIIT Network Organization for Information and Communication Technology, Sweden [RIT15-0097]; Swedish Foundation for Strategic Research SSF (Smart Systems Project) [B09]; Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation

In the context of collaborative robotics, distributed situation awareness is essential for supporting collective intelligence in teams of robots and human agents where it can be used for both individual and collective decision support. This is particularly important in applications pertaining to emergency rescue and crisis management. During operational missions, data and knowledge is gathered incrementally and in different ways by heterogeneous robots and humans. The purpose of this paper is to describe an RDF Graph Synchronization System called RGS⊕. It is assumed that a dynamic set of agents provide or retrieve knowledge stored in their local RDF Graphs which are continuously synchronized between agents. The RGS⊕ System was designed to handle unreliable communication and does not rely on a static centralized infrastructure. It is capable of synchronizing knowledge as timely as possible and allows agents to access knowledge while it is incrementally acquired. A deeper empirical analysis of the RGS⊕ System is provided that shows both its efficiency and efficacy.
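
As a toy picture of why delta exchange over sets of RDF triples tolerates unreliable communication, consider the grow-only merge below: set union is idempotent and commutative, so duplicated, reordered or partially lost messages never corrupt an agent's local graph. This is only an illustrative sketch under those simplifying assumptions, not the RGS⊕ protocol (which also handles revisions and versioning).

    # Toy delta synchronization of RDF-style triple sets between two agents.
    # Illustrative only; NOT the RGS⊕ protocol.
    from typing import Set, Tuple

    Triple = Tuple[str, str, str]  # (subject, predicate, object)

    class Agent:
        def __init__(self, name: str) -> None:
            self.name = name
            self.graph: Set[Triple] = set()

        def delta_for(self, peer_known: Set[Triple]) -> Set[Triple]:
            # Triples the peer has not reported knowing yet.
            return self.graph - peer_known

        def merge(self, delta: Set[Triple]) -> None:
            # Union is idempotent/commutative: safe under message loss or
            # duplication, at the price of never retracting information here.
            self.graph |= delta

    uav, ugv = Agent("uav"), Agent("ugv")
    uav.graph.add(("area1", "hasVictim", "true"))
    ugv.graph.add(("area2", "accessible", "false"))
    ugv.merge(uav.delta_for(ugv.graph))   # one round of pairwise exchange
    uav.merge(ugv.delta_for(uav.graph))
    assert uav.graph == ugv.graph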

[158] Barbara Dunin-Keplicz and Andrzej Szalas. 2023.
Modeling and shadowing paraconsistent BDI agents.
Annals of Mathematics and Artificial Intelligence, ??(??):????. SPRINGER.
DOI: 10.1007/s10472-023-09902-w.
Publication status: Epub ahead of print
Note: Funding Agencies|Narodowe Centrum Nauki [2015/19/B/ST6/02589]; National Science Centre Poland

The BDI model of rational agency has been studied for over three decades. Many robust multiagent systems have been developed, and a number of BDI logics have been studied. Following this intensive development phase, the importance of integrating BDI models with inconsistency handling and revision theory has been emphasized. There is also a demand for a tighter connection between BDI-based implementations and BDI logics. In this paper, we address these postulates by introducing a novel, paraconsistent logical BDI model close to implementation, with building blocks that can be represented as SQL/rule-based databases. Importantly, tractability is achieved by reasoning as querying. This stands in sharp contrast to the high complexity of known BDI logics. We also extend belief shadowing, a shallow and lightweight alternative to deep and computationally demanding belief revision, to encompass agents' motivational attitudes.

[157] Katarina Sperling, Linnéa Stenliden, Jörgen Nissen and Fredrik Heintz. 2023.
Behind the Scenes of Co-designing AI and LA in K-12 Education.
Postdigital Science and Education, ??(??):????.
DOI: 10.1007/s42438-023-00417-5.
Publication status: Epub ahead of print
fulltext:print: https://liu.diva-portal.org/smash/get/di...

This article explores the complex challenges of co-designing an AI- and learning analytics (LA)-integrated learning management system (LMS). While co-design has been proposed as a human-centred design approach for scaling AI and LA adoption, our understanding of how these design processes play out in real-life settings remains limited. This study is based on ethnographic fieldwork in primary and secondary schools and employs a relational materialist approach to trace, visualise, and analyse the increasingly complex and transformative relations between a growing number of actors. The findings shed light on the intricate ecosystem in which AI and LA are being introduced and on the marketisation of K-12 education. Instead of following a rational and sequential approach that can be easily executed, the co-design process emerged as a series of events, shifting from solely generating ideas with teachers to integrating and commercialising the LMS into a school market with an already high prevalence of educational technology (EdTech). AI and LA in education, co-design and data-driven schooling served as negotiating ideas, boundary objects, which maintained connectivity between actors, despite limited AI and LA implementation and the development of a stand-alone app. Even though teachers and students were actively involved in the design decisions, the co-design process did not lead to extensive adoption of the LMS nor did it sufficiently address the ethical issues related to the unrestricted collection of student data.

[156] Tom Ziemke. 2023.
Understanding Social Robots: Attribution of Intentional Agency to Artificial and Biological Bodies.
Artificial Life, 29(3):351–366. MIT Press.
DOI: 10.1162/artl_a_00404.
Note: Funding: ELLIIT; Excellence Center at Linkoping-Lund in Information Technology; Swedish Research Council (VR) [2022-04602]
Fulltext: https://doi.org/10.1162/artl_a_00404
fulltext:print: https://liu.diva-portal.org/smash/get/di...

Much research in robotic artificial intelligence (AI) and Artificial Life has focused on autonomous agents as an embodied and situated approach to AI. Such systems are commonly viewed as overcoming many of the philosophical problems associated with traditional computationalist AI and cognitive science, such as the grounding problem (Harnad) or the lack of intentionality (Searle), because they have the physical and sensorimotor grounding that traditional AI was argued to lack. Robot lawn mowers and self-driving cars, for example, more or less reliably avoid obstacles, approach charging stations, and so on—and therefore might be considered to have some form of artificial intentionality or intentional directedness. It should be noted, though, that the fact that robots share physical environments with people does not necessarily mean that they are situated in the same perceptual and social world as humans. For people encountering socially interactive systems, such as social robots or automated vehicles, this poses the nontrivial challenge to interpret them as intentional agents to understand and anticipate their behavior but also to keep in mind that the intentionality of artificial bodies is fundamentally different from their natural counterparts. This requires, on one hand, a "suspension of disbelief" but, on the other hand, also a capacity for the "suspension of belief." This dual nature of (attributed) artificial intentionality has been addressed only rather superficially in embodied AI and social robotics research. It is therefore argued that Bourgine and Varela's notion of Artificial Life as the practice of autonomous systems needs to be complemented with a practice of socially interactive autonomous systems, guided by a better understanding of the differences between artificial and biological bodies and their implications in the context of social interactions between people and technology.

[155] Vitor Hugo Serravalle Reis Rodrigues, Paulo Roberto de Melo Barros Junior, Euler Bentes dos Santos Marinho and Jose Luis Lima de Jesus Silva. 2023.
Wavelet gated multiformer for groundwater time series forecasting.
Scientific Reports, 13(1):????. NATURE PORTFOLIO.
DOI: 10.1038/s41598-023-39688-0.
Note: Funding: Carl Tryggers Foundation; VFN (Verifiering for Nyttiggorande beslutar) at Linkopings University; Swedish National Infrastructure for Computing (SNIC) [SNIC 2022/22-843]
Fulltext: https://doi.org/10.1038/s41598-023-39688...

Developing accurate models for groundwater control is paramount for planning and managing life-sustaining resources (water) from aquifer reservoirs. Significant progress has been made toward designing and employing deep-forecasting models to tackle the challenge of multivariate time-series forecasting. However, most models were initially taught only to optimize natural language processing and computer vision tasks. We propose the Wavelet Gated Multiformer, which combines the strength of a vanilla Transformer with the Wavelet Crossformer that employs inner wavelet cross-correlation blocks. The self-attention mechanism (Transformer) computes the relationship between inner time-series points, while the cross-correlation finds trending periodicity patterns. The multi-headed encoder is channeled through a mixing gate (linear combination) of sub-encoders (Transformer and Wavelet Crossformer) that output trending signatures to the decoder. This process improved the model’s predictive capabilities, reducing Mean Absolute Error by 31.26 % compared to the second-best performing transformer-like models evaluated. We have also used the Multifractal Detrended Cross-Correlation Heatmaps (MF-DCCHM) to extract cyclical trends from pairs of stations across multifractal regimes by denoising the pair of signals with Daubechies wavelets. Our dataset was obtained from a network of eight wells for groundwater monitoring in Brazilian aquifers, six rainfall stations, eleven river flow stations, and three weather stations with atmospheric pressure, temperature, and humidity sensors.
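
The "mixing gate (linear combination) of sub-encoders" can be pictured as a learned convex combination of the two encoder outputs. The numpy sketch below uses hypothetical shapes and a scalar gate purely for illustration; it is not the published implementation.

    # Schematic mixing gate over two sub-encoder outputs (hypothetical shapes).
    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, d_model = 96, 64

    enc_transformer = rng.normal(size=(seq_len, d_model))  # stand-in output
    enc_crossformer = rng.normal(size=(seq_len, d_model))  # stand-in output

    gate_logit = 0.3                          # learned parameter in a real model
    g = 1.0 / (1.0 + np.exp(-gate_logit))     # sigmoid keeps the gate in (0, 1)
    mixed = g * enc_transformer + (1.0 - g) * enc_crossformer
    print(mixed.shape)                        # (96, 64), passed on to the decoder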

[154] Mattias Forsblad, Philip Lindblad, Mattias Arvola, Ignacio Solís-Marcos, Henrik Danielsson and Mikael Wiberg. 2023.
How Children With Mild Intellectual Disability Experience Self-driving Buses: In Support of Agency.
Transaction on Transport Sciences, 14(2):21–31.
DOI: 10.5507/tots.2023.002.
Fulltext: https://doi.org/10.5507/tots.2023.002
Link: https://tots.upol.cz/getrevsrc.php?ident...
fulltext:print: https://liu.diva-portal.org/smash/get/di...

Emerging technology for public transportation is often not fully aligned with an inclusive design strategy. Many people with intellectual disability experience their needs and desires not being fully considered. Responding to this problem, the purpose of this study is to investigate how children with mild intellectual disability experience self-driving buses. On each bus, a person called "safety driver" monitors the ride and takes control if a problematic situation arises. The purpose is also to investigate what roles support persons and safety drivers play. In addition, the research aims to propose improvements in how the design of these self-driving buses can better motivate children with intellectual disability to use them in support of their agency. To address this, we arranged and studied seven rides on self-driving buses, for 16 children diagnosed to have mild intellectual disability, and their support persons. Interviews with the children were held after the rides, and both the rides and interviews were video recorded. The analysis was in part inductive but also employed a theory based on motivation: self-determination theory. For several children, the bus worked as a vehicle for a social sightseeing tour of the local environment, and the current design did not hinder such an experience. Overall, many of the children had a positive experience, but there is room for improvement regarding the design of the buses. Some children expressed curiosity and a few frustrations with how the bus behaved in traffic. For instance, it was difficult for the children to understand why the bus braked for things that were hard for them to perceive. From observation, it appears that the accompanying support person and safety driver played an important role in making children safe and shaping the social environment on the bus. The support persons were also essential for some children to ride the bus at all. The safety driver provided the children with information about how the bus worked. Both the safety driver and the support person had a positive impact on the children's experience. To meet the children's needs and skills, and to improve their motivation for riding the buses again, the buses need to decelerate less abruptly, have easier and consistently designed seatbelts, and communicate what they do, see, and signal more clearly. We argue that further studies at this level of detail are crucial to ensure that new technologies are indeed designed for everyone.

[153] Vitor Hugo Serravalle Reis Rodrigues, Paulo Roberto de Melo Barros Junior, Euler Bentes dos Santos Marinho and Jose Luis Silva. 2023.
Wavelet Gated Multiformer for Groundwater Time Series Forecasting.
Scientific Reports, ??(??):????.
DOI: 10.21203/rs.3.rs-2647096/v1.
Publication status: Accepted
Fulltext: https://doi.org/10.21203/rs.3.rs-2647096...
Link: https://doi.org/10.21203/rs.3.rs-2647096...

Developing accurate models for groundwater control is paramount for planning and managing life-sustaining resources (water) from aquifer reservoirs. Significant progress has been made toward designing and employing deep-forecasting models to tackle the challenge of multivariate time-series forecasting. However, most models were initially taught only to optimize natural language processing and computer vision tasks. We propose the Wavelet Gated Multiformer, which combines the strength of a vanilla Transformer with the Wavelet Crossformer that employs inner wavelet cross-correlation blocks. The self-attention mechanism (Transformer) computes the relationship between inner time-series points, while the cross-correlation finds trending periodicity patterns. The multi-headed encoder is channeled through a mixing gate (linear combination) of sub-encoders (Transformer and Wavelet Crossformer) that output trending signatures to the decoder. This process can improve the model's predictive capabilities. We have also used the Multifractal Detrended Cross-Correlation Heatmaps (MF-DCCHM) to extract cyclical trends from pairs of stations across multifractal regimes by denoising the pair of signals with Daubechies wavelets. Our dataset was obtained from a network of eight wells for groundwater monitoring in Brazilian aquifers, six rainfall stations, eleven river flow stations, and three weather stations with atmospheric pressure, temperature, and humidity sensors.

[152] Linda Mannila, Teemu Leinonen, Merja Bauters and Marjaana Veermans. 2023.
Student and teacher co-agency when combining CT with arts and design in a cross-curricular project.
Computers and Education Open, 4(??):????. ELSEVIER.
DOI: 10.1016/j.caeo.2023.100132.
fulltext:print: https://liu.diva-portal.org/smash/get/di...

The technological development has raised awareness for the importance of digital competence and computational thinking (CT) to understand the digital world and has resulted in revised curricula in many countries. In Finland, a new curriculum for grades 1–9 came into force in 2016 introducing digital competence (including programming) to be integrated in other subjects. Most teachers lack prior experience in programming and there is a need for suitable instructional models. This article presents a cross-curricular teaching sequence and the results from a case study conducted in four Finnish schools. Students in grades 4–6 collaboratively worked on a project combining arts, design and CT with other subjects. The results show that students demonstrated several CT abilities while working on their projects, in particular creativity, tinkering and debugging. The findings also indicate that teachers and students learned together (co-agency) and suggest that models like the teaching sequence can help and encourage teachers to integrate programming and CT in a cross-curricular manner. Still, the teachers’ knowledge, ambition level and understanding of the task at hand, as well as the organizational support appear to play a notable role when planning and carrying out projects of this kind. While CT is commonly seen as developed through programming, the teaching sequence seems to have fostered CT abilities through the project as a whole, with programming playing the role of a tool or a glue depending on the time available, and the students’ skill and ambition level.

[151] Kashyap Haresamudram, Stefan Larsson and Fredrik Heintz. 2023.
Three Levels of AI Transparency.
Computer, 56(2):93–100. IEEE COMPUTER SOC.
DOI: 10.1109/MC.2022.3213181.
Note: Funding Agencies|AI Transparency and Consumer Trust; Wallenberg AI; Autonomous Systems and Software Program-Humanities and Society (WASP-HS)

The concept of transparency is fragmented in artificial intelligence (AI) research, often limited to transparency of the algorithm alone. We propose that AI transparency operates on three levels – algorithmic, interaction, and social – all of which need to be considered to build trust in AI. We expand upon these levels using current research directions, and identify research gaps resulting from the conceptual fragmentation of AI transparency highlighted within the context of the three levels.

[150] Finn Rietz, Sven Magg, Fredrik Heintz, Todor Stoyanov, Stefan Wermter and Johannes A. Stork. 2023.
Hierarchical goals contextualize local reward decomposition explanations.
Neural Computing & Applications, 35(??):16693–16704. Springer London Ltd.
DOI: 10.1007/s00521-022-07280-8.
Note: Funding Agencies|Orebro University; Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; Federal Ministry for Economic Affairs and Climate [FKZ 20X1905A-D]
fulltext:print: https://liu.diva-portal.org/smash/get/di...

One-step reinforcement learning explanation methods account for individual actions but fail to consider the agent's future behavior, which can make their interpretation ambiguous. We propose to address this limitation by providing hierarchical goals as context for one-step explanations. By considering the current hierarchical goal as a context, one-step explanations can be interpreted with higher certainty, as the agent's future behavior is more predictable. We combine reward decomposition with hierarchical reinforcement learning into a novel explainable reinforcement learning framework, which yields more interpretable, goal-contextualized one-step explanations. With a qualitative analysis of one-step reward decomposition explanations, we first show that their interpretability is indeed limited in scenarios with multiple, different optimal policies – a characteristic shared by other one-step explanation methods. Then, we show that our framework retains high interpretability in such cases, as the hierarchical goal can be considered as context for the explanation. To the best of our knowledge, our work is the first to investigate hierarchical goals not as an explanation directly but as additional context for one-step reinforcement learning explanations.
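
Schematically, a goal-contextualised explanation of this kind reports the per-component Q-values of the chosen action together with the hierarchical goal that is currently active. The sketch below uses invented numbers and names to show the shape of such an explanation; it is not the authors' framework.

    # Shape of a goal-contextualised reward-decomposition explanation
    # (hypothetical values; not the authors' implementation).
    import numpy as np

    components = ["reach_target", "avoid_damage", "save_fuel"]
    # Per-goal, per-action vectors of component Q-values (toy numbers).
    Q = {
        "goal:refuel":  {"left": np.array([0.1, 0.4, 0.9]),
                         "right": np.array([0.2, 0.3, 0.2])},
        "goal:deliver": {"left": np.array([0.7, 0.2, 0.1]),
                         "right": np.array([0.9, 0.1, 0.0])},
    }

    def explain(goal: str, action: str) -> dict:
        """Per-component contribution of `action`, in the context of `goal`."""
        return {"goal": goal, "action": action,
                "components": dict(zip(components, Q[goal][action].tolist()))}

    print(explain("goal:refuel", "left"))    # fuel component dominates
    print(explain("goal:deliver", "right"))  # target component dominates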

2022
[149] Paulo Roberto de Melo Barros Junior, Kianny Lopes Bunge, Vitor Hugo Serravalle Reis Rodrigues, Michell Thompson Ferreira Santiago, Euler Bentes dos Santos Marinho and Jose Luis Lima de Jesus Silva. 2022.
Multi-fractal detrended cross-correlation heatmaps for time series analysis.
Scientific Reports, 12(1):????. NATURE PORTFOLIO.
DOI: 10.1038/s41598-022-26207-w.
Note: Funding Agencies|Swedish National Infrastructure for Computing (SNIC) [SNIC 2022/22-843]
fulltext:print: https://liu.diva-portal.org/smash/get/di...

Complex systems in biology, climatology, medicine, and economy hold emergent properties such as non-linearity, adaptation, and self-organization. These emergent attributes can derive from large-scale relationships, connections, and interactive behavior despite not being apparent from their isolated components. It is possible to better comprehend complex systems by analyzing cross-correlations between time series. However, the accumulation of non-linear processes induces multiscale structures, therefore, a spectrum of power-law exponents (the fractal dimension) and distinct cyclical patterns. We propose the Multifractal detrended cross-correlation heatmaps (MF-DCCHM) based on the DCCA cross-correlation coefficients with sliding boxes, a systematic approach capable of mapping the relationships between fluctuations of signals on different scales and regimes. The MF-DCCHM uses the integrated series of magnitudes, sliding boxes with sizes of up to 5% of the entire series, and an average of DCCA coefficients on top of the heatmaps for the local analysis. The heatmaps have shown the same cyclical frequencies from the spectral analysis across different multifractal regimes. Our dataset is composed of sales and inventory from the Brazilian automotive sector and macroeconomic descriptors, namely the Gross Domestic Product (GDP) per capita, Nominal Exchange Rate (NER), and the Nominal Interest Rate (NIR) from the Central Bank of Brazil. Our results indicate cross-correlated patterns that can be directly compared with the power-law spectra for multiple regimes. We have also identified cyclical patterns of high intensities that coincide with the Brazilian presidential elections. The MF-DCCHM uncovers non-explicit cyclic patterns, quantifies the relations of two non-stationary signals (noise effect removed), and has outstanding potential for mapping cross-regime patterns in multiple domains.
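
A drastically simplified stand-in for the sliding-box analysis is to linearly detrend both series inside each window and correlate the residuals, as sketched below. This conveys the flavour of detrended cross-correlation only; the MF-DCCHM itself uses DCCA coefficients, integrated magnitude series and multifractal analysis across regimes.

    # Simplified sliding-window detrended cross-correlation (NOT the MF-DCCHM).
    import numpy as np

    def detrend(x):
        t = np.arange(len(x))
        slope, intercept = np.polyfit(t, x, 1)
        return x - (slope * t + intercept)

    def sliding_detrended_corr(x, y, window):
        return np.array([
            np.corrcoef(detrend(x[i:i + window]), detrend(y[i:i + window]))[0, 1]
            for i in range(len(x) - window + 1)
        ])

    rng = np.random.default_rng(1)
    t = np.arange(500)
    common = np.sin(2 * np.pi * t / 50)                    # shared cycle
    x = common + 0.02 * t + 0.3 * rng.normal(size=t.size)  # plus trend and noise
    y = common - 0.01 * t + 0.3 * rng.normal(size=t.size)
    rho = sliding_detrended_corr(x, y, window=25)          # ~5% of the series
    print(rho.shape, round(float(rho.mean()), 2))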

[148] Joakim Nivre, Ali Basirat, Luise Durlich and Adam Moss. 2022.
Nucleus Composition in Transition-based Dependency Parsing.
Computational linguistics - Association for Computational Linguistics (Print), 48(4):849–886. MIT PRESS.
DOI: 10.1162/coli_a_00450.
Note: Funding Agencies|Swedish Research Council [2016-01817]
fulltext:print: https://liu.diva-portal.org/smash/get/di...

Dependency-based approaches to syntactic analysis assume that syntactic structure can be analyzed in terms of binary asymmetric dependency relations holding between elementary syntactic units. Computational models for dependency parsing almost universally assume that an elementary syntactic unit is a word, while the influential theory of Lucien Tesniere instead posits a more abstract notion of nucleus, which may be realized as one or more words. In this article, we investigate the effect of enriching computational parsing models with a concept of nucleus inspired by Tesniere. We begin by reviewing how the concept of nucleus can be defined in the framework of Universal Dependencies, which has become the de facto standard for training and evaluating supervised dependency parsers, and explaining how composition functions can be used to make neural transition-based dependency parsers aware of the nuclei thus defined. We then perform an extensive experimental study, using data from 20 languages to assess the impact of nucleus composition across languages with different typological characteristics, and utilizing a variety of analytical tools including ablation, linear mixed-effects models, diagnostic classifiers, and dimensionality reduction. The analysis reveals that nucleus composition gives small but consistent improvements in parsing accuracy for most languages, and that the improvement mainly concerns the analysis of main predicates, nominal dependents, clausal dependents, and coordination structures. Significant factors explaining the rate of improvement across languages include entropy in coordination structures and frequency of certain function words, in particular determiners. Analysis using dimensionality reduction and diagnostic classifiers suggests that nucleus composition increases the similarity of vectors representing nuclei of the same syntactic type.
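
One simple instance of such a composition function is a learned projection that merges the vectors of the words forming a nucleus (for example a noun and its determiner) into a single vector. The sketch below is a generic illustration of that idea with invented dimensions; the article compares several concrete composition functions inside actual parsers.

    # Generic nucleus composition function: merge two word vectors into one.
    # Illustrative only; dimensions and parameters are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 8
    W = rng.normal(scale=0.1, size=(d, 2 * d))  # learned jointly with the parser
    b = np.zeros(d)

    def compose(content_vec, function_vec):
        """tanh(W [content; function] + b), one possible composition function."""
        return np.tanh(W @ np.concatenate([content_vec, function_vec]) + b)

    noun, determiner = rng.normal(size=d), rng.normal(size=d)
    nucleus = compose(noun, determiner)
    print(nucleus.shape)  # (8,), a single vector representing the nucleus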

[147] Lena Katharina Schiffer, Marco Kuhlmann and Giorgio Satta. 2022.
Tractable Parsing for CCGs of Bounded Degree.
Computational linguistics - Association for Computational Linguistics (Print), 48(3):593–633. MIT PRESS.
DOI: 10.1162/coli_a_00441.
Note: Funding Agencies|German Research Foundation (DFG) Research Training Group [GRK 1763]; Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation
fulltext:print: https://liu.diva-portal.org/smash/get/di...

Unlike other mildly context-sensitive formalisms, Combinatory Categorial Grammar (CCG) cannot be parsed in polynomial time when the size of the grammar is taken into account. Refining this result, we show that the parsing complexity of CCG is exponential only in the maximum degree of composition. When that degree is fixed, parsing can be carried out in polynomial time. Our finding is interesting from a linguistic perspective because a bounded degree of composition has been suggested as a universal constraint on natural language grammar. Moreover, ours is the first complexity result for a version of CCG that includes substitution rules, which are used in practical grammars but have been ignored in theoretical work.

[146] Per Nilsen, Petra Svedberg, Jens Nygren, Micael Frideros, Jan Johansson and Stephen Schueller. 2022.
Accelerating the impact of artificial intelligence in mental healthcare through implementation science.
Implementation research and practice, 3(??):????. Sage Publications.
DOI: 10.1177/26334895221112033.
fulltext:print: https://liu.diva-portal.org/smash/get/di...

BACKGROUND: The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps regarding how to implement and best use AI to add value to mental healthcare services, providers, and consumers. The aim of this paper is to identify challenges and opportunities for AI use in mental healthcare and to describe key insights from implementation science of potential relevance to understand and facilitate AI implementation in mental healthcare. METHODS: The paper is based on a selective review of articles concerning AI in mental healthcare and implementation science. RESULTS: Research in implementation science has established the importance of considering and planning for implementation from the start, the progression of implementation through different stages, and the appreciation of determinants at multiple levels. Determinant frameworks and implementation theories have been developed to understand and explain how different determinants impact on implementation. AI research should explore the relevance of these determinants for AI implementation. Implementation strategies to support AI implementation must address determinants specific to AI implementation in mental health. There might also be a need to develop new theoretical approaches or augment and recontextualize existing ones. Implementation outcomes may have to be adapted to be relevant in an AI implementation context. CONCLUSION: Knowledge derived from implementation science could provide an important starting point for research on implementation of AI in mental healthcare. This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research. However, when taking advantage of the existing knowledge basis, it is important to also be explorative and study AI implementation in health and mental healthcare as a new phenomenon in its own right since implementing AI may differ in various ways from implementing evidence-based practices in terms of what implementation determinants, strategies, and outcomes are most relevant. Plain Language Summary: The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps concerning how to implement and best use AI to add value to mental healthcare services, providers, and consumers. This paper is based on a selective review of articles concerning AI in mental healthcare and implementation science, with the aim to identify challenges and opportunities for the use of AI in mental healthcare and describe key insights from implementation science of potential relevance to understand and facilitate AI implementation in mental healthcare. AI offers opportunities for identifying the patients most in need of care or the interventions that might be most appropriate for a given population or individual. AI also offers opportunities for supporting a more reliable diagnosis of psychiatric disorders and ongoing monitoring and tailoring during the course of treatment.
However, AI implementation challenges exist at organizational/policy, individual, and technical levels, making it relevant to draw on implementation science knowledge for understanding and facilitating implementation of AI in mental healthcare. Knowledge derived from implementation science could provide an important starting point for research on AI implementation in mental healthcare. This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research.

[145] Michael Boyden, Ali Basirat and Karl Berglund. 2022.
Digital Conceptual History and the Emergence of a Globalized Climate Imaginary.
Contributions to the History of Concepts, 17(2):95–122. BERGHAHN JOURNALS.
DOI: 10.3167/choc.2022.170205.

This article offers an exploratory quantitative analysis of the conceptual career of climate in US English over the period 1800-2010. Our aim is to qualify two, closely related arguments circulating in Environmental Humanities scholarship regarding the concept's history, namely that we only started to think of climate as a global entity after the introduction of general circulation models during the final quarter of the twentieth century, and, second, that climatic change only became an issue of environmental concern once scientists began to approach climate as a global model. While we do not dispute that the computer revolution resulted in a significantly new understanding of climate, our analysis points to a longer process of singularization and growing abstraction starting in the early nineteenth century that might help to nuance and deepen insights developed in environmental history.

[144] Katarina Sperling, Linnéa Stenliden, Jörgen Nissen and Fredrik Heintz. 2022.
Still w(AI)ting for the automation of teaching: An exploration of machine learning in Swedish primary education using Actor-Network Theory.
European Journal of Education, 57(4):584–600. Wiley-Blackwell Publishing Inc..
DOI: 10.1111/ejed.12526.
Fulltext: https://doi.org/10.1111/ejed.12526
fulltext:print: https://liu.diva-portal.org/smash/get/di...

Machine learning and other artificial intelligence (AI) technologies are predicted to play a transformative role in primary education, where these technologies for automation and personalization are now being introduced to classroom instruction. This article explores the rationales and practices by which machine learning and AI are emerging in schools. We report on ethnographic fieldwork in Sweden, where a machine learning teaching aid in mathematics, the AI Engine, was tried out by 22 teachers and more than 250 primary education students. By adopting an Actor-Network Theory approach, the analysis focuses on the interactions within the network of heterogeneous actors bound by the AI Engine as an obligatory passage point. The findings show how the actions and accounts emerging within the complex ecosystem of human actors compensate for the unexpected and undesirable algorithmic decisions of the AI Engine. We discuss expectations about AI in education, contradictions in how the AI Engine worked and uncertainties about how machine learning algorithms ‘learn’ and predict. These factors contribute to our understanding of the potential of automation and personalisation—a process that requires continued re-negotiations. The findings are presented in the form of a fictional play in two acts, an ethnodrama. The ethnodrama highlights controversies in the use of AI in education, such as the lack of transparency in algorithmic decision-making—and how this can play out in real-life learning contexts. The findings of this study contribute to a better understanding of AI in primary education.

[143] Ivan D. Rodriguez, Blai Bonet, Sebastian Sardina and Hector Geffner. 2022.
FOND Planning with Explicit Fairness Assumptions.
The journal of artificial intelligence research, 74(??):887–916. AI Access Foundation ; AAAI Press.
DOI: 10.1613/jair.1.13599.
Note: Funding Agencies|ERC Advanced Grant [885107]; EU [952215]; Knut and Alice Wallenberg (KAW) Foundation under the WASP program
fulltext:print: http://liu.diva-portal.org/smash/get/div...

We consider the problem of reaching a propositional goal condition in fully observable non-deterministic (FOND) planning under a general class of fairness assumptions that are given explicitly. The fairness assumptions are of the form A/B and say that state trajectories that contain infinite occurrences of an action a from A in a state s and finite occurrences of actions from B, must also contain infinite occurrences of action a in s followed by each one of its possible outcomes. The infinite trajectories that violate this condition are deemed as unfair, and the solutions are policies for which all the fair trajectories reach a goal state. We show that strong and strong-cyclic FOND planning, as well as QNP planning, a planning model introduced recently for generalized planning, are all special cases of FOND planning with fairness assumptions of this form which can also be combined. FOND+ planning, as this form of planning is called, combines the syntax of FOND planning with some of the versatility of LTL for expressing fairness constraints. A sound and complete FOND+ planner is implemented by reducing FOND+ planning to answer set programs, and its performance is evaluated in comparison with FOND and QNP planners, and LTL synthesis tools. Two other FOND+ planners are introduced as well which are more scalable but are not complete.
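
Restated in symbols (our notation, paraphrasing the prose above rather than quoting the paper): for a trajectory s_0 a_0 s_1 a_1 ... and a fairness assumption A/B,

    \[
      \forall s\,\forall a \in A:\;
      \Bigl( |\{ i : (s_i, a_i) = (s, a) \}| = \infty
             \;\land\;
             |\{ i : a_i \in B \}| < \infty \Bigr)
      \;\Rightarrow\;
      \forall s' \in F(a, s):\; |\{ i : (s_i, a_i, s_{i+1}) = (s, a, s') \}| = \infty,
    \]
    % where F(a, s) denotes the set of possible outcomes of applying a in s.
    % Trajectories violating the implication are unfair; a policy is a solution
    % when every fair trajectory it induces reaches the goal.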

[142] Peter Vamplew, Benjamin J. Smith, Johan Källström, Gabriel Ramos, Roxana Rădulescu, Diederik M. Roijers, Conor F. Hayes, Fredrik Heintz, Patrick Mannion, Pieter J. K. Libin, Richard Dazeley and Cameron Foale. 2022.
Scalar reward is not enough: a response to Silver, Singh, Precup and Sutton (2021).
Autonomous Agents and Multi-Agent Systems, 36(2):????. Springer.
DOI: 10.1007/s10458-022-09575-5.
Note: Funding: Flemish Government; National Cancer Institute of the U.S. National Institutes of Health [1R01CA240452-01A1]; Research Foundation Flanders (FWO) [1242021N]; Swedish Governmental Agency for Innovation Systems [NFFP7/2017-04885]; Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; National University of Ireland Galway Hardiman Scholarship; FAPERGS [19/2551-0001277-2]; FAPESP [2020/05165-1]
fulltext:print: http://liu.diva-portal.org/smash/get/div...

The recent paper "Reward is Enough" by Silver, Singh, Precup and Sutton posits that the concept of reward maximisation is sufficient to underpin all intelligence, both natural and artificial, and provides a suitable basis for the creation of artificial general intelligence. We contest the underlying assumption of Silver et al. that such reward can be scalar-valued. In this paper we explain why scalar rewards are insufficient to account for some aspects of both biological and computational intelligence, and argue in favour of explicitly multi-objective models of reward maximisation. Furthermore, we contend that even if scalar reward functions can trigger intelligent behaviour in specific cases, this type of reward is insufficient for the development of human-aligned artificial general intelligence due to unacceptable risks of unsafe or unethical behaviour.

[141] Patrick Doherty and Andrzej Szalas. 2022.
A landscape and implementation framework for probabilistic rough sets using PROBLOG.
Information Sciences, 593(??):546–576. Elsevier Science Inc.
DOI: 10.1016/j.ins.2021.12.062.
Note: Funding Agencies|ELLIIT Network Organization for Information and Communication Technology, Sweden; Swedish Foundation for Strategic Research SSF [RIT15-0097]; Guangdong Department of Science and Technology, China [2020A1313030098]; National Science Centre Poland [2017/27/B/ST6/02018]
fulltext:print: http://liu.diva-portal.org/smash/get/div...

Reasoning about uncertainty is one of the main cornerstones of Knowledge Representation. More recently, combining logic with probability has been of major interest. Rough set methods have been proposed for modeling incompleteness and imprecision based on indiscernibility and its generalizations and there is a large body of work in this direction. More recently, the classical theory has been generalized to include probabilistic rough set methods of which there are also a great variety of proposals. Pragmatic, easily accessible, and easy to use tools for specification and reasoning with this wide variety of methods is lacking. It is the purpose of this paper to fill in that gap where the focus will be on probabilistic rough set methods. A landscape of (probabilistic) rough set reasoning methods and the variety of choices involved in specifying them is surveyed first. While doing this, an abstract generalization of all the considered approaches is derived which subsumes each of the methods. One then shows how, via this generalization, one can specify and reason about any of these methods using PROBLOG, a popular and widely used probabilistic logic programming language based on Prolog. The paper also considers new techniques in this context such as the use of probabilistic target sets when defining rough sets and the use of partially specified base relations that are also probabilistic. Additionally, probabilistic approaches using tolerance spaces are proposed. The paper includes a rich set of examples and provides a framework based on a library of generic PROBLOG relations that make specification of any of these methods straightforward, efficient and compact. Complete, ready to run PROBLOG code is included in the Appendix for all examples considered.
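
For orientation, the classical approximations that probabilistic rough set methods relax can be written with standard textbook definitions (Pawlak-style lower and upper approximations and their threshold-based probabilistic variants); the paper's abstract generalization subsumes these and more.

    % Classical rough-set approximations of a set X w.r.t. indiscernibility
    % classes [x], and a threshold-based probabilistic relaxation (textbook forms).
    \[
      \underline{R}(X) = \{\, x : [x] \subseteq X \,\}, \qquad
      \overline{R}(X) = \{\, x : [x] \cap X \neq \emptyset \,\},
    \]
    \[
      \underline{R}_{\alpha}(X) = \{\, x : P(X \mid [x]) \ge \alpha \,\}, \qquad
      \overline{R}_{\beta}(X) = \{\, x : P(X \mid [x]) > \beta \,\},
      \qquad 0 \le \beta < \alpha \le 1.
    \]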

[140] Conor F. Hayes, Roxana Rădulescu, Eugenio Bargiacchi, Johan Källström, Matthew Macfarlane, Mathieu Reymond, Timothy Verstraeten, Luisa M. Zintgraf, Richard Dazeley, Fredrik Heintz, Enda Howley, Athirai A. Irissappane, Patrick Mannion, Ann Nowé, Gabriel Ramos, Marcello Restelli, Peter Vamplew and Diederik M. Roijers. 2022.
A practical guide to multi-objective reinforcement learning and planning.
Autonomous Agents and Multi-Agent Systems, 36(1):????. Springer.
DOI: 10.1007/s10458-022-09552-y.
Note: Funding: Fonds voor Wetenschappelijk Onderzoek (FWO) [1SA2820N]; Flemish Government; European Commission; FWO [iBOF/21/027]; National University of Ireland Galway Hardiman Scholarship; Fundacao de Amparo a Ciencia e Tecnologia do Estado do Rio Grande do Sul (FAPERGS) [19/2551-0001277-2]; Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP) [2020/05165-1]; Swedish Governmental Agency for Innovation Systems (Vinnova) [NFFP7/2017-04885]; Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; LIFT - Dutch Research Council (NWO) [019.011]; 2017 Microsoft Research PhD Scholarship Program; 2020 Microsoft Research EMEA PhD Award
fulltext:print: http://liu.diva-portal.org/smash/get/div...

Real-world sequential decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results. This paper serves as a guide to the application of multi-objective methods to difficult problems, and is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods who wish to adopt a multi-objective perspective on their research, as well as practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems.
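
A first practical consequence of keeping the objectives separate is that policies are compared by Pareto dominance over their value vectors rather than by a single scalar. The sketch below shows such a dominance check and a non-dominated filter; it is a generic illustration, not code from the paper, and the policy names and values are invented.

    # Pareto dominance and a non-dominated filter over policy value vectors
    # (generic illustration; names and numbers are invented).
    def dominates(a, b):
        """a dominates b: at least as good on all objectives, better on one."""
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_front(values):
        return {name: v for name, v in values.items()
                if not any(dominates(w, v) for other, w in values.items()
                           if other != name)}

    # (mission success, fuel economy) for three hypothetical policies:
    policies = {"pi_1": (0.9, 0.2), "pi_2": (0.6, 0.7), "pi_3": (0.5, 0.5)}
    print(pareto_front(policies))  # pi_3 is dominated by pi_2 and drops out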

[139] Edward Curry, Fredrik Heintz, Morten Irgens, Arnold W. M. Smeulders and Stefano Stramigioli. 2022.
Partnership on AI, Data, and Robotics.
Communications of the ACM, 65(4):54–55. ASSOC COMPUTING MACHINERY.
DOI: 10.1145/3513000.

n/a

[138] Johan Källström, R. Granlund and Fredrik Heintz. 2022.
Design of simulation-based pilot training systems using machine learning agents.
Aeronautical Journal, 126(1300):907–931. Cambridge University Press.
DOI: 10.1017/aer.2022.8.
Note: Funding Agencies|Swedish Governmental Agency for Innovation Systems (Vinnova) [NFFP7/2017-04885]; Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; Swedish Research Council; European Commission [2020/5-230]
fulltext:print: http://liu.diva-portal.org/smash/get/div...

The high operational cost of aircraft, limited availability of air space, and strict safety regulations make training of fighter pilots increasingly challenging. By integrating Live, Virtual, and Constructive simulation resources, efficiency and effectiveness can be improved. In particular, if constructive simulations, which provide synthetic agents operating synthetic vehicles, were used to a higher degree, complex training scenarios could be realised at low cost, the need for support personnel could be reduced, and training availability could be improved. In this work, inspired by the recent improvements of techniques for artificial intelligence, we take a user perspective and investigate how intelligent, learning agents could help build future training systems. Through a domain analysis, a user study, and practical experiments, we identify important agent capabilities and characteristics, and then discuss design approaches and solution concepts for training systems to utilise learning agents for improved training value.

[137] Marco Kuhlmann, Andreas Maletti and Lena Katharina Schiffer. 2022.
The tree-generative capacity of combinatory categorial grammars.
Journal of computer and system sciences (Print), 124(??):214–233. Academic Press Ltd - Elsevier Science Ltd.
DOI: 10.1016/j.jcss.2021.10.005.
Note: Funding Agencies|Centre for Industrial IT (CENIIT) [15.02]; German Research Foundation (DFG) Research Training Group GRK 1763 Quantitative Logics and Automata
fulltext:print: http://liu.diva-portal.org/smash/get/div...

The generative capacity of combinatory categorial grammars (CCGs) as generators of tree languages is investigated. It is demonstrated that the tree languages generated by CCGs can also be generated by simple monadic context-free tree grammars. However, the important subclass of pure combinatory categorial grammars cannot even generate all regular tree languages. Additionally, the tree languages generated by combinatory categorial grammars with limited rule degrees are characterized: If only application rules are allowed, then these grammars can generate only a proper subset of the regular tree languages, whereas they can generate exactly the regular tree languages once first-degree composition rules are permitted.

2021
[136] Patrick Doherty, Cyrille Berger, Piotr Rudol and Mariusz Wzorek. 2021.
Hastily formed knowledge networks and distributed situation awareness for collaborative robotics.
Autonomous Intelligent Systems, 1(1):????. Springer.
DOI: 10.1007/s43684-021-00016-w.
Note: Funding Agencies|ELLIIT Network Organization for Information and Communication Technology, Sweden (Project B09) and the Swedish Foundation for Strategic Research SSF (Smart Systems Project RIT15-0097). The first author is also supported by an RExperts Program Grant 2020A1313030098 from the Guangdong Department of Science and Technology, China in addition to a Sichuan Province International Science and Technology Innovation Cooperation Project Grant 2020YFH0160.
Fulltext: https://doi.org/10.1007/s43684-021-00016...

In the context of collaborative robotics, distributed situation awareness is essential for supporting collective intelligence in teams of robots and human agents where it can be used for both individual and collective decision support. This is particularly important in applications pertaining to emergency rescue and crisis management. During operational missions, data and knowledge are gathered incrementally and in different ways by heterogeneous robots and humans. We describe this as the creation of Hastily Formed Knowledge Networks (HFKNs). The focus of this paper is the specification and prototyping of a general distributed system architecture that supports the creation of HFKNs by teams of robots and humans. The information collected ranges from low-level sensor data to high-level semantic knowledge, the latter represented in part as RDF Graphs. The framework includes a synchronization protocol and associated algorithms that allow for the automatic distribution and sharing of data and knowledge between agents. This is done through the distributed synchronization of RDF Graphs shared between agents. High-level semantic queries specified in SPARQL can be used by robots and humans alike to acquire both knowledge and data content from team members. The system is empirically validated and complexity results of the proposed algorithms are provided. Additionally, a field robotics case study is described, where a 3D mapping mission has been executed using several UAVs in a collaborative emergency rescue scenario while using the full HFKN Framework.
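
The kind of content exchanged in such a system (RDF triples queried with SPARQL) can be illustrated locally with a few lines of rdflib. This shows only the data and query model; it is not the distributed HFKN Framework, and the namespace and triples are invented.

    # Local illustration of RDF triples and a SPARQL query with rdflib
    # (pip install rdflib); invented namespace and data, not the HFKN system.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/rescue#")
    g = Graph()
    g.add((EX.area1, RDF.type, EX.SearchArea))
    g.add((EX.area1, EX.scannedBy, EX.uav7))
    g.add((EX.area1, EX.victimCount, Literal(2)))

    query = """
        PREFIX ex: <http://example.org/rescue#>
        SELECT ?area ?count WHERE {
            ?area a ex:SearchArea ;
                  ex:victimCount ?count .
        }
    """
    for area, count in g.query(query):
        print(area, count)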

[135] Olov Andersson, Patrick Doherty, Mårten Lager, Jens-Olof Lindh, Linnea Persson, Elin A. Topp, Jesper Tordenlid and Bo Wahlberg. 2021.
WARA-PS: a research arena for public safety demonstrations and autonomous collaborative rescue robotics experimentation.
Autonomous Intelligent Systems, 1(1):????. Springer Singapore.
DOI: 10.1007/s43684-021-00009-9.
fulltext:print: http://liu.diva-portal.org/smash/get/div...

A research arena (WARA-PS) for sensing, data fusion, user interaction, planning and control of collaborative autonomous aerial and surface vehicles in public safety applications is presented. The objective is to demonstrate scientific discoveries and to generate new directions for future research on autonomous systems for societal challenges. The enabler is a computational infrastructure with a core system architecture for industrial and academic collaboration. This includes a control and command system together with a framework for planning and executing tasks for unmanned surface vehicles and aerial vehicles. The motivating application for the demonstration is marine search and rescue operations. A state-of-art delegation framework for the mission planning together with three specific applications is also presented. The first one concerns model predictive control for cooperative rendezvous of autonomous unmanned aerial and surface vehicles. The second project is about learning to make safe real-time decisions under uncertainty for autonomous vehicles, and the third one is on robust terrain-aided navigation through sensor fusion and virtual reality tele-operation to support a GPS-free positioning system in marine environments. The research results have been experimentally evaluated and demonstrated to industry and public sector audiences at a marine test facility. It would be most difficult to do experiments on this large scale without the WARA-PS research arena. Furthermore, these demonstrator activities have resulted in effective research dissemination with high public visibility, business impact and new research collaborations between academia and industry.

[134] Mariusz Wzorek, Cyrille Berger and Patrick Doherty. 2021.
Router and gateway node placement in wireless mesh networks for emergency rescue scenarios.
Autonomous Intelligent Systems, 1(1):????. Springer.
DOI: 10.1007/s43684-021-00012-0.
Note: Funding Agencies|ELLIIT network organization for Information and Communication Technology; Swedish Foundation for Strategic Research [RIT 15-0097]; Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; The 3rd author was also supported by an RExperts Program Grant 2020A1313030098 from the Guangdong Department of Science and Technology, China and a Sichuan Province International Science and Technology Innovation Cooperation Project Grant 2020YFH0160.
Fulltext: https://doi.org/10.1007/s43684-021-00012...
Link: https://link.springer.com/article/10.100...

The focus of this paper is on base functionalities required for UAV-based rapid deployment of an ad hoc communication infrastructure in the initial phases of rescue operations. The main idea is to use heterogeneous teams of UAVs to deploy communication kits that include routers, and are used in the generation of ad hoc Wireless Mesh Networks (WMN). Several fundamental problems are considered and algorithms are proposed to solve these problems. The Router Node Placement problem (RNP) and a generalization of it that takes into account additional constraints arising in actual field usage is considered first. The RNP problem tries to determine how to optimally place routers in a WMN. A new algorithm, the RRT-WMN algorithm, is proposed to solve this problem. It is based in part on a novel use of the Rapidly Exploring Random Trees (RRT) algorithm used in motion planning. A comparative empirical evaluation between the RRT-WMN algorithm and existing techniques such as the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and Particle Swarm Optimization (PSO), shows that the RRT-WMN algorithm has far better performance both in amount of time taken and regional coverage as the generalized RNP problem scales to realistic scenarios. The Gateway Node Placement Problem (GNP) tries to determine how to locate a minimal number of gateway nodes in a WMN backbone network while satisfying a number of Quality of Service (QoS) constraints. Two alternatives are proposed for solving the combined RNP-GNP problem. The first approach combines the RRT-WMN algorithm with a preexisting graph clustering algorithm. The second approach, WMNbyAreaDecomposition, proposes a novel divide-and-conquer algorithm that recursively partitions a target deployment area into a set of disjoint regions, thus creating a number of simpler RNP problems that are then solved concurrently. Both algorithms are evaluated on real-world GIS models of different size and complexity. WMNbyAreaDecomposition is shown to outperform existing algorithms using 73% to 92% fewer router nodes while at the same time satisfying all QoS requirements.
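
The incremental, sampling-based flavour of RRT-style placement can be caricatured as follows: sample a point, find the nearest already-placed router, and step toward the sample by at most the radio range so the new router stays connected. The toy sketch below does only that; it is not the RRT-WMN algorithm from the paper (no obstacles, coverage objectives, or QoS constraints).

    # Toy RRT-style growth of a connected router placement (NOT RRT-WMN).
    import math
    import random

    def grow_router_tree(start, n_routers, radio_range, area=(100.0, 100.0), seed=0):
        random.seed(seed)
        routers, links = [start], []
        while len(routers) < n_routers:
            sample = (random.uniform(0, area[0]), random.uniform(0, area[1]))
            nearest = min(routers, key=lambda r: math.dist(r, sample))
            d = math.dist(nearest, sample)
            if d == 0.0:
                continue
            step = min(d, radio_range)   # never exceed the radio range
            new = (nearest[0] + (sample[0] - nearest[0]) * step / d,
                   nearest[1] + (sample[1] - nearest[1]) * step / d)
            routers.append(new)
            links.append((nearest, new))
        return routers, links

    routers, links = grow_router_tree((50.0, 50.0), n_routers=20, radio_range=15.0)
    print(len(routers), len(links))  # 20 routers connected by 19 links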

[133] Ali Basirat, Marc Allassonniere-Tang and Aleksandrs Berdicevskis. 2021.
An empirical study on the contribution of formal and semantic features to the grammatical gender of nouns.
Linguistics Vanguard, 7(1):????. WALTER DE GRUYTER GMBH.
DOI: 10.1515/lingvan-2020-0048.
Note: Funding Agencies|IDEXLYON Fellowship Grant [16-IDEX-0005]; University of Lyon Grant NSCO ED 476 [ANR-10-LABX-0081]; French National Research AgencyFrench National Research Agency (ANR) [ANR-11-IDEX-0007]

This study conducts an experimental evaluation of two hypotheses about the contributions of formal and semantic features to the grammatical gender assignment of nouns. One of the hypotheses (Corbett and Fraser 2000) claims that semantic features dominate formal ones. The other hypothesis, formulated within the optimal gender assignment theory (Rice 2006), states that form and semantics contribute equally. Both hypotheses claim that the combination of formal and semantic features yields the most accurate gender identification. In this paper, we operationalize and test these hypotheses by trying to predict grammatical gender using only character-based embeddings (that capture only formal features), only context-based embeddings (that capture only semantic features) and the combination of both. We performed the experiment using data from three languages with different gender systems (French, German and Russian). Formal features are a significantly better predictor of gender than semantic ones, and the difference in prediction accuracy is very large. Overall, formal features are also significantly better than the combination of form and semantics, but the difference is very small and the results for this comparison are not entirely consistent across languages.

[132] Filip Strömbäck, Linda Mannila and Mariam Kamkar. 2021.
The Non-Deterministic Path to Concurrency – Exploring how Students Understand the Abstractions of Concurrency.
Informatics in Education. An International Journal, 20(4):683–715. Vilnius University Press.
DOI: 10.15388/infedu.2021.29.
Fulltext: https://doi.org/10.15388/infedu.2021.29
fulltext:print: http://liu.diva-portal.org/smash/get/div...

Concurrency is often perceived as difficult by students. One reason may be that the abstractions used in concurrent programs leave more situations undefined than in sequential programs (e.g., in what order statements are executed), which makes it harder to create a proper mental model of the execution environment. Students who aim to explore the abstractions through testing are further hindered by the non-determinism of concurrent programs, since even incorrect programs may seem to work properly most of the time. In this paper we aim to explore how students understand these abstractions by examining 137 solutions to two concurrency questions given on the final exam in two years of an introductory concurrency course. To highlight problematic areas of these abstractions, we present alternative abstractions under which each incorrect solution would be correct.

[131] Pontus Haglund, Filip Strömbäck and Linda Mannila. 2021.
Understanding Students' Failure to use Functions as a Tool for Abstraction – An Analysis of Questionnaire Responses and Lab Assignments in a CS1 Python Course.
Informatics in Education. An International Journal, 20(4):583–614. Vilnius University Press.
DOI: 10.15388/infedu.2021.26.
Fulltext: https://doi.org/10.15388/infedu.2021.26
fulltext:print: https://liu.diva-portal.org/smash/get/di...

Controlling complexity through the use of abstractions is a critical part of problem solving in programming. Thus, becoming proficient with procedural and data abstraction through the use of user-defined functions is important. Properly using functions for abstraction involves a number of other core concepts, such as parameter passing, scope and references, which are known to be difficult. Therefore, this paper aims to study students' proficiency with these core concepts and students' ability to apply procedural and data abstraction to solve problems. We collected data from two years of an introductory Python course, both from a questionnaire and from two lab assignments. The data shows that students had difficulties with the core concepts and a number of difficulties when applying abstraction to solve problems. We also investigate the impact of using a visualization tool when teaching the core concepts.

[130] Gerald Steinbauer, Martin Kandlhofer, Tara Chklovski, Fredrik Heintz and Sven Koenig. 2021.
Education in Artificial Intelligence K-12.
Künstliche Intelligenz, 35(2):127–129. Springer.
DOI: 10.1007/s13218-021-00734-6.
Note: Funding Agencies: Graz University of Technology
fulltext:print: http://liu.diva-portal.org/smash/get/div...

[129] Jonas Lundberg, Mattias Arvola and Karljohan Lundin Palmerius. 2021.
Human Autonomy in Future Drone Traffic: Joint Human-AI Control in Temporal Cognitive Work.
Frontiers in Artificial Intelligence, 4(??):????. Frontiers Media S.A.
DOI: 10.3389/frai.2021.704082.
Note: Funding: Swedish Transport Administration
fulltext:print: http://liu.diva-portal.org/smash/get/div...

The roles of human operators are changing due to increased intelligence and autonomy of computer systems. Humans will interact with systems at a more overarching level or only in specific situations. This involves learning new practices and changing habitual ways of thinking and acting, including reconsidering human autonomy in relation to autonomous systems. This paper describes a design case of a future autonomous management system for drone traffic in cities in a key scenario we call The Computer in Brussels. Our approach to designing for human collaboration with autonomous systems builds on scenario-based design and cognitive work analysis facilitated by computer simulations. We use a temporal method, called the Joint Control Framework to describe human and automated work in an abstraction hierarchy labeled Levels of Autonomy in Cognitive Control. We use the Score notation to analyze patterns of temporal developments that span levels of the abstraction hierarchy and discuss implications for human-automation communication in traffic management. We discuss how autonomy at a lower level can prevent autonomy on higher levels, and vice versa. We also discuss the temporal nature of autonomy in minute-to-minute operative work. Our conclusion is that human autonomy in relation to autonomous systems is based on fundamental trade-offs between technological opportunities to automate and values of what human actors find meaningful.

[128] Erik Sandewall. 2021.
Ethics, Human Rights, the Intelligent Robot, and its Subsystem for Moral Beliefs.
International Journal of Social Robotics, 13(4):557–567. Springer.
DOI: 10.1007/s12369-019-00540-z.
fulltext:print: http://liu.diva-portal.org/smash/get/div...

The Universal Declaration of Human Rights specifies a number of properties that characterize human beings, such as dignity, conscience, and several others. In this article we focus on these properties and on how they have been defined in the history of philosophy. We show how they can be interpreted in terms of a prototypical architecture for an intelligent robot, and how the robot can be provided with several aspects of ethical capability in this way. The key idea is to provide the robot with a Moral Belief System that cooperates with, and moderates, the robot's capability of planning and action.

[127] Fredrik Heintz. 2021.
Three Interviews About K-12 AI Education in America, Europe, and Singapore.
Künstliche Intelligenz, 35(??):233–237. SPRINGER HEIDELBERG.
DOI: 10.1007/s13218-021-00730-w.

As the impact and importance of artificial intelligence (AI) grows, there is a growing trend to teach AI in primary and secondary education (K-12). To provide an international perspective, we have conducted three interviews with practitioners and policy makers from AI4K12 in the US (D. Touretzky, C. Gardner-McCune, and D. Seehorn), from Singapore (L. Liew) and from the European Commission (F. Benini).

[126] Gerald Steinbauer, Martin Kandlhofer, Tara Chklovski, Fredrik Heintz and Sven Koenig. 2021.
A Differentiated Discussion About AI Education K-12.
Künstliche Intelligenz, 35(2):131–137. Springer Nature.
DOI: 10.1007/s13218-021-00724-8.
Note: Funding Agencies|Graz University of Technology
fulltext:print: http://liu.diva-portal.org/smash/get/div...

AI education for K-12, and AI literacy in particular, has gained huge interest recently due to the significant influence of AI on daily life, society, and the economy. In this paper we discuss this topic of early AI education along four dimensions: (1) formal versus informal education, (2) cooperation between researchers in AI and in education, (3) the level of education, and (4) concepts and tools.

[125] Full text  Mattias Tiger, David Bergström, Andreas Norrstig and Fredrik Heintz. 2021.
Enhancing Lattice-Based Motion Planning With Introspective Learning and Reasoning.
IEEE Robotics and Automation Letters, 6(3):4385–4392. Institute of Electrical and Electronics Engineers (IEEE).
DOI: 10.1109/LRA.2021.3068550.
Note: Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; National Graduate School in Computer Science (CUGS), Sweden; Excellence Center at Linkoping-Lund for Information Technology (ELLIIT); TAILOR Project - EU Horizon 2020 research and innovation programme [952215]; Knut and Alice Wallenberg FoundationKnut & Alice Wallenberg Foundation [KAW 2019.0350]
Fulltext: https://doi.org/10.1109/LRA.2021.3068550
fulltext:print: https://liu.diva-portal.org/smash/get/di...

Lattice-based motion planning is a hybrid planning method where a plan is made up of discrete actions while simultaneously being a physically feasible trajectory. The planning takes both discrete and continuous aspects into account, for example action pre-conditions and collision-free action duration in the configuration space. Safe motion planning relies on well-calibrated safety margins for collision checking. The trajectory-tracking controller must further be able to reliably execute the motions within this safety margin for the execution to be safe. In this work we are concerned with introspective learning and reasoning about controller performance over time. Normal controller execution of the different actions is learned using machine learning techniques with explicit uncertainty quantification, for safe usage in safety-critical applications. By increasing the model accuracy, the safety margins can be reduced while maintaining the same safety as before. Reasoning takes place both to verify that the learned models stay safe and to improve collision-checking effectiveness in the motion planner by using more accurate execution predictions with a smaller safety margin. The presented approach allows for explicit awareness of controller performance under normal circumstances, and detection of incorrect performance in abnormal circumstances. Evaluation is made on the nonlinear dynamics of a quadcopter in 3D using simulation.
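The introspection idea can be illustrated with a minimal sketch (not from the paper): learn a per-action model of tracking error with explicit uncertainty, derive a safety margin from it, and flag executions that fall outside the learned normal range. The sample data and the 3-sigma threshold below are assumptions for illustration only.

```python
import statistics

# Illustrative sketch: learning a per-action model of trajectory-tracking
# error, and deriving a collision-checking safety margin from it. The paper's
# models and thresholds are more elaborate; the sample data and the 3-sigma
# rule below are assumptions.

def learn_error_model(error_samples):
    """Fit a simple Gaussian model of tracking error for one action type."""
    mean = statistics.fmean(error_samples)
    std = statistics.pstdev(error_samples)
    return mean, std

def safety_margin(mean, std, k=3.0):
    """Margin used for collision checking: expected error plus k std devs."""
    return mean + k * std

def is_abnormal(observed_error, mean, std, k=3.0):
    """Flag executions whose error falls outside the learned normal range."""
    return observed_error > mean + k * std

# Example: errors (in metres) recorded while executing one lattice action.
samples = [0.05, 0.07, 0.04, 0.06, 0.05, 0.08, 0.06]
mu, sigma = learn_error_model(samples)
print("safety margin:", round(safety_margin(mu, sigma), 3), "m")
print("abnormal execution at 0.30 m error:", is_abnormal(0.30, mu, sigma))
```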

[124] Full text  Susanne Kjallander, Linda Mannila, Anna Akerfeldt and Fredrik Heintz. 2021.
Elementary Students First Approach to Computational Thinking and Programming.
Education Sciences, 11(2):????. MDPI.
DOI: 10.3390/educsci11020080.
Note: Funding Agencies|Marcus and Amalia Wallenberg Foundation [MAW 2017.0096]
fulltext:print: https://liu.diva-portal.org/smash/get/di...

Digital competence and programming are actively highlighted areas in education worldwide. They are becoming part of curricula all over the world, including the Swedish elementary school curriculum. Children are expected to develop computational thinking through programming activities, mainly in mathematics, which are supposed to be based on both proven experience and scientific grounds. Both are lacking in the lower grades of elementary school. This article gives unique insight into pupils' learning during their first programming lessons, based on a group of Swedish pupils' experiences when entering school. The goal of the article is to inform education policy and practice. The large interdisciplinary, longitudinal research project studies approximately 1500 students aged 6-16 and their teachers over three years, using video documentation, questionnaires, and focus group interviews. This article reports on empirical data collected during the first year in one class with 30 pupils aged 6-7 years. The social semiotic, multimodal theoretical framework "Design for Learning" is used to investigate potential signs of learning in pupils' multimodal representations when they, for example, use block programming in the primary and secondary transformation unit. We show that young pupils have positive attitudes to programming and high self-efficacy, and that pupils' signs of learning in programming are multimodal and often visible in social interactions.

[123] Patrick Doherty and Andrzej Szalas. 2021.
Rough set reasoning using answer set programs.
International Journal of Approximate Reasoning, 130(March):126–149. Elsevier.
DOI: 10.1016/j.ijar.2020.12.010.
Note: Funding: ELLIIT Network Organization for Information and Communication Technology, Sweden; Swedish Foundation for Strategic Research SSF(Smart Systems Project) [RIT15-0097]; Jinan University (Zhuhai Campus); National Science Centre PolandNational Science Centre, Poland [2017/27/B/ST6/02018]
Fulltext: https://doi.org/10.1016/j.ijar.2020.12.0...
fulltext:print: http://liu.diva-portal.org/smash/get/div...

Reasoning about uncertainty is one of the main cornerstones of Knowledge Representation. Formal representations of uncertainty are numerous and highly varied due to the different types of uncertainty intended to be modeled, such as vagueness, imprecision and incompleteness. There is a rich body of theoretical results that has been generated for many of these approaches. It is often the case, though, that pragmatic tools for reasoning with uncertainty lag behind this rich body of theoretical results. Rough set theory is one such approach for modeling incompleteness and imprecision based on indiscernibility and its generalizations. In this paper, we provide a pragmatic tool for constructively reasoning with generalized rough set approximations that is based on the use of Answer Set Programming (ASP). We provide an interpretation of answer sets as (generalized) approximations of crisp sets (when possible) and show how to use ASP solvers as a tool for reasoning about (generalized) rough set approximations situated in realistic knowledge bases. The paper includes generic ASP templates for doing this and also provides a case study showing how these techniques can be used to generate reducts for incomplete information systems. Complete, ready-to-run clingo ASP code is provided in the Appendix for all programs considered. These programs can be executed for validation purposes in the clingo ASP solver.
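As background, the classical Pawlak approximations that the paper generalizes and encodes in ASP can be computed directly. The following Python sketch, with a made-up toy universe and not taken from the paper or its clingo code, shows lower and upper approximations of a crisp set under an indiscernibility partition.

```python
# Illustrative sketch: classical Pawlak-style lower and upper approximations
# of a set under an equivalence (indiscernibility) relation. The toy universe
# below is made up for illustration.

def approximations(universe, blocks, target):
    """blocks: partition of `universe` into indiscernibility classes."""
    lower, upper = set(), set()
    for block in blocks:
        if block <= target:            # whole class inside the target set
            lower |= block
        if block & target:             # class overlaps the target set
            upper |= block
    boundary = upper - lower
    return lower, upper, boundary

U = {1, 2, 3, 4, 5, 6}
classes = [{1, 2}, {3, 4}, {5, 6}]     # indiscernibility classes
X = {1, 2, 3}                          # the crisp set to approximate

low, up, bnd = approximations(U, classes, X)
print("lower:", low)      # {1, 2}
print("upper:", up)       # {1, 2, 3, 4}
print("boundary:", bnd)   # {3, 4}
```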

[122] Jonas Lundberg and Björn Johansson. 2021.
A framework for describing interaction between human operators and autonomous, automated, and manual control systems.
Cognition, Technology & Work, 23(??):381–401. SPRINGER LONDON LTD.
DOI: 10.1007/s10111-020-00637-w.
Note: Funding Agencies|Linkoping University - Swedish Transport Administration; Air Navigation Services of Sweden
fulltext:print: https://liu.diva-portal.org/smash/get/di...

This paper addresses how to describe critical episodes of interaction between human operators and autonomous, automated, and manual control systems. The first part of the paper poses three questions: (1) what levels of cognitive control are important to include in a descriptive framework for joint human-autonomy in process control; (2) how should one describe temporal developments in joint socio-technical systems; and (3) how does one analyse communication and control at the system joints. The paper proceeds by proposing a new framework for description and analysis, the Joint Control Framework (JCF), with a simple notation, the Score (JCF-S). It allows descriptions of the three previously mentioned aspects through three analytical activities: process mapping (PM), analysis of Levels of Autonomy in Cognitive Control (LACC), and temporal descriptions of human-machine interaction (T-HMI) through the Score notation. This facilitates analyses across cases and domains. The framework is discussed based on an analysis of two episodes; one work episode (from an air traffic control tower simulator); and one work procedure (from an unmanned traffic management system design concept).

2020
[121] Jonas Lundberg and Björn Johansson. 2020.
A framework for describing interaction between human operators and autonomous, automated, and manual control systems.
Cognition, Technology & Work, 23(3):381–401.
DOI: 10.1007/s10111-020-00637-w.
fulltext:print: https://liu.diva-portal.org/smash/get/di...

This paper addresses how to describe critical episodes of interaction between human operators and autonomous, automated, and manual control systems. The first part of the paper poses three questions: (1) what levels of cognitive control are important to include in a descriptive framework for joint human-autonomy in process control; (2) how should one describe temporal developments in joint socio-technical systems; and (3) how does one analyse communication and control at the system joints. The paper proceeds by proposing a new framework for description and analysis, the Joint Control Framework (JCF), with a simple notation, the Score (JCF-S). It allows descriptions of the three previously mentioned aspects through three analytical activities: process mapping (PM), analysis of Levels of Autonomy in Cognitive Control (LACC), and temporal descriptions of human–machine interaction (T-HMI) through the Score notation. This facilitates analyses across cases and domains. The framework is discussed based on an analysis of two episodes; one work episode (from an air traffic control tower simulator); and one work procedure (from an unmanned traffic management system design concept).

[120] J. Luis Silva, Barbara Brena and C. Moyses Araujo. 2020.
g-C3N4/WTe2 Hybrid Electrocatalyst for Efficient Hydrogen Evolution Reaction.
The Journal of Physical Chemistry C, 124(16):8726–8735. American Chemical Society (ACS).
DOI: 10.1021/acs.jpcc.9b11982.

Recent experiments have highlighted the efficiency of nonprecious metal-based hybrid structures, such as g-C<sub>3</sub>N<sub>4</sub>/MoS<sub>2</sub> and g-C<sub>3</sub>N<sub>4</sub>/graphene for hydrogen evolution reaction (HER). This work focuses on the interface effects of such hybrid heterostructures that could lead to the enhanced catalytic activity of g-C<sub>3</sub>N<sub>4</sub>. We have concentrated on the hybrid electrocatalysts with the architecture g-C<sub>3</sub>N<sub>4</sub>/X (X = WTe<sub>2</sub>, MoS<sub>2</sub>, and graphene), where the interface plays an important role in the overall HER. These promising candidates have been assessed using three main factors extracted from density functional theory calculations, namely: (i) the free energy of hydrogen adsorption on the catalytic site Δ<em>G</em><sub>H</sub>, (ii) Schottky barrier potentials, and (iii) induced charge polarization across the interface. We have found that particularly g-C<sub>3</sub>N<sub>4</sub>/WTe<sub>2</sub> displays a suitable combination of the investigated properties standing out as a potential electrocatalyst for efficient hydrogen evolution reaction. Furthermore, the electronic structure fingerprints controlling the HER thermodynamics have been investigated. In particular, the N–H bonds have been found to display strong s–p hybridization and, additionally, Δ<em>G</em><sub>H</sub> decreases as the center of N p-band approaches the Fermi energy. This is also a relevant result in understanding HER mechanisms of organic compounds.

[119] Jalal Nouri, Lechen Zhang, Linda Mannila and Eva Norén. 2020.
Development of computational thinking, digital competence and 21st century skills when learning programming in K-9.
Education Inquiry, 11(1):????. Routledge.
DOI: 10.1080/20004508.2019.1627844.

Teachers around the world have started teaching programming at the K-9 level, some due to the formal introduction of programming in the national curriculum, others without such pressure and on their own initiative. In this study, we attempted to understand which skills – both CT-related and general – are developed among pupils in the process of working with programming in schools. To do so, we interviewed 19 Swedish teachers who had been teaching programming for a couple of years on their own initiative. The teachers were selected based on their experience in teaching programming. Our thematic analysis of these interviews shed light on what skills teachers perceive pupils develop when programming. This led us to identify three themes related to CT skills and five themes related to general skills. The CT skills identified corresponded well with and were thus thematically structured according to the dimensions of CT proposed in the framework of Brennan and Resnick, namely computational concepts, computational practices and computational perspectives. In addition to the CT skills, our thematic analysis also resulted in the identification of general skills related to digital competency and 21st century skills, namely cognitive skills and attitudes, language skills, collaborative skills and attitudes and creative problem-solving skills and attitudes.

[118] Andrzej Szalas. 2020.
On the Probability and Cost of Ignorance, Inconsistency, Nonsense and More.
Journal of Multiple-Valued Logic and Soft Computing, 34(5-6):423–450. Old City Publishing.
Note: Funding agencies:  National Science Centre PolandNational Science Center, PolandNational Science Centre, Poland [2017/27/B/ST6/02018]
Journal home page: https://www.oldcitypublishing.com/journa...

Ignorance, inconsistency, nonsense and similar phenomena are omnipresent in everyday reasoning. They have been intensively studied, especially in the area of multiple-valued logics. Therefore we develop a framework for belief bases, combining multiple-valued and probabilistic reasoning, with the main focus on the way belief bases are actually used and accessed through queries. As an implementation tool we use a probabilistic programming language PROBLOG. Though based on distribution semantics with the independence assumption, we show how its constructs can successfully be used in implementing the considered logics and belief bases. In particular, we develop a technique for shifting probabilistic dependencies to the level of symbolic parts of belief bases. We also discuss applications of the framework in reasoning with Likert-type scales, widely exploited in questionnaire-based experimental research in psychology, economics, sociology, politics, public opinion measurements, and related areas.

[117] Andrzej Szalas. 2020.
A Paraconsistent ASP-like Language with Tractable Model Generation.
Journal of Applied Logics - IfCoLog Journal of Logic and Applications, 7(3):361–389. College Publications.
Note: Funding agencies: This work has been supported by grant 2017/27/B/ST6/02018 of the National Science Centre Poland.
Link to article: http://www.collegepublications.co.uk/dow...
fulltext:print: http://liu.diva-portal.org/smash/get/div...

Answer Set Programming (ASP) is nowadays a dominant rule-based knowledge representation tool. Though existing ASP variants enjoy efficient implementations, generating an answer set remains intractable. The goal of this research is to define a new ASP-like rule language, 4SP, with tractable model generation. The language combines ideas of ASP and a paraconsistent rule language 4QL. Though 4SP shares the syntax of ASP and for each program all its answer sets are among 4SP models, the new language differs from ASP in its logical foundations, the intended methodology of its use and complexity of computing models. As we show in the paper, 4QL can be seen as a paraconsistent counterpart of ASP programs stratified with respect to default negation. Although model generation for 4QL programs is tractable, dropping stratification makes it intractable for both 4QL and ASP. To retain tractability while allowing non-stratified programs, in 4SP we introduce trial expressions interlacing programs with hypotheses as to the truth values of default negations. This allows us to develop a model generation algorithm with deterministic polynomial time complexity. We also show relationships among 4SP, ASP and 4QL.

[116] Full text  Fredrik Präntare and Fredrik Heintz. 2020.
An anytime algorithm for optimal simultaneous coalition structure generation and assignment.
Autonomous Agents and Multi-Agent Systems, 34(1):????. SPRINGER.
DOI: 10.1007/s10458-020-09450-1.
Note: Funding Agencies|Linkoping University; Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation
fulltext:print: http://liu.diva-portal.org/smash/get/div...

An important research problem in artificial intelligence is how to organize multiple agents, and coordinate them, so that they can work together to solve problems. Coordinating agents in a multi-agent system can significantly affect the system's performance - the agents can, in many instances, be organized so that they can solve tasks more efficiently, and consequently benefit collectively and individually. Central to this endeavor is coalition formation - the process by which heterogeneous agents organize and form disjoint groups (coalitions). Coalition formation often involves finding a coalition structure (an exhaustive set of disjoint coalitions) that maximizes the system's potential performance (e.g., social welfare) through coalition structure generation. However, coalition structure generation typically has no notion of goals. In cooperative settings, where coordination of multiple coalitions is important, this may generate suboptimal teams for achieving and accomplishing the tasks and goals at hand. With this in mind, we consider simultaneously generating coalitions of agents and assigning the coalitions to independent alternatives (e.g., tasks/goals), and present an anytime algorithm for the simultaneous coalition structure generation and assignment problem. This combinatorial optimization problem has many real-world applications, including forming goal-oriented teams. To evaluate the presented algorithm's performance, we present five methods for synthetic problem set generation, and benchmark the algorithm against the industry-grade solver CPLEX using randomized data sets of varying distribution and complexity. To test its anytime performance, we compare the quality of its interim solutions against those generated by a greedy algorithm and pure random search. Finally, we also apply the algorithm to solve the problem of assigning agents to regions in a major commercial strategy game, and show that it can be used in game-playing to coordinate smaller sets of agents in real-time.
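On tiny instances the simultaneous coalition structure generation and assignment problem can be solved by exhaustive search, which makes the problem statement concrete. The Python sketch below does exactly that; the value function and instance are made up, and the paper's anytime algorithm is of course far more scalable than this brute force.

```python
from itertools import product

# Illustrative sketch: exhaustive search for simultaneous coalition structure
# generation and assignment on a tiny instance. Each agent is assigned to
# exactly one task; the value of a solution is the sum of task-specific
# coalition values. The toy value function below is a made-up example.

def best_assignment(agents, tasks, value):
    """value(task, coalition) -> utility of assigning that coalition to task."""
    best, best_val = None, float("-inf")
    # Enumerate every way of mapping each agent to one of the tasks.
    for choice in product(range(len(tasks)), repeat=len(agents)):
        coalitions = [frozenset(a for a, t in zip(agents, choice) if t == i)
                      for i in range(len(tasks))]
        val = sum(value(tasks[i], c) for i, c in enumerate(coalitions))
        if val > best_val:
            best, best_val = coalitions, val
    return best, best_val

def toy_value(task, coalition):
    # Hypothetical utility: each task prefers a particular team size.
    preferred = {"patrol": 1, "rescue": 2}[task]
    return -abs(len(coalition) - preferred)

teams, utility = best_assignment(["a1", "a2", "a3"], ["patrol", "rescue"], toy_value)
print(teams, utility)
```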

[115] Full text  Mattias Tiger and Fredrik Heintz. 2020.
Incremental Reasoning in Probabilistic Signal Temporal Logic.
International Journal of Approximate Reasoning, 119(??):325–352. Elsevier.
DOI: 10.1016/j.ijar.2020.01.009.
Note: Funding agencies: National Graduate School in Computer Science, Sweden (CUGS); Swedish Research Council (VR) Linnaeus Center CADICSSwedish Research Council; ELLIIT Excellence Center at Linkoping-Lund for Information Technology; Wallenberg AI, Autonomous Systems and Softwar
Fulltext: https://doi.org/10.1016/j.ijar.2020.01.0...
fulltext:print: http://liu.diva-portal.org/smash/get/div...

Robot safety is of growing concern given recent developments in intelligent autonomous systems. For complex agents operating in uncertain, complex and rapidly-changing environments it is difficult to guarantee safety without imposing unrealistic assumptions and restrictions. It is therefore necessary to complement traditional formal verification with monitoring of the running system after deployment. Runtime verification can be used to monitor that an agent behaves according to a formal specification. The specification can contain safety-related requirements and assumptions about the environment, environment-agent interactions and agent-agent interactions. A key problem is the uncertain and changing nature of the environment. This necessitates requirements on how probable a certain outcome is and on predictions of future states. We propose Probabilistic Signal Temporal Logic (ProbSTL) by extending Signal Temporal Logic with a sub-language to allow statements over probabilities, observations and predictions. We further introduce and prove the correctness of the incremental stream reasoning technique progression over well-formed formulas in ProbSTL. Experimental evaluations demonstrate the applicability and benefits of ProbSTL for robot safety.
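The core idea of progression, which the paper extends to probabilistic signals, can be illustrated on a tiny propositional fragment. The Python sketch below is an illustration only, not ProbSTL: it progresses an "always" formula over a stream of observations and reports a violation as soon as one occurs.

```python
# Illustrative sketch: progression-style runtime monitoring of a very small
# temporal-logic fragment over a stream of states. ProbSTL adds probabilities,
# real-valued signals and predictions; this only shows the basic idea of
# rewriting ("progressing") a formula as each new observation arrives.

TRUE, FALSE = ("true",), ("false",)

def progress(formula, state):
    """Return the residual formula after observing one state (a dict)."""
    op = formula[0]
    if op in ("true", "false"):
        return formula
    if op == "prop":                       # atomic proposition
        return TRUE if state.get(formula[1], False) else FALSE
    if op == "always":                     # G phi == phi and X G phi
        now = progress(formula[1], state)
        return FALSE if now == FALSE else formula
    if op == "eventually":                 # F phi == phi or X F phi
        now = progress(formula[1], state)
        return TRUE if now == TRUE else formula
    raise ValueError(f"unknown operator {op}")

# Monitor "always safe" over a stream of observations.
spec = ("always", ("prop", "safe"))
for observation in [{"safe": True}, {"safe": True}, {"safe": False}]:
    spec = progress(spec, observation)
    print(spec)
# Prints the unchanged formula twice, then ('false',): the property is violated.
```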

2019
[114] Lukasz Bialek, Barbara Dunin-Keplicz and Andrzej Szalas. 2019.
A paraconsistent approach to actions in informationally complex environments.
Annals of Mathematics and Artificial Intelligence, 86(4):231–255. SPRINGER.
DOI: 10.1007/s10472-019-09627-9.
Note: Funding Agencies|Polish National Science Centre [2015/19/B/ST6/02589]; ELLIIT Network Organization for Information and Communication Technology; Swedish Foundation for Strategic Research FSR (SymbiKBot Project)
fulltext:print: http://liu.diva-portal.org/smash/get/div...

Contemporary systems situated in real-world open environments frequently have to cope with incomplete and inconsistent information, which typically increases the complexity of reasoning and decision processes. Realistic modeling of such informationally complex environments calls for nuanced tools. In particular, incomplete and inconsistent information should neither trivialize nor block reasoning and planning. The paper introduces ACTLOG, a rule-based four-valued language designed to specify actions in a paraconsistent and paracomplete manner. ACTLOG is an extension of 4QL(Bel), a language for reasoning with paraconsistent belief bases. Each belief base stores multiple world representations. In this context, an ACTLOG action may be seen as a belief base transformer. In contrast to other approaches, ACTLOG actions can be executed even when the underlying belief base content is inconsistent and/or partial. ACTLOG provides nuanced action specification tools, allowing for a subtle interplay among various forms of nonmonotonic, paraconsistent, paracomplete and doxastic reasoning methods applicable in informationally complex environments. Despite its rich modeling possibilities, it remains tractable. ACTLOG permits composite actions by using sequential and parallel compositions as well as conditional specifications. The framework is illustrated on a decontamination case study known from the literature.

[113] Jacek Szklarski, Lukasz Bialek and Andrzej Szalas. 2019.
Paraconsistent Reasoning in Cops and Robber Game with Uncertain Information: A Simulation-Based Analysis.
International Journal of Uncertainty Fuzziness and Knowledge-Based Systems, 27(3):429–455. WORLD SCIENTIFIC PUBL CO PTE LTD.
DOI: 10.1142/S021848851950020X.
Note: Funding Agencies|Polish National Science Centre [2012/05/B/ST6/03094, 2015/19/B/ST6/02589]

We apply a non-classical four-valued logic in the process of reasoning about strategies for cops in a modified game of "Cops and Robber" played on a graph. We extend the game by introducing uncertainty in the form of random failures of detecting devices. This is realized by allowing that a robber can be detected in a node only with a given probability P-A. Additionally, with probability P-F, cops can be given a false positive, i.e., they are informed that the robber is located at some node whereas it is located somewhere else. Consequently, non-zero P-F introduces measurement noise into the system. All the cops have access to information provided by the detectors and can communicate with each other, so they can coordinate the search. By adjusting the number of detectors, P-A, and P-F we can achieve a smooth transition between the two well-known variants of the game: "with fully visible robber" and "with invisible robber". We compare a simple probabilistic strategy for cops with a non-parametric strategy based on reasoning with a four-valued paraconsistent logic. It is shown that this novel approach leads to good performance, as measured by the required mean catch time. We conclude that this type of reasoning can be applied in real-world applications where there is no knowledge about the underlying source of errors, which is particularly useful in robotics.

[112] Daniele Dell'Aglio, Thomas Eiter, Fredrik Heintz and Danh Le-Phuoc. 2019.
Special issue on stream reasoning.
Semantic Web, 10(3):453–455. IOS PRESS.
DOI: 10.3233/SW-190351.

n/a

[111] Full text  Magnus Selin, Mattias Tiger, Daniel Duberg, Fredrik Heintz and Patric Jensfelt. 2019.
Efficient Autonomous Exploration Planning of Large Scale 3D-Environments.
IEEE Robotics and Automation Letters, 4(2):1699–1706. Institute of Electrical and Electronics Engineers (IEEE).
DOI: 10.1109/LRA.2019.2897343.
fulltext:postprint: https://liu.diva-portal.org/smash/get/di...

Exploration is an important aspect of robotics, whether it is for mapping, rescue missions or path planning in an unknown environment. Frontier Exploration planning (FEP) and Receding Horizon Next-Best-View planning (RH-NBVP) are two different approaches with different strengths and weaknesses. FEP explores a large environment consisting of separate regions with ease, but is slow at reaching full exploration due to moving back and forth between regions. RH-NBVP shows great potential and efficiently explores individual regions, but has the disadvantage that it can get stuck in large environments not exploring all regions. In this work we present a method that combines both approaches, with FEP as a global exploration planner and RH-NBVP for local exploration. We also present techniques to estimate potential information gain faster, to cache previously estimated gains and to exploit these to efficiently estimate new queries.
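The global part of such a planner rests on frontier detection: free cells bordering unknown space. The Python sketch below, a toy illustration with a made-up grid rather than the paper's implementation, shows this primitive on a 2D occupancy grid.

```python
# Illustrative sketch: detecting frontier cells (free cells adjacent to
# unknown space) in a 2D occupancy grid, the core primitive of frontier
# exploration planning. The grid values and layout below are made up; the
# paper combines this global view with local next-best-view planning in 3D.

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    result = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN
                   for nr, nc in neighbours):
                result.append((r, c))
    return result

grid = [
    [FREE, FREE, UNKNOWN],
    [FREE, OCCUPIED, UNKNOWN],
    [FREE, FREE, FREE],
]
print(frontiers(grid))   # free cells bordering unknown space
```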

2018
[110] Pia Niemelä, Tiina Partanen, Linda Mannila, Timo Poranen and Hannu-Matti Järvinen. 2018.
Code ABC MOOC for Math Teachers.
Communications in Computer and Information Science, 865(??):66–96. Springer.
DOI: 10.1007/978-3-319-94640-5_4.

Computing is the latest add-on to enhance the K-12 curricula of many countries, with the purpose of closing the digital skills gap. The revised Finnish Curriculum 2014 integrates computing mainly into math. Consequently, Finland needs to train math teachers to teach computing at elementary level. This study describes the Python and Racket tracks of the Code ABC MOOC that introduce programming basics for math teachers. Their suitability for math is compared based on the course content and feedback. The results show that conceptually the functional paradigm of Racket approaches math more closely, in particular algebra. In addition, Racket is generally regarded as more challenging in terms of syntax and e.g. for utilizing recursion as an iteration mechanism. Math teachers also rank its suitability higher because the content and exercises of the track are specifically tailored for their subject.

[109] Francesco Luca De Angelis, Giovanna Di Marzo Serugendo and Andrzej Szalas. 2018.
Paraconsistent Rule-Based Reasoning with Graded Truth Values.
, 5(1):185–220. College Publications.
Link: http://www.collegepublications.co.uk/dow...

Modern artificial systems, such as cooperative traffic systems or swarm robotics, are made of multiple autonomous agents, each handling uncertain, partial and potentially inconsistent information, used in their reasoning and decision making. Graded reasoning, being a suitable tool for addressing phenomena related to such circumstances, is investigated in the literature in many contexts – from graded modal logics to various forms of approximate reasoning. In this paper we first introduce a family of many-valued paraconsistent logics parametrised by a number of truth/falsity/inconsistency grades allowing one to handle multiple truth-values at the desired level of accuracy. Second, we define a corresponding family of rule-based languages with graded truth-values as first-class citizens, enjoying tractable query evaluation. In addition, we introduce introspection operators allowing one to resolve inconsistencies and/or lack of information in a non-monotonic manner. We illustrate and discuss the use of the framework in an autonomous robot scenario.

[108] Alexander Kleiner. 2018.
The Low-Cost Evolution of AI in Domestic Floor Cleaning Robots.
The AI Magazine, 39(2):89–90. AMER ASSOC ARTIFICIAL INTELL.
DOI: 10.1609/aimag.v39i2.2806.

This article discusses AI methods deployed on domestic floor cleaning robots in the recent past and the way in which those methods are changing today. Formerly, innovations were tightly coupled with a price point customers were willing to pay. Today, there is a substantial increase in the AI found in these systems, driven by new challenges and scalable infrastructures.

2017
[107] Full text  Oleg Burdakov, Jonas Kvarnström and Patrick Doherty. 2017.
Optimal scheduling for replacing perimeter guarding unmanned aerial vehicles.
Annals of Operations Research, 249(1):163–174. Springer.
DOI: 10.1007/s10479-016-2169-5.
fulltext:postprint: http://liu.diva-portal.org/smash/get/div...

Guarding the perimeter of an area in order to detect potential intruders is an important task in a variety of security-related applications. This task can in many circumstances be performed by a set of camera-equipped unmanned aerial vehicles (UAVs). Such UAVs will occasionally require refueling or recharging, in which case they must temporarily be replaced by other UAVs in order to maintain complete surveillance of the perimeter. In this paper we consider the problem of scheduling such replacements. We present optimal replacement strategies and justify their optimality.

2016
[106] Full text  Mehul Bhatt, Esra Erdem, Fredrik Heintz and Michael Spranger. 2016.
Cognitive robotics in JOURNAL OF EXPERIMENTAL and THEORETICAL ARTIFICIAL INTELLIGENCE, vol 28, issue 5, pp 779-780.
Journal of experimental and theoretical artificial intelligence (Print), 28(5):779–780. TAYLOR & FRANCIS LTD.
DOI: 10.1080/0952813X.2016.1218649.

n/a

[105] Alexander Kleiner, Fredrik Heintz and Satoshi Tadokoro. 2016.
Editorial: Special Issue on Safety, Security, and Rescue Robotics (SSRR), Part 2.
Journal of Field Robotics, 33(4):409–410. WILEY-BLACKWELL.
DOI: 10.1002/rob.21661.

n/a

[104] Alexander Kleiner, Fredrik Heintz and Satoshi Tadokoro. 2016.
Editorial: Special Issue on Safety, Security, and Rescue Robotics (SSRR), Part 1.
Journal of Field Robotics, 33(3):263–264. WILEY-BLACKWELL.
DOI: 10.1002/rob.21653.

n/a

[103] Full text  Mikael Nilsson, Jonas Kvarnström and Patrick Doherty. 2016.
Efficient Processing of Simple Temporal Networks with Uncertainty: Algorithms for Dynamic Controllability Verification.
Acta Informatica, 53(6-8):723–752. Springer Publishing Company.
DOI: 10.1007/s00236-015-0248-8.
fulltext:postprint: http://liu.diva-portal.org/smash/get/div...

Temporal formalisms are essential for reasoning about actions that are carried out over time. The exact durations of such actions are generally hard to predict. In temporal planning, the resulting uncertainty is often worked around by only considering upper bounds on durations, with the assumption that when an action happens to be executed more quickly, the plan will still succeed. However, this assumption is often false: If we finish cooking too early, the dinner will be cold before everyone is ready to eat. Using <em>Simple Temporal Networks with Uncertainty (STNU)</em>, a planner can correctly take both lower and upper duration bounds into account. It must then verify that the plans it generates are executable regardless of the actual outcomes of the uncertain durations. This is captured by the property of <em>dynamic controllability</em> (DC), which should be verified incrementally during plan generation. Recently a new incremental algorithm for verifying dynamic controllability was proposed: <em>EfficientIDC</em>, which can verify whether an STNU that is DC remains DC after the addition or tightening of a constraint (corresponding to a new action being added to a plan). The algorithm was shown to have a worst-case complexity of <em>O</em>(n<sup>4</sup>) for each addition or tightening. This can be amortized over the construction of a whole STNU for an amortized complexity in <em>O</em>(n<sup>3</sup>). In this paper we improve the <em>EfficientIDC</em> algorithm in a way that prevents it from having to reprocess nodes. This improvement leads to a lower worst-case complexity in <em>O</em>(n<sup>3</sup>).
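As background for readers new to temporal networks: checking consistency of a plain STN (without uncertainty) amounts to detecting negative cycles in its distance graph. The Python sketch below shows only that classical check; dynamic controllability of an STNU, the subject of the paper, requires considerably more machinery. The example constraints are made up.

```python
# Illustrative sketch: consistency checking for a plain Simple Temporal
# Network (no uncertainty) via negative-cycle detection in its distance
# graph, using Bellman-Ford-style relaxation. Dynamic controllability of an
# STNU, as handled by EfficientIDC, is a stronger property not shown here.

def consistent(num_nodes, edges):
    """edges: list of (u, v, w) meaning time(v) - time(u) <= w."""
    dist = [0.0] * num_nodes           # virtual source at distance 0 to all
    for _ in range(num_nodes):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return True                # no update: no negative cycle
    # If one more relaxation round still improves something: negative cycle.
    return not any(dist[u] + w < dist[v] for u, v, w in edges)

# Two time points A(0), B(1): start cooking (A) and dinner ready (B),
# with 20 <= B - A <= 30 encoded as B - A <= 30 and A - B <= -20.
print(consistent(2, [(0, 1, 30), (1, 0, -20)]))                  # True
# Adding B - A <= 10 contradicts the lower bound of 20.
print(consistent(2, [(0, 1, 30), (1, 0, -20), (0, 1, 10)]))      # False
```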

[102] Full text  Håkan Warnquist, Jonas Kvarnström and Patrick Doherty. 2016.
A Modeling Framework for Troubleshooting Automotive Systems.
Applied Artificial Intelligence, 30(3):257–296. Taylor & Francis.
DOI: 10.1080/08839514.2016.1156955.
Note: The published article is a shorter version than the version in manuscript form. The status of this article was earlier Manuscript. Funding agencies: Scania CV AB; FFI - Strategic Vehicle Research and Innovation; Excellence Center at Linkoping and Lund in Information Technology (ELLIIT); Research Council (VR) Linnaeus Center CADICS
fulltext:postprint: http://liu.diva-portal.org/smash/get/div...

This article presents a novel framework for modeling the troubleshooting process for automotive systems such as trucks and buses. We describe how a diagnostic model of the troubleshooting process can be created using event-driven, nonstationary, dynamic Bayesian networks. Exact inference in such a model is in general not practically possible. Therefore, we evaluate different approximate methods for inference based on the Boyen–Koller algorithm. We identify relevant model classes that have particular structure such that inference can be made with linear time complexity. We also show how models created using expert knowledge can be tuned using statistical data. The proposed learning mechanism can use data that is collected from a heterogeneous fleet of modular vehicles that can consist of different components. The proposed framework is evaluated both theoretically and experimentally on an application example of a fuel injection system.

2015
[101] Daniel de Leng. 2015.
Querying Flying Robots and Other Things: Ontology-supported stream reasoning.
, 22(2):44–47. Association for Computing Machinery (ACM).
Note: DOI does not work: 10.1145/2845155
Link to publication: http://xrds.acm.org/article.cfm?aid=2845...

A discussion on the role of ontologies and stream reasoning in Internet of Things applications.

[100] Linh Anh Nguyen, Thi-Bich-Loc Nguyen and Andrzej Szalas. 2015.
Towards richer rule languages with polynomial data complexity for the Semantic Web.
Data & Knowledge Engineering, 96-97(??):57–77. ELSEVIER SCIENCE BV.
DOI: 10.1016/j.datak.2015.04.005.

We introduce a Horn description logic called Horn-DL, which is strictly and essentially richer than Horn-RegI, Horn-SHIQ and Horn-SROIQ, while still having PTime data complexity. In comparison with Horn-SROIQ, Horn-DL additionally allows the universal role and assertions of the form irreflexive(s), ¬s(a, b), and a ≠ b. More importantly, in contrast to all the well-known Horn fragments EL, DL-Lite, DLP, Horn-SHIQ, and Horn-SROIQ of description logics, Horn-DL allows a form of the concept constructor "universal restriction" to appear at the left-hand side of terminological inclusion axioms. Namely, a universal restriction can be used in such places in conjunction with the corresponding existential restriction. We develop the first algorithm with PTime data complexity for checking satisfiability of Horn-DL knowledge bases.

[99] Barbara Dunin-Keplicz, Alina Strachocka, Andrzej Szalas and Rineke Verbrugge. 2015.
Paraconsistent semantics of speech acts.
Neurocomputing, 151(2):943–952. Elsevier.
DOI: 10.1016/j.neucom.2014.10.001.

This paper discusses an implementation of four speech acts: assert, concede, request and challenge in a paraconsistent framework. A natural four-valued model of interaction yields multiple new cognitive situations. They are analyzed in the context of communicative relations, which partially replace the concept of trust. These assumptions naturally lead to six types of situations, which often require performing conflict resolution and belief revision. The particular choice of a rule-based, Datalog<sup>¬¬</sup>-like query language 4QL as a four-valued implementation framework ensures that, in contrast to the standard two-valued approaches, tractability of the model is achieved.

2014
[98] Håkan Warnquist, Mattias Nyberg and Jonas Biteus. 2014.
Guided Integrated Remote and Workshop Troubleshooting of Heavy Trucks.
International Journal of Commercial Vehicles, 7(1):25–36. SAE International.
DOI: 10.4271/2014-01-0284.

When a truck or bus suffers from a breakdown it is important that the vehicle comes back on the road as soon as possible. In this paper we present a prototype diagnostic decision support system capable of automatically identifying possible causes of a failure and proposing recommended actions on how to get the vehicle back on the road as cost-efficiently as possible. This troubleshooting system is novel in the way it integrates the remote diagnosis with the workshop diagnosis when providing recommendations. To achieve this integration, a novel planning algorithm has been developed that enables the troubleshooting system to guide the different users (driver, help-desk operator, and mechanic) through the entire troubleshooting process. In this paper we formulate the problem of integrated remote and workshop troubleshooting and present a working prototype that has been implemented to demonstrate all parts of the troubleshooting system.

[97] Linh Anh Nguyen, Thi-Bich-Loc Nguyen and Andrzej Szalas. 2014.
A Horn Fragment with PTime Data Complexity of Regular Description Logic with Inverse.
VNU Journal of Computer Science and Communication Engineering, 30(4):14–28.
Link to publication: http://www.jcsce.vnu.edu.vn/index.php/jc...

We study a Horn fragment called Horn-RegI of the regular description logic with inverse, RegI, which extends the description logic ALC with inverse roles and regular role inclusion axioms characterized by finite automata. In contrast to the well-known Horn fragments EL, DL-Lite, DLP, Horn-SHIQ and Horn-SROIQ of description logics, Horn-RegI allows a form of the concept constructor "universal restriction" to appear at the left-hand side of terminological inclusion axioms, while still having PTime data complexity. Namely, a universal restriction can be used in such places in conjunction with the corresponding existential restriction. We provide an algorithm with PTime data complexity for checking satisfiability of Horn-RegI knowledge bases.

[96] Full text  Gianpaolo Conte, Piotr Rudol and Patrick Doherty. 2014.
Evaluation of a Light-weight Lidar and a Photogrammetric System for Unmanned Airborne Mapping Applications: [Bewertung eines Lidar-systems mit geringem Gewicht und eines photogrammetrischen Systems für Anwendungen auf einem UAV].
Photogrammetrie - Fernerkundung - Geoinformation, ??(4):287–298. E. Schweizerbart'sche Verlagsbuchhandlung.
DOI: 10.1127/1432-8364/2014/0223.
Link to article: http://www.ingentaconnect.com/content/sc...

This paper presents a comparison of two light-weight and low-cost airborne mapping systems. One is based on a lidar technology and the other on a video camera. The airborne lidar system consists of a high-precision global navigation satellite system (GNSS) receiver, a microelectromechanical system (MEMS) inertial measurement unit, a magnetic compass and a low-cost lidar scanner. The vision system is based on a consumer grade video camera. A commercial photogrammetric software package is used to process the acquired images and generate a digital surface model. The two systems are described and compared in terms of hardware requirements and data processing. The systems are also tested and compared with respect to their application on board of an unmanned aerial vehicle (UAV). An evaluation of the accuracy of the two systems is presented. Additionally, the multi echo capability of the lidar sensor is evaluated in a test site covered with dense vegetation. The lidar and the camera systems were mounted and tested on-board an industrial unmanned helicopter with maximum take-off weight of around 100 kilograms. The presented results are based on real flight-test data.

[95] Son Thanh Cao, Linh Anh Nguyen and Andrzej Szalas. 2014.
WORL: a nonmonotonic rule language for the semantic web.
, 1(1):57–69. Springer Berlin/Heidelberg.
DOI: 10.1007/s40595-013-0009-y.

We develop a new Web ontology rule language, called WORL, which combines a variant of OWL 2 RL with eDatalog<sup>¬</sup>. We allow additional features like negation, the minimal number restriction and unary external checkable predicates to occur at the left-hand side of concept inclusion axioms. Some restrictions are adopted to guarantee a translation into eDatalog<sup>¬</sup>. We also develop the well-founded semantics and the stable model semantics for WORL as well as the standard semantics for stratified WORL (SWORL) via translation into eDatalog<sup>¬</sup>. Both WORL with respect to the well-founded semantics and SWORL with respect to the standard semantics have PTime data complexity. In contrast to the existing combined formalisms, in WORL and SWORL negation in concept inclusion axioms is interpreted using nonmonotonic semantics.

[94] Erik Sandewall. 2014.
Editorial Material: A perspective on the early history of artificial intelligence in Europe.
AI Communications, 27(1):81–86. IOS Press.
DOI: 10.3233/AIC-130585.

n/a

[93] Gurkan Tuna, Bilel Nefzi and Gianpaolo Conte. 2014.
Unmanned aerial vehicle-aided communications system for disaster recovery.
Journal of Network and Computer Applications, 41(??):27–36. Elsevier.
DOI: 10.1016/j.jnca.2013.10.002.

After natural disasters such as earthquakes, floods, hurricanes, tornados and fires, providing emergency management schemes which mainly rely on communications systems is essential for rescue operations. To establish an emergency communications system during unforeseen events such as natural disasters, we propose the use of a team of unmanned aerial vehicles (UAVs). The proposed system is a post-disaster solution and can be used whenever and wherever required. Each UAV in the team has an onboard computer which runs three main subsystems responsible for end-to-end communication, formation control and autonomous navigation. The onboard computer and the low-level controller of the UAV cooperate to accomplish the objective of providing local communications infrastructure. In this study, the subsystems running on each UAV are explained and evaluated by simulation studies and field tests using an autonomous helicopter. While the simulation studies address the efficiency of the end-to-end communication subsystem, the field tests evaluate the accuracy of the navigation subsystem. The results of the field tests and the simulation studies show that the proposed system can be successfully used in case of disasters to establish an emergency communications system.

2013
[92] Patrick Doherty and Andrzej Szalas. 2013.
Automated Generation of Logical Constraints on Approximation Spaces Using Quantifier Elimination.
Fundamenta Informaticae, 127(1-4):135–149. IOS Press.
DOI: 10.3233/FI-2013-900.
Note: Funding Agencies|Swedish Research Council (VR) Linnaeus Center CADICS; ELLIIT Excellence Center at Linkoping-Lund in Information Technology; CUAS project; SSF, the Swedish Foundation for Strategic Research

This paper focuses on approximate reasoning based on the use of approximation spaces. Approximation spaces and the approximated relations induced by them are a generalization of the rough set-based approximations of Pawlak. Approximation spaces are used to define neighborhoods around individuals and rough inclusion functions. These in turn are used to define approximate sets and relations. In any of the approaches, one would like to embed such relations in an appropriate logical theory which can be used as a reasoning engine for specific applications with specific constraints. We propose a framework which permits a formal study of the relationship between properties of approximations and properties of approximation spaces. Using ideas from correspondence theory, we develop an analogous framework for approximation spaces. We also show that this framework can be strongly supported by automated techniques for quantifier elimination.

[91] Andrzej Szalas. 2013.
How an agent might think.
Logic journal of the IGPL (Print), 21(3):515–535. Oxford University Press (OUP): Policy A - Oxford Open Option A.
DOI: 10.1093/jigpal/jzs051.

The current article is devoted to extensions of the rule query language 4QL proposed by Małuszyński and Szałas. 4QL is a Datalog<sup>¬¬</sup>-like language, allowing one to use rules with negation in heads and bodies of rules. It is based on a simple and intuitive semantics and provides uniform tools for lightweight versions of well-known forms of non-monotonic reasoning. In addition, 4QL is tractable w.r.t. data complexity and captures PTime queries. In the current article we relax most of restrictions of 4QL, obtaining a powerful but still tractable query language 4QL<sup>+</sup>. In its development we mainly focused on its pragmatic aspects: simplicity, tractability and generality. In the article we discuss our approach and choices made, define a new, more general semantics and investigate properties of 4QL<sup>+</sup>.

[90] Full text  Cyrille Berger. 2013.
Toward rich geometric map for SLAM: online detection of planes in 2D LIDAR.
Journal of Automation, Mobile Robotics & Intelligent Systems, 7(1):35–41.
Link to article: http://www.jamris.org/archive.php

Rich geometric models of the environment are needed for robots to carry out their missions. However, a robot operating in a large environment would require a compact representation. In this article, we present a method that relies on the idea that a plane appears as a line segment in a 2D scan, and that by tracking those lines frame after frame, it is possible to estimate the parameters of that plane. The method is divided into three steps: fitting line segments to the points of the 2D scan, tracking those line segments in consecutive scans, and estimating the plane parameters with a graph-based SLAM (Simultaneous Localisation And Mapping) algorithm.
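The first step, fitting a line segment to scan points, is typically done with a total-least-squares fit in Hessian normal form. The Python sketch below, with made-up points and not taken from the paper's code, shows such a fit; segmentation, tracking and the graph-based SLAM back end are omitted.

```python
import math

# Illustrative sketch: total-least-squares fit of a line (in Hessian normal
# form: x*cos(a) + y*sin(a) = d) to a cluster of 2D scan points, the kind of
# primitive the paper's first step relies on. The point data below is made up.

def fit_line(points):
    """Return (angle, distance) of the best-fit line with unit normal."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # Normal direction of the line (minor principal axis of the point cloud).
    angle = 0.5 * math.atan2(-2.0 * sxy, syy - sxx)
    distance = mx * math.cos(angle) + my * math.sin(angle)
    return angle, distance

pts = [(0.0, 1.0), (1.0, 1.1), (2.0, 0.9), (3.0, 1.0)]
a, d = fit_line(pts)
print("normal angle:", round(a, 3), "rad, distance:", round(d, 3), "m")
```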

[89] Full text  Patrick Doherty, Fredrik Heintz and Jonas Kvarnström. 2013.
High-level Mission Specification and Planning for Collaborative Unmanned Aircraft Systems using Delegation.
Unmanned Systems, 1(1):75–119. World Scientific.
DOI: 10.1142/S2301385013500052.

Automated specification, generation and execution of high level missions involving one or more heterogeneous unmanned aircraft systems is in its infancy. Much previous effort has been focused on the development of air vehicle platforms themselves together with the avionics and sensor subsystems that implement basic navigational skills. In order to increase the degree of autonomy in such systems so they can successfully participate in more complex mission scenarios, such as those considered in emergency rescue that also include ongoing interactions with human operators, new architectural components and functionalities will be required to aid not only human operators in mission planning, but also the unmanned aircraft systems themselves in the automatic generation, execution and partial verification of mission plans to achieve mission goals. This article proposes a formal framework and architecture based on the unifying concept of delegation that can be used for the automated specification, generation and execution of high-level collaborative missions involving one or more air vehicle platforms and human operators. We describe an agent-based software architecture, a temporal logic based mission specification language, a distributed temporal planner and a task specification language that, when integrated, provide a basis for the generation, instantiation and execution of complex collaborative missions on heterogeneous air vehicle systems. A prototype of the framework is operational in a number of autonomous unmanned aircraft systems developed in our research lab.

[88] Full text  Fredrik Heintz, Jonas Kvarnström and Patrick Doherty. 2013.
Stream-Based Hierarchical Anchoring.
Künstliche Intelligenz, 27(2):119–128. Springer.
DOI: 10.1007/s13218-013-0239-2.

Autonomous systems situated in the real world often need to recognize, track, and reason about various types of physical objects. In order to allow reasoning at a symbolic level, one must create and continuously maintain a correlation between symbols denoting physical objects and sensor data being collected about them, a process called anchoring. In this paper we present a stream-based hierarchical anchoring framework. A classification hierarchy is associated with expressive conditions for hypothesizing the type and identity of an object given streams of temporally tagged sensor data. The anchoring process constructs and maintains a set of object linkage structures representing the best possible hypotheses at any time. Each hypothesis can be incrementally generalized or narrowed down as new sensor data arrives. Symbols can be associated with an object at any level of classification, permitting symbolic reasoning on different levels of abstraction. The approach is integrated in the DyKnow knowledge processing middleware and has been applied to an unmanned aerial vehicle traffic monitoring application.

[87] H. Levent Akin, Nobuhiro Ito, Adam Jacoff, Alexander Kleiner, Johannes Pellenz and Arnoud Visser. 2013.
RoboCup Rescue Robot and Simulation Leagues.
The AI Magazine, 34(1):????. AAAI Press.
Link to journal: http://www.aaai.org/ojs/index.php/aimaga...
fulltext:preprint: http://liu.diva-portal.org/smash/get/div...

The RoboCup Rescue Robot and Simulation competitions have been held since 2000. The experience gained during these competitions has increased the maturity level of the field, allowing robots to be deployed after real disasters (e.g. the Fukushima Daiichi nuclear disaster). This article provides an overview of these competitions and highlights the state of the art and the lessons learned.

[86] Full text  Quirin Hamp, Omar Gorgis, Patrick Labenda, Marc Neumann, Thomas Predki, Leif Heckes, Alexander Kleiner and Leonard Reindl. 2013.
Study of efficiency of USAR operations with assistive technologies.
Advanced Robotics, 27(5):337–350.
DOI: 10.1080/01691864.2013.763723.
Note: Funding Agencies|German Federal Ministry of Education and Research|13N9759|German Federal Agency for Technical Relief (THW)||RIF e.V.||JT-electronic GmbH||carat robotic innovation GmbH||Berlin- Oberspree Sondermaschinenbau GmbH (BOS)||SEEBA||
fulltext:postprint: http://liu.diva-portal.org/smash/get/div...

This paper presents a study on the efficiency of Urban Search and Rescue (USAR) missions that has been carried out within the framework of the German research project I-LOV. After three years of development, first field tests were carried out in 2011 by professionals such as the Rapid Deployment Unit for Salvage Operations Abroad (SEEBA). We present results from evaluating search teams in simulated USAR scenarios equipped with newly developed technical search means and digital data input terminals developed in the I-LOV project. In particular, USAR missions assisted by the "bioradar", a ground-penetrating radar system for the detection of humanoid movements, a semi-active video probe of more than 10 m length for rubble pile exploration, a snake-like rescue robot, and the decision support system FRIEDAA were evaluated and compared with conventional USAR missions. Results of this evaluation indicate that the developed technologies represent an advantage for USAR missions, which is discussed in this paper.

[85] Full text  C. Dornhege and Alexander Kleiner. 2013.
A Frontier-Void-Based Approach for Autonomous Exploration in 3D.
Advanced Robotics, 27(6):459–468. Taylor and Francis.
DOI: 10.1080/01691864.2013.763720.
Note: Funding Agencies|Deutsche Forschungsgemeinschaft in the Transregional Collaborative Research Center|SFB/TR8|
fulltext:postprint: http://liu.diva-portal.org/smash/get/div...

We consider the problem of an autonomous robot searching for objects in unknown 3D space. Similar to the well-known frontier-based exploration in 2D, the problem is to determine a minimal sequence of sensor viewpoints until the entire search space has been explored. We introduce a novel approach that combines the two concepts of voids, which are unexplored volumes in 3D, and frontiers, which are regions on the boundary between voids and explored space. Our approach has been evaluated on a mobile platform equipped with a manipulator searching for victims in a simulated USAR setup. First results indicate the real-world capability and search efficiency of the proposed method.
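
To make the frontier notion above concrete, here is a minimal sketch (not the authors' code) for a voxel grid: a frontier voxel is a free voxel with at least one unknown 6-neighbour, and connected unknown regions would correspond to the voids. The grid values and the toy example are assumptions for illustration.

import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def frontier_voxels(grid):
    """Return indices of free voxels adjacent (6-connectivity) to unknown space."""
    frontiers = []
    nx, ny, nz = grid.shape
    for x, y, z in zip(*np.nonzero(grid == FREE)):
        for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            u, v, w = x + dx, y + dy, z + dz
            if 0 <= u < nx and 0 <= v < ny and 0 <= w < nz and grid[u, v, w] == UNKNOWN:
                frontiers.append((x, y, z))
                break
    return frontiers

# Tiny example: a 4x4x4 grid where the first two x-slices have been explored as free
grid = np.full((4, 4, 4), UNKNOWN)
grid[:2] = FREE
print(frontier_voxels(grid))   # the frontier layer at x == 1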

2012
[84] Anna Pernestål, Mattias Nyberg and Håkan Warnquist. 2012.
Modeling and inference for troubleshooting with interventions applied to a heavy truck auxiliary braking system.
Engineering applications of artificial intelligence, 25(4):705–719. Elsevier.
DOI: 10.1016/j.engappai.2011.02.018.

Computer-assisted troubleshooting with external interventions is considered. The work is motivated by the task of repairing an automotive vehicle at the lowest possible expected cost. The main contribution is a decision-theoretic troubleshooting system that is developed to handle external interventions. In particular, practical issues in modeling for troubleshooting are discussed, the troubleshooting system is described, and a method for efficient probability computations is developed. The troubleshooting system consists of two parts: a planner that relies on AO* search and a diagnoser that utilizes Bayesian networks (BN). The work is based on a case study of an auxiliary braking system of a modern truck. Two main challenges in troubleshooting automotive vehicles are the need for disassembling the vehicle during troubleshooting to access parts to repair, and the difficulty of verifying that the vehicle is fault free. These facts mean that probabilities for faults and for future observations must be computed for a system that has been subject to external interventions that cause changes in the dependency structure. The probability computations are further complicated due to the mixture of instantaneous and non-instantaneous dependencies. To compute the probabilities, we develop a method based on an algorithm, updateBN, that updates a static BN to account for the external interventions.
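
As a hedged illustration of the decision-theoretic flavour of such troubleshooting (this is the classical myopic cost/probability rule, not the AO*-based planner or the updateBN algorithm of the paper; all names and numbers are invented), candidate repair actions can be ranked as follows:

def next_action(actions):
    """actions: list of (name, cost, p_fix), where p_fix is the probability the action repairs the fault."""
    # Myopic rule: pick the action with the lowest cost per unit of repair probability.
    return min(actions, key=lambda a: a[1] / a[2])

actions = [
    ("replace valve",     200.0, 0.30),
    ("bleed air lines",    50.0, 0.20),
    ("swap control unit", 400.0, 0.45),
]
print(next_action(actions)[0])   # -> "bleed air lines" (50 / 0.20 = 250, the best ratio)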

2011
[83] Full text  Teresa Vidal-Calleja, Cyrille Berger, Joan Solà and Simon Lacroix. 2011.
Large scale multiple robot visual mapping with heterogeneous landmarks in semi-structured terrain.
Robotics and Autonomous Systems, 59(9):654–674. Elsevier.
DOI: 10.1016/j.robot.2011.05.008.

This paper addresses the cooperative localization and visual mapping problem with multiple heterogeneous robots. The approach is designed to deal with the challenging large semi-structured outdoor environments in which aerial/ground ensembles are to evolve. We propose the use of heterogeneous visual landmarks, points and line segments, to achieve effective cooperation in such environments. A large-scale SLAM algorithm is generalized to handle multiple robots, in which a global graph maintains the relative relationships between a series of local sub-maps built by the different robots. The key issue when dealing with multiple robots is to find the link between them, and to integrate these relations to maintain the overall geometric consistency; the events that introduce these links on the global graph are described in detail. Monocular cameras are considered the primary exteroceptive sensor. In order to achieve the undelayed initialization required by the bearing-only observations, the well-known inverse-depth parametrization is adopted to estimate 3D points. Similarly, to estimate 3D line segments, we present a novel parametrization based on anchored Plücker coordinates, to which extensible endpoints are added. Extensive simulations demonstrate the proposed developments, and the overall approach is illustrated using real data taken with a helicopter and a ground rover.

[82] Barbara Dunin-Keplicz, Anh Linh Nguyen and Andrzej Szalas. 2011.
Converse-PDL with Regular Inclusion Axioms: A Framework for MAS Logics.
Journal of Applied Non-Classical Logics, 21(1):61–91. Lavoisier.
DOI: 10.3166/JANCL.21.61-91.

In this paper we study automated reasoning in the modal logic CPDLreg, which is a combination of CPDL (Propositional Dynamic Logic with Converse) and REGc (Regular Grammar Logic with Converse). The logic CPDL is widely used in many areas, including program verification, theory of action and change, and knowledge representation. On the other hand, the logic REGc is applicable in reasoning about epistemic states and ontologies (via Description Logics). The modal logic CPDLreg can serve as a technical foundation for reasoning about agency. Even very rich multi-agent logics can be embedded into CPDLreg via a suitable translation. Therefore, CPDLreg can serve as a test bed to implement and possibly verify new ideas without providing specific automated reasoning techniques for the logic in question. This process can to a large extent be automated. The idea of embedding various logics into CPDLreg is illustrated on a rather advanced logic TEAMLOGK designed to specify teamwork in multi-agent systems. Apart from defining informational and motivational attitudes of groups of agents, TEAMLOGK allows for grading agents' beliefs, goals and intentions. The current paper is a companion to our paper (Dunin-Kęplicz et al., 2010a). The main contributions are the proofs of soundness and completeness of the tableau calculus for CPDLreg provided in (Dunin-Kęplicz et al., 2010a).

[81] Linh Anh Nguyen and Andrzej Szalas. 2011.
ExpTime Tableau Decision Procedures for Regular Grammar Logics with Converse.
Studia Logica: An International Journal for Symbolic Logic, 98(3):387–428. Springer Berlin/Heidelberg.
DOI: 10.1007/s11225-011-9341-3.

Grammar logics were introduced by Fariñas del Cerro and Penttonen in 1988 and have been widely studied. In this paper we consider regular grammar logics with converse (REGc logics) and present sound and complete tableau calculi for the general satisfiability problem of REGc logics and the problem of checking consistency of an ABox w.r.t. a TBox in a REGc logic. Using our calculi we develop ExpTime (optimal) tableau decision procedures for the mentioned problems, to which various optimization techniques can be applied. We also prove a new result that the data complexity of the instance checking problem in REGc logics is coNP-complete.

[80] Jan Maluszynski and Andrzej Szalas. 2011.
Logical Foundations and Complexity of 4QL, a Query Language with Unrestricted Negation.
Journal of Applied Non-Classical Logics, 21(2):211–232. Lavoisier.
DOI: 10.3166/JANCL.21.211-232.

The paper discusses properties of 4QL, a DATALOG¬¬-like query language, originally outlined by Małuszyński and Szałas (Małuszyński & Szałas, 2011). 4QL allows one to use rules with negation in heads and bodies of rules. It is based on a simple and intuitive semantics and provides uniform tools for "lightweight" versions of known forms of nonmonotonic reasoning. Negated literals in heads of rules may naturally lead to inconsistencies. On the other hand, rules do not have to attach meaning to some literals. Therefore 4QL is founded on a four-valued semantics, employing the logic introduced in (Małuszyński et al., 2008; Vitória et al., 2009) with truth values: 'true', 'false', 'inconsistent' and 'unknown'. In addition, 4QL is tractable w.r.t. data complexity and captures PTIME queries. Even though DATALOG¬¬ has been known as a concept for the last 30 years, to the best of our knowledge no existing approach enjoys these properties. In the current paper we: investigate properties of well-supported models of 4QL; prove the correctness of the algorithm for computing well-supported models; and show that 4QL has PTIME data complexity and captures PTIME.
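
A toy illustration (not the well-supported model algorithm of the paper) of how the four truth values arise when rules may derive both a literal and its negation; the example facts and the rules mentioned in the comments are invented:

def four_valued(pos_derived, neg_derived, atom):
    """Atom derived only positively -> true, only negatively -> false, both -> inconsistent, neither -> unknown."""
    if atom in pos_derived and atom in neg_derived:
        return "inconsistent"
    if atom in pos_derived:
        return "true"
    if atom in neg_derived:
        return "false"
    return "unknown"

pos = {"flies(tweety)"}   # e.g. derived from a rule like: flies(X) :- bird(X).
neg = {"flies(tweety)"}   # e.g. derived from a rule like: -flies(X) :- penguin(X).
for a in ["flies(tweety)", "swims(tweety)"]:
    print(a, "->", four_valued(pos, neg, a))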

[79] Mark Buller, Paul Cuddihy, Ernest Davis, Patrick Doherty, Finale Doshi-Velez, Esra Erdem, Douglas Fisher, Nancy Green, Knut Hinkelmann, James McLurkin, Mary Lou Maher, Rajiv Maheswaran, Sara Rubinelli, Nathan Schurr, Donia Scott, Dylan Shell, Pedro Szekely, Barbara Thoenssen and Arnold B Urken. 2011.
Reports of the AAAI 2011 Spring Symposia.
The AI Magazine, 32(3):119–127. AAAI Press.

The Association for the Advancement of Artificial Intelligence presented the 2011 Spring Symposium Series Monday through Wednesday, March 21-23, 2011, at Stanford University. This report summarizes the eight symposia.

[78] Full text  Erik Sandewall. 2011.
From systems to logic in the early development of nonmonotonic reasoning.
Artificial Intelligence, 175(1):416–427. Elsevier.
DOI: 10.1016/j.artint.2010.04.013.

This note describes how the notion of nonmonotonic reasoning emerged in Artificial Intelligence from the mid-1960s to 1980. It gives particular attention to the interplay between three kinds of activities: design of high-level programming systems for AI, design of truth-maintenance systems, and the development of nonmonotonic logics. This was not merely a development from logic to implementation: in several cases there was a development from a system design to a corresponding logic. The article concludes with some reflections on the roles and relationships between logicist theory and system design in AI, and in particular in Knowledge Representation.

2010
[77] Full text  Erik Sandewall. 2010.
Exercising Moral Copyright for Evolving Publications.
ScieCom Info, 6(3):????. Svenskt Resurscentrum för Vetenskaplig Kommunikation.
Link: http://www.sciecom.org/ojs/index.php/sci...

[76] Anh Linh Nguyen and Andrzej Szalas. 2010.
Tableaux with Global Caching for Checking Satisfiability of a Knowledge Base in the Description Logic SH.
Transactions on Computational Collective Intelligence, 1(1):21–38. Springer. ISBN: 978-3-642-15033-3.
DOI: 10.1007/978-3-642-15034-0_2.

Description logics (DLs) are a family of knowledge representation languages which can be used to represent the terminological knowledge of an application domain in a structured and formally well-understood way. DLs can be used, for example, for conceptual modeling or as ontology languages. In fact, OWL (Web Ontology Language), recommended by W3C, is based on description logics. In the current paper we give the first direct ExpTime (optimal) tableau decision procedure, which is not based on transformation or on the pre-completion technique, for checking satisfiability of a knowledge base in the description logic SH. Our procedure uses sound global caching and can be implemented as an extension of the highly optimized tableau prover TGC to obtain an efficient program for the mentioned satisfiability problem.

[75] Full text  Erik Sandewall. 2010.
Defeasible inheritance with doubt index and its axiomatic characterization.
Artificial Intelligence, 174(18):1431–1459. Elsevier.
DOI: 10.1016/j.artint.2010.09.001.

This article introduces and uses a representation of defeasible inheritance networks where links in the network are viewed as propositions, and where defeasible links are tagged with a quantitative indication of the proportion of exceptions, called the doubt index. This doubt index is used for restricting the length of the chains of inference. The representation also introduces the use of defeater literals that disable the chaining of subsumption links. The use of defeater literals replaces the use of negative defeasible inheritance links, expressing "most A are not B". The new representation improves the expressivity significantly. Inference in inheritance networks is defined by a combination of axioms that constrain the contents of network extensions, a heuristic restriction that also has that effect, and a nonmonotonic operation of minimizing the set of defeater literals while retaining consistency. We introduce an underlying semantics that defines the meaning of literals in a network, and prove that the axioms are sound with respect to this semantics. We also discuss the conditions for obtaining completeness. Traditional concepts, assumptions and issues in research on nonmonotonic or defeasible inheritance are reviewed in the perspective of this approach.

[74] Linh Anh Nguyen and Andrzej Szalas. 2010.
Checking Consistency of an ABox w.r.t. Global Assumptions in PDL.
Fundamenta Informaticae, 102(1):97–113. IOS Press.
DOI: 10.3233/FI-2010-299.

We reformulate Pratt's tableau decision procedure for checking satisfiability of a set of formulas in PDL. Our formulation is simpler and its implementation is more direct. Extending the method, we give the first ExpTime (optimal) tableau decision procedure not based on transformation for checking consistency of an ABox w.r.t. a TBox in PDL (here, PDL is treated as a description logic). We also prove a new result that the data complexity of the instance checking problem in PDL is coNP-complete.

[73] Full text  Oleg Burdakov, Patrick Doherty, Kaj Holmberg and Per-Magnus Olsson. 2010.
Optimal placement of UV-based communications relay nodes.
Journal of Global Optimization, 48(4):511–531. Springer.
DOI: 10.1007/s10898-010-9526-8.
Note: The original publication is available at www.springerlink.com: Oleg Burdakov, Patrick Doherty, Kaj Holmberg and Per-Magnus Olsson, Optimal placement of UV-based communications relay nodes, 2010, Journal of Global Optimization, (48), 4, 511-531. http://dx.doi.org/10.1007/s10898-010-9526-8. Copyright: Springer Science Business Media, http://www.springerlink.com/
fulltext:postprint: http://liu.diva-portal.org/smash/get/div...

We consider a constrained optimization problem with mixed integer and real variables. It models optimal placement of communications relay nodes in the presence of obstacles. This problem is widely encountered, for instance, in robotics, where it is required to survey some target located at one point and convey the gathered information back to a base station located at another point. One or more unmanned aerial or ground vehicles (UAVs or UGVs) can be used for this purpose as communications relays. The decision variables are the number of unmanned vehicles (UVs) and the UV positions. The objective function is assumed to assess the placement quality. We suggest one instance of such a function which is more suitable for assessing UAV placement. The constraints are determined by, firstly, a free line of sight requirement for every consecutive pair in the chain and, secondly, a limited communication range. Because of these requirements, our constrained optimization problem is a difficult multi-extremal problem for any fixed number of UVs. Moreover, the feasible set of real variables is typically disjoint. We present an approach that allows us to efficiently find a practically acceptable approximation to a global minimum in the problem of optimal placement of communications relay nodes. It is based on a spatial discretization with a subsequent reduction to a shortest path problem. The case of a restricted number of available UVs is also considered here. We introduce two label correcting algorithms which are able to take advantage of some peculiarities of the resulting restricted shortest path problem. The algorithms produce a Pareto solution to the two-objective problem of minimizing the path cost and the number of hops. We justify their correctness. The presented results of numerical 3D experiments show that our algorithms are superior to the conventional Bellman-Ford algorithm tailored to solving this problem.
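
The following sketch shows, under strong simplifications, what a label-correcting search over the discretized graph can look like when each node keeps the set of Pareto-optimal (cost, hops) labels, the two objectives mentioned above. It is not the authors' algorithm; the graph, costs and names are invented.

from collections import defaultdict, deque

def pareto_labels(graph, source):
    """graph: {node: [(neighbour, edge_cost), ...]}. Returns the non-dominated (cost, hops) labels per node."""
    labels = defaultdict(set)
    labels[source] = {(0.0, 0)}
    queue = deque([(source, 0.0, 0)])
    while queue:
        node, cost, hops = queue.popleft()
        for nxt, c in graph.get(node, []):
            cand = (cost + c, hops + 1)
            current = labels[nxt]
            # skip the candidate if an existing label is at least as good in both objectives
            if any(l[0] <= cand[0] and l[1] <= cand[1] for l in current):
                continue
            # otherwise drop labels the candidate dominates, store it, and re-examine the node
            labels[nxt] = {l for l in current
                           if not (cand[0] <= l[0] and cand[1] <= l[1])} | {cand}
            queue.append((nxt, cand[0], cand[1]))
    return labels

graph = {"base": [("relay_spot", 1.0), ("target", 5.0)], "relay_spot": [("target", 1.0)]}
print(dict(pareto_labels(graph, "base")))
# "target" keeps both (5.0, 1) and (2.0, 2): a direct but costly link vs. a cheaper chain with one extra hop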

[72] Full text  Mattias Krysander, Fredrik Heintz, Jacob Roll and Erik Frisk. 2010.
FlexDx: A Reconfigurable Diagnosis Framework.
Engineering applications of artificial intelligence, 23(8):1303–1313. Elsevier.
DOI: 10.1016/j.engappai.2010.01.004.
fulltext:postprint: http://liu.diva-portal.org/smash/get/div...

Detecting and isolating multiple faults is a computationally expensive task. It typically consists of computing a set of tests and then computing the diagnoses based on the test results. This paper describes FlexDx, a reconfigurable diagnosis framework which reduces the computational burden while retaining the isolation performance by only running a subset of all tests that is sufficient to find new conflicts. Tests in FlexDx are thresholded residuals used to indicate conflicts in the monitored system. Special attention is given to the issues introduced by a reconfigurable diagnosis framework. For example, tests are added and removed dynamically, tests are partially performed on historic data, and synchronous and asynchronous processing are combined. To handle these issues FlexDx has been implemented using DyKnow, a stream-based knowledge processing middleware framework. Concrete methods for each component in the FlexDx framework are presented. The complete approach is exemplified on a dynamic system which clearly illustrates the complexity of the problem and the computational gain of the proposed approach.
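
A hedged sketch of the thresholded-residual idea (the test, threshold and signal names are invented and this is not FlexDx internals): a test fires a conflict, i.e. reports that the behavioural-mode assumptions it depends on cannot all hold, when its residual exceeds its threshold.

def check_tests(tests, signals):
    """tests: list of (residual_fn, threshold, assumptions). Returns the conflicts that fired."""
    conflicts = []
    for residual, threshold, assumptions in tests:
        if abs(residual(signals)) > threshold:
            conflicts.append(assumptions)
    return conflicts

# Toy model: the output y should equal 2*u when both sensor and actuator behave nominally
tests = [
    (lambda s: s["y"] - 2.0 * s["u"], 0.1, {"sensor_ok", "actuator_ok"}),
]
print(check_tests(tests, {"u": 1.0, "y": 2.5}))   # residual 0.5 > 0.1, so one conflict is reported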

[71] Barbara Dunin-Keplicz, Linh Anh Nguyen and Andrzej Szalas. 2010.
A Framework for Graded Beliefs, Goals and Intentions.
Fundamenta Informaticae, 100(1-4):53–76. IOS Press.
DOI: 10.3233/FI-2010-263.

In natural language we often use graded concepts, reflecting different intensity degrees of certain features. Whenever such concepts appear in a given real-life context, they need to be appropriately expressed in its models. In this paper, we provide a framework which allows for extending the BGI model of agency by grading beliefs, goals and intentions. We concentrate on TEAMLOG [6, 7, 8, 9, 12] and provide a complexity-optimal decision method for its graded version TEAMLOG(K) by translating it into CPDLreg (propositional dynamic logic with converse and "inclusion axioms" characterized by regular languages). We also develop a tableau calculus which leads to the first EXPTIME (optimal) tableau decision procedure for CPDLreg. As CPDLreg is suitable for expressing complex properties of graded operators, the procedure can also be used as a decision tool for other multiagent formalisms.

[70] Barbara Dunin-Keplicz, Linh Anh Nguyen and Andrzej Szalas. 2010.
A Layered Rule-Based Architecture for Approximate Knowledge Fusion.
COMPUTER SCIENCE AND INFORMATION SYSTEMS, 7(3):617–642. COMSIS CONSORTIUM.
DOI: 10.2298/CSIS100209015D.

In this paper we present a framework for fusing approximate knowledge obtained from various distributed, heterogeneous knowledge sources. This issue is substantial in modeling multi-agent systems, where a group of loosely coupled heterogeneous agents cooperate in achieving a common goal. In paper [5] we have focused on defining a general mechanism for knowledge fusion. Next, the techniques ensuring tractability of fusing knowledge expressed as a Horn subset of propositional dynamic logic were developed in [13,16]. Propositional logics may seem too weak to be useful in real-world applications. On the other hand, propositional languages may be viewed as sublanguages of first-order logics which serve as a natural tool to define concepts in the spirit of description logics [2]. These notions may be further used to define various ontologies, e.g. those applicable in the Semantic Web. Taking this step, we propose a framework in which our Horn subset of dynamic logic is combined with deductive database technology. This synthesis is formally implemented in the framework of the HSPDL architecture. The resulting knowledge fusion rules are naturally applicable to real-world data.

[69] Full text  Oleg Burdakov, Patrick Doherty, Kaj Holmberg, Jonas Kvarnström and Per-Magnus Olsson. 2010.
Relay Positioning for Unmanned Aerial Vehicle Surveillance.
The international journal of robotics research, 29(8):1069–1087. Sage Publications.
DOI: 10.1177/0278364910369463.

When unmanned aerial vehicles (UAVs) are used for surveillance, information must often be transmitted to a base station in real time. However, limited communication ranges and the common requirement of free line of sight may make direct transmissions from distant targets impossible. This problem can be solved using relay chains consisting of one or more intermediate relay UAVs. This leads to the problem of positioning such relays given known obstacles, while taking into account a possibly mission-specific quality measure. The maximum quality of a chain may depend strongly on the number of UAVs allocated. Therefore, it is desirable to either generate a chain of maximum quality given the available UAVs or allow a choice from a spectrum of Pareto-optimal chains corresponding to different trade-offs between the number of UAVs used and the resulting quality. In this article, we define several problem variations in a continuous three-dimensional setting. We show how sets of Pareto-optimal chains can be generated using graph search and present a new label-correcting algorithm generating such chains significantly more efficiently than the best-known algorithms in the literature. Finally, we present a new dual ascent algorithm with better performance for certain tasks and situations.

[68] Barbara Dunin-Keplicz, Linh Anh Nguyen and Andrzej Szalas. 2010.
Tractable approximate knowledge fusion using the Horn fragment of serial propositional dynamic logic.
International Journal of Approximate Reasoning, 51(3):346–362. Elsevier.
DOI: 10.1016/j.ijar.2009.11.002.

In this paper we investigate a technique for fusing approximate knowledge obtained from distributed, heterogeneous information sources. This issue is substantial, e.g., in modeling multiagent systems, where a group of loosely coupled heterogeneous agents cooperate in achieving a common goal. Information exchange, leading ultimately to knowledge fusion, is a natural and vital ingredient of this process. We use a generalization of rough sets and relations [30], which depends on allowing arbitrary similarity relations. The starting point of this research is [6], where a framework for knowledge fusion in multiagent systems is introduced. Agents' individual perceptual capabilities are represented by similarity relations, further aggregated to express joint capabilities of teams. This aggregation, expressing a shift from the individual to the social level of agents' activity, has been formalized by means of dynamic logic. The approach of Doherty et al. (2007) [6] uses the full propositional dynamic logic, which does not guarantee tractability of reasoning. Our idea is to adapt the techniques of Nguyen [26-28] to provide an engine for tractable approximate database querying restricted to a Horn fragment of serial dynamic logic. We also show that the obtained formalism is quite powerful in applications.

[67] Karolina Eliasson. 2010.
A case-based approach to dialogue systems.
Journal of experimental and theoretical artificial intelligence (Print), 22(1):23–51. Taylor & Francis.
DOI: 10.1080/09528130902723708.

We describe an approach to integrate dialogue management, machine-learning and action planning in a system for dialogue between a human and a robot. Case-based techniques are used because they permit life-long learning from experience and demand little prior knowledge and few static hand-written structures. This approach has been developed through the work on an experimental dialogue system, called CEDERIC, that is connected to an unmanned aerial vehicle (UAV). A single case base and case-based reasoning engine is used both for understanding and for planning actions by the UAV. Dialogue experiments both with experienced and novice users, where the users have solved tasks by dialogue with this system, showed very adequate success rates.

[66] Full text  Fredrik Heintz, Jonas Kvarnström and Patrick Doherty. 2010.
Bridging the sense-reasoning gap: DyKnow - Stream-based middleware for knowledge processing.
Advanced Engineering Informatics, 24(1):14–26. Elsevier.
DOI: 10.1016/j.aei.2009.08.007.

Engineering autonomous agents that display rational and goal-directed behavior in dynamic physical environments requires a steady flow of information from sensors to high-level reasoning components. However, while sensors tend to generate noisy and incomplete quantitative data, reasoning often requires crisp symbolic knowledge. The gap between sensing and reasoning is quite wide, and cannot in general be bridged in a single step. Instead, this task requires a more general approach to integrating and organizing multiple forms of information and knowledge processing on different levels of abstraction in a structured and principled manner. We propose knowledge processing middleware as a systematic approach to organizing such processing. Desirable properties are presented and motivated. We argue that a declarative stream-based system is appropriate for the required functionality and present DyKnow, a concrete implemented instantiation of stream-based knowledge processing middleware with a formal semantics. Several types of knowledge processes are defined and motivated in the context of a UAV traffic monitoring application. In the implemented application, DyKnow is used to incrementally bridge the sense-reasoning gap and generate partial logical models of the environment over which metric temporal logical formulas are evaluated. Using such formulas, hypotheses are formed and validated about the type of vehicles being observed. DyKnow is also used to generate event streams representing for example changes in qualitative spatial relations, which are used to detect traffic violations expressed as declarative chronicles.
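
To illustrate the kind of monitoring formula mentioned above, here is a minimal sketch (not DyKnow itself, and not its formula language) that checks a metric-temporal property over a stream of time-stamped samples: whenever the speed exceeds a limit, it must drop below the limit again within max_delay time units. The names and numbers are invented.

def first_violation(stream, limit, max_delay):
    """stream: time-ordered iterable of (t, speed). Returns the first violating interval, or None."""
    over_since = None
    for t, speed in stream:
        if speed > limit:
            if over_since is None:
                over_since = t                     # the vehicle just went above the limit
            elif t - over_since > max_delay:
                return (over_since, t)             # stayed above the limit for too long
        else:
            over_since = None                      # requirement satisfied, reset the timer
    return None

stream = [(0, 40), (1, 60), (2, 65), (3, 70), (4, 50)]
print(first_violation(stream, limit=55, max_delay=1.5))   # -> (1, 3): a violation is detected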

2009
[65] Dov Gabbay and Andrzej Szalas. 2009.
Annotation Theories over Finite Graphs.
Studia Logica: An International Journal for Symbolic Logic, 93(2-3):147–180. Springer.
DOI: 10.1007/s11225-009-9220-3.

In the current paper we consider theories with vocabulary containing a number of binary and unary relation symbols. Binary relation symbols represent labeled edges of a graph and unary relations represent unique annotations of the graph's nodes. Such theories, which we call annotation theories, can be used in many applications, including the formalization of argumentation, approximate reasoning, semantics of logic programs, graph coloring, etc. We address a number of problems related to annotation theories over finite models, including satisfiability, the querying problem, specification of preferred models and the model checking problem. We show that most of the considered problems are NPTime- or co-NPTime-complete. In order to reduce the complexity for particular theories, we use second-order quantifier elimination. To the best of our knowledge, none of the existing methods works in the case of annotation theories. We then provide a new second-order quantifier elimination method for stratified theories, which is successful in the considered cases. The new result subsumes many other results, including those of [2, 28, 21].

[64] Aida Vitoria, Jan Maluszynski and Andrzej Szalas. 2009.
Modelling and Reasoning with Paraconsistent Rough Sets.
Fundamenta Informaticae, 97(4):405–438.
DOI: 10.3233/FI-2009-209.

We present a language for defining paraconsistent rough sets and reasoning about them. Our framework relates and brings together two major fields: rough sets [23] and paraconsistent logic programming [9]. To model inconsistent and incomplete information we use a four-valued logic. The language discussed in this paper is based on ideas of our previous work [21, 32, 22] developing a four-valued framework for rough sets. In this approach the membership function, set containment and set operations are four-valued, where the logical values are t (true), f (false), i (inconsistent) and u (unknown). We investigate properties of paraconsistent rough sets as well as develop a paraconsistent rule language, providing basic computational machinery for our approach.
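
An illustrative sketch of the four-valued reading (this simplifies the paper's definitions; the similarity relation and data are invented): an element's membership in a rough set described by possibly overlapping positive and negative examples is decided by what its similarity neighbourhood contains.

def membership(x, similar, positive, negative):
    """similar: function mapping x to the set of elements similar to x."""
    nbhd = similar(x)
    has_pos = bool(nbhd & positive)
    has_neg = bool(nbhd & negative)
    if has_pos and has_neg:
        return "inconsistent"
    if has_pos:
        return "true"
    if has_neg:
        return "false"
    return "unknown"

# Toy data: readings are considered similar when they differ by at most 1
universe = {1, 2, 3, 4, 7}
similar = lambda x: {y for y in universe if abs(x - y) <= 1}
positive, negative = {2}, {3}
for x in sorted(universe):
    print(x, membership(x, similar, positive, negative))   # 1: true, 2/3: inconsistent, 4: false, 7: unknown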

[63] Full text  Patrick Doherty, Jonas Kvarnström and Fredrik Heintz. 2009.
A Temporal Logic-based Planning and Execution Monitoring Framework for Unmanned Aircraft Systems.
Autonomous Agents and Multi-Agent Systems, 19(3):332–377. Springer.
DOI: 10.1007/s10458-009-9079-8.

Research with autonomous unmanned aircraft systems is reaching a new degree of sophistication where targeted missions require complex types of deliberative capability integrated in a practical manner in such systems. Due to these pragmatic constraints, integration is just as important as theoretical and applied work in developing the actual deliberative functionalities. In this article, we present a temporal logic-based task planning and execution monitoring framework and its integration into a fully deployed rotor-based unmanned aircraft system developed in our laboratory. We use a very challenging emergency services application involving body identification and supply delivery as a vehicle for showing the potential use of such a framework in real-world applications. TALplanner, a temporal logic-based task planner, is used to generate mission plans. Building further on the use of TAL (Temporal Action Logic), we show how knowledge gathered from the appropriate sensors during plan execution can be used to create state structures, incrementally building a partial logical model representing the actual development of the system and its environment over time. We then show how formulas in the same logic can be used to specify the desired behavior of the system and its environment and how violations of such formulas can be detected in a timely manner in an execution monitor subsystem. The pervasive use of logic throughout the higher level deliberative layers of the system architecture provides a solid shared declarative semantics that facilitates the transfer of knowledge between different modules.

[62] Andrzej Szalas and Dov Gabbay. 2009.
Voting by Eliminating Quantifiers.
Studia Logica: An International Journal for Symbolic Logic, 92(3):365–379. Springer.
DOI: 10.1007/s11225-009-9200-7.

The mathematical theory of voting and social choice has attracted much attention. In the general setting one can view social choice as a method of aggregating individual, often conflicting preferences and making a choice that is the best compromise. How preferences are expressed and what the "best compromise" is varies and heavily depends on the particular situation. The method we propose in this paper depends on expressing individual preferences of voters and specifying properties of the resulting ranking by means of first-order formulas. Then, as a technical tool, we use methods of second-order quantifier elimination to analyze and compute results of voting. We show how to specify voting, how to compute resulting rankings and how to verify voting protocols.

[61] Full text  Gianpaolo Conte and Patrick Doherty. 2009.
Vision-Based Unmanned Aerial Vehicle Navigation Using Geo-Referenced Information.
EURASIP Journal on Advances in Signal Processing, 2009(387308):1–18. Hindawi Publishing Corporation.
DOI: 10.1155/2009/387308.

This paper investigates the possibility of augmenting an Unmanned Aerial Vehicle (UAV) navigation system with a passive video camera in order to cope with long-term GPS outages. The paper proposes a vision-based navigation architecture which combines inertial sensors, visual odometry, and registration of the on-board video to a geo-referenced aerial image. The vision-aided navigation system developed is capable of providing high-rate and drift-free state estimation for UAV autonomous navigation without the GPS system. Due to the use of image-to-map registration for absolute position calculation, drift-free position performance depends on the structural characteristics of the terrain. Experimental evaluation of the approach based on offline flight data is provided. In addition the architecture proposed has been implemented on-board an experimental UAV helicopter platform and tested during vision-based autonomous flights.

2008
[60] Full text  Erik Johan Sandewall. 2008.
Extending the concept of publication: Factbases and knowledgebases.
Learned Publishing, 21(2):123–131. Association of Learned and Professional Society Publishers.
DOI: 10.1087/095315108X288893.

The concept of a 'publication' no longer applies only to printed works; information technology has extended its application to several other types of works. This article describes a facility called the Common Knowledge Library that publishes modules of formally structured information representing facts and knowledge of various kinds. Publications of this new type have some characteristics in common with databases, and others in common with software modules; however, they also share some important characteristics with traditional publications. A framework for citation of previous work is important in order to provide an incentive for contributors of such modules. Peer review - the traditional method of quality assurance for scientific articles - must also be applied, although in a modified form, for fact and knowledge modules. The construction of the Common Knowledge Library is a cumulative process: new contributions are obtained by interpreting the contents of existing knowledge sources on the Internet, and the existing contents of the Library are an important resource for that interpretation process.

[59] Andrzej Szalas. 2008.
Towards Incorporating Background Theories into Quantifier Elimination.
Journal of applied non-classical logics, 18(2-3):325–340. Éditions Hermès-Lavoisier.
DOI: 10.3166/jancl.18.325-340.

In the paper we present a technique for eliminating quantifiers of arbitrary order, in particular of first-order. Such a uniform treatment of the elimination problem has been problematic up to now, since techniques for eliminating first-order quantifiers do not scale up to higher-order contexts and those for eliminating higher-order quantifiers are usually based on a form of monotonicity w.r.t. implication (set inclusion) and are not applicable to the first-order case. We make a shift to arbitrary relations "ordering" the underlying universe. This allows us to incorporate background theories into higher-order quantifier elimination methods which, up to now, has not been achieved. The technique we propose subsumes many other results, including Ackermann's lemma and various forms of fixpoint approaches when the "ordering" relations are interpreted as implication, and reveals the common principle behind these approaches.
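
For reference, a standard formulation of the Ackermann's lemma mentioned above is given below; it is stated here from common presentations, so the exact polarity conditions and notation should be checked against the paper. P is a predicate variable that does not occur in the formula A.

% If P occurs only negatively in B(P):
\[
  \exists P\,\bigl[\forall\bar{x}\,\bigl(A(\bar{x}) \rightarrow P(\bar{x})\bigr) \wedge B(P)\bigr]
  \;\equiv\; B(P := A),
\]
% and dually, if P occurs only positively in B(P):
\[
  \exists P\,\bigl[\forall\bar{x}\,\bigl(P(\bar{x}) \rightarrow A(\bar{x})\bigr) \wedge B(P)\bigr]
  \;\equiv\; B(P := A).
\]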

[58] Full text  Anders Lund, Peter Andersson, J. Eriksson, J. Hallin, T. Johansson, R. Jonsson, H. Löfgren, C. Paulin and A. Tell. 2008.
Automatic fitting procedures for EPR spectra of disordered systems: matrix diagonalization and perturbation methods applied to fluorocarbon radicals.
Spectrochimica Acta Part A - Molecular and Biomolecular Spectroscopy, 69(5):1294–1300. Elsevier.
DOI: 10.1016/j.saa.2007.09.040.
Note: Original publication: A. Lund, P. Andersson, J. Eriksson, J. Hallin, T. Johansson, R. Jonsson, H. Löfgren, C. Paulin and A. Tell, Automatic fitting procedures for EPR spectra of disordered systems: matrix diagonalization and perturbation methods applied to fluorocarbon radicals, 2008, Spectrochimica Acta Part A, (69), 5, 1294-1300. http://dx.doi.org/10.1016/j.saa.2007.09.040. Copyright: Elsevier B.V., http://www.elsevier.com/
fulltext:postprint: http://liu.diva-portal.org/smash/get/div...

Two types of automatic fitting procedures for EPR spectra of disordered systems have been developed, one based on matrix diagonalisation of a general spin Hamiltonian, the other on 2nd order perturbation theory. The first program is based on a previous Fortran code complemented with a newly written interface in Java to provide user-friendly input and output. The second is intended for the special case of free radicals with several relatively weakly interacting nuclei, in which case the general method becomes slow. A least-squares fitting procedure utilizing analytical or numerical derivatives of the theoretically calculated spectrum with respect to the g- and hyperfine structure (hfs) tensors was used to refine those parameters in both cases. 'Rigid limit' ESR spectra from radicals in organic matrices and in polymers, previously studied experimentally at low temperature, were analysed by both methods. Fluorocarbon anion radicals could be simulated quite accurately with the exact method, whereas automatic fitting on e.g. the c-C4F8- anion radical is only feasible with the 2nd order approximative treatment. Initial values for the 19F hfs tensors estimated by DFT calculations were quite close to the final values. For neutral radicals of the type XCF2CF2• the refinement of the hfs tensors by the exact method worked better than the approximate one. The reasons are discussed. The ability of the fitting procedures to recover the correct magnetic parameters of disordered systems was investigated by fittings to synthetic spectra with known hfs tensors. The exact and the approximate methods are concluded to be complementary, one being general but limited to relatively small systems, the other being a special treatment suited for S=½ systems with several moderately large hfs.

2007
[57] Full text  Thomas Lemaire, Cyrille Berger, Il-Kyun Jung and Simon Lacroix. 2007.
Vision-Based SLAM: Stereo and Monocular Approaches.
International Journal of Computer Vision, 74(3):343–364. Kluwer Academic Publishers.
DOI: 10.1007/s11263-007-0042-3.

Building a spatially consistent model is a key functionality to endow a mobile robot with autonomy. Without an initial map or an absolute localization means, it requires concurrently solving the localization and mapping problems. For this purpose, vision is a powerful sensor, because it provides data from which stable features can be extracted and matched as the robot moves. But it does not directly provide 3D information, which is a difficulty for estimating the geometry of the environment. This article presents two approaches to the SLAM problem using vision: one with stereovision, and one with monocular images. Both approaches rely on a robust interest point matching algorithm that works in very diverse environments. The stereovision-based approach is a classic SLAM implementation, whereas the monocular approach introduces a new way to initialize landmarks. Both approaches are analyzed and compared with extensive experimental results, with a rover and a blimp.

[56] Full text  Patrick Doherty, Witold Lukaszewicz and Andrzej Szalas. 2007.
Communication between agents with heterogeneous perceptual capabilities.
Information Fusion, 8(1):56–69. Elsevier.
DOI: 10.1016/j.inffus.2005.05.006.

In real-world applications, robots and software agents often have to be equipped with higher level cognitive functions that enable them to reason, act and perceive in changing, incompletely known and unpredictable environments. One of the major tasks in such circumstances is to fuse information from various data sources. There are many levels of information fusion, ranging from the fusing of low level sensor signals to the fusing of high level, complex knowledge structures. In a dynamically changing environment even a single agent may have varying abilities to perceive its environment which are dependent on particular conditions. The situation becomes even more complex when different agents have different perceptual capabilities and need to communicate with each other. In this paper, we propose a framework that provides agents with the ability to fuse both low and high level approximate knowledge in the context of dynamically changing environments while taking account of heterogeneous and contextually limited perceptual capabilities. To model limitations on an agent's perceptual capabilities we introduce the idea of partial tolerance spaces. We assume that each agent has one or more approximate databases where approximate relations are represented using lower and upper approximations on sets. Approximate relations are generalizations of rough sets. It is shown how sensory and other limitations can be taken into account when constructing and querying approximate databases for each respective agent. Complex relations inherit the approximativeness of primitive relations used in their definitions. Agents then query these databases and receive answers through the filters of their perceptual limitations as represented by (partial) tolerance spaces and approximate queries. The techniques used are all tractable.

[55] Full text  D.M. Gabbay and Andrzej Szalas. 2007.
Second-order quantifier elimination in higher-order contexts with applications to the semantical analysis of conditionals.
Studia Logica: An International Journal for Symbolic Logic, 87(1):37–50. Springer.
DOI: 10.1007/s11225-007-9075-4.

Second-order quantifier elimination in the context of classical logic emerged as a powerful technique in many applications, including the correspondence theory, relational databases, deductive and knowledge databases, knowledge representation, commonsense reasoning and approximate reasoning. In the current paper we first generalize the result of Nonnengart and Szalas [17] by allowing second-order variables to appear within higher-order contexts. Then we focus on a semantical analysis of conditionals, using the introduced technique and Gabbay's semantics provided in [10] and substantially using a third-order accessibility relation. The analysis is done via finding correspondences between axioms involving conditionals and properties of the underlying third-order relation.

[54] Full text  Patrick Doherty and Andrzej Szalas. 2007.
A correspondence framework between three-valued logics and similarity-based approximate reasoning.
Fundamenta Informaticae, 75(1-4):179–193. IOS Press.

This paper focuses on approximate reasoning based on the use of similarity spaces. Similarity spaces and the approximated relations induced by them are a generalization of the rough set-based approximations of Pawlak [17, 18]. Similarity spaces are used to define neighborhoods around individuals and these in turn are used to define approximate sets and relations. In any of the approaches, one would like to embed such relations in an appropriate logic which can be used as a reasoning engine for specific applications with specific constraints. We propose a framework which permits a formal study of the relationship between approximate relations, similarity spaces and three-valued logics. Using ideas from correspondence theory for modal logics and constraints on an accessibility relation, we develop an analogous framework for three-valued logics and constraints on similarity relations. In this manner, we can provide a tool which helps in determining the proper three-valued logical reasoning engine to use for different classes of approximate relations generated via specific types of similarity spaces. Additionally, by choosing a three-valued logic first, the framework determines what constraints would be required on a similarity relation and the approximate relations induced by it. Such information would guide the generation of approximate relations for specific applications.

2006
[53] Erik Sandewall. 2006.
Systems - Opening up the process.
Nature, ??(??):????. Nature Publishing Group.
DOI: 10.1038/nature04994.

n/a

[52] Full text  Patrik Haslum. 2006.
Improving heuristics through relaxed search - An analysis of TP4 and HSP*a in the 2004 planning competition.
The journal of artificial intelligence research, 25(??):233–267. AAAI Press.
DOI: 10.1613/jair.1885.

The h(m) admissible heuristics for (sequential and temporal) regression planning are defined by a parameterized relaxation of the optimal cost function in the regression search space, where the parameter m offers a trade-off between the accuracy and computational cost of the heuristic. Existing methods for computing the h(m) heuristic require time exponential in m, limiting them to small values (m ≤ 2). The h(m) heuristic can also be viewed as the optimal cost function in a relaxation of the search space: this paper presents relaxed search, a method for computing this function partially by searching in the relaxed space. The relaxed search method, because it computes h(m) only partially, is computationally cheaper and therefore usable for higher values of m. The (complete) h(2) heuristic is combined with partial h(m) heuristics, for m = 3, ..., computed by relaxed search, resulting in a more accurate heuristic. This use of the relaxed search method to improve on the h(2) heuristic is evaluated by comparing two optimal temporal planners: TP4, which does not use it, and HSP*(a), which uses it but is otherwise identical to TP4. The comparison is made on the domains used in the 2004 International Planning Competition, in which both planners participated. Relaxed search is found to be cost effective in some of these domains, but not all. Analysis reveals a characterization of the domains in which relaxed search can be expected to be cost effective, in terms of two measures on the original and relaxed search spaces. In the domains where relaxed search is cost effective, expanding small states is computationally cheaper than expanding large states and small states tend to have small successor states.
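
For orientation, the m = 1 member of this family corresponds to the classical h_max heuristic, which can be computed as a simple fixpoint over atom costs; the sketch below is textbook material under STRIPS-style assumptions, not the paper's relaxed-search method, and the toy domain is invented.

def h_max(atoms, actions, initial_state):
    """actions: list of (cost, preconditions, effects), with preconditions/effects as sets of atoms."""
    cost = {a: (0.0 if a in initial_state else float("inf")) for a in atoms}
    changed = True
    while changed:
        changed = False
        for c, pre, eff in actions:
            pre_cost = max((cost[p] for p in pre), default=0.0)   # cost of the most expensive precondition
            if pre_cost == float("inf"):
                continue
            for e in eff:
                if c + pre_cost < cost[e]:
                    cost[e] = c + pre_cost
                    changed = True
    return cost

atoms = {"at_a", "at_b", "loaded"}
actions = [(1.0, {"at_a"}, {"at_b"}), (1.0, {"at_b"}, {"loaded"})]
print(h_max(atoms, actions, {"at_a"}))   # {'at_a': 0.0, 'at_b': 1.0, 'loaded': 2.0}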

[51] Klas Nordberg, Patrick Doherty, Per-Erik Forssén, Johan Wiklund and Per Andersson. 2006.
A flexible runtime system for image processing in a distributed computational environment for an unmanned aerial vehicle.
International Journal of Pattern Recognition and Artificial Intelligence, 20(5):763–780.
DOI: 10.1142/S0218001406004867.

A runtime system for implementation of image processing operations is presented. It is designed for working in a flexible and distributed environment related to the software architecture of a newly developed UAV system. The software architecture can be characterized at a coarse scale as a layered system, with a deliberative layer at the top, a reactive layer in the middle, and a processing layer at the bottom. At a finer scale each of the three levels is decomposed into sets of modules which communicate using CORBA, allowing system development and deployment on the UAV to be made in a highly flexible way. Image processing takes place in a dedicated module located in the process layer, and is the main focus of the paper. This module has been designed as a runtime system for data flow graphs, allowing various processing operations to be created online and on demand by the higher levels of the system. The runtime system is implemented in Java, which allows development and deployment to be made on a wide range of hardware/software configurations. Optimizations for particular hardware platforms have been made using Java's native interface.

[50] Per Olof Pettersson and Patrick Doherty. 2006.
Probabilistic roadmap based path planning for an autonomous unmanned helicopter.
Journal of Intelligent & Fuzzy Systems, 17(4):395–405. IOS Press.

The emerging area of intelligent unmanned aerial vehicle (UAV) research has shown rapid development in recent years and offers a great number of research challenges for artificial intelligence. For both military and civil applications, there is a desire to develop more sophisticated UAV platforms where the emphasis is placed on development of intelligent capabilities. Imagine a mission scenario where a UAV is supplied with a 3D model of a region containing buildings and road structures and is instructed to fly to an arbitrary number of building structures and collect video streams of each of the building's respective facades. In this article, we describe a fully operational UAV platform which can achieve such missions autonomously. We focus on the path planner integrated with the platform which can generate collision free paths autonomously during such missions. Both probabilistic roadmap-based (PRM) and rapidly exploring random trees-based (RRT) algorithms have been used with the platform. The PRM-based path planner has been tested together with the UAV platform in an urban environment used for UAV experimentation.

[49] Full text  Fredrik Heintz and Patrick Doherty. 2006.
A knowledge processing middleware framework and its relation to the JDL data fusion model.
Journal of Intelligent & Fuzzy Systems, 17(4):335–351. IOS Press.

Any autonomous system embedded in a dynamic and changing environment must be able to create qualitative knowledge and object structures representing aspects of its environment on the fly from raw or preprocessed sensor data in order to reason qualitatively about the environment and to supply such state information to other nodes in the distributed network in which it is embedded. These structures must be managed and made accessible to deliberative and reactive functionalities whose successful operation is dependent on being situationally aware of the changes in both the robotic agent's embedding and internal environments. DyKnow is a knowledge processing middleware framework which provides a set of functionalities for contextually creating, storing, accessing and processing such structures. The framework is implemented and has been deployed as part of a deliberative/reactive architecture for an autonomous unmanned aerial vehicle. The architecture itself is distributed and uses real-time CORBA as a communications infrastructure. We describe the system and show how it can be used to create more abstract entity and state representations of the world which can then be used for situation awareness by an unmanned aerial vehicle in achieving mission goals. We also show that the framework is a working instantiation of many aspects of the JDL data fusion model.

[48] Andrzej Szalas. 2006.
Second-order Reasoning in Description Logics.
Journal of applied non-classical logics, 16(3 - 4):517–530. Éditions Hermès-Lavoisier.
DOI: 10.3166/jancl.16.517-530.

Description logics refer to a family of formalisms concentrated around concepts, roles and individuals. They belong to the most frequently used knowledge representation formalisms and provide a logical basis to a variety of well known paradigms. The main reasoning tasks considered in the area of description logics are those reducible to subsumption. On the other hand, any knowledge representation system should be equipped with a more advanced reasoning machinery. Therefore in the current paper we make a step towards integrating description logics with second-order reasoning. One of the important motivations behind introducing second-order formalism follows from the fact that many forms of commonsense and nonmonotonic reasoning used in AI can be modelled within the second-order logic. To achieve our goal we first extend description logics with a possibility to quantify over concepts. Since one of the main criticisms against the use of second-order formalisms is their complexity, we next propose second-order quantifier elimination techniques applicable to a large class of description logic formulas. Finally we show applications of the techniques, in particular in reasoning with circumscribed concepts and approximated terminological formulas.

[47] Full text  Patrick Doherty, Martin Magnusson and Andrzej Szalas. 2006.
Approximate Databases: A support tool for approximate reasoning.
Journal of applied non-classical logics, 16(1-2):87–118. Éditions Hermès-Lavoisier.
DOI: 10.3166/jancl.16.87-117.
Note: Special issue on implementation of logics

This paper describes an experimental platform for approximate knowledge databases called the Approximate Knowledge Database (AKDB), based on a semantics inspired by rough sets. The implementation is based upon the use of a standard SQL database to store logical facts, augmented with several query interface layers implemented in JAVA through which extensional, intensional and local closed world nonmonotonic queries in the form of crisp or approximate logical formulas can be evaluated tractably. A graphical database design user interface is also provided which simplifies the design of databases, the entering of data and the construction of queries. The theory and semantics for AKDBs is presented in addition to application examples and details concerning the database implementation.

2005
[46] Kevin McGee. 2005.
Enactive Cognitive Science: Part 1, Background and Research Themes.
Constructivist Foundations, 1(1):19–34.
Link: http://www.univie.ac.at/constructivism/j...

Enactive cognitive science is an approach to the study of mind that seeks to explain how the structures and mechanisms of autonomous cognitive systems can arise and participate in the generation and maintenance of viable perceiver-dependent worlds -- rather than more conventional cognitivist efforts, such as the attempt to explain cognition in terms of the "recovery" of (pre-given, timeless) features of The (objectively-existing and accessible) World. As such, enactive cognitive science is resonant with radical constructivism. And as with other scientific efforts conducted within a constructivist orientation, it is broadly "conventional" in its scientific methodology. That is, there is a strong emphasis on testable hypotheses, empirical observation, supportable mechanisms and models, rigorous experimental methods, acceptable criteria of validation, and the like. Nonetheless, this approach to cognitive science does also raise a number of specific questions about the scope of amenable phenomena (e.g. meaning, consciousness, etc.) -- and it also raises questions of whether such a perspective requires an expansion of what is typically considered within the purview of scientific method (e.g. the role of the observer/scientist). This paper is a brief introduction to enactive cognitive science: a description of some of the main research concerns; some examples of how such concerns have been realized in actual research; some of its research methods and proposed explanatory mechanisms and models; some of its potential as both a theoretical and applied science; and several of the major open research questions.

2004
[45] Full text  Fredrik Heintz and Patrick Doherty. 2004.
DyKnow: An approach to middleware for knowledge processing.
Journal of Intelligent & Fuzzy Systems, 15(1):3–13. IOS Press.

Any autonomous system embedded in a dynamic and changing environment must be able to create qualitative knowledge and object structures representing aspects of its environment on the fly from raw or preprocessed sensor data, in order to reason qualitatively about the environment. These structures must be managed and made accessible to deliberative and reactive functionalities which depend on being situationally aware of changes in both the robotic agent's embedding environment and its internal environment. DyKnow is a software framework which provides a set of functionalities for contextually accessing, storing, creating and processing such structures. The system is implemented and has been deployed in a deliberative/reactive architecture for an autonomous unmanned aerial vehicle. The architecture itself is distributed and uses real-time CORBA as a communications infrastructure. We describe the system and show how it can be used in execution monitoring and chronicle recognition scenarios for UAV applications.
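The following minimal Python sketch illustrates the stream-based idea described above: a raw sensor stream is refined into a qualitative fluent stream that higher-level components subscribe to. It is an illustration only; the actual DyKnow framework is CORBA-based and its interfaces are not reproduced here, so all class, stream and threshold names below are hypothetical.

# Minimal sketch (not the DyKnow/CORBA API): a raw sensor stream is refined
# into a qualitative fluent stream that higher-level components subscribe to.

from typing import Callable, List

class FluentStream:
    """A named stream of time-stamped samples with simple subscriptions."""
    def __init__(self, name: str):
        self.name = name
        self.subscribers: List[Callable] = []

    def subscribe(self, callback: Callable):
        self.subscribers.append(callback)

    def push(self, t: float, value):
        for cb in self.subscribers:
            cb(t, value)

# Hypothetical refinement: raw altitude samples -> qualitative "flight phase".
raw_altitude = FluentStream("altitude_raw")
flight_phase = FluentStream("flight_phase")

def classify(t, altitude_m):
    phase = "on_ground" if altitude_m < 1.0 else "airborne"
    flight_phase.push(t, phase)

raw_altitude.subscribe(classify)
flight_phase.subscribe(lambda t, v: print(f"t={t:.1f}s phase={v}"))

for t, alt in [(0.0, 0.2), (1.0, 0.4), (2.0, 5.3)]:
    raw_altitude.push(t, alt)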

[44] Full text  Joakim Gustafsson and Jonas Kvarnström. 2004.
Elaboration tolerance through object-orientation.
Artificial Intelligence, 153(1-2):239–285. Elsevier.
DOI: 10.1016/j.artint.2003.08.004.

Although many formalisms for reasoning about action and change have been proposed in the literature, any concrete examples provided in such articles have primarily consisted of tiny domains that highlight some particular aspect or problem. However, since some of the classical problems are now completely or partially solved and since powerful tools are becoming available, it is now necessary to start modeling more complex domains. This article presents a methodology for handling such domains in a systematic manner using an object-oriented framework and provides several examples of the elaboration tolerance exhibited by the resulting models.

[43] Full text  Bourhane Kadmiry and D. Driankov. 2004.
A fuzzy gain-scheduler for the attitude control of an unmanned helicopter.
IEEE transactions on fuzzy systems, 12(4):502–515. IEEE Computer Society.
DOI: 10.1109/TFUZZ.2004.832539.

In this paper, we address the design of an attitude controller that achieves stable and robust aggressive maneuverability for an unmanned helicopter. The controller proposed is in the form of a fuzzy gain-scheduler, and is used for stable and robust altitude, roll, pitch, and yaw control. The controller is obtained from a realistic nonlinear multiple-input-multiple-output model of a real unmanned helicopter platform, the APID-MK3. The results of this work are illustrated by extensive simulation, showing that the objective of aggressive and robust maneuverability has been achieved.
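As a rough illustration of what fuzzy gain scheduling means in practice (a minimal sketch under assumed numbers, not the APID-MK3 controller or its gains), the Python fragment below blends proportional gains tuned for two hypothetical operating regimes according to fuzzy membership degrees computed from the current forward velocity.

# Illustrative fuzzy gain scheduling sketch (hypothetical regimes and gains,
# not taken from the paper): blend regime-specific gains by fuzzy memberships.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical roll-attitude proportional gains for two operating regimes.
GAINS = {"hover": 2.0, "cruise": 0.8}

def scheduled_gain(velocity_mps):
    """Blend regime gains by their (normalised) membership degrees."""
    w = {
        "hover": tri(velocity_mps, -5.0, 0.0, 10.0),
        "cruise": tri(velocity_mps, 5.0, 15.0, 25.0),
    }
    total = sum(w.values()) or 1.0
    return sum(w[r] * GAINS[r] for r in GAINS) / total

def roll_command(roll_error_rad, velocity_mps):
    """P-controller whose gain is scheduled by the fuzzy blender above."""
    return scheduled_gain(velocity_mps) * roll_error_rad

print(roll_command(0.1, 0.0))   # pure hover gain
print(roll_command(0.1, 7.5))   # blend of hover and cruise gains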

[42] Bourhane Kadmiry and Dimiter Driankov. 2004.
A Fuzzy Flight Controller Combining Linguistic and Model-based Fuzzy Control.
Fuzzy sets and systems (Print), 146(3):313–347. Elsevier.
DOI: 10.1016/j.fss.2003.07.002.

In this paper we address the design of a fuzzy flight controller that achieves stable and robust aggressive manoeuvrability for an unmanned helicopter. The fuzzy flight controller proposed consists of a combination of a fuzzy gain scheduler and a linguistic (Mamdani-type) controller. The fuzzy gain scheduler is used for stable and robust altitude, roll, pitch, and yaw control. The linguistic controller is used to compute the inputs to the fuzzy gain scheduler, i.e., desired values for roll, pitch, and yaw for given desired altitude and horizontal velocities. The flight controller is obtained and tested in simulation using a realistic nonlinear MIMO model of a real unmanned helicopter platform, the APID-MK3.

[41] Full text  Ubbo Visser and Patrick Doherty. 2004.
Issues in Designing Physical Agents for Dynamic Real-Time Environments: World Modeling, Planning, Learning, and Communicating.
The AI Magazine, 25(2):137–138. AAAI Press.

This article discusses a workshop held in conjunction with the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-03), held in Acapulco, Mexico, on 11 August 2003.

2003
[40] Full text  Jonas Kvarnström and Martin Magnusson. 2003.
TALplanner in the Third International Planning Competition: Extensions and control rules.
The journal of artificial intelligence research, 20(??):343–377. AAAI Press.
DOI: 10.1613/jair.1189.

TALplanner is a forward-chaining planner that relies on domain knowledge in the shape of temporal logic formulas in order to prune irrelevant parts of the search space. TALplanner recently participated in the third International Planning Competition, which had a clear emphasis on increasing the complexity of the problem domains being used as benchmark tests and the expressivity required to represent these domains in a planning system. Like many other planners, TALplanner had support for some but not all aspects of this increase in expressivity, and a number of changes to the planner were required. After a short introduction to TALplanner, this article describes some of the changes that were made before and during the competition. We also describe the process of introducing suitable domain knowledge for several of the competition domains.
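To give a concrete flavour of what such a control rule looks like (a schematic example in TAL-style notation, not a formula quoted from the competition domains), one might state in a logistics-like domain that a package already at its goal location must stay there; any search branch violating the formula can then be pruned immediately:

% Schematic, hypothetical control rule: a package at its goal location stays there.
\forall t, p, l \;\big(\mathrm{goal}(\mathit{at}(p,l)) \wedge [t]\,\mathit{at}(p,l) \rightarrow [t+1]\,\mathit{at}(p,l)\big)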

[39] Patrick Doherty, Andrew Skowron, Witold Lukaszewicz and Andrzej Szalas. 2003.
Preface.
Fundamenta Informaticae, 57(2-4):i–iii. IOS Press.

[38] Full text  E Madalinska-Bugaj and Witold Lukaszewicz. 2003.
Formalizing defeasible logic in CAKE.
Fundamenta Informaticae, 57(2-3):193–213. IOS Press.

Due to its efficiency, defeasible logic is one of the most interesting non-monotonic formalisms. Unfortunately, the logic has one major limitation: it does not properly deal with cyclic defeasible rules. In this paper, we provide a new variant of defeasible logic, using CAKE method. The resulting formalism is tractable and properly deals with circular defeasible rules.

[37] Full text  Patrick Doherty, M Grabowski, Witold Lukaszewicz and Andrzej Szalas. 2003.
Towards a framework for approximate ontologies.
Fundamenta Informaticae, 57(2-4):147–165. IOS Press.

Currently, there is a great deal of interest in developing tools for the generation and use of ontologies on the WWW. These knowledge structures are considered essential to the success of the semantic web, the next phase in the evolution of the WWW. Much recent work with ontologies assumes that the concepts used as building blocks are crisp as opposed to approximate. It is a premise of this paper that approximate concepts and ontologies will become increasingly more important as the semantic web becomes a reality. We propose a framework for specifying, generating and using approximate ontologies. More specifically, (1) a formal framework for defining approximate concepts, ontologies and operations on approximate concepts and ontologies is presented. The framework is based on intuitions from rough set theory, (2) algorithms for automatically generating approximate ontologies from traditional crisp ontologies or from large data sets together with additional knowledge are presented. The knowledge will generally be related to similarity measurements between individual objects in the data sets, or constraints of a logical nature which rule out particular constellations of concepts and dependencies in generated ontologies. The techniques for generating approximate ontologies are parameterizable. The paper provides specific instantiations and examples.
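The rough-set intuition referred to above can be summarised by the standard lower and upper approximations of a concept C with respect to an indiscernibility (or similarity) relation R; the gap between the two is the boundary region that makes a concept approximate rather than crisp:

% Standard rough-set approximations of a concept C, with [x]_R the
% indiscernibility class of x:
C^{-} = \{\, x \mid [x]_R \subseteq C \,\}, \qquad
C^{+} = \{\, x \mid [x]_R \cap C \neq \emptyset \,\}, \qquad
\text{boundary: } C^{+} \setminus C^{-}.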

[36] Patrick Doherty and et al. 2003.
2003 AAAI Spring Symposium Series.
The AI Magazine, 24(3):131–140. AAAI Press.

The American Association for Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, presented the 2003 Spring Symposium Series, Monday through Wednesday, 24–26 March 2003, at Stanford University. The titles of the eight symposia were Agent-Mediated Knowledge Management; Computational Synthesis: From Basic Building Blocks to High-Level Functions; Foundations and Applications of Spatiotemporal Reasoning (FASTR); Human Interaction with Autonomous Systems in Complex Environments; Intelligent Multimedia Knowledge Management; Logical Formalization of Commonsense Reasoning; Natural Language Generation in Spoken and Written Dialogue; and New Directions in Question Answering.

2001
[35] Full text  Erik Sandewall. 2001.
On the Design of Software Individuals.
Electronic Transactions on Artificial Intelligence, 5(??):????. Linköpings Universitet.

[34] Full text  Patrick Doherty and Jonas Kvarnström. 2001.
TALPLANNER - A temporal logic-based planner.
The AI Magazine, 22(3):95–102. AAAI Press.

TALPLANNER is a forward-chaining planner that utilizes domain-dependent knowledge to control search in the state space generated by action invocation. The domain-dependent control knowledge, background knowledge, plans, and goals are all represented using formulas in a temporal logic called TAL, which has been developed independently as a formalism for specifying agent narratives and reasoning about them. In the Fifth International Artificial Intelligence Planning and Scheduling Conference planning competition, TALPLANNER exhibited impressive performance, winning the Outstanding Performance Award in the Domain-Dependent Planning Competition. In this article, we provide an overview of TALPLANNER.

2000
[33] Full text  Mark S Frankel, Roger Elliott, Martin Blume, Jean-Manuel Bourgois, Bernt Hugenholtz, Mats G. Lindquist, Sally Morris and Erik Sandewall. 2000.
Defining and Certifying Electronic Publication in Science.
Learned Publishing, 13(4):251–258. Association of Learned and Professional Society Publishers.
Link to article: http://www.ida.liu.se/ext/caisor/archive...

[32] Full text  Peter Jonsson, Patrik Haslum and Christer Bäckström. 2000.
Towards efficient universal planning: A randomized approach.
Artificial Intelligence, 117(1):1–29. Elsevier.
DOI: 10.1016/S0004-3702(99)00103-4.

One of the most widespread approaches to reactive planning is Schoppers' universal plans. We propose a stricter definition of universal plans which guarantees a weak notion of soundness, not present in the original definition, and isolate three different types of completeness that capture different behaviors exhibited by universal plans. We show that universal plans which run in polynomial time and are of polynomial size cannot satisfy even the weakest type of completeness unless the polynomial hierarchy collapses. By relaxing either the polynomial time or the polynomial space requirement, the construction of universal plans satisfying the strongest type of completeness becomes trivial. As an alternative approach, we study randomized universal planning. By considering a randomized version of completeness and a restricted (but nontrivial) class of problems, we show that there exist randomized universal plans running in polynomial time and using polynomial space which are sound and complete for the restricted class of problems. We also report experimental results on this approach to planning, showing that the performance of a randomized planner is not easily compared to that of a deterministic planner.
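The following Python sketch conveys the general flavour of randomized universal planning discussed above: at every step an applicable action is drawn at random and the decision procedure is re-consulted after each state change, so each decision is cheap and no explicit action sequence is stored. It is a toy illustration under invented names, not the construction analysed in the paper.

# Toy sketch of a randomized universal-plan style executor (illustrative only):
# at every step an applicable action is drawn at random until the goal is
# reached or a step bound expires.

import random

def randomized_execute(state, goal, applicable, apply_action, max_steps=1000):
    """Repeatedly pick a random applicable action; return the trace or None."""
    trace = []
    for _ in range(max_steps):
        if goal(state):
            return trace
        actions = applicable(state)
        if not actions:
            return None                      # dead end
        action = random.choice(actions)      # the randomized "plan" decision
        state = apply_action(state, action)
        trace.append(action)
    return None                              # step bound exceeded

# Toy example: walk from 0 to 5 on a line with +1 / -1 moves (never below 0).
result = randomized_execute(
    state=0,
    goal=lambda s: s == 5,
    applicable=lambda s: [+1] if s == 0 else [+1, -1],
    apply_action=lambda s, a: s + a,
)
print(result)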

[31] Full text  Patrick Doherty and Jonas Kvarnström. 2000.
TALplanner: A temporal logic based forward chaining planner.
Annals of Mathematics and Artificial Intelligence, 30(1-4):119–169. Springer.
DOI: 10.1023/A:1016619613658.

We present TALplanner, a forward-chaining planner based on the use of domain-dependent search control knowledge represented as formulas in the Temporal Action Logic (TAL). TAL is a narrative based linear metric time logic used for reasoning about action and change in incompletely specified dynamic environments. TAL is used as the formal semantic basis for TALplanner, where a TAL goal narrative with control formulas is input to TALplanner which then generates a TAL narrative that entails the goal and control formulas. The sequential version of TALplanner is presented. The expressivity of plan operators is then extended to deal with an interesting class of resource types. An algorithm for generating concurrent plans, where operators have varying durations and internal state, is also presented. All versions of TALplanner have been implemented. The potential of these techniques is demonstrated by applying TALplanner to a number of standard planning benchmarks in the literature.

[30] Patrick Doherty, Witold Lukaszewicz and E. Madalinska-Bugaj. 2000.
The PMA and relativizing minimal change for action update.
Fundamenta Informaticae, 44(1-2):95–131. IOS Press.

Recently, a great deal of progress has been made using nonmonotonic temporal logics to formalize reasoning about action and change. In particular, much focus has been placed on the proper representation of non-deterministic actions and the indirect effects of actions. For the latter, the use of causal or fluent dependency rule approaches has been dominant. Although much recent effort has also been spent applying the belief revision/update (BR/U) approach to the action and change domain, there has been less progress in dealing with nondeterministic update and indirect effects represented as integrity constraints. We demonstrate that much is to be gained by cross-fertilization between the two paradigms and we show this in the following manner. We first propose a generalization of the PMA, called the modified PMA (MPMA), which uses intuitions from the TL paradigm to permit representation of nondeterministic update and the use of integrity constraints interpreted as causal or fluent dependency rules. We provide several syntactic characterizations of MPMA, one of which is in terms of a simple temporal logic, and provide a representation theorem showing equivalence between the two. In constructing the MPMA, we discovered a syntactic anomaly which we call the redundant atom anomaly that many TL approaches suffer from. We provide a method for avoiding the problem which is equally applicable across paradigms. We also describe a syntactic characterization of MPMA in terms of Dijkstra semantics. We set up a framework for future generalization of the BR/U approach and conclude with a formal comparison of related approaches.

[29] Full text  Jonas Kvarnström and Patrick Doherty. 2000.
Tackling the qualification problem using fluent dependency constraints.
Computational intelligence, 16(2):169–209. Blackwell Publishing.
DOI: 10.1111/0824-7935.00111.

In the area of formal reasoning about action and change, one of the fundamental representation problems is providing concise modular and incremental specifications of action types and world models, where instantiations of action types are invoked by agents such as mobile robots. Provided the preconditions to the action are true, their invocation results in changes to the world model concomitant with the goal-directed behavior of the agent. One particularly difficult class of related problems, collectively called the qualification problem, deals with the need to find a concise incremental and modular means of characterizing the plethora of exceptional conditions that might qualify an action, but generally do not, without having to explicitly enumerate them in the preconditions to an action. We show how fluent dependency constraints together with the use of durational fluents can be used to deal with problems associated with action qualification using a temporal logic for action and change called TAL-Q. We demonstrate the approach using action scenarios that combine solutions to the frame, ramification, and qualification problems in the context of actions with duration, concurrent actions, nondeterministic actions, and the use of both Boolean and non-Boolean fluents. The circumscription policy used for the combined problems is reducible to the first-order case.

1999
[28] Full text  Lars Karlsson and Joakim Gustafsson. 1999.
Reasoning about Concurrent Interaction.
Journal of logic and computation (Print), 9(5):623–650. Oxford University Press.
DOI: 10.1093/logcom/9.5.623.

In this paper we present TAL-C, a logic of action and change for worlds with action concurrency. TAL-C has a first-order semantics and proof theory. It builds on an existing logic TAL, which includes the use of dependency laws for dealing with ramification. It is demonstrated how TAL-C can represent a number of phenomena related to action concurrency: action duration, how the effects of one action interfere with or enable another action, synergistic effects of concurrent actions, conflicting and cumulative effect interactions, and resource conflicts. A central idea is that actions are not described as having effects that directly alter the world state. Instead, actions produce influences, and the way these influences alter the world state is described in specialized influence laws. Finally, we address how TAL-C narratives can be written to support modularity.

[27] Thomas Drakengren and Marcus Bjäreland. 1999.
Reasoning about action in polynomial time.
Artificial Intelligence, 115(1):1–24. Elsevier.
DOI: 10.1016/S0004-3702(99)00065-X.

Although many formalisms for reasoning about action exist, surprisingly few approaches have taken computational complexity into consideration. The contributions of this article are the following: a temporal logic with a restriction for which deciding satisfiability is tractable, a tractable extension for reasoning about action, and NP-completeness results for the unrestricted problems. Many interesting reasoning problems can be modelled, involving nondeterminism, concurrency and memory of actions. The reasoning process is proved to be sound and complete.

[26] Full text  Patrick Doherty, Jaroslaw Kachniarz and Andrzej Szalas. 1999.
Meta-queries on deductive databases.
Fundamenta Informaticae, 40(1):17–30. IOS Press.
DOI: 10.3233/FI-1999-40102.

We introduce the notion of a meta-query on relational databases and a technique which can be used to represent and solve a number of interesting problems from the area of knowledge representation using logic. The technique is based on the use of quantifier elimination and may also be used to query relational databases using a declarative query language called SHQL (Semi-Horn Query Language), introduced in [6]. SHQL is a fragment of classical first-order predicate logic and allows us to define a query without supplying its explicit definition. All SHQL queries to the database can be processed in polynomial time (both on the size of the input query and the size of the database). We demonstrate the use of the technique in problem solving by structuring logical puzzles from the Knights and Knaves domain as SHQL meta-queries on relational databases. We also provide additional examples demonstrating the flexibility of the technique. We conclude with a description of a newly developed software tool, The Logic Engineer, which aids in the description of algorithms using transformation and reduction techniques such as those applied in the meta-querying approach.

[25] Full text  Patrick Doherty, Witold Lukaszewicz and Andrzej Szalas. 1999.
Declarative PTIME queries for relational databases using quantifier elimination.
Journal of logic and computation (Print), 9(5):737–758. Oxford University Press.
DOI: 10.1093/logcom/9.5.737.

In this paper, we consider the problem of expressing and computing queries on relational deductive databases in a purely declarative query language, called SHQL (Semi-Horn Query Language). Assuming the relational databases in question are ordered, we show that all SHQL queries are computable in PTIME (polynomial time) and the whole class of PTIME queries is expressible in SHQL. Although similar results have been proven for fixpoint languages and extensions to datalog, the claim is that SHQL has the advantage of being purely declarative, where the negation operator is interpreted as classical negation, mixed quantifiers may be used and a query is simply a restricted first-order theory not limited by the rule-based syntactic restrictions associated with logic programs in general. We describe the PTIME algorithm used to compute queries in SHQL which is based in part on quantifier elimination techniques and also consider extending the method to incomplete relational databases using intuitions related to circumscription techniques.

1998
[24] Full text  Patrick Doherty, Joakim Gustafsson, Lars Karlsson and Jonas Kvarnström. 1998.
(TAL) temporal action logics: Language specification and tutorial.
Electronic Transactions on Artificial Intelligence, 2(3-4):273–306.
Link: http://www.ep.liu.se/ej/etai/1998/009/

The purpose of this article is to provide a uniform, lightweight language specification and tutorial for a class of temporal logics for reasoning about action and change that has been developed by our group during the period 1994-1998. The class of logics is collected under the name TAL, an acronym for Temporal Action Logics. TAL has its origins and inspiration in the work with Features and Fluents (FF) by Sandewall, but has diverged from the methodology and approach through the years. We first discuss distinctions and compatibility with FF, move on to the lightweight language specification, and then present a tutorial in terms of an excursion through the different parts of a relatively complex narrative defined using TAL. We conclude with an annotated list of published work from our group. The article tries to strike a reasonable balance between detail and readability, making a number of simplifications regarding narrative syntax and translation to a base logical language. Full details are available in numerous technical reports and articles which are listed in the final section of this article.

[23] Patrick Doherty, Witold Lukaszewicz and Andrzej Szalas. 1998.
General domain circumscription and its effective reductions.
Fundamenta Informaticae, 36(1):23–55. IOS Press.
DOI: 10.3233/FI-1998-3612.

We first define general domain circumscription (GDC) and provide it with a semantics. GDC subsumes existing domain circumscription proposals in that it allows varying of arbitrary predicates, functions, or constants, to maximize the minimization of the domain of a theory. We then show that, for the class of semi-universal theories without function symbols, the domain circumscription of such theories can be constructively reduced to logically equivalent first-order theories by using an extension of the DLS algorithm, previously proposed by the authors for reducing second-order formulas. We also show that, for a certain class of domain circumscribed theories, any arbitrary second-order circumscription policy applied to these theories is guaranteed to be reducible to a logically equivalent first-order theory. In the case of semi-universal theories with functions and arbitrary theories which are not separated, we provide additional results which, although not guaranteed to provide reductions in all cases, do provide reductions in some cases. These results are based on the use of fixpoint reductions.

1997
[22] Dag Fritzson, Peter Fritzson, Patrik Nordling and Tommy Persson. 1997.
Rolling Bearing Simulation on MIMD Computers.
The international journal of high performance computing applications, 11(4):299–313. Sage Publications.
DOI: 10.1177/109434209701100404.
Fulltext: https://doi.org/10.1177/1094342097011004...

Rolling bearing simulations are very computationally intensive. Serial simulations may take weeks to execute, and there is a need to use the potential of parallel computing. The specific structure of the rolling bearing problem is used to develop suitable scheduling strategies. The authors discuss the system of stiff ordinary differential equations arising from a bearing model and show how to numerically solve these ordinary differential equations on parallel computers. Benchmarking results are presented for two test cases on three platforms.

[21] Full text  Patrick Doherty, Witold Lukaszewicz and Andrzej Szalas. 1997.
Computing circumscription revisited: A reduction algorithm.
Journal of automated reasoning, 18(3):297–336. Kluwer Academic Publishers.
DOI: 10.1023/A:1005722130532.

In recent years, a great deal of attention has been devoted to logics of common-sense reasoning. Among the candidates proposed, circumscription has been perceived as an elegant mathematical technique for modeling nonmonotonic reasoning, but difficult to apply in practice. The major reason for this is the second-order nature of circumscription axioms and the difficulty in finding proper substitutions of predicate expressions for predicate variables. One solution to this problem is to compile, where possible, second-order formulas into equivalent first-order formulas. Although some progress has been made using this approach, the results are not as strong as one might desire and they are isolated in nature. In this article, we provide a general method that can be used in an algorithmic manner to reduce certain circumscription axioms to first-order formulas. The algorithm takes as input an arbitrary second-order formula and either returns as output an equivalent first-order formula, or terminates with failure. The class of second-order formulas, and analogously the class of circumscriptive theories that can be reduced, provably subsumes those covered by existing results. We demonstrate the generality of the algorithm using circumscriptive theories with mixed quantifiers (some involving Skolemization), variable constants, nonseparated formulas, and formulas with n-ary predicate variables. In addition, we analyze the strength of the algorithm, compare it with existing approaches, and provide formal subsumption results.
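For orientation, the second-order nature mentioned above is already visible in the usual schema for predicate circumscription (shown here in its simplest form, with no varied symbols); reduction then amounts to eliminating the second-order quantifier:

% Predicate circumscription of P in T(P), simplest form (no varied symbols):
\mathrm{Circ}[T(P); P] \;\equiv\; T(P) \wedge \neg\exists P'\,\big(T(P') \wedge P' < P\big),
\quad\text{where } P' < P \;\equiv\; \forall\bar{x}\,\big(P'(\bar{x}) \rightarrow P(\bar{x})\big) \wedge \neg\forall\bar{x}\,\big(P(\bar{x}) \rightarrow P'(\bar{x})\big).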

1996
[20] Andrzej Szalas. 1996.
On Natural Deduction in First-Order Fixpoint Logics.
Fundamenta Informaticae, 26(1):81–94. IOS Press.
DOI: 10.3233/FI-1996-2616.

In the current paper we present a powerful technique for obtaining natural deduction proof systems for first-order fixpoint logics. The term fixpoint logics refers collectively to a class of logics consisting of modal logics with modalities definable at the meta-level by fixpoint equations on formulas. The class was found very interesting as it contains most logics of programs, with, e.g., dynamic logic, temporal logic and the μ-calculus among them. In this paper we present a technique that allows us to derive automatically natural deduction systems for modal logics from fixpoint equations defining the modalities.
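A standard example of a modality defined at the meta-level by a fixpoint equation, of the kind referred to above, is the "always" operator of linear-time temporal logic, obtained as a greatest fixpoint over the next-time operator:

% "Always" as a greatest fixpoint (\bigcirc denotes the next-time operator):
\Box\varphi \;\equiv\; \nu Z.\,\big(\varphi \wedge \bigcirc Z\big)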

[19] Full text  Patrick Doherty, Witold Lukaszewicz and Andrzej Szalas. 1996.
A reduction result for circumscribed semi-horn formulas.
Fundamenta Informaticae, 28(3,4):261–272. IOS Press.
DOI: 10.3233/FI-1996-283404.

Circumscription has been perceived as an elegant mathematical technique for modeling nonmonotonic and commonsense reasoning, but difficult to apply in practice due to the use of second-order formulas. One proposal for dealing with the computational problems is to identify classes of first-order formulas whose circumscription can be shown to be equivalent to a first-order formula. In previous work, we presented an algorithm which reduces certain classes of second-order circumscription axioms to logically equivalent first-order formulas. The basis for the algorithm is an elimination lemma due to Ackermann. In this paper, we capitalize on the use of a generalization of Ackermann's Lemma in order to deal with a subclass of universal formulas called semi-Horn formulas. Our results subsume previous results by Kolaitis and Papadimitriou regarding a characterization of circumscribed definite logic programs which are first-order expressible. The method for distinguishing which formulas are reducible is based on a boundedness criterion. The approach we use is to first reduce a circumscribed semi-Horn formula to a fixpoint formula which is reducible if the formula is bounded, otherwise not. In addition to a number of other extensions, we also present a fixpoint calculus which is shown to be sound and complete for bounded fixpoint formulas.
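The elimination lemma referred to above is usually stated as follows (in one of its two dual forms): if the predicate variable P does not occur in A and occurs only negatively in B(P), then

% Ackermann's elimination lemma (one of the two dual forms):
\exists P\,\Big(\forall\bar{x}\,\big(A(\bar{x}) \rightarrow P(\bar{x})\big) \wedge B(P)\Big) \;\equiv\; B\big(P := A\big),

where B(P := A) denotes B with every occurrence P(t) replaced by A(t); the dual form, with the implication reversed, applies when P occurs only positively in B(P).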

1994
[18] Andrzej Szalas. 1994.
On an Automated Translation of Modal Proof Rules into Formulas of the Classical Logic.
Journal of Applied Non-Classical Logics, 4(2):119–127. Éditions Hermès-Lavoisier.

In this paper we study correspondences between modal proof rules and classical logic. The method we apply is based on Ackermann's technique of eliminating second-order quantifiers from formulas. We show that the process of finding suitable correspondences can be reduced to a few simple steps. Moreover, the whole technique can be fully mechanized. We thus provide the reader with a powerful tool, useful in automated translations between modal logics and classical logic.

1993
[17] Andrzej Szalas. 1993.
On the Correspondence between Modal and Classical Logic: An Automated Approach.
Journal of logic and computation (Print), 3(6):605–620. Oxford University Press.
DOI: 10.1093/logcom/3.6.605.

The current paper is devoted to automated techniques in correspondence theory. The theory we deal with concerns the problem of finding classical first-order axioms corresponding to propositional modal schemas. Given a modal schema and a semantics-based method of translating propositional modal formulae into classical first-order ones, we try to derive automatically classical first-order formulae characterizing precisely the class of frames validating the schema. The technique we consider can, in many cases, be easily applied even without computer support. Although we mainly concentrate on Kripke semantics, the technique we apply is much more general, as it is based on elimination of second-order quantifiers from formulae. We show many examples of application of the method. These can also serve as new, automated proofs of the considered correspondences.
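Two textbook instances of such correspondences (standard examples, not taken from the paper) are the schema T, which corresponds to reflexivity of the accessibility relation, and the schema 4, which corresponds to its transitivity:

% Classical modal correspondences (frame conditions on the accessibility relation R):
\Box p \rightarrow p \quad\leadsto\quad \forall x\, R(x,x)
\qquad
\Box p \rightarrow \Box\Box p \quad\leadsto\quad \forall x\,\forall y\,\forall z\,\big(R(x,y) \wedge R(y,z) \rightarrow R(x,z)\big)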

[16] Patrick Doherty, Dimiter Driankov and Hans Hellendoorn. 1993.
Fuzzy if-then-unless rules and their implementation.
International Journal of Uncertainty Fuzziness and Knowledge-Based Systems, 1(2):167–182. World Scientific.
DOI: 10.1142/S0218488593000097.

We consider the possibility of generalizing the notion of a fuzzy If-Then rule to take into account its context dependent nature. We interpret fuzzy rules as modeling a forward directed causal relationship between the antecedent and the conclusion, which applies in most contexts, but on occasion breaks down in exceptional contexts. The default nature of the rule is modeled by augmenting the original If-Then rule with an exception part. We then consider the proper semantic correlate to such an addition and propose a ternary relation which satisfies a number of intuitive constraints described in terms of a number of inference rules. In the rest of the paper, we consider implementational issues arising from the unless extension and propose the use of reason maintenance systems, in particular TMS's, where a fuzzy If-Then-Unless rule is encoded into a dependency net. We verify that the net satisfies the constraints stated in the inference schemes and conclude with a discussion concerning the integration of qualitative IN-OUT labelings of the TMS with quantitative degree of membership labelings for the variables in question.

1992
[15] Andrzej Szalas. 1992.
Axiomatizing Fixpoint Logics.
Information Processing Letters, 41(4):175–180. Elsevier.
DOI: 10.1016/0020-0190(92)90175-U.

[14] Patrick Doherty and Witold Lukaszewicz. 1992.
NML-3 - A non-monotonic logic with explicit defaults.
Journal of applied non-classical logics, 2(1):9–48. Éditions Hermès-Lavoisier.

1991
[13] Andrzej Szalas. 1991.
On Strictly Arithmetical Completeness in Logics of Programs.
Theoretical Computer Science, 79(2):341–355. Elsevier.
DOI: 10.1016/0304-3975(91)90336-Z.

We introduce and discuss a notion of strictly arithmetical completeness related to relative completeness of Cook (1978) and arithmetical completeness of Harel (1978). We present a powerful technique of obtaining strictly arithmetical axiomatizations of logics of programs. Given a model-theoretic semantics of a logic and a set of formulae defining (in a metalanguage) its nonclassical connectives, we automatically derive strictly arithmetically complete and sound proof systems for this logic. As examples of application of the technique we obtain new axiomatizations of algorithmic logic, (concurrent) dynamic logic and temporal logic.

1989
[12] Uwe Petermann and Andrzej Szalas. 1989.
On Temporal Logic for Distributed Systems and its Application to Processes Communicating by Interrupts.
Fundamenta Informaticae, 12(2):191–204. IOS Press.

1988
[11] Andrzej Szalas. 1988.
Towards the Temporal Approach to Abstract Data Types.
Fundamenta Informaticae, 11(1):49–64. IOS Press.

[10] Andrzej Szalas. 1988.
An Incompleteness Result in Process Algebra.
Information Processing Letters, 29(2):67–70. Elsevier.
DOI: 10.1016/0020-0190(88)90030-0.

[9] Leszek Holenderski and Andrzej Szalas. 1988.
Propositional Description of Finite Cause-Effect Structures.
Information Processing Letters, 27(3):111–117. Elsevier.
DOI: 10.1016/0020-0190(88)90064-6.

An alternative method of describing semantics of cause-effect structures is presented. It is based on a model of discrete dynamic systems. The model is similar to a condition-event Petri net, differing in the way restrictions on the alterability of actions are imposed.

[8] Andrzej Szalas and Leszek Holenderski. 1988.
Incompleteness of First-Order Temporal Logic with Until.
Theoretical Computer Science, 57(2-3):317–325. Elsevier.
DOI: 10.1016/0304-3975(88)90045-X.

The results presented in this paper concern the axiomatizability problem of first-order temporal logic with linear and discrete time. We show that the logic is incomplete, i.e., it cannot be provided with a finitistic and complete proof system. We show two incompleteness theorems. Although the first one is weaker (it assumes some first-order signature), we decided to present it, for its proof is much simpler and contains an interesting fact that finite sets are characterizable by means of temporal formulas. The second theorem shows that the logic is incomplete independently of any particular signature.

1987
[7] Andrzej Szalas. 1987.
A Complete Axiomatic Characterization of First-Order Temporal Logic of Linear Time.
Theoretical Computer Science, 54(2-3):199–214. Elsevier.
DOI: 10.1016/0304-3975(87)90129-0.

As shown in (Szalas, 1986, 1986, 1987) there is no finitistic and complete axiomatization of First-Order Temporal Logic of linear and discrete time. In this paper we give an infinitary proof system for the logic. We prove that the proof system is sound and complete. We also show that any syntactically consistent temporal theory has a model. As a corollary we obtain that the Downward Theorem of Skolem, Löwenheim and Tarski holds in the case of the considered logic.

[6] Andrzej Szalas. 1987.
Arithmetical Axiomatization of First-Order Temporal Logic.
Information Processing Letters, 26(3):111–116. Elsevier.
DOI: 10.1016/0020-0190(87)90047-0.

1986
[5] Andrzej Szalas. 1986.
Concerning the Semantic Consequence Relation in First-Order Temporal Logic.
Theoretical Computer Science, 47(3):329–334. Elsevier.
DOI: 10.1016/0304-3975(86)90157-X.

In this paper we consider the first-order temporal logic with linear and discrete time. We prove that the set of tautologies of this logic is not arithmetical (i.e., it is neither Σ^0_n nor Π^0_n for any natural number n). Thus we show that there is no finitistic and complete axiomatization of the considered logic.

1985
[4] Uwe Petermann and Andrzej Szalas. 1985.
A Note on PCI: Distributed Processes Communicating by Interrupts.
SIGPLAN notices, 20(3):37–46. ACM Press.
DOI: 10.1145/382284.382390.

[3] Andrzej Szalas and Danuta Szczepanska. 1985.
Exception Handling in Parallel Computations.
SIGPLAN notices, 20(10):95–104. ACM Press.
DOI: 10.1145/382286.382385.

1984
[2] Andrzej Szalas. 1984.
On an Application of Algorithmic Theory of Stacks.
Fundamenta Informaticae, 7(3):378–388. IOS Press.

1981
[1] Andrzej Szalas. 1981.
Algorithmic Logic with Recursive Functions.
Fundamenta Informaticae, 4(4):975–995. IOS Press.


Page responsible: Patrick Doherty
Last updated: 2014-04-30