AIICS Publications: Journal Publications
2025
[180]
2025. TransFusion: Generating long, high fidelity time series using diffusion models with transformers. Machine Learning with Applications, 20(??):100652. DOI: 10.1016/j.mlwa.2025.100652. Fulltext: https://doi.org/10.1016/j.mlwa.2025.1006... The generation of high-quality, long-sequenced time-series data is essential due to its wide range of applications. In the past, standalone Recurrent and Convolutional Neural Network based Generative Adversarial Networks (GAN) were used to synthesize time-series data. However, they are inadequate for generating long sequences of time-series data due to limitations in the architecture, such as difficulties in capturing long-range dependencies, limited temporal coherence, and scalability challenges. Furthermore, GANs are well known for their training instability and mode collapse problem. To address this, we propose TransFusion, a diffusion- and transformer-based generative model to generate high-quality, long-sequence time-series data. We extended the sequence length to 384, surpassing the previous limit, and successfully generated high-quality synthetic data. We also introduce two evaluation metrics to evaluate the quality of the synthetic data as well as its predictive characteristics. TransFusion is evaluated using a diverse set of visual and empirical metrics, consistently outperforming the previous state-of-the-art by a significant margin.
[179]
2025. Symbolic Search for Cost-Optimal Planning with Expressive Model Extensions. The journal of artificial intelligence research, 82(??):1349–1405. AI ACCESS FOUNDATION. DOI: 10.1613/jair.1.16869. Note: Funding Agencies|Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; TAILOR - EU [952215]; Swedish Research Council [2022-06725]; Swiss National Science Foundation (SNSF) as part of the project "Unifying the Theory and Algorithms of Factored State-Space Search" (UTA) In classical planning, the task is to derive a sequence of deterministic actions that changes the current fully-observable world state into one that satisfies a set of goal criteria. Algorithms for classical planning are domain-independent, i.e., they are not limited to a particular application and instead can be used to solve different types of reasoning problems. The main language for modeling such problems is the Planning Domain Definition Language (PDDL). Even though it provides many language features for expressing a wide range of planning tasks, most of today's classical planners, especially optimal ones, support only a small subset of its features. The most widely supported fragment is lifted STRIPS plus types and action costs. While this fragment suffices to model some interesting planning tasks, using it to model more realistic problems often incurs a much higher modeling effort. Even if modeling is possible at all, solving the resulting tasks is often infeasible in practice, as the required encoding size increases exponentially. To address these issues, we show how to support more expressive modeling languages natively in optimal classical planning algorithms. Specifically, we focus on symbolic search, a state-of-the-art search algorithm that operates on sets of world states. We show how to extend symbolic search to support classical planning with conditional effects, axioms, and state-dependent action costs. 
All of these modeling features are expressive in the sense that compiling them away incurs a significant blow-up, so it is often necessary to support them natively. Except for blind (non-symbolic) search, our new symbolic search is the first optimal classical planning algorithm that supports these three modeling extensions in combination, and it even compares favorably to other state-of-the-art approaches that only support a subset of the extensions.
[178]
2025. Emancipating older learners: On the problems of naïve consciousness and the enlightened teacher. European Journal for Research on the Education and Learning of Adults, ??(??):??–??. DOI: 10.3384/rela.2000-7426.00. Fulltext: https://doi.org/10.3384/rela.2000-7426.0... The tension between education and emancipation is a recurrent theme in educational philosophy but remains comparatively neglected in critical educational gerontology (CEG). CEG is a philosophical strand in older adult education aiming at the social emancipation of older learners. CEG links radical emancipation to the outcome of raising older learners’ presumed naïve consciousness with the help of an enlightened teacher. In this essay, we argue that the logic of radical social emancipation features theoretical contradictions at its roots, which translate into empirical ambiguities when employed to raise older learners’ consciousness. We address two problems: (1) older learners’ naïve consciousness and (2) its subsequent dependence on teachers’ critical consciousness. To solve these problems, we propose Jacques Rancière’s logic of educational emancipation, focusing on the individual’s capacity to learn by presuming intellectual equality as a pre-condition for learning rather than an outcome as with CEG. By treating this perspective in older adult education, we hope to emancipate CEG itself by refocusing its scope primarily on the educational realm. |
[177]
2025. Exploring teamwork, trust, and emergency response competence in emergent ad-hoc immediate responder groups: an experimental simulation study. Ergonomics, ??(??):??–??. TAYLOR & FRANCIS LTD. DOI: 10.1080/00140139.2025.2478256. Publication status: Epub ahead of print Note: Funding Agencies|Swedish civil contingencies agency; Swedish National board of health and welfare Fulltext: https://doi.org/10.1080/00140139.2025.24... People in the immediate vicinity of an accident sometimes provide first aid as immediate responders and may form spontaneous groups to provide aid together. Previous research has found conflicting results where first aid is sometimes improved and sometimes unaffected by the cooperation in these groups. This study investigated the effect of competence and the teamwork factors leadership, coordination and swift trust on first aid team performance in immediate responder groups. Ad-hoc groups of three participants with mixed or low emergency response competence acted in a simulated traffic accident. Swift trust and emergency response competence increased first aid performance while leadership and coordination did not. Low competence groups were also found to overestimate their team performance while mixed competence groups were found to underestimate their team performance. Further research should investigate how to support swift trust within these groups and explore potential education interventions for improving the effect of teamwork. |
[176]
2025. The Fine-Grained Complexity of Graph Homomorphism Problems: Towards the Okrasa and Rzążewski Conjecture. Journal of Multiple-Valued Logic and Soft Computing, 44(4):353–383. OLD CITY PUBLISHING INC. DOI: 10.48550/arXiv.2404.09798. In this paper we are interested in the fine-grained complexity of deciding whether there is a homomorphism from an input graph G to a fixed graph H (the H-COLORING problem). The starting point is that these problems can be viewed as constraint satisfaction problems (CSPs), and that (partial) polymorphisms of binary relations are of paramount importance in the study of complexity classes of such CSPs. Thus, we first investigate the expressivity of binary symmetric relations E_H and their corresponding (partial) polymorphisms pPol(E_H). For irreflexive graphs we observe that there is no pair of graphs H and H' such that pPol(E_H) ⊆ pPol(E_H'), unless E_H' = ∅ or H = H'. More generally we show the existence of an n-ary relation R whose partial polymorphisms strictly subsume those of H and such that CSP(R) is NP-complete if and only if H contains an odd cycle of length at most n. Motivated by this we also describe the sets of total polymorphisms of nontrivial cliques, odd cycles, as well as certain cores, and we give an algebraic characterization of projective cores. As a by-product, we settle the Okrasa and Rzążewski conjecture for all graphs of at most 7 vertices.
[175]
2025. From crisis to crisis: individual and organisational empowerment at a university for the third age in Lebanon. Zeitschrift für Weiterbildungsforschung, ??(??):??–??. Springer Nature. DOI: 10.1007/s40955-025-00308-1. Fulltext: https://doi.org/10.1007/s40955-025-00308... In the context of the University for Seniors (UfS) in crisis-hit Lebanon, this paper analysed interview transcripts with 11 older learners (participants) and 12 term-curricula (2018–2024) to answer two research questions: (1) How empowered were older learners to understand the underlying causes of the crisis and to act in the face of it? And (2) how did the UfS respond to the crisis as an empowering organisation? Results indicate unmistakable signs of interactional and behavioural empowerment among participants, who also exhibited instances of disempowerment. Post-crisis, UfS moved online, opted for a hybrid modality, and augmented the share of disjuncture-oriented course offerings to help learners address their new social realities. Hence, UfS acted above and beyond an often-critiqued liberal humanist mission and mustered the know-how to deal with recurrent crises. These results are subsequently discussed from four different angles.
[174]
2025. Deep Learning and X-Ray Imaging Innovations for Pneumonia Infection Diagnosis: Introducing DeepPneuNet. This paper aims to develop a new deep learning model (DeepPneuNet) and evaluate its performance in predicting pneumonia infection diagnosis based on patients' chest x-ray images. We have collected 5856 chest x-ray images labeled as either "pneumonia" or "normal" from a public forum. Before applying the DeepPneuNet model, the necessary feature extraction and feature mapping were performed on the input images. The model is made up of Conv2D layers with a 1 x 1 kernel size, each followed by a ReLU activation function. These layers are in charge of recognizing important patterns and features in the images. A MaxPooling2D operation is applied after every two Conv2D layers to reduce the spatial size of the feature maps. The model is trained with the sparse categorical cross-entropy loss function and optimized with the Adam optimizer at a learning rate of 0.001. DeepPneuNet provides 90.12% accuracy in diagnosing pneumonia infection on a set of real-life test images. With 9,445,586 parameters, the DeepPneuNet model exhibits excellent parameter efficiency. DeepPneuNet is a more lightweight and computationally efficient alternative compared to other pre-trained models. We have compared the accuracy of our proposed DeepPneuNet model in predicting pneumonia diagnosis with that of several state-of-the-art deep learning models. The proposed DeepPneuNet model is more advantageous than existing state-of-the-art models for pneumonia diagnosis with respect to accuracy, precision, recall, F-score, training parameters, and training execution time.
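The layer pattern described in this abstract (1 x 1 Conv2D + ReLU blocks, with spatial max pooling after every two convolutions) can be sketched in a few lines of NumPy. This is an illustrative toy forward pass, not the authors' DeepPneuNet implementation; the input size, channel counts, and random weights are made-up assumptions.

```python
import numpy as np

def conv1x1(x, w, b):
    """1 x 1 convolution: a per-pixel linear map across channels.
    x: (H, W, C_in), w: (C_in, C_out), b: (C_out,)."""
    return x @ w + b

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2d(x, size=2):
    """Non-overlapping 2D max pooling over the spatial dims of (H, W, C)."""
    h, w, c = x.shape
    x = x[: h - h % size, : w - w % size]
    return x.reshape(h // size, size, w // size, size, c).max(axis=(1, 3))

# Toy forward pass echoing the described pattern:
# two Conv2D(1x1)+ReLU blocks, then one MaxPooling2D step.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8, 1))  # stand-in for a chest x-ray patch
h1 = relu(conv1x1(img, rng.standard_normal((1, 4)), np.zeros(4)))
h2 = relu(conv1x1(h1, rng.standard_normal((4, 4)), np.zeros(4)))
out = maxpool2d(h2)
print(out.shape)  # → (4, 4, 4): spatial size halved, 4 channels kept
```

Because a 1 x 1 kernel mixes only channels, not neighbouring pixels, this layer type is what keeps the parameter count so low relative to larger-kernel CNNs.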
[173]
2025. FairXAI - A Taxonomy and Framework for Fairness and Explainability Synergy in Machine Learning. IEEE Transactions on Neural Networks and Learning Systems, ??(??):??–??. IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC. DOI: 10.1109/TNNLS.2025.3528321. Publication status: Epub ahead of print Note: Funding Agencies|Knut and Alice Wallenberg Foundation; ELLIIT Excellence Center at Linkoeping-Lund for Information Technology; TAILOR (A Network for Trustworthy Artificial Intelligence in Europe) Explainable artificial intelligence (XAI) and fair learning have made significant strides in various application domains, including criminal recidivism predictions, healthcare settings, toxic comment detection, automatic speech detection, recommendation systems, and image segmentation. However, these two fields have largely evolved independently. Recent studies have demonstrated that incorporating explanations into decision-making processes enhances the transparency and trustworthiness of AI systems. In light of this, our objective is to conduct a systematic review of FairXAI, which explores the interplay between fairness and explainability frameworks. To commence, we propose a taxonomy of FairXAI that utilizes XAI to mitigate and evaluate bias. This taxonomy will be a base for machine learning researchers operating in diverse domains. Additionally, we undertake an extensive review of existing articles, taking into account factors such as the purpose of the interaction, target audience, and domain and context. Moreover, we outline an interaction framework for FairXAI considering various fairness perceptions and propose a FairXAI wheel that encompasses four core properties that must be verified and evaluated. This will serve as a practical tool for researchers and practitioners, ensuring the fairness and transparency of their AI systems. Furthermore, we identify challenges and conflicts in the interactions between fairness and explainability, which could potentially pave the way for enhancing the responsibility of AI systems. As the inaugural review of its kind, we hope that this survey will inspire scholars to address these challenges by scrutinizing current research in their respective domains.
[172]
2025. Defect attention-based lightweight real-time adaptable surface defect analysis. Ironmaking & Steelmaking, ??(??):??–??. SAGE PUBLICATIONS INC. DOI: 10.1177/03019233241296959. Publication status: Epub ahead of print fulltext:postprint: https://liu.diva-portal.org/smash/get/di... Surface defect diagnosis for steel sheets is essential to maintain the quality of the product. However, this task is challenging due to the high manufacturing speed of 100-140 m/min, which makes it impossible to observe defects visually. Developing a lightweight and real-time adaptable intelligent approach can both overcome this issue and satisfy the requirements of the concept of Industry 4.0. Based on that motivation, this study aimed to satisfy such a demand by introducing an attention-based surface defect identification approach called MobileNetV2-AGCA, together with UNET-AGCA for defect localisation. The effectiveness of the proposed techniques was evaluated on steel strip surface defect images from the NEU-CLS and NEU-DET surface defect datasets, which include six different defects. The Adaptive Graph Channel Attention (AGCA) module was implemented in an edited version of the MobileNetV2 and UNET architectures to improve segmentation accuracy while maintaining or improving the execution speed. According to the experimental results, MobileNetV2-AGCA successfully identified the surface defects with an average accuracy of 99.72%. Regarding defect localisation, UNET-AGCA reached an F1 score of 0.795. It is concluded that the proposed methods provide promising results for application to a real-time surface defect identification and localisation problem.
[171]
2025. Supervised learning approaches for predicting Ebola-Human Protein-Protein interactions. The goal of this research work is to predict protein-protein interactions (PPIs) between the Ebola virus and the human host at risk of infection. Since there are very limited databases available on the Ebola virus, we have prepared a comprehensive database of all the PPIs between Ebola virus and human proteins (EbolaInt). Our work focuses on finding new protein-protein interactions between humans and the Ebola virus using state-of-the-art machine learning techniques. It is essentially a two-class problem with a positive interacting dataset and a negative non-interacting dataset. These datasets contain various sequence-based human protein features, such as amino acid structure, conjoint triads, and domain-related features. In this research, we briefly discuss and use several well-known supervised learning approaches to predict PPIs between human proteins and Ebola virus proteins, including K-nearest neighbours (KNN), random forest (RF), support vector machine (SVM), and a deep feed-forward multi-layer perceptron (DMLP). We have validated our prediction results using gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis. Our goal with this prediction is to compare the accuracy, precision, recall, and F1-score of all models for predicting these PPIs. DMLP gives the highest accuracy, along with the prediction of 2655 potential human target proteins.
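As a minimal illustration of the two-class setup this abstract describes, the following pure-Python sketch runs majority-vote K-nearest-neighbour classification on made-up 2-D vectors standing in for the paper's sequence-derived protein-pair features; it is not the authors' model or data (their real feature vectors, e.g. conjoint triads, are high-dimensional).

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k training points nearest (by Euclidean
    distance) to the query. Labels: 1 = interacting, 0 = non-interacting."""
    neighbours = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# Hypothetical feature vectors: a positive (interacting) cluster and a
# negative (non-interacting) cluster.
train = [((0.10, 0.20), 1), ((0.20, 0.10), 1), ((0.15, 0.25), 1),
         ((0.90, 0.80), 0), ((0.80, 0.90), 0), ((0.85, 0.95), 0)]
print(knn_predict(train, (0.12, 0.18)))  # → 1 (near the positive cluster)
```

The same train/predict interface generalises to the other classifiers the paper compares (RF, SVM, DMLP), which differ only in how the decision boundary is learned.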
[170]
2025. Deep learning inspired game-based cognitive assessment for early dementia detection. Engineering applications of artificial intelligence, 142(??):??–??. PERGAMON-ELSEVIER SCIENCE LTD. DOI: 10.1016/j.engappai.2024.109901. This paper introduces a gaming approach inspired by deep learning for the early detection of dementia. This research employs a convolutional neural network (CNN) model to analyze health metrics and facial images via a cognitive assessment gaming application. We have collected 1000 samples of health metric data from Apollo Diagnostic Center and hospitals, labeled "demented" or "non-demented," to train a modified 1-dimensional convolutional neural network (MOD-1D-CNN) for game level 1. Additionally, a dataset of 1800 facial images, also labeled "demented" or "non-demented," is collected in our work to train a modified 2-dimensional convolutional neural network (MOD-2D-CNN) for game level 2. The MOD-1D-CNN achieved a loss of 0.2692 and an accuracy of 70.50% in identifying dementia traits via health metric data; in comparison, the MOD-2D-CNN achieved a loss of 0.1755 and an accuracy of 95.72% in distinguishing dementia from facial images. A rule-based linear weightage method combines these models and provides a final decision. In addition, a better fusion neural network strategy is also explored in the results analysis with an ablation study. The proposed models are computationally efficient alternatives with significantly fewer parameters than other state-of-the-art models. The performance and parameter counts of these models are compared with those of existing deep learning models, emphasizing the role of AI in enhancing early dementia detection.
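The "rule-based linear weightage" fusion this abstract mentions amounts to a convex combination of the two models' output probabilities. A minimal sketch follows; the weights and threshold here are illustrative assumptions, not values from the paper.

```python
def fuse(p_metrics, p_face, w_metrics=0.3, w_face=0.7, threshold=0.5):
    """Linear-weightage fusion: combine the 'demented' probabilities from
    the health-metric model and the facial-image model into one decision.
    Weights are hypothetical; a sensible choice gives the stronger
    (facial) model the larger weight."""
    score = w_metrics * p_metrics + w_face * p_face
    return "demented" if score >= threshold else "non-demented"

print(fuse(0.40, 0.90))  # facial model confident → "demented"
print(fuse(0.40, 0.20))  # both models low     → "non-demented"
```

In practice such weights would be tuned on a validation set, which is presumably where the paper's ablation against a learned fusion network comes in.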
2024

[169]
2024. Leveraging active queries in collaborative robotic mission planning. INTELLIGENCE & ROBOTICS, 4(1):87–106. OAE PUBLISHING INC. DOI: 10.20517/ir.2024.06. fulltext:print: https://liu.diva-portal.org/smash/get/di... This paper focuses on the high-level specification and generation of 3D models for operational environments, using the idea of active queries as a basis for specifying and generating multi-agent plans for acquiring such models. Assuming an underlying multi-agent system, an operator can specify a request for a particular type of model from a specific region by specifying an active query. This declarative query is then interpreted and executed by collecting already existing data/information in agent systems or, in the active case, by automatically generating high-level mission plans for agents to retrieve and generate parts of the model that do not already exist. The purpose of an active query is to hide the complexity of multi-agent mission plan generation, data transformations, and distributed collection of data/information in underlying multi-agent systems. A description of an active query system, its integration with an existing multi-agent system, and a validation of the active query system in field robotics experimentation using Unmanned Aerial Vehicles and simulations are provided.
[168]
2024. CreoleVal: Multilingual Multitask Benchmarks for Creoles. Transactions of the Association for Computational Linguistics, 12(??):950–978. MIT Press. DOI: 10.1162/tacl_a_00682. Fulltext: https://doi.org/10.1162/tacl_a_00682 fulltext:print: https://liu.diva-portal.org/smash/get/di... Creoles represent an under-explored and marginalized group of languages, with few available resources for NLP research. While the genealogical ties between Creoles and a number of highly resourced languages imply a significant potential for transfer learning, this potential is hampered due to this lack of annotated data. In this work we present CreoleVal, a collection of benchmark datasets spanning 8 different NLP tasks, covering up to 28 Creole languages; it is an aggregate of novel development datasets for reading comprehension, relation classification, and machine translation for Creoles, in addition to a practical gateway to a handful of preexisting benchmarks. For each benchmark, we conduct baseline experiments in a zero-shot setting in order to further ascertain the capabilities and limitations of transfer learning for Creoles. Ultimately, we see CreoleVal as an opportunity to empower research on Creoles in NLP and computational linguistics, and in general, a step towards more equitable language technology around the globe.
[167]
2024. Where is the reflexive 'I' in the Elements of AI? International Journal of Lifelong Education, 43(6):664–681. ROUTLEDGE JOURNALS, TAYLOR & FRANCIS LTD. DOI: 10.1080/02601370.2024.2389107. Note: Funding Agencies|VINNOVA, Sweden's innovation agency grant Fulltext: https://doi.org/10.1080/02601370.2024.23... fulltext:print: https://liu.diva-portal.org/smash/get/di... AI's opportunities and potential high-risk consequences for individuals and societies render mass AI literacy imperative. MOOCs are one effective conduit for its provision. However, MOOCs remain epistemologically one-sided when lifelong learning steadily shifts towards a reflexive epistemology whereby subjectivities and expert knowledge intersect, problematising the latter's relevance to agents when disregarding the first. Addressing the underexplored epistemologies of AI literacy MOOCs and kindled by transformative learning in late modernity, this paper examines how the design of the MOOC Elements of AI prompts reflexivity over AI. A Bloom's taxonomy-based qualitative content analysis categorised 16 learning objectives and 25 assessments according to cognitive processes and knowledge dimensions they serve. Results showed adequate but delayed instruction for reflexivity and a benign constructive misalignment, with assessment hitting higher and wider processes and dimensions than the learning objectives. Following the fleshing out of results, their discussion leads to EAI-specific and general enhancements for identity-based transformative AI literacy MOOCs catering to scale and individuality.
[166]
2024. General Policies, Subgoal Structure, and Planning Width. The journal of artificial intelligence research, 80(??):475–516. AI ACCESS FOUNDATION. DOI: 10.1613/jair.1.15581. Note: Funding Agencies|Alexander von Humboldt Foundation; Federal Ministry for Education and Research of Germany; European Research Council (ERC) [885107, 952215]; Excellence Strategy of the Federal Government; NRW Lander; Knut and Alice Wallenberg (KAW) Foundation fulltext:print: https://liu.diva-portal.org/smash/get/di... It has been observed that many classical planning domains with atomic goals can be solved by means of a simple polynomial exploration procedure, called IW, that runs in time exponential in the problem width, which in these cases is bounded and small. Yet, while the notion of width has become part of state-of-the-art planning algorithms such as BFWS, there is no good explanation for why so many benchmark domains have bounded width when atomic goals are considered. In this work, we address this question by relating bounded width with the existence of general optimal policies that in each planning instance are represented by tuples of atoms of bounded size. We also define the notions of (explicit) serializations and serialized width that have a broader scope, as many domains have a bounded serialized width but no bounded width. Such problems are solved nonoptimally in polynomial time by a variant of the Serialized IW algorithm. Finally, the language of general policies and the semantics of serializations are combined to yield a simple, meaningful, and expressive language for specifying serializations in compact form in the form of sketches, which can be used for encoding domain control knowledge by hand or for learning it from examples. Sketches express general problem decompositions in terms of subgoals, and terminating sketches of bounded width express problem decompositions that can be solved in polynomial time. |
[165]
2024. Counting Complexity for Reasoning in Abstract Argumentation. Journal of Artificial Intelligence Research, 80(??):805–834. AI ACCESS FOUNDATION. DOI: 10.1613/jair.1.16210. Note: Funding Agencies|Austrian Science Fund (FWF) [J4656]; Swedish government; German Research Fund DFG [ME 4279/1-2, ME 4279/3-1]; Society for Research Funding in Lower Austria (GFF) [ExzF-0004] fulltext:print: https://liu.diva-portal.org/smash/get/di... In this paper, we consider counting and projected model counting of extensions in abstract argumentation for various semantics, including credulous reasoning. When asking for projected counts, we are interested in counting the number of extensions of a given argumentation framework, while multiple extensions that are identical when restricted to the projected arguments count as only one projected extension. We establish classical complexity results and parameterized complexity results when the problems are parameterized by the treewidth of the undirected argumentation graph. To obtain upper bounds for counting projected extensions, we introduce novel algorithms that exploit the small treewidth of the undirected argumentation graph of the input instance by dynamic programming. Our algorithms run in double or triple exponential time in the treewidth, depending on the semantics under consideration. Finally, we establish lower bounds for bounded-treewidth algorithms for counting extensions and projected extensions under the exponential time hypothesis (ETH).
[164]
2024. Expressing and Exploiting Subgoal Structure in Classical Planning Using Sketches. The journal of artificial intelligence research, 80(??):171–208. AI ACCESS FOUNDATION. DOI: 10.1613/jair.1.15821. Note: Funding Agencies|ERC Advanced Grant [885107]; EU [952215]; Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; Wallenberg Guest Professor at Linkoping University, Sweden - Swedish Research Council [2022-06725, 2018-05973] fulltext:print: https://liu.diva-portal.org/smash/get/di... Width-based planning methods deal with conjunctive goals by decomposing problems into subproblems of low width. Algorithms like SIW thus fail when the goal is not easily serializable in this way or when some of the subproblems have a high width. In this work, we address these limitations by using a simple but powerful language for expressing finer problem decompositions introduced recently by Bonet and Geffner, called policy sketches. A policy sketch R over a set of Boolean and numerical features is a set of sketch rules C -> E that express how the values of these features are supposed to change. Like general policies, policy sketches are domain general, but unlike policies, the changes captured by sketch rules do not need to be achieved in a single step. We show that many planning domains that cannot be solved by SIW are provably solvable in low polynomial time with the SIWR algorithm, the version of SIW that employs user-provided policy sketches. Policy sketches are thus shown to be a powerful language for expressing domain-specific knowledge in a simple and compact way and a convenient alternative to languages such as HTNs or temporal logics. Furthermore, they make it easy to express general problem decompositions and prove key properties of them like their width and complexity. |
[163]
2024. Grounding Stream Reasoning Research. Transactions on Graph Data and Knowledge (TGDK), 2(1):1–47. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH. DOI: 10.4230/TGDK.2.1.2. Fulltext: https://doi.org/10.4230/TGDK.2.1.2 fulltext:print: https://liu.diva-portal.org/smash/get/di... In the last decade, there has been a growing interest in applying AI technologies to implement complex data analytics over data streams. To this end, researchers in various fields have been organising a yearly event called the "Stream Reasoning Workshop" to share perspectives, challenges, and experiences around this topic. In this paper, the previous organisers of the workshops and other community members provide a summary of the main research results that have been discussed during the first six editions of the event. These results can be categorised into four main research areas: The first is concerned with the technological challenges related to handling large data streams. The second area aims at adapting and extending existing semantic technologies to data streams. The third and fourth areas focus on how to implement reasoning techniques, either considering deductive or inductive techniques, to extract new and valuable knowledge from the data in the stream. This summary is written not only to provide a crystallisation of the field, but also to point out distinctive traits of the stream reasoning community. Moreover, it also provides a foundation for future research by enumerating a list of use cases and open challenges, to stimulate others to join this exciting research area.
[162]
2024. Bt-GAN: Generating Fair Synthetic Healthdata via Bias-transforming Generative Adversarial Networks. The journal of artificial intelligence research, 79(??):1313–1341. AAAI Press. DOI: 10.1613/jair.1.15317. Note: Funding Agencies|Knut and Alice Wallenberg Foundation; ELLIIT Excellence Center at Linkoeping-Lund for Information Technology; TAILOR-an EU project Fulltext: https://doi.org/10.1613/jair.1.15317 fulltext:print: https://liu.diva-portal.org/smash/get/di... Synthetic data generation offers a promising solution to enhance the usefulness of Electronic Healthcare Records (EHR) by generating realistic de-identified data. However, the existing literature primarily focuses on the quality of synthetic health data, neglecting the crucial aspect of fairness in downstream predictions. Consequently, models trained on synthetic EHR have faced criticism for producing biased outcomes in target tasks. These biases can arise from either spurious correlations between features or the failure of models to accurately represent sub-groups. To address these concerns, we present Bias-transforming Generative Adversarial Networks (Bt-GAN), a GAN-based synthetic data generator specifically designed for the healthcare domain. In order to tackle spurious correlations (i), we propose an information-constrained Data Generation Process (DGP) that enables the generator to learn a fair deterministic transformation based on a well-defined notion of algorithmic fairness. To overcome the challenge of capturing exact sub-group representations (ii), we incentivize the generator to preserve sub-group densities through score-based weighted sampling. This approach compels the generator to learn from underrepresented regions of the data manifold. To evaluate the effectiveness of our proposed method, we conduct extensive experiments using the Medical Information Mart for Intensive Care (MIMIC-III) database. 
Our results demonstrate that Bt-GAN achieves state-of-the-art accuracy while significantly improving fairness and minimizing bias amplification. Furthermore, we perform an in-depth explainability analysis to provide additional evidence supporting the validity of our study. In conclusion, our research introduces a novel and professional approach to addressing the limitations of synthetic data generation in the healthcare domain. By incorporating fairness considerations and leveraging advanced techniques such as GANs, we pave the way for more reliable and unbiased predictions in healthcare applications. |
[161]
2024. The 2023 International Planning Competition. The AI Magazine, 45(2):280–296. AMER ASSOC ARTIFICIAL INTELL. DOI: 10.1002/aaai.12169. Note: Funding Agencies|Swedish Research Council [2022-06725]; Some of the computations for the Learning track - Swedish Research Council; University Of Edinburgh [EP/P020267/1]; EPSRC [882500]; European Research Council (ERC) under the European Union [FA8702-19-C-0001, W56KGU-18-D-0004, DFARS 252.227-7013]; MITRE Independent Research and Development Program; Natural Sciences and Engineering Research Council of Canada (NSERC) fulltext:print: https://liu.diva-portal.org/smash/get/di... In this article, we present an overview of the 2023 International Planning Competition. It featured five distinct tracks designed to assess cutting-edge methods and explore the frontiers of planning within these settings: the classical (deterministic) track, the numeric track, the Hierarchical Task Networks (HTN) track, the learning track, and the probabilistic and reinforcement learning track. Each of these tracks evaluated planning methodologies through one or more subtracks, with the goal of pushing the boundaries of current planner performance. To achieve this objective, the competition introduced a combination of well-established challenges and entirely novel ones. Within this article, each track offers an exploration of its historical context, justifies its relevance within the planning landscape, discusses emerging domains and trends, elucidates the evaluation methodology, and ultimately presents the results. |
[160]
2024. IASCAR: Incremental Answer Set Counting by Anytime Refinement. Theory and Practice of Logic Programming, 24(3):502–532. CAMBRIDGE UNIV PRESS. DOI: 10.1017/S1471068424000036. Note: Funding Agencies|BMBF [01IS20056NAVAS]; ELLIIT - Swedish government; Austrian Science Fund (FWF) [J4656, P32830, Y1329]; GWK; Swedish Research Council [2022-06725] Answer set programming (ASP) is a popular declarative programming paradigm with various applications. Programs can easily have many answer sets that cannot be enumerated in practice, but counting still allows quantifying solution spaces. If one counts under assumptions on literals, one obtains a tool to comprehend parts of the solution space, so-called answer set navigation. However, navigating through parts of the solution space requires counting many times, which is expensive in theory. Knowledge compilation compiles instances into representations on which counting works in polynomial time. However, these techniques exist only for conjunctive normal form (CNF) formulas, and compiling ASP programs into CNF formulas can introduce an exponential overhead. This paper introduces a technique to iteratively count answer sets under assumptions on knowledge compilations of CNFs that encode supported models. Our anytime technique uses the inclusion-exclusion principle to improve bounds by over- and undercounting systematically. In a preliminary empirical analysis, we demonstrate promising results. After compiling the input (offline phase), our approach quickly (re)counts. |
[159]
2024. Chasing artificial intelligence in shared socioeconomic pathways. One Earth, 7(1):18–22. CELL PRESS. DOI: 10.1016/j.oneear.2023.12.015. Note: Funding Agencies|Mistra Geopolitics research program [2016/11] The development of artificial intelligence has likely reached an inflection point, with significant implications for how research needs to address emerging technologies and how they drive long-term socioeconomic development of importance for climate change scenarios. |
[158]
2024. Modeling and shadowing paraconsistent BDI agents. Annals of Mathematics and Artificial Intelligence, 92(??):855–876. SPRINGER. DOI: 10.1007/s10472-023-09902-w. Note: Funding Agencies|Narodowe Centrum Nauki [2015/19/B/ST6/02589]; National Science Centre Poland Fulltext: https://doi.org/10.1007/s10472-023-09902... fulltext:print: https://liu.diva-portal.org/smash/get/di... The BDI model of rational agency has been studied for over three decades. Many robust multiagent systems have been developed, and a number of BDI logics have been studied. Following this intensive development phase, the importance of integrating BDI models with inconsistency handling and revision theory has been emphasized. There is also a demand for a tighter connection between BDI-based implementations and BDI logics. In this paper, we address these postulates by introducing a novel, paraconsistent logical BDI model close to implementation, with building blocks that can be represented as SQL/rule-based databases. Importantly, tractability is achieved by reasoning as querying. This stands in sharp contrast to the high complexity of known BDI logics. We also extend belief shadowing, a shallow and lightweight alternative to deep and computationally demanding belief revision, to encompass agents' motivational attitudes. |
[157]
2024. Dual forgetting operators in the context of weakest sufficient and strongest necessary conditions. Artificial Intelligence, 326(??):??–??. ELSEVIER. DOI: 10.1016/j.artint.2023.104036. Note: Funding Agencies|ELLIIT Network Organization for Information and Communication Technology, Sweden; Mahasarakham Development Fund, Mahasarakham University, Thailand; National Science Centre Poland [2017/27/B/ST6/02018] Fulltext: https://doi.org/10.1016/j.artint.2023.10... fulltext:print: https://liu.diva-portal.org/smash/get/di... Forgetting is an important concept in knowledge representation and automated reasoning with widespread applications across a number of disciplines. A standard forgetting operator, characterized in [26] in terms of model-theoretic semantics and primarily focusing on the propositional case, opened up a new research subarea. In this paper, a new operator called weak forgetting, dual to standard forgetting, is introduced, and both together are shown to offer a new, more uniform perspective on forgetting operators in general. Both the weak and standard forgetting operators are characterized in terms of entailment and inference, rather than a model-theoretic semantics. This naturally leads to a useful algorithmic perspective based on quantifier elimination and the use of Ackermann's Lemma and its fixpoint generalization. The strong formal relationship between standard forgetting and strongest necessary conditions, and between weak forgetting and weakest sufficient conditions, is also characterized quite naturally through the entailment-based, inferential perspective used. The framework used to characterize the dual forgetting operators is also generalized to the first-order case and includes useful algorithms for computing first-order forgetting operators in special cases. Practical examples are also included to show the importance of both weak and standard forgetting in modeling and representation. |
2023
[156]
2023. RGS⊕: RDF graph synchronization for collaborative robotics. Autonomous Agents and Multi-Agent Systems, 37(2):??–??. SPRINGER. DOI: 10.1007/s10458-023-09629-2. Note: Funding Agencies|ELLIIT Network Organization for Information and Communication Technology, Sweden [RIT15-0097]; Swedish Foundation for Strategic Research SSF (Smart Systems Project) [B09]; Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation fulltext:print: https://liu.diva-portal.org/smash/get/di... In the context of collaborative robotics, distributed situation awareness is essential for supporting collective intelligence in teams of robots and human agents, where it can be used for both individual and collective decision support. This is particularly important in applications pertaining to emergency rescue and crisis management. During operational missions, data and knowledge are gathered incrementally and in different ways by heterogeneous robots and humans. The purpose of this paper is to describe an RDF Graph Synchronization System called RGS⊕. It is assumed that a dynamic set of agents provide or retrieve knowledge stored in their local RDF Graphs, which are continuously synchronized between agents. The RGS⊕ System was designed to handle unreliable communication and does not rely on a static centralized infrastructure. It is capable of synchronizing knowledge as timely as possible and allows agents to access knowledge while it is incrementally acquired. A deeper empirical analysis of the RGS⊕ System is provided that shows both its efficiency and efficacy. |
[155]
2023. Behind the Scenes of Co-designing AI and LA in K-12 Education. Postdigital Science and Education, 6(??):321–341. DOI: 10.1007/s42438-023-00417-5. fulltext:print: https://liu.diva-portal.org/smash/get/di... This article explores the complex challenges of co-designing an AI- and learning analytics (LA)-integrated learning management system (LMS). While co-design has been proposed as a human-centred design approach for scaling AI and LA adoption, our understanding of how these design processes play out in real-life settings remains limited. This study is based on ethnographic fieldwork in primary and secondary schools and employs a relational materialist approach to trace, visualise, and analyse the increasingly complex and transformative relations between a growing number of actors. The findings shed light on the intricate ecosystem in which AI and LA are being introduced and on the marketisation of K-12 education. Instead of following a rational and sequential approach that can be easily executed, the co-design process emerged as a series of events, shifting from solely generating ideas with teachers to integrating and commercialising the LMS into a school market with an already high prevalence of educational technology (EdTech). AI and LA in education, co-design and data-driven schooling served as negotiating ideas, boundary objects, which maintained connectivity between actors, despite limited AI and LA implementation and the development of a stand-alone app. Even though teachers and students were actively involved in the design decisions, the co-design process did not lead to extensive adoption of the LMS nor did it sufficiently address the ethical issues related to the unrestricted collection of student data. |
[154]
2023. Understanding Social Robots: Attribution of Intentional Agency to Artificial and Biological Bodies. Artificial Life, 29(3):351–366. MIT Press. DOI: 10.1162/artl_a_00404. Note: Funding: ELLIIT; Excellence Center at Linkoping-Lund in Information Technology; Swedish Research Council (VR) [2022-04602] Fulltext: https://doi.org/10.1162/artl_a_00404 fulltext:print: https://liu.diva-portal.org/smash/get/di... Much research in robotic artificial intelligence (AI) and Artificial Life has focused on autonomous agents as an embodied and situated approach to AI. Such systems are commonly viewed as overcoming many of the philosophical problems associated with traditional computationalist AI and cognitive science, such as the grounding problem (Harnad) or the lack of intentionality (Searle), because they have the physical and sensorimotor grounding that traditional AI was argued to lack. Robot lawn mowers and self-driving cars, for example, more or less reliably avoid obstacles, approach charging stations, and so on—and therefore might be considered to have some form of artificial intentionality or intentional directedness. It should be noted, though, that the fact that robots share physical environments with people does not necessarily mean that they are situated in the same perceptual and social world as humans. For people encountering socially interactive systems, such as social robots or automated vehicles, this poses the nontrivial challenge to interpret them as intentional agents to understand and anticipate their behavior but also to keep in mind that the intentionality of artificial bodies is fundamentally different from their natural counterparts. This requires, on one hand, a “suspension of disbelief” but, on the other hand, also a capacity for the “suspension of belief.” This dual nature of (attributed) artificial intentionality has been addressed only rather superficially in embodied AI and social robotics research. 
It is therefore argued that Bourgine and Varela’s notion of Artificial Life as the practice of autonomous systems needs to be complemented with a practice of socially interactive autonomous systems, guided by a better understanding of the differences between artificial and biological bodies and their implications in the context of social interactions between people and technology. |
[153]
2023. Wavelet gated multiformer for groundwater time series forecasting. Scientific Reports, 13(1):??–??. NATURE PORTFOLIO. DOI: 10.1038/s41598-023-39688-0. Note: Funding: Carl Tryggers Foundation; VFN (Verifiering for Nyttiggorande beslutar) at Linkopings University; Swedish National Infrastructure for Computing (SNIC) [SNIC 2022/22-843] Fulltext: https://doi.org/10.1038/s41598-023-39688... fulltext:print: https://liu.diva-portal.org/smash/get/di... Developing accurate models for groundwater control is paramount for planning and managing life-sustaining resources (water) from aquifer reservoirs. Significant progress has been made toward designing and employing deep-forecasting models to tackle the challenge of multivariate time-series forecasting. However, most of these models were originally designed for natural language processing and computer vision tasks. We propose the Wavelet Gated Multiformer, which combines the strength of a vanilla Transformer with the Wavelet Crossformer that employs inner wavelet cross-correlation blocks. The self-attention mechanism (Transformer) computes the relationship between inner time-series points, while the cross-correlation finds trending periodicity patterns. The multi-headed encoder is channeled through a mixing gate (linear combination) of sub-encoders (Transformer and Wavelet Crossformer) that output trending signatures to the decoder. This process improved the model’s predictive capabilities, reducing Mean Absolute Error by 31.26% compared to the second-best performing transformer-like models evaluated. We have also used the Multifractal Detrended Cross-Correlation Heatmaps (MF-DCCHM) to extract cyclical trends from pairs of stations across multifractal regimes by denoising the pair of signals with Daubechies wavelets. 
Our dataset was obtained from a network of eight wells for groundwater monitoring in Brazilian aquifers, six rainfall stations, eleven river flow stations, and three weather stations with atmospheric pressure, temperature, and humidity sensors. |
[152]
2023. How Children With Mild Intellectual Disability Experience Self-driving Buses: In Support of Agency. Transaction on Transport Sciences, 14(2):21–31. Palacky University Olomouc. DOI: 10.5507/tots.2023.002. Note: Funding: Wallenberg AI, Autonomous Systems and Software Program -Humanities and Society (WASP-HS) Fulltext: https://doi.org/10.5507/tots.2023.002 fulltext:print: https://liu.diva-portal.org/smash/get/di... Emerging technology for public transportation is often not fully aligned with an inclusive design strategy. Many people with intellectual disability experience their needs and desires not being fully considered. Responding to this problem, the purpose of this study is to investigate how children with mild intellectual disability experience self-driving buses. On each bus, a person called a "safety driver" monitors the ride and takes control if a problematic situation arises. The purpose is also to investigate what roles support persons and safety drivers play. In addition, the research aims to propose improvements in how the design of these self-driving buses can better motivate children with intellectual disability to use them in support of their agency. To address this, we arranged and studied seven rides on self-driving buses for 16 children diagnosed with mild intellectual disability, and their support persons. Interviews with the children were held after the rides, and both the rides and interviews were video recorded. The analysis was in part inductive but also employed a theory based on motivation: self-determination theory. For several children, the bus worked as a vehicle for a social sightseeing tour of the local environment, and the current design did not hinder such an experience. Overall, many of the children had a positive experience, but there is room for improvement regarding the design of the buses. Some children expressed curiosity, and a few expressed frustration with how the bus behaved in traffic. 
For instance, it was difficult for the children to understand why the bus braked for things that were hard for them to perceive. From observation, it appears that the accompanying support person and safety driver played an important role in making children safe and shaping the social environment on the bus. The support persons were also essential for some children to ride the bus at all. The safety driver provided the children with information about how the bus worked. Both the safety driver and the support person had a positive impact on the children's experience. To meet the children's needs and skills, and to improve their motivation for riding the buses again, the buses need to decelerate less abruptly, have easier and consistently designed seatbelts, and communicate what they do, see, and signal more clearly. We argue that further studies at this level of detail are crucial to ensure that new technologies are indeed designed for everyone. |
[151]
2023. Student and teacher co-agency when combining CT with arts and design in a cross-curricular project. Computers and Education Open, 4(??):??–??. ELSEVIER. DOI: 10.1016/j.caeo.2023.100132. fulltext:print: https://liu.diva-portal.org/smash/get/di... Technological development has raised awareness of the importance of digital competence and computational thinking (CT) for understanding the digital world and has resulted in revised curricula in many countries. In Finland, a new curriculum for grades 1–9 came into force in 2016, introducing digital competence (including programming) to be integrated in other subjects. Most teachers lack prior experience in programming, and there is a need for suitable instructional models. This article presents a cross-curricular teaching sequence and the results from a case study conducted in four Finnish schools. Students in grades 4–6 collaboratively worked on a project combining arts, design and CT with other subjects. The results show that students demonstrated several CT abilities while working on their projects, in particular creativity, tinkering and debugging. The findings also indicate that teachers and students learned together (co-agency) and suggest that models like the teaching sequence can help and encourage teachers to integrate programming and CT in a cross-curricular manner. Still, the teachers’ knowledge, ambition level and understanding of the task at hand, as well as the organizational support, appear to play a notable role when planning and carrying out projects of this kind. While CT is commonly seen as developed through programming, the teaching sequence seems to have fostered CT abilities through the project as a whole, with programming playing the role of a tool or a glue depending on the time available, and the students’ skill and ambition level. |
[150]
2023. Three Levels of AI Transparency. Computer, 56(2):93–100. IEEE COMPUTER SOC. DOI: 10.1109/MC.2022.3213181. Note: Funding Agencies|AI Transparency and Consumer Trust; Wallenberg AI; Autonomous Systems and Software Program-Humanities and Society (WASP-HS) The concept of transparency is fragmented in artificial intelligence (AI) research, often limited to transparency of the algorithm alone. We propose that AI transparency operates on three levels (algorithmic, interaction, and social), all of which need to be considered to build trust in AI. We expand upon these levels using current research directions, and identify research gaps resulting from the conceptual fragmentation of AI transparency highlighted within the context of the three levels. |
[149]
2023. Hierarchical goals contextualize local reward decomposition explanations. Neural Computing & Applications, 35(??):16693–16704. Springer London Ltd. DOI: 10.1007/s00521-022-07280-8. Note: Funding Agencies|Orebro University; Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; Federal Ministry for Economic Affairs and Climate [FKZ 20X1905A-D] fulltext:print: https://liu.diva-portal.org/smash/get/di... One-step reinforcement learning explanation methods account for individual actions but fail to consider the agent's future behavior, which can make their interpretation ambiguous. We propose to address this limitation by providing hierarchical goals as context for one-step explanations. By considering the current hierarchical goal as a context, one-step explanations can be interpreted with higher certainty, as the agent's future behavior is more predictable. We combine reward decomposition with hierarchical reinforcement learning into a novel explainable reinforcement learning framework, which yields more interpretable, goal-contextualized one-step explanations. With a qualitative analysis of one-step reward decomposition explanations, we first show that their interpretability is indeed limited in scenarios with multiple different optimal policies, a characteristic shared by other one-step explanation methods. Then, we show that our framework retains high interpretability in such cases, as the hierarchical goal can be considered as context for the explanation. To the best of our knowledge, our work is the first to investigate hierarchical goals not as an explanation directly but as additional context for one-step reinforcement learning explanations. |
2022
[148]
2022. Multi-fractal detrended cross-correlation heatmaps for time series analysis. Scientific Reports, 12(1):??–??. NATURE PORTFOLIO. DOI: 10.1038/s41598-022-26207-w. Note: Funding Agencies|Swedish National Infrastructure for Computing (SNIC) [SNIC 2022/22-843] fulltext:print: https://liu.diva-portal.org/smash/get/di... Complex systems in biology, climatology, medicine, and economy hold emergent properties such as non-linearity, adaptation, and self-organization. These emergent attributes can derive from large-scale relationships, connections, and interactive behavior despite not being apparent from their isolated components. It is possible to better comprehend complex systems by analyzing cross-correlations between time series. However, the accumulation of non-linear processes induces multiscale structures and, therefore, a spectrum of power-law exponents (the fractal dimension) and distinct cyclical patterns. We propose the Multifractal detrended cross-correlation heatmaps (MF-DCCHM) based on the DCCA cross-correlation coefficients with sliding boxes, a systematic approach capable of mapping the relationships between fluctuations of signals on different scales and regimes. The MF-DCCHM uses the integrated series of magnitudes, sliding boxes with sizes of up to 5% of the entire series, and an average of DCCA coefficients on top of the heatmaps for the local analysis. The heatmaps have shown the same cyclical frequencies from the spectral analysis across different multifractal regimes. Our dataset is composed of sales and inventory from the Brazilian automotive sector and macroeconomic descriptors, namely the Gross Domestic Product (GDP) per capita, Nominal Exchange Rate (NER), and the Nominal Interest Rate (NIR) from the Central Bank of Brazil. Our results indicate cross-correlated patterns that can be directly compared with the power-law spectra for multiple regimes. 
We have also identified cyclical patterns of high intensities that coincide with the Brazilian presidential elections. The MF-DCCHM uncovers non-explicit cyclic patterns, quantifies the relations of two non-stationary signals (noise effect removed), and has outstanding potential for mapping cross-regime patterns in multiple domains. |
[147]
2022. Nucleus Composition in Transition-based Dependency Parsing. Computational linguistics - Association for Computational Linguistics (Print), 48(4):849–886. MIT PRESS. DOI: 10.1162/coli_a_00450. Note: Funding Agencies|Swedish Research Council [2016-01817] fulltext:print: https://liu.diva-portal.org/smash/get/di... Dependency-based approaches to syntactic analysis assume that syntactic structure can be analyzed in terms of binary asymmetric dependency relations holding between elementary syntactic units. Computational models for dependency parsing almost universally assume that an elementary syntactic unit is a word, while the influential theory of Lucien Tesnière instead posits a more abstract notion of nucleus, which may be realized as one or more words. In this article, we investigate the effect of enriching computational parsing models with a concept of nucleus inspired by Tesnière. We begin by reviewing how the concept of nucleus can be defined in the framework of Universal Dependencies, which has become the de facto standard for training and evaluating supervised dependency parsers, and explaining how composition functions can be used to make neural transition-based dependency parsers aware of the nuclei thus defined. We then perform an extensive experimental study, using data from 20 languages to assess the impact of nucleus composition across languages with different typological characteristics, and utilizing a variety of analytical tools including ablation, linear mixed-effects models, diagnostic classifiers, and dimensionality reduction. The analysis reveals that nucleus composition gives small but consistent improvements in parsing accuracy for most languages, and that the improvement mainly concerns the analysis of main predicates, nominal dependents, clausal dependents, and coordination structures. 
Significant factors explaining the rate of improvement across languages include entropy in coordination structures and frequency of certain function words, in particular determiners. Analysis using dimensionality reduction and diagnostic classifiers suggests that nucleus composition increases the similarity of vectors representing nuclei of the same syntactic type. |
[146]
2022. Tractable Parsing for CCGs of Bounded Degree. Computational linguistics - Association for Computational Linguistics (Print), 48(3):593–633. MIT PRESS. DOI: 10.1162/coli_a_00441. Note: Funding Agencies|German Research Foundation (DFG) Research Training Group [GRK 1763]; Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation fulltext:print: https://liu.diva-portal.org/smash/get/di... Unlike other mildly context-sensitive formalisms, Combinatory Categorial Grammar (CCG) cannot be parsed in polynomial time when the size of the grammar is taken into account. Refining this result, we show that the parsing complexity of CCG is exponential only in the maximum degree of composition. When that degree is fixed, parsing can be carried out in polynomial time. Our finding is interesting from a linguistic perspective because a bounded degree of composition has been suggested as a universal constraint on natural language grammar. Moreover, ours is the first complexity result for a version of CCG that includes substitution rules, which are used in practical grammars but have been ignored in theoretical work. |
[145]
2022. Accelerating the impact of artificial intelligence in mental healthcare through implementation science. Implementation research and practice, 3(??):??–??. Sage Publications. DOI: 10.1177/26334895221112033. fulltext:print: https://liu.diva-portal.org/smash/get/di... BACKGROUND: The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps regarding how to implement and best use AI to add value to mental healthcare services, providers, and consumers. The aim of this paper is to identify challenges and opportunities for AI use in mental healthcare and to describe key insights from implementation science of potential relevance to understand and facilitate AI implementation in mental healthcare. METHODS: The paper is based on a selective review of articles concerning AI in mental healthcare and implementation science. RESULTS: Research in implementation science has established the importance of considering and planning for implementation from the start, the progression of implementation through different stages, and the appreciation of determinants at multiple levels. Determinant frameworks and implementation theories have been developed to understand and explain how different determinants impact on implementation. AI research should explore the relevance of these determinants for AI implementation. Implementation strategies to support AI implementation must address determinants specific to AI implementation in mental health. There might also be a need to develop new theoretical approaches or augment and recontextualize existing ones. 
Implementation outcomes may have to be adapted to be relevant in an AI implementation context. CONCLUSION: Knowledge derived from implementation science could provide an important starting point for research on implementation of AI in mental healthcare. This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research. However, when taking advantage of the existing knowledge basis, it is important to also be explorative and study AI implementation in health and mental healthcare as a new phenomenon in its own right, since implementing AI may differ in various ways from implementing evidence-based practices in terms of what implementation determinants, strategies, and outcomes are most relevant. Plain Language Summary: The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps concerning how to implement and best use AI to add value to mental healthcare services, providers, and consumers. This paper is based on a selective review of articles concerning AI in mental healthcare and implementation science, with the aim to identify challenges and opportunities for the use of AI in mental healthcare and describe key insights from implementation science of potential relevance to understand and facilitate AI implementation in mental healthcare. AI offers opportunities for identifying the patients most in need of care or the interventions that might be most appropriate for a given population or individual. AI also offers opportunities for supporting a more reliable diagnosis of psychiatric disorders and ongoing monitoring and tailoring during the course of treatment. 
However, AI implementation challenges exist at organizational/policy, individual, and technical levels, making it relevant to draw on implementation science knowledge for understanding and facilitating implementation of AI in mental healthcare. Knowledge derived from implementation science could provide an important starting point for research on AI implementation in mental healthcare. This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research. |
[144]
2022. Digital Conceptual History and the Emergence of a Globalized Climate Imaginary. Contributions to the History of Concepts, 17(2):95–122. BERGHAHN JOURNALS. DOI: 10.3167/choc.2022.170205. This article offers an exploratory quantitative analysis of the conceptual career of climate in US English over the period 1800-2010. Our aim is to qualify two closely related arguments circulating in Environmental Humanities scholarship regarding the concept's history, namely, first, that we only started to think of climate as a global entity after the introduction of general circulation models during the final quarter of the twentieth century, and, second, that climatic change only became an issue of environmental concern once scientists began to approach climate as a global model. While we do not dispute that the computer revolution resulted in a significantly new understanding of climate, our analysis points to a longer process of singularization and growing abstraction starting in the early nineteenth century that might help to nuance and deepen insights developed in environmental history. |
[143] |
2022. Still w(AI)ting for the automation of teaching: An exploration of machine learning in Swedish primary education using Actor-Network Theory. European Journal of Education, 57(4):584–600. Wiley-Blackwell Publishing Inc.. DOI: 10.1111/ejed.12526. Fulltext: https://doi.org/10.1111/ejed.12526 fulltext:print: https://liu.diva-portal.org/smash/get/di... Machine learning and other artificial intelligence (AI) technologies are predicted to play a transformative role in primary education, where these technologies for automation and personalization are now being introduced to classroom instruction. This article explores the rationales and practices by which machine learning and AI are emerging in schools. We report on ethnographic fieldwork in Sweden, where a machine learning teaching aid in mathematics, the AI Engine, was tried out by 22 teachers and more than 250 primary education students. By adopting an Actor-Network Theory approach, the analysis focuses on the interactions within the network of heterogeneous actors bound by the AI Engine as an obligatory passage point. The findings show how the actions and accounts emerging within the complex ecosystem of human actors compensate for the unexpected and undesirable algorithmic decisions of the AI Engine. We discuss expectations about AI in education, contradictions in how the AI Engine worked and uncertainties about how machine learning algorithms ‘learn’ and predict. These factors contribute to our understanding of the potential of automation and personalisation—a process that requires continued re-negotiations. The findings are presented in the form of a fictional play in two acts, an ethnodrama. The ethnodrama highlights controversies in the use of AI in education, such as the lack of transparency in algorithmic decision-making—and how this can play out in real-life learning contexts. The findings of this study contribute to a better understanding of AI in primary education. |
[142] |
2022. FOND Planning with Explicit Fairness Assumptions. The journal of artificial intelligence research, 74(??):887–916. AI Access Foundation ; AAAI Press. DOI: 10.1613/jair.1.13599. Note: Funding Agencies|ERC Advanced Grant [885107]; EU [952215]; Knut and Alice Wallenberg (KAW) Foundation under the WASP program fulltext:print: http://liu.diva-portal.org/smash/get/div... We consider the problem of reaching a propositional goal condition in fully-observable non-deterministic (FOND) planning under a general class of fairness assumptions that are given explicitly. The fairness assumptions are of the form A/B and say that state trajectories that contain infinite occurrences of an action a from A in a state s and finite occurrences of actions from B must also contain infinite occurrences of action a in s followed by each one of its possible outcomes. The infinite trajectories that violate this condition are deemed unfair, and the solutions are policies for which all the fair trajectories reach a goal state. We show that strong and strong-cyclic FOND planning, as well as QNP planning, a planning model introduced recently for generalized planning, are all special cases of FOND planning with fairness assumptions of this form, which can also be combined. FOND+ planning, as this form of planning is called, combines the syntax of FOND planning with some of the versatility of LTL for expressing fairness constraints. A sound and complete FOND+ planner is implemented by reducing FOND+ planning to answer set programs, and its performance is evaluated in comparison with FOND and QNP planners, and LTL synthesis tools. Two other FOND+ planners are introduced as well, which are more scalable but not complete. |
[141] |
2022. Scalar reward is not enough: a response to Silver, Singh, Precup and Sutton (2021). Autonomous Agents and Multi-Agent Systems, 36(2):??–??. Springer. DOI: 10.1007/s10458-022-09575-5. Note: Funding: Flemish Government; National Cancer Institute of the U.S. National Institutes of Health [1R01CA240452-01A1]; Research Foundation Flanders (FWO) [1242021N]; Swedish Governmental Agency for Innovation Systems [NFFP7/2017-04885]; Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; National University of Ireland Galway Hardiman Scholarship; FAPERGS [19/2551-0001277-2]; FAPESP [2020/05165-1] fulltext:print: http://liu.diva-portal.org/smash/get/div... The recent paper “Reward is Enough” by Silver, Singh, Precup and Sutton posits that the concept of reward maximisation is sufficient to underpin all intelligence, both natural and artificial, and provides a suitable basis for the creation of artificial general intelligence. We contest the underlying assumption of Silver et al. that such reward can be scalar-valued. In this paper we explain why scalar rewards are insufficient to account for some aspects of both biological and computational intelligence, and argue in favour of explicitly multi-objective models of reward maximisation. Furthermore, we contend that even if scalar reward functions can trigger intelligent behaviour in specific cases, this type of reward is insufficient for the development of human-aligned artificial general intelligence due to unacceptable risks of unsafe or unethical behaviour. |
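The core distinction in this entry, scalar versus explicitly multi-objective (vector-valued) reward, can be made concrete with a small sketch. This is not code from the paper; the policy names and return values below are invented for illustration. With vector-valued returns, some policies are Pareto-incomparable, so no single scalar ordering can rank them without committing to a trade-off between objectives:

```python
def dominates(u, v):
    """True if reward vector u Pareto-dominates v:
    at least as good on every objective, strictly better on one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

# Hypothetical per-policy expected returns on two objectives
# (performance, safety).
returns = {"A": (10.0, 2.0), "B": (6.0, 8.0), "C": (5.0, 1.0)}

# Policies not dominated by any other policy form the Pareto front.
# A and B are incomparable (each is better on one objective),
# while C is dominated by both.
pareto = {p for p, u in returns.items()
          if not any(dominates(v, u) for q, v in returns.items() if q != p)}
```

Any fixed linear scalarization would pick one of A or B, but that choice encodes a preference between the objectives rather than something implied by the returns themselves, which is the paper's argument for explicitly multi-objective models.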
[140] |
2022. A landscape and implementation framework for probabilistic rough sets using PROBLOG. Information Sciences, 593(??):546–576. Elsevier Science Inc. DOI: 10.1016/j.ins.2021.12.062. Note: Funding Agencies|ELLIIT Network Organization for Information and Communication Technology, Sweden; Swedish Foundation for Strategic Research SSF [RIT15-0097]; Guangdong Department of Science and Technology, China [2020A1313030098]; National Science Centre Poland [2017/27/B/ST6/02018] fulltext:print: http://liu.diva-portal.org/smash/get/div... Reasoning about uncertainty is one of the main cornerstones of Knowledge Representation. More recently, combining logic with probability has been of major interest. Rough set methods have been proposed for modeling incompleteness and imprecision based on indiscernibility and its generalizations, and there is a large body of work in this direction. More recently, the classical theory has been generalized to include probabilistic rough set methods, of which there is also a great variety of proposals. Pragmatic, easily accessible, and easy-to-use tools for specification and reasoning with this wide variety of methods are lacking. It is the purpose of this paper to fill that gap, with a focus on probabilistic rough set methods. A landscape of (probabilistic) rough set reasoning methods and the variety of choices involved in specifying them is surveyed first. While doing this, an abstract generalization of all the considered approaches is derived which subsumes each of the methods. It is then shown how, via this generalization, one can specify and reason about any of these methods using PROBLOG, a popular and widely used probabilistic logic programming language based on Prolog. The paper also considers new techniques in this context, such as the use of probabilistic target sets when defining rough sets and the use of partially specified base relations that are also probabilistic. 
Additionally, probabilistic approaches using tolerance spaces are proposed. The paper includes a rich set of examples and provides a framework based on a library of generic PROBLOG relations that makes specification of any of these methods straightforward, efficient, and compact. Complete, ready-to-run PROBLOG code is included in the Appendix for all examples considered. |
[139] |
2022. A practical guide to multi-objective reinforcement learning and planning. Autonomous Agents and Multi-Agent Systems, 36(1):??–??. Springer. DOI: 10.1007/s10458-022-09552-y. Note: Funding: Fonds voor Wetenschappelijk Onderzoek (FWO) [1SA2820N]; Flemish Government; European Commission; FWO [iBOF/21/027]; National University of Ireland Galway Hardiman Scholarship; Fundacao de Amparo a Ciencia e Tecnologia do Estado do Rio Grande do Sul (FAPERGS) [19/2551-0001277-2]; Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP) [2020/05165-1]; Swedish Governmental Agency for Innovation Systems (Vinnova) [NFFP7/2017-04885]; Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; LIFT - Dutch Research Council (NWO) [019.011]; 2017 Microsoft Research PhD Scholarship Program; 2020 Microsoft Research EMEA PhD Award fulltext:print: http://liu.diva-portal.org/smash/get/div... Real-world sequential decision-making tasks are generally complex, requiring trade-offs between multiple, often conflicting, objectives. Despite this, the majority of research in reinforcement learning and decision-theoretic planning either assumes only a single objective, or that multiple objectives can be adequately handled via a simple linear combination. Such approaches may oversimplify the underlying problem and hence produce suboptimal results. This paper serves as a guide to the application of multi-objective methods to difficult problems, and is aimed at researchers who are already familiar with single-objective reinforcement learning and planning methods who wish to adopt a multi-objective perspective on their research, as well as practitioners who encounter multi-objective decision problems in practice. It identifies the factors that may influence the nature of the desired solution, and illustrates by example how these influence the design of multi-objective decision-making systems for complex problems. |
[138] |
2022. Partnership on AI, Data, and Robotics. n/a |
[137] |
2022. Design of simulation-based pilot training systems using machine learning agents. Aeronautical Journal, 126(1300):907–931. Cambridge University Press. DOI: 10.1017/aer.2022.8. Note: Funding Agencies|Swedish Governmental Agency for Innovation Systems (Vinnova) [NFFP7/2017-04885]; Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; Swedish Research Council; European Commission [2020/5-230] fulltext:print: https://liu.diva-portal.org/smash/get/di... The high operational cost of aircraft, limited availability of air space, and strict safety regulations make training of fighter pilots increasingly challenging. By integrating Live, Virtual, and Constructive simulation resources, efficiency and effectiveness can be improved. In particular, if constructive simulations, which provide synthetic agents operating synthetic vehicles, were used to a higher degree, complex training scenarios could be realised at low cost, the need for support personnel could be reduced, and training availability could be improved. In this work, inspired by the recent improvements of techniques for artificial intelligence, we take a user perspective and investigate how intelligent, learning agents could help build future training systems. Through a domain analysis, a user study, and practical experiments, we identify important agent capabilities and characteristics, and then discuss design approaches and solution concepts for training systems to utilise learning agents for improved training value. |
[136] |
2022. The tree-generative capacity of combinatory categorial grammars. Journal of computer and system sciences (Print), 124(??):214–233. Academic Press Ltd - Elsevier Science Ltd. DOI: 10.1016/j.jcss.2021.10.005. Note: Funding Agencies|Centre for Industrial IT (CENIIT) [15.02]; German Research Foundation (DFG) Research Training Group GRK 1763 Quantitative Logics and Automata fulltext:print: http://liu.diva-portal.org/smash/get/div... The generative capacity of combinatory categorial grammars (CCGs) as generators of tree languages is investigated. It is demonstrated that the tree languages generated by CCGs can also be generated by simple monadic context-free tree grammars. However, the important subclass of pure combinatory categorial grammars cannot even generate all regular tree languages. Additionally, the tree languages generated by combinatory categorial grammars with limited rule degrees are characterized: If only application rules are allowed, then these grammars can generate only a proper subset of the regular tree languages, whereas they can generate exactly the regular tree languages once first-degree composition rules are permitted. (C) 2021 The Author(s). Published by Elsevier Inc. |
2021 | ||
[135] |
2021. Hastily formed knowledge networks and distributed situation awareness for collaborative robotics. Autonomous Intelligent Systems, 1(1):??–??. Springer. DOI: 10.1007/s43684-021-00016-w. Note: Funding Agencies|ELLIIT Network Organization for Information and Communication Technology, Sweden (Project B09) and the Swedish Foundation for Strategic Research SSF (Smart Systems Project RIT15-0097). The first author is also supported by an RExperts Program Grant 2020A1313030098 from the Guangdong Department of Science and Technology, China in addition to a Sichuan Province International Science and Technology Innovation Cooperation Project Grant 2020YFH0160. Fulltext: https://doi.org/10.1007/s43684-021-00016... fulltext:print: https://liu.diva-portal.org/smash/get/di... In the context of collaborative robotics, distributed situation awareness is essential for supporting collective intelligence in teams of robots and human agents where it can be used for both individual and collective decision support. This is particularly important in applications pertaining to emergency rescue and crisis management. During operational missions, data and knowledge are gathered incrementally and in different ways by heterogeneous robots and humans. We describe this as the creation of Hastily Formed Knowledge Networks (HFKNs). The focus of this paper is the specification and prototyping of a general distributed system architecture that supports the creation of HFKNs by teams of robots and humans. The information collected ranges from low-level sensor data to high-level semantic knowledge, the latter represented in part as RDF Graphs. The framework includes a synchronization protocol and associated algorithms that allow for the automatic distribution and sharing of data and knowledge between agents. This is done through the distributed synchronization of RDF Graphs shared between agents. 
High-level semantic queries specified in SPARQL can be used by robots and humans alike to acquire both knowledge and data content from team members. The system is empirically validated and complexity results of the proposed algorithms are provided. Additionally, a field robotics case study is described, where a 3D mapping mission has been executed using several UAVs in a collaborative emergency rescue scenario while using the full HFKN Framework. |
[134] |
2021. WARA-PS: a research arena for public safety demonstrations and autonomous collaborative rescue robotics experimentation. Autonomous Intelligent Systems, 1(1):??–??. Springer Singapore. DOI: 10.1007/s43684-021-00009-9. fulltext:print: https://liu.diva-portal.org/smash/get/di... A research arena (WARA-PS) for sensing, data fusion, user interaction, planning and control of collaborative autonomous aerial and surface vehicles in public safety applications is presented. The objective is to demonstrate scientific discoveries and to generate new directions for future research on autonomous systems for societal challenges. The enabler is a computational infrastructure with a core system architecture for industrial and academic collaboration. This includes a control and command system together with a framework for planning and executing tasks for unmanned surface vehicles and aerial vehicles. The motivating application for the demonstration is marine search and rescue operations. A state-of-the-art delegation framework for the mission planning together with three specific applications is also presented. The first one concerns model predictive control for cooperative rendezvous of autonomous unmanned aerial and surface vehicles. The second project is about learning to make safe real-time decisions under uncertainty for autonomous vehicles, and the third one is on robust terrain-aided navigation through sensor fusion and virtual reality tele-operation to support a GPS-free positioning system in marine environments. The research results have been experimentally evaluated and demonstrated to industry and public sector audiences at a marine test facility. It would be most difficult to do experiments on this large scale without the WARA-PS research arena. Furthermore, these demonstrator activities have resulted in effective research dissemination with high public visibility, business impact and new research collaborations between academia and industry. |
[133] |
2021. Router and gateway node placement in wireless mesh networks for emergency rescue scenarios. Autonomous Intelligent Systems, 1(1):??–??. Springer. DOI: 10.1007/s43684-021-00012-0. Note: Funding Agencies|ELLIIT network organization for Information and Communication Technology; Swedish Foundation for Strategic Research [RIT 15-0097]; Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; The third author was also supported by an RExperts Program Grant 2020A1313030098 from the Guangdong Department of Science and Technology, China and a Sichuan Province International Science and Technology Innovation Cooperation Project Grant 2020YFH0160. Fulltext: https://doi.org/10.1007/s43684-021-00012... Link: https://link.springer.com/article/10.100... The focus of this paper is on base functionalities required for UAV-based rapid deployment of an ad hoc communication infrastructure in the initial phases of rescue operations. The main idea is to use heterogeneous teams of UAVs to deploy communication kits that include routers and are used in the generation of ad hoc Wireless Mesh Networks (WMN). Several fundamental problems are considered and algorithms are proposed to solve them. The Router Node Placement problem (RNP), and a generalization of it that takes into account additional constraints arising in actual field usage, is considered first. The RNP problem tries to determine how to optimally place routers in a WMN. A new algorithm, the RRT-WMN algorithm, is proposed to solve this problem. It is based in part on a novel use of the Rapidly Exploring Random Trees (RRT) algorithm used in motion planning. 
A comparative empirical evaluation between the RRT-WMN algorithm and existing techniques such as the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and Particle Swarm Optimization (PSO) shows that the RRT-WMN algorithm has far better performance in both time taken and regional coverage as the generalized RNP problem scales to realistic scenarios. The Gateway Node Placement Problem (GNP) tries to determine how to locate a minimal number of gateway nodes in a WMN backbone network while satisfying a number of Quality of Service (QoS) constraints. Two alternatives are proposed for solving the combined RNP-GNP problem. The first approach combines the RRT-WMN algorithm with a preexisting graph clustering algorithm. The second approach, WMNbyAreaDecomposition, proposes a novel divide-and-conquer algorithm that recursively partitions a target deployment area into a set of disjoint regions, thus creating a number of simpler RNP problems that are then solved concurrently. Both algorithms are evaluated on real-world GIS models of different size and complexity. WMNbyAreaDecomposition is shown to outperform existing algorithms, using 73% to 92% fewer router nodes while at the same time satisfying all QoS requirements. |
[132] |
2021. An empirical study on the contribution of formal and semantic features to the grammatical gender of nouns. Linguistics Vanguard, 7(1):??–??. WALTER DE GRUYTER GMBH. DOI: 10.1515/lingvan-2020-0048. Note: Funding Agencies|IDEXLYON Fellowship Grant [16-IDEX-0005]; University of Lyon Grant NSCO ED 476 [ANR-10-LABX-0081]; French National Research Agency (ANR) [ANR-11-IDEX-0007] This study conducts an experimental evaluation of two hypotheses about the contributions of formal and semantic features to the grammatical gender assignment of nouns. One of the hypotheses (Corbett and Fraser 2000) claims that semantic features dominate formal ones. The other hypothesis, formulated within the optimal gender assignment theory (Rice 2006), states that form and semantics contribute equally. Both hypotheses claim that the combination of formal and semantic features yields the most accurate gender identification. In this paper, we operationalize and test these hypotheses by trying to predict grammatical gender using only character-based embeddings (that capture only formal features), only context-based embeddings (that capture only semantic features) and the combination of both. We performed the experiment using data from three languages with different gender systems (French, German and Russian). Formal features are a significantly better predictor of gender than semantic ones, and the difference in prediction accuracy is very large. Overall, formal features are also significantly better than the combination of form and semantics, but the difference is very small and the results for this comparison are not entirely consistent across languages. |
[131] |
2021. The Non-Deterministic Path to Concurrency – Exploring how Students Understand the Abstractions of Concurrency. Informatics in Education. An International Journal, 20(4):683–715. Vilnius University Press. DOI: 10.15388/infedu.2021.29. Fulltext: https://doi.org/10.15388/infedu.2021.29 fulltext:print: http://liu.diva-portal.org/smash/get/div... Concurrency is often perceived as difficult by students. One reason for this may be that abstractions used in concurrent programs leave more situations undefined compared to sequential programs (e.g., in what order statements are executed), which makes it harder to create a proper mental model of the execution environment. Students who aim to explore the abstractions through testing are further hindered by the non-determinism of concurrent programs, since even incorrect programs may seem to work properly most of the time. In this paper we aim to explore how students understand these abstractions by examining 137 solutions to two concurrency questions given on the final exam in two years of an introductory concurrency course. To highlight problematic areas of these abstractions, we present alternative abstractions under which each incorrect solution would be correct. |
[130] |
2021. Understanding Students' Failure to Use Functions as a Tool for Abstraction – An Analysis of Questionnaire Responses and Lab Assignments in a CS1 Python Course. Informatics in Education. An International Journal, 20(4):583–614. Vilnius University Press. DOI: 10.15388/infedu.2021.26. Fulltext: https://doi.org/10.15388/infedu.2021.26 fulltext:print: https://liu.diva-portal.org/smash/get/di... Controlling complexity through the use of abstractions is a critical part of problem solving in programming. Thus, becoming proficient with procedural and data abstraction through the use of user-defined functions is important. Properly using functions for abstraction involves a number of other core concepts, such as parameter passing, scope and references, which are known to be difficult. Therefore, this paper aims to study students' proficiency with these core concepts, and students' ability to apply procedural and data abstraction to solve problems. We collected data from two years of an introductory Python course, both from a questionnaire and from two lab assignments. The data shows that students had difficulties with the core concepts, and a number of issues solving problems with abstraction. We also investigate the impact of using a visualization tool when teaching the core concepts. |
[129] |
2021. Education in Artificial Intelligence K-12. Künstliche Intelligenz, 35(2):127–129. Springer. DOI: 10.1007/s13218-021-00734-6. Note: Funding Agencies: Graz University of Technology fulltext:print: http://liu.diva-portal.org/smash/get/div... |
[128] |
2021. Human Autonomy in Future Drone Traffic: Joint Human-AI Control in Temporal Cognitive Work. Frontiers in Artificial Intelligence, 4(??):??–??. Frontiers Media S.A.. DOI: 10.3389/frai.2021.704082. Note: Funding: Swedish Transport Administration fulltext:print: http://liu.diva-portal.org/smash/get/div... The roles of human operators are changing due to increased intelligence and autonomy of computer systems. Humans will interact with systems at a more overarching level or only in specific situations. This involves learning new practices and changing habitual ways of thinking and acting, including reconsidering human autonomy in relation to autonomous systems. This paper describes a design case of a future autonomous management system for drone traffic in cities in a key scenario we call The Computer in Brussels. Our approach to designing for human collaboration with autonomous systems builds on scenario-based design and cognitive work analysis facilitated by computer simulations. We use a temporal method, called the Joint Control Framework to describe human and automated work in an abstraction hierarchy labeled Levels of Autonomy in Cognitive Control. We use the Score notation to analyze patterns of temporal developments that span levels of the abstraction hierarchy and discuss implications for human-automation communication in traffic management. We discuss how autonomy at a lower level can prevent autonomy on higher levels, and vice versa. We also discuss the temporal nature of autonomy in minute-to-minute operative work. Our conclusion is that human autonomy in relation to autonomous systems is based on fundamental trade-offs between technological opportunities to automate and values of what human actors find meaningful. |
[127] |
2021. Ethics, Human Rights, the Intelligent Robot, and its Subsystem for Moral Beliefs. International Journal of Social Robotics, 13(4):557–567. Springer. DOI: 10.1007/s12369-019-00540-z. fulltext:print: https://liu.diva-portal.org/smash/get/di... The Universal Declaration of Human Rights specifies a number of properties that characterize human beings, such as dignity, conscience, and several others. In this article we focus on these properties and on how they have been defined in the history of philosophy. We show how they can be interpreted in terms of a prototypical architecture for an intelligent robot, and how the robot can be provided with several aspects of ethical capability in this way. The key idea is to provide the robot with a Moral Belief System that cooperates with, and moderates, the robot's capability of planning and action. |
[126] |
2021. Three Interviews About K-12 AI Education in America, Europe, and Singapore. As the impact and importance of artificial intelligence (AI) grows, there is a growing trend to teach AI in primary and secondary education (K-12). To provide an international perspective, we have conducted three interviews with practitioners and policy makers from AI4K12 in the US (D. Touretzky, C. Gardner-McCune, and D. Seehorn), from Singapore (L. Liew) and from the European Commission (F. Benini). |
[125] |
2021. A Differentiated Discussion About AI Education K-12. Künstliche Intelligenz, 35(2):131–137. Springer Nature. DOI: 10.1007/s13218-021-00724-8. Note: Funding Agencies|Graz University of Technology fulltext:print: http://liu.diva-portal.org/smash/get/div... AI education for K-12, and in particular AI literacy, has recently gained huge interest due to AI's significant influence on daily life, society, and the economy. In this paper we discuss this topic of early AI education along four dimensions: (1) formal versus informal education, (2) cooperation of researchers in AI and education, (3) the level of education, and (4) concepts and tools. |
[124] |
2021. Enhancing Lattice-Based Motion Planning With Introspective Learning and Reasoning. IEEE Robotics and Automation Letters, 6(3):4385–4392. Institute of Electrical and Electronics Engineers (IEEE). DOI: 10.1109/LRA.2021.3068550. Note: Funding: Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation; National Graduate School in Computer Science (CUGS), Sweden; Excellence Center at Linkoping-Lund for Information Technology (ELLIIT); TAILOR Project - EU Horizon 2020 research and innovation programme [952215]; Knut and Alice Wallenberg Foundation [KAW 2019.0350] Fulltext: https://doi.org/10.1109/LRA.2021.3068550 fulltext:print: https://liu.diva-portal.org/smash/get/di... Lattice-based motion planning is a hybrid planning method where a plan is made up of discrete actions, while simultaneously also being a physically feasible trajectory. The planning takes both discrete and continuous aspects into account, for example action pre-conditions and collision-free action-duration in the configuration space. Safe motion planning relies on well-calibrated safety margins for collision checking. The trajectory tracking controller must further be able to reliably execute the motions within this safety margin for the execution to be safe. In this work we are concerned with introspective learning and reasoning about controller performance over time. Normal controller execution of the different actions is learned using machine learning techniques with explicit uncertainty quantification, for safe usage in safety-critical applications. By increasing the model accuracy the safety margins can be reduced while maintaining the same safety as before. Reasoning takes place to both verify that the learned models stay safe and to improve collision checking effectiveness in the motion planner using more accurate execution predictions with a smaller safety margin. 
The presented approach allows for explicit awareness of controller performance under normal circumstances, and detection of incorrect performance in abnormal circumstances. Evaluation is made on the nonlinear dynamics of a quadcopter in 3D using simulation. |
[123] |
2021. Elementary Students' First Approach to Computational Thinking and Programming. Education Sciences, 11(2):??–??. MDPI. DOI: 10.3390/educsci11020080. Note: Funding Agencies|Marcus and Amalia Wallenberg Foundation [MAW 2017.0096] fulltext:print: https://liu.diva-portal.org/smash/get/di... Digital competence and programming are actively highlighted areas in education worldwide. They are becoming part of curricula all over the world, including the Swedish elementary school curriculum. Children are expected to develop computational thinking through programming activities, mainly in mathematics, which are supposed to be based on both proven experience and scientific grounds. Both are lacking in the lower grades of elementary school. This article gives unique insight into pupils' learning during the first programming lessons, based on a group of Swedish pupils' experiences when entering school. The goal of the article is to inform education policy and practice. The large interdisciplinary, longitudinal research project studies approximately 1500 students aged 6-16 and their teachers over three years, using video documentation, questionnaires, and focus group interviews. This article reports on empirical data collected during the first year in one class with 30 pupils aged 6-7 years. The social semiotic, multimodal theoretical framework "Design for Learning" is used to investigate potential signs of learning in pupils' multimodal representations when they, for example, use block programming in the primary and secondary transformation unit. We show that young pupils have positive attitudes to programming and high self-efficacy, and that pupils' signs of learning in programming are multimodal and often visible in social interactions. |
.
[122] |
2021. Rough set reasoning using answer set programs. International Journal of Approximate Reasoning, 130(March):126–149. Elsevier. DOI: 10.1016/j.ijar.2020.12.010. Note: Funding: ELLIIT Network Organization for Information and Communication Technology, Sweden; Swedish Foundation for Strategic Research SSF (Smart Systems Project) [RIT15-0097]; Jinan University (Zhuhai Campus); National Science Centre, Poland [2017/27/B/ST6/02018] Fulltext: https://doi.org/10.1016/j.ijar.2020.12.0... fulltext:print: http://liu.diva-portal.org/smash/get/div... Reasoning about uncertainty is one of the main cornerstones of Knowledge Representation. Formal representations of uncertainty are numerous and highly varied due to the different types of uncertainty intended to be modeled, such as vagueness, imprecision and incompleteness. There is a rich body of theoretical results that has been generated for many of these approaches. It is often the case, though, that pragmatic tools for reasoning with uncertainty lag behind this rich body of theoretical results. Rough set theory is one such approach for modeling incompleteness and imprecision based on indiscernibility and its generalizations. In this paper, we provide a pragmatic tool for constructively reasoning with generalized rough set approximations that is based on the use of Answer Set Programming (ASP). We provide an interpretation of answer sets as (generalized) approximations of crisp sets (when possible) and show how to use ASP solvers as a tool for reasoning about (generalized) rough set approximations situated in realistic knowledge bases. The paper includes generic ASP templates for doing this and also provides a case study showing how these techniques can be used to generate reducts for incomplete information systems. Complete, ready-to-run clingo ASP code is provided in the Appendix for all programs considered. These can be executed for validation purposes in the clingo ASP solver. |
.
|
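The lower and upper approximations that the paper encodes in ASP can be illustrated directly in a few lines of Python. This is a toy sketch of classical rough-set approximations over an indiscernibility relation, not the paper's clingo templates; the objects, attribute and decision class below are invented for illustration.

```python
# Rough-set approximations from an indiscernibility relation (toy example).

def partition(universe, key):
    """Group objects into indiscernibility classes by their attribute values."""
    classes = {}
    for obj in universe:
        classes.setdefault(key(obj), set()).add(obj)
    return list(classes.values())

def approximations(universe, target, key):
    """Lower/upper approximation of `target` w.r.t. indiscernibility `key`."""
    lower, upper = set(), set()
    for cls in partition(universe, key):
        if cls <= target:          # class entirely inside the target set
            lower |= cls
        if cls & target:           # class overlaps the target set
            upper |= cls
    return lower, upper

# Hypothetical information system: objects described by one attribute.
objects = {1, 2, 3, 4, 5, 6}
symptom = {1: "a", 2: "a", 3: "b", 4: "b", 5: "c", 6: "c"}
flu = {1, 2, 3}                    # decision class to approximate

low, up = approximations(objects, flu, key=symptom.get)
print(low)  # objects certainly in the class
print(up)   # objects possibly in the class
```

Objects 3 and 4 are indiscernible but only 3 is in the class, so the class is rough: 3 and 4 fall in the upper but not the lower approximation.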
[121] |
2021. A framework for describing interaction between human operators and autonomous, automated, and manual control systems. Cognition, Technology & Work, 23(??):381–401. SPRINGER LONDON LTD. DOI: 10.1007/s10111-020-00637-w. Note: Funding Agencies|Linkoping University - Swedish Transport Administration; Air Navigation Services of Sweden fulltext:print: https://liu.diva-portal.org/smash/get/di... This paper addresses how to describe critical episodes of interaction between human operators and autonomous, automated, and manual control systems. The first part of the paper poses three questions: (1) what levels of cognitive control are important to include in a descriptive framework for joint human-autonomy in process control; (2) how should one describe temporal developments in joint socio-technical systems; and (3) how does one analyse communication and control at the system joints. The paper proceeds by proposing a new framework for description and analysis, the Joint Control Framework (JCF), with a simple notation, the Score (JCF-S). It allows descriptions of the three previously mentioned aspects through three analytical activities: process mapping (PM), analysis of Levels of Autonomy in Cognitive Control (LACC), and temporal descriptions of human-machine interaction (T-HMI) through the Score notation. This facilitates analyses across cases and domains. The framework is discussed based on an analysis of two episodes; one work episode (from an air traffic control tower simulator); and one work procedure (from an unmanned traffic management system design concept). |
.
|
2020 | ||
[120] |
2020. g-C3N4/WTe2 Hybrid Electrocatalyst for Efficient Hydrogen Evolution Reaction. The Journal of Physical Chemistry C, 124(16):8726–8735. American Chemical Society (ACS). DOI: 10.1021/acs.jpcc.9b11982. Recent experiments have highlighted the efficiency of nonprecious metal-based hybrid structures, such as g-C<sub>3</sub>N<sub>4</sub>/MoS<sub>2</sub> and g-C<sub>3</sub>N<sub>4</sub>/graphene for hydrogen evolution reaction (HER). This work focuses on the interface effects of such hybrid heterostructures that could lead to the enhanced catalytic activity of g-C<sub>3</sub>N<sub>4</sub>. We have concentrated on the hybrid electrocatalysts with the architecture g-C<sub>3</sub>N<sub>4</sub>/X (X = WTe<sub>2</sub>, MoS<sub>2</sub>, and graphene), where the interface plays an important role in the overall HER. These promising candidates have been assessed using three main factors extracted from density functional theory calculations, namely: (i) the free energy of hydrogen adsorption on the catalytic site Δ<em>G</em><sub>H</sub>, (ii) Schottky barrier potentials, and (iii) induced charge polarization across the interface. We have found that particularly g-C<sub>3</sub>N<sub>4</sub>/WTe<sub>2</sub> displays a suitable combination of the investigated properties standing out as a potential electrocatalyst for efficient hydrogen evolution reaction. Furthermore, the electronic structure fingerprints controlling the HER thermodynamics have been investigated. In particular, the N–H bonds have been found to display strong s–p hybridization and, additionally, Δ<em>G</em><sub>H</sub> decreases as the center of N p-band approaches the Fermi energy. This is also a relevant result in understanding HER mechanisms of organic compounds. |
.
|
[119] |
2020. Development of computational thinking, digital competence and 21st century skills when learning programming in K-9. Teachers around the world have started teaching programming at the K-9 level, some due to the formal introduction of programming in the national curriculum, others without such pressure and on their own initiative. In this study, we attempted to understand which skills – both CT-related and general – are developed among pupils in the process of working with programming in schools. To do so, we interviewed 19 Swedish teachers who had been teaching programming for a couple of years on their own initiative. The teachers were selected based on their experience in teaching programming. Our thematic analysis of these interviews shed light on what skills teachers perceive pupils develop when programming. This led us to identify three themes related to CT skills and five themes related to general skills. The CT skills identified corresponded well with and were thus thematically structured according to the dimensions of CT proposed in the framework of Brennan and Resnick, namely computational concepts, computational practices and computational perspectives. In addition to the CT skills, our thematic analysis also resulted in the identification of general skills related to digital competency and 21st century skills, namely cognitive skills and attitudes, language skills, collaborative skills and attitudes, and creative problem-solving skills and attitudes. |
.
|
[118] |
2020. On the Probability and Cost of Ignorance, Inconsistency, Nonsense and More. Journal of Multiple-Valued Logic and Soft Computing, 34(5-6):423–450. Old City Publishing. Note: Funding agencies: National Science Centre, Poland [2017/27/B/ST6/02018] Journal home page: https://www.oldcitypublishing.com/journa... Ignorance, inconsistency, nonsense and similar phenomena are omnipresent in everyday reasoning. They have been intensively studied, especially in the area of multiple-valued logics. Therefore we develop a framework for belief bases, combining multiple-valued and probabilistic reasoning, with the main focus on the way belief bases are actually used and accessed through queries. As an implementation tool we use the probabilistic programming language ProbLog. Though it is based on distribution semantics with the independence assumption, we show how its constructs can successfully be used in implementing the considered logics and belief bases. In particular, we develop a technique for shifting probabilistic dependencies to the level of symbolic parts of belief bases. We also discuss applications of the framework in reasoning with Likert-type scales, widely exploited in questionnaire-based experimental research in psychology, economics, sociology, politics, public opinion measurements, and related areas. |
.
|
[117] |
2020. A Paraconsistent ASP-like Language with Tractable Model Generation. Journal of Applied Logics - IfCoLog Journal of Logic and Applications, 7(3):361–389. College Publications. Note: Funding agencies: This work has been supported by grant 2017/27/B/ST6/02018 of the National Science Centre Poland. Link to article: http://www.collegepublications.co.uk/dow... fulltext:print: http://liu.diva-portal.org/smash/get/div... Answer Set Programming (ASP) is nowadays a dominant rule-based knowledge representation tool. Though existing ASP variants enjoy efficient implementations, generating an answer set remains intractable. The goal of this research is to define a new ASP-like rule language, 4SP, with tractable model generation. The language combines ideas of ASP and a paraconsistent rule language 4QL. Though 4SP shares the syntax of ASP and for each program all its answer sets are among 4SP models, the new language differs from ASP in its logical foundations, the intended methodology of its use and complexity of computing models. As we show in the paper, 4QL can be seen as a paraconsistent counterpart of ASP programs stratified with respect to default negation. Although model generation for 4QL programs is tractable, dropping stratification makes it intractable for both 4QL and ASP. To retain tractability while allowing non-stratified programs, in 4SP we introduce trial expressions interlacing programs with hypotheses as to the truth values of default negations. This allows us to develop a model generation algorithm with deterministic polynomial time complexity. We also show relationships among 4SP, ASP and 4QL. |
.
|
[116] |
2020. An anytime algorithm for optimal simultaneous coalition structure generation and assignment. Autonomous Agents and Multi-Agent Systems, 34(1):??–??. SPRINGER. DOI: 10.1007/s10458-020-09450-1. Note: Funding Agencies|Linkoping University; Wallenberg AI, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation fulltext:print: http://liu.diva-portal.org/smash/get/div... An important research problem in artificial intelligence is how to organize multiple agents, and coordinate them, so that they can work together to solve problems. Coordinating agents in a multi-agent system can significantly affect the system's performance: the agents can, in many instances, be organized so that they can solve tasks more efficiently, and consequently benefit collectively and individually. Central to this endeavor is coalition formation: the process by which heterogeneous agents organize and form disjoint groups (coalitions). Coalition formation often involves finding a coalition structure (an exhaustive set of disjoint coalitions) that maximizes the system's potential performance (e.g., social welfare) through coalition structure generation. However, coalition structure generation typically has no notion of goals. In cooperative settings, where coordination of multiple coalitions is important, this may generate suboptimal teams for achieving and accomplishing the tasks and goals at hand. With this in mind, we consider simultaneously generating coalitions of agents and assigning the coalitions to independent alternatives (e.g., tasks/goals), and present an anytime algorithm for the simultaneous coalition structure generation and assignment problem. This combinatorial optimization problem has many real-world applications, including forming goal-oriented teams. 
To evaluate the presented algorithm's performance, we present five methods for synthetic problem set generation, and benchmark the algorithm against the industry-grade solver CPLEX using randomized data sets of varying distribution and complexity. To test its anytime performance, we compare the quality of its interim solutions against those generated by a greedy algorithm and pure random search. Finally, we also apply the algorithm to solve the problem of assigning agents to regions in a major commercial strategy game, and show that it can be used in game-playing to coordinate smaller sets of agents in real-time. |
.
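For intuition, the simultaneous coalition structure generation and assignment problem can be solved exhaustively on tiny instances: every agent is assigned to one alternative, and the agents sharing an alternative form its coalition. The sketch below is a brute-force baseline under an invented value function, not the paper's anytime algorithm; the agent and task names are hypothetical.

```python
# Brute-force simultaneous coalition structure generation and assignment:
# enumerate every mapping of agents to tasks (m^n options) and keep the best.
from itertools import product

def best_assignment(agents, tasks, value):
    """Return (best total value, dict mapping each task to its coalition)."""
    best_val, best = float("-inf"), None
    for choice in product(range(len(tasks)), repeat=len(agents)):
        coalitions = [frozenset(a for a, t in zip(agents, choice) if t == i)
                      for i in range(len(tasks))]
        total = sum(value(c, task) for c, task in zip(coalitions, tasks))
        if total > best_val:
            best_val, best = total, dict(zip(tasks, coalitions))
    return best_val, best

# Hypothetical value function: larger coalitions help "defend",
# while "scout" is best handled by a single agent.
def value(coalition, task):
    if task == "defend":
        return 3 * len(coalition)
    return 4 if len(coalition) == 1 else 0   # "scout"

val, teams = best_assignment(["a1", "a2", "a3"], ["defend", "scout"], value)
```

The m^n enumeration makes the exponential structure of the problem explicit, and why an anytime algorithm with good interim solutions matters at realistic scales.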
[115] |
2020. Incremental Reasoning in Probabilistic Signal Temporal Logic. International Journal of Approximate Reasoning, 119(??):325–352. Elsevier. DOI: 10.1016/j.ijar.2020.01.009. Note: Funding agencies: National Graduate School in Computer Science, Sweden (CUGS); Swedish Research Council (VR) Linnaeus Center CADICS; ELLIIT Excellence Center at Linkoping-Lund for Information Technology; Wallenberg AI, Autonomous Systems and Softwar Fulltext: https://doi.org/10.1016/j.ijar.2020.01.0... fulltext:print: http://liu.diva-portal.org/smash/get/div... Robot safety is of growing concern given recent developments in intelligent autonomous systems. For complex agents operating in uncertain, complex and rapidly-changing environments it is difficult to guarantee safety without imposing unrealistic assumptions and restrictions. It is therefore necessary to complement traditional formal verification with monitoring of the running system after deployment. Runtime verification can be used to monitor that an agent behaves according to a formal specification. The specification can contain safety-related requirements and assumptions about the environment, environment-agent interactions and agent-agent interactions. A key problem is the uncertain and changing nature of the environment. This necessitates requirements on how probable a certain outcome is and on predictions of future states. We propose Probabilistic Signal Temporal Logic (ProbSTL) by extending Signal Temporal Logic with a sub-language to allow statements over probabilities, observations and predictions. We further introduce and prove the correctness of the incremental stream reasoning technique progression over well-formed formulas in ProbSTL. Experimental evaluations demonstrate the applicability and benefits of ProbSTL for robot safety. |
.
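Progression, the core technique here, rewrites a temporal formula against the current state into a residual formula that must hold over the remainder of the stream. The following is a minimal propositional sketch with bounded "always" and "eventually" (plain two-valued formulas, not the probabilistic ProbSTL operators of the paper); the tuple encoding is invented for illustration.

```python
# Progression: consume one state, return the residual formula for the rest of
# the stream. The inner formula of "always"/"eventually" is a single atom here.
TRUE, FALSE = ("true",), ("false",)

def progress(f, state):
    kind = f[0]
    if kind in ("true", "false"):
        return f
    if kind == "atom":                       # ("atom", name)
        return TRUE if state.get(f[1], False) else FALSE
    if kind == "always":                     # ("always", n, atom): next n states
        n, g = f[1], f[2]
        if progress(g, state) is FALSE:
            return FALSE                     # violated now: report immediately
        return TRUE if n == 1 else ("always", n - 1, g)
    if kind == "eventually":                 # ("eventually", n, atom): within n states
        n, g = f[1], f[2]
        if progress(g, state) is TRUE:
            return TRUE                      # satisfied now: done
        return FALSE if n == 1 else ("eventually", n - 1, g)
    raise ValueError(f"unknown formula kind: {kind}")

# Monitoring "safe holds for the next 3 states" over an incoming stream:
f = ("always", 3, ("atom", "safe"))
f = progress(f, {"safe": True})              # residual: ("always", 2, ...)
f = progress(f, {"safe": False})             # violation detected: FALSE
```

Because each step only rewrites the formula, the monitor never revisits past states, which is what makes progression suitable for incremental stream reasoning.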
2019 | ||
[114] |
2019. A paraconsistent approach to actions in informationally complex environments. Annals of Mathematics and Artificial Intelligence, 86(4):231–255. SPRINGER. DOI: 10.1007/s10472-019-09627-9. Note: Funding Agencies|Polish National Science Centre [2015/19/B/ST6/02589]; ELLIIT Network Organization for Information and Communication Technology; Swedish Foundation for Strategic Research FSR (SymbiKBot Project) fulltext:print: http://liu.diva-portal.org/smash/get/div... Contemporary systems situated in real-world open environments frequently have to cope with incomplete and inconsistent information that typically increases the complexity of reasoning and decision processes. Realistic modeling of such informationally complex environments calls for nuanced tools. In particular, incomplete and inconsistent information should neither trivialize nor stop reasoning or planning. The paper introduces ACTLOG, a rule-based four-valued language designed to specify actions in a paraconsistent and paracomplete manner. ACTLOG is an extension of 4QL<sup>Bel</sup>, a language for reasoning with paraconsistent belief bases. Each belief base stores multiple world representations. In this context, an ACTLOG action may be seen as a belief base transformer. In contrast to other approaches, ACTLOG actions can be executed even when the underlying belief base content is inconsistent and/or partial. ACTLOG provides nuanced action specification tools, allowing for subtle interplay among various forms of nonmonotonic, paraconsistent, paracomplete and doxastic reasoning methods applicable in informationally complex environments. Despite its rich modeling possibilities, it remains tractable. ACTLOG permits composite actions by using sequential and parallel compositions as well as conditional specifications. The framework is illustrated on a decontamination case study known from the literature. |
.
|
[113] |
2019. Paraconsistent Reasoning in Cops and Robber Game with Uncertain Information: A Simulation-Based Analysis. International Journal of Uncertainty Fuzziness and Knowledge-Based Systems, 27(3):429–455. WORLD SCIENTIFIC PUBL CO PTE LTD. DOI: 10.1142/S021848851950020X. Note: Funding Agencies|Polish National Science Centre [2012/05/B/ST6/03094, 2015/19/B/ST6/02589] We apply a non-classical four-valued logic in the process of reasoning regarding strategies for cops in a modified game of "Cops and Robber" played on a graph. We extend the game by introducing uncertainty in the form of random failures of detecting devices. This is realized by allowing that a robber can be detected in a node only with the given probability P<sub>A</sub>. Additionally, with the probability P<sub>F</sub>, cops can be given a false-positive, i.e., they are informed that the robber is located at some node, whereas it is located somewhere else. Consequently, non-zero P<sub>F</sub> introduces a measurement noise into the system. All the cops have access to information provided by the detectors and can communicate with each other, so they can coordinate the search. By adjusting the number of detectors, P<sub>A</sub>, and P<sub>F</sub> we can achieve a smooth transition between the two well-known variants of the game: "with fully visible robber" and "with invisible robber". We compare a simple probabilistic strategy for cops with the non-parametric strategy based on reasoning with a four-valued paraconsistent logic. It is shown that this novel approach leads to good performance, as measured by the required mean catch-time. We conclude that this type of reasoning can be applied in real-world applications where there is no knowledge about the underlying source of errors, which is particularly useful in robotics. |
.
|
[112] |
2019. Special issue on stream reasoning. n/a |
.
|
[111] |
2019. Efficient Autonomous Exploration Planning of Large Scale 3D-Environments. IEEE Robotics and Automation Letters, 4(2):1699–1706. Institute of Electrical and Electronics Engineers (IEEE). DOI: 10.1109/LRA.2019.2897343. fulltext:postprint: https://liu.diva-portal.org/smash/get/di... Exploration is an important aspect of robotics, whether it is for mapping, rescue missions or path planning in an unknown environment. Frontier Exploration planning (FEP) and Receding Horizon Next-Best-View planning (RH-NBVP) are two different approaches with different strengths and weaknesses. FEP explores a large environment consisting of separate regions with ease, but is slow at reaching full exploration due to moving back and forth between regions. RH-NBVP shows great potential and efficiently explores individual regions, but has the disadvantage that it can get stuck in large environments not exploring all regions. In this work we present a method that combines both approaches, with FEP as a global exploration planner and RH-NBVP for local exploration. We also present techniques to estimate potential information gain faster, to cache previously estimated gains and to exploit these to efficiently estimate new queries. |
.
2018 | ||
[110] |
2018. Code ABC MOOC for Math Teachers. Communications in Computer and Information Science, 865(??):66–96. Springer. DOI: 10.1007/978-3-319-94640-5_4. Computing is the latest add-on to enhance the K-12 curricula of many countries, with the purpose of closing the digital skills gap. The revised Finnish Curriculum 2014 integrates computing mainly into math. Consequently, Finland needs to train math teachers to teach computing at elementary level. This study describes the Python and Racket tracks of the Code ABC MOOC that introduce programming basics for math teachers. Their suitability for math is compared based on the course content and feedback. The results show that conceptually the functional paradigm of Racket approaches math more closely, in particular algebra. In addition, Racket is generally regarded as more challenging in terms of syntax and e.g. for utilizing recursion as an iteration mechanism. Math teachers also rank its suitability higher because the content and exercises of the track are specifically tailored for their subject. |
.
|
[109] |
2018. Paraconsistent Rule-Based Reasoning with Graded Truth Values. Journal of Applied Logics - IfCoLog Journal, 5(1):185–220. College Publications. Link: http://www.collegepublications.co.uk/dow... Modern artificial systems, such as cooperative traffic systems or swarm robotics, are made of multiple autonomous agents, each handling uncertain, partial and potentially inconsistent information, used in their reasoning and decision making. Graded reasoning, being a suitable tool for addressing phenomena related to such circumstances, is investigated in the literature in many contexts – from graded modal logics to various forms of approximate reasoning. In this paper we first introduce a family of many-valued paraconsistent logics parametrised by a number of truth/falsity/inconsistency grades allowing one to handle multiple truth-values at the desired level of accuracy. Second, we define a corresponding family of rule-based languages with graded truth-values as first-class citizens, enjoying tractable query evaluation. In addition, we introduce introspection operators allowing one to resolve inconsistencies and/or lack of information in a non-monotonic manner. We illustrate and discuss the use of the framework in an autonomous robot scenario. |
.
|
[108] |
2018. The Low-Cost Evolution of AI in Domestic Floor Cleaning Robots. This article discusses AI methods deployed on domestic floor cleaning robots in the recent past and the way in which those methods are changing today. Formerly, innovations were tightly coupled with a price point customers were willing to pay. Today, there is a substantial increase in the AI found in these systems, driven by new challenges and scalable infrastructures. |
.
|
2017 | ||
[107] |
2017. Optimal scheduling for replacing perimeter guarding unmanned aerial vehicles. Annals of Operations Research, 249(1):163–174. Springer. DOI: 10.1007/s10479-016-2169-5. fulltext:postprint: http://liu.diva-portal.org/smash/get/div... Guarding the perimeter of an area in order to detect potential intruders is an important task in a variety of security-related applications. This task can in many circumstances be performed by a set of camera-equipped unmanned aerial vehicles (UAVs). Such UAVs will occasionally require refueling or recharging, in which case they must temporarily be replaced by other UAVs in order to maintain complete surveillance of the perimeter. In this paper we consider the problem of scheduling such replacements. We present optimal replacement strategies and justify their optimality. |
.
2016 | ||
[106] |
2016. Cognitive robotics in JOURNAL OF EXPERIMENTAL and THEORETICAL ARTIFICIAL INTELLIGENCE, vol 28, issue 5, pp 779-780. Journal of experimental and theoretical artificial intelligence (Print), 28(5):779–780. TAYLOR & FRANCIS LTD. DOI: 10.1080/0952813X.2016.1218649. n/a |
.
[105] |
2016. Editorial: Special Issue on Safety, Security, and Rescue Robotics (SSRR), Part 2. n/a |
.
|
[104] |
2016. Editorial: Special Issue on Safety, Security, and Rescue Robotics (SSRR), Part 1. n/a |
.
|
[103] |
2016. Efficient Processing of Simple Temporal Networks with Uncertainty: Algorithms for Dynamic Controllability Verification. Acta Informatica, 53(6-8):723–752. Springer Publishing Company. DOI: 10.1007/s00236-015-0248-8. fulltext:postprint: http://liu.diva-portal.org/smash/get/div... Temporal formalisms are essential for reasoning about actions that are carried out over time. The exact durations of such actions are generally hard to predict. In temporal planning, the resulting uncertainty is often worked around by only considering upper bounds on durations, with the assumption that when an action happens to be executed more quickly, the plan will still succeed. However, this assumption is often false: If we finish cooking too early, the dinner will be cold before everyone is ready to eat. Using <em>Simple Temporal Networks with Uncertainty (STNU)</em>, a planner can correctly take both lower and upper duration bounds into account. It must then verify that the plans it generates are executable regardless of the actual outcomes of the uncertain durations. This is captured by the property of <em>dynamic controllability</em> (DC), which should be verified incrementally during plan generation. Recently a new incremental algorithm for verifying dynamic controllability was proposed: <em>EfficientIDC</em>, which can verify if an STNU that is DC remains DC after the addition or tightening of a constraint (corresponding to a new action being added to a plan). The algorithm was shown to have a worst-case complexity of <em>O</em>(n<sup>4</sup>) for each addition or tightening. This can be amortized over the construction of a whole STNU for an amortized complexity in <em>O</em>(n<sup>3</sup>). In this paper we improve the <em>EfficientIDC</em> algorithm in a way that prevents it from having to reprocess nodes. This improvement leads to a lower worst-case complexity in <em>O</em>(n<sup>3</sup>). |
.
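The distance-graph machinery underlying such algorithms can be illustrated for plain STNs without uncertainty: a network is consistent iff its distance graph has no negative cycle. The sketch below is the classical Floyd–Warshall check on a toy version of the cooking example, not the EfficientIDC algorithm, which additionally handles uncertain durations and does so incrementally.

```python
# STN consistency via Floyd-Warshall: build the distance graph from binary
# difference constraints and look for a negative cycle.

def stn_consistent(n, edges):
    """edges: (u, v, w) meaning timepoint_v - timepoint_u <= w."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # A negative entry on the diagonal means a negative cycle: inconsistent.
    return all(d[i][i] >= 0 for i in range(n))

# Toy instance: cooking (timepoint 0 to 1) takes 20..30 min; dinner (timepoint
# 2) starts at most 25 min after cooking starts, and only after cooking ends.
edges = [(0, 1, 30), (1, 0, -20),    # 20 <= t1 - t0 <= 30
         (0, 2, 25), (2, 0, 0),      #  0 <= t2 - t0 <= 25
         (2, 1, 0)]                  # t1 <= t2
```

Tightening the bound on dinner to 15 minutes (adding `(0, 2, 15)`) creates a negative cycle, so the tightened network is inconsistent; incremental algorithms such as the paper's avoid recomputing all-pairs distances after each such tightening.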
[102] |
2016. A Modeling Framework for Troubleshooting Automotive Systems. Applied Artificial Intelligence, 30(3):257–296. Taylor & Francis. DOI: 10.1080/08839514.2016.1156955. Note: The published article is a shorter version than the version in manuscript form. The status of this article was earlier Manuscript. Funding agencies: Scania CV AB; FFI - Strategic Vehicle Research and Innovation; Excellence Center at Linkoping and Lund in Information Technology (ELLIIT); Research Council (VR) Linnaeus Center CADICS fulltext:postprint: http://liu.diva-portal.org/smash/get/div... This article presents a novel framework for modeling the troubleshooting process for automotive systems such as trucks and buses. We describe how a diagnostic model of the troubleshooting process can be created using event-driven, nonstationary, dynamic Bayesian networks. Exact inference in such a model is in general not practically possible. Therefore, we evaluate different approximate methods for inference based on the Boyen–Koller algorithm. We identify relevant model classes that have particular structure such that inference can be made with linear time complexity. We also show how models created using expert knowledge can be tuned using statistical data. The proposed learning mechanism can use data that is collected from a heterogeneous fleet of modular vehicles that can consist of different components. The proposed framework is evaluated both theoretically and experimentally on an application example of a fuel injection system. |
.
2015 | ||
[101] |
2015. Querying Flying Robots and Other Things: Ontology-supported stream reasoning. XRDS: Crossroads, 22(2):44–47. Association for Computing Machinery (ACM). Note: DOI does not work: 10.1145/2845155 Link to publication: http://xrds.acm.org/article.cfm?aid=2845... A discussion on the role of ontologies and stream reasoning in Internet of Things applications. |
.
|
[100] |
2015. Towards richer rule languages with polynomial data complexity for the Semantic Web. Data & Knowledge Engineering, 96-97(??):57–77. ELSEVIER SCIENCE BV. DOI: 10.1016/j.datak.2015.04.005. We introduce a Horn description logic called Horn-DL, which is strictly and essentially richer than Horn-RegI, Horn-SHIQ and Horn-SROIQ, while still having PTime data complexity. In comparison with Horn-SROIQ, Horn-DL additionally allows the universal role and assertions of the form irreflexive(s), ¬s(a, b), a ≠ b. More importantly, in contrast to all the well-known Horn fragments EL, DL-Lite, DLP, Horn-SHIQ, and Horn-SROIQ of description logics, Horn-DL allows a form of the concept constructor "universal restriction" to appear at the left-hand side of terminological inclusion axioms. Namely, a universal restriction can be used in such places in conjunction with the corresponding existential restriction. We develop the first algorithm with PTime data complexity for checking satisfiability of Horn-DL knowledge bases. |
.
|
[99] |
2015. Paraconsistent semantics of speech acts. This paper discusses an implementation of four speech acts: assert, concede, request and challenge in a paraconsistent framework. A natural four-valued model of interaction yields multiple new cognitive situations. They are analyzed in the context of communicative relations, which partially replace the concept of trust. These assumptions naturally lead to six types of situations, which often require performing conflict resolution and belief revision. The particular choice of a rule-based, Datalog-like query language, 4QL, as a four-valued implementation framework ensures that, in contrast to the standard two-valued approaches, tractability of the model is achieved. |
.
|
2014 | ||
[98] |
2014. Guided Integrated Remote and Workshop Troubleshooting of Heavy Trucks. International Journal of Commercial Vehicles, 7(1):25–36. SAE International. DOI: 10.4271/2014-01-0284. When a truck or bus suffers from a breakdown it is important that the vehicle comes back on the road as soon as possible. In this paper we present a prototype diagnostic decision support system capable of automatically identifying possible causes of a failure and proposing recommended actions on how to get the vehicle back on the road as cost-efficiently as possible. This troubleshooting system is novel in the way it integrates the remote diagnosis with the workshop diagnosis when providing recommendations. To achieve this integration, a novel planning algorithm has been developed that enables the troubleshooting system to guide the different users (driver, help-desk operator, and mechanic) through the entire troubleshooting process. In this paper we formulate the problem of integrated remote and workshop troubleshooting and present a working prototype that has been implemented to demonstrate all parts of the troubleshooting system. |
.
|
[97] |
2014. A Horn Fragment with PTime Data Complexity of Regular Description Logic with Inverse. VNU Journal of Computer Science and Communication Engineering, 30(4):14–28. Link to publication: http://www.jcsce.vnu.edu.vn/index.php/jc... We study a Horn fragment called Horn-RegI of the regular description logic with inverse RegI, which extends the description logic ALC with inverse roles and regular role inclusion axioms characterized by finite automata. In contrast to the well-known Horn fragments EL, DL-Lite, DLP, Horn-SHIQ and Horn-SROIQ of description logics, Horn-RegI allows a form of the concept constructor "universal restriction" to appear at the left-hand side of terminological inclusion axioms, while still having PTime data complexity. Namely, a universal restriction can be used in such places in conjunction with the corresponding existential restriction. We provide an algorithm with PTime data complexity for checking satisfiability of Horn-RegI knowledge bases. |
.
|
[96] |
2014. Evaluation of a Light-weight Lidar and a Photogrammetric System for Unmanned Airborne Mapping Applications: [Bewertung eines Lidar-systems mit geringem Gewicht und eines photogrammetrischen Systems für Anwendungen auf einem UAV]. Photogrammetrie - Fernerkundung - Geoinformation, ??(4):287–298. E. Schweizerbart'sche Verlagsbuchhandlung. DOI: 10.1127/1432-8364/2014/0223. Link to article: http://www.ingentaconnect.com/content/sc... This paper presents a comparison of two light-weight and low-cost airborne mapping systems. One is based on a lidar technology and the other on a video camera. The airborne lidar system consists of a high-precision global navigation satellite system (GNSS) receiver, a microelectromechanical system (MEMS) inertial measurement unit, a magnetic compass and a low-cost lidar scanner. The vision system is based on a consumer grade video camera. A commercial photogrammetric software package is used to process the acquired images and generate a digital surface model. The two systems are described and compared in terms of hardware requirements and data processing. The systems are also tested and compared with respect to their application on board of an unmanned aerial vehicle (UAV). An evaluation of the accuracy of the two systems is presented. Additionally, the multi-echo capability of the lidar sensor is evaluated in a test site covered with dense vegetation. The lidar and the camera systems were mounted and tested on board an industrial unmanned helicopter with maximum take-off weight of around 100 kilograms. The presented results are based on real flight-test data. |
.
[95] |
2014. WORL: a nonmonotonic rule language for the semantic web. We develop a new Web ontology rule language, called WORL, which combines a variant of OWL 2 RL with eDatalog¬. We allow additional features like negation, the minimal number restriction and unary external checkable predicates to occur at the left-hand side of concept inclusion axioms. Some restrictions are adopted to guarantee a translation into eDatalog¬. We also develop the well-founded semantics and the stable model semantics for WORL as well as the standard semantics for stratified WORL (SWORL) via translation into eDatalog¬. Both WORL with respect to the well-founded semantics and SWORL with respect to the standard semantics have PTime data complexity. In contrast to the existing combined formalisms, in WORL and SWORL negation in concept inclusion axioms is interpreted using nonmonotonic semantics. |
.
|
[94] |
2014. Editorial Material: A perspective on the early history of artificial intelligence in Europe. n/a |
.
|
[93] |
2014. Unmanned aerial vehicle-aided communications system for disaster recovery. Journal of Network and Computer Applications, 41(??):27–36. Elsevier. DOI: 10.1016/j.jnca.2013.10.002. After natural disasters such as earthquakes, floods, hurricanes, tornados and fires, providing emergency management schemes which mainly rely on communications systems is essential for rescue operations. To establish an emergency communications system during unforeseen events such as natural disasters, we propose the use of a team of unmanned aerial vehicles (UAVs). The proposed system is a post-disaster solution and can be used whenever and wherever required. Each UAV in the team has an onboard computer which runs three main subsystems responsible for end-to-end communication, formation control and autonomous navigation. The onboard computer and the low-level controller of the UAV cooperate to accomplish the objective of providing local communications infrastructure. In this study, the subsystems running on each UAV are explained and evaluated by simulation studies and field tests using an autonomous helicopter. While the simulation studies address the efficiency of the end-to-end communication subsystem, the field tests evaluate the accuracy of the navigation subsystem. The results of the field tests and the simulation studies show that the proposed system can be successfully used in case of disasters to establish an emergency communications system. |
.
|
2013 | ||
[92] |
2013. Automated Generation of Logical Constraints on Approximation Spaces Using Quantifier Elimination. Fundamenta Informaticae, 127(1-4):135–149. IOS Press. DOI: 10.3233/FI-2013-900. Note: Funding Agencies|Swedish Research Council (VR) Linnaeus Center CADICS||ELLIIT Excellence Center at Linkoping-Lund in Information Technology||CUAS project||SSF, the Swedish Foundation for Strategic Research|| This paper focuses on approximate reasoning based on the use of approximation spaces. Approximation spaces and the approximated relations induced by them are a generalization of the rough set-based approximations of Pawlak. Approximation spaces are used to define neighborhoods around individuals and rough inclusion functions. These in turn are used to define approximate sets and relations. In any of the approaches, one would like to embed such relations in an appropriate logical theory which can be used as a reasoning engine for specific applications with specific constraints. We propose a framework which permits a formal study of the relationship between properties of approximations and properties of approximation spaces. Using ideas from correspondence theory, we develop an analogous framework for approximation spaces. We also show that this framework can be strongly supported by automated techniques for quantifier elimination. |
.
|
[91] |
2013. How an agent might think. Logic journal of the IGPL (Print), 21(3):515–535. Oxford University Press (OUP): Policy A - Oxford Open Option A. DOI: 10.1093/jigpal/jzs051. The current article is devoted to extensions of the rule query language 4QL proposed by Małuszyński and Szałas. 4QL is a Datalog<sup>¬¬</sup>-like language, allowing one to use rules with negation in heads and bodies of rules. It is based on a simple and intuitive semantics and provides uniform tools for lightweight versions of well-known forms of non-monotonic reasoning. In addition, 4QL is tractable w.r.t. data complexity and captures PTime queries. In the current article we relax most of the restrictions of 4QL, obtaining a powerful but still tractable query language 4QL<sup>+</sup>. In its development we mainly focused on its pragmatic aspects: simplicity, tractability and generality. In the article we discuss our approach and the choices made, define a new, more general semantics and investigate properties of 4QL<sup>+</sup>. |
.
|
[90] |
2013. Toward rich geometric map for SLAM: online detection of planes in 2D LIDAR. Journal of Automation, Mobile Robotics & Intelligent Systems, 7(1):35–41. Link to article: http://www.jamris.org/archive.php Rich geometric models of the environment are needed for robots to carry out their missions. However, a robot operating in a large environment would require a compact representation. In this article, we present a method that relies on the idea that a plane appears as a line segment in a 2D scan, and that by tracking those lines frame after frame, it is possible to estimate the parameters of that plane. The method is divided into three steps: fitting line segments on the points of the 2D scan, tracking those line segments in consecutive scans, and estimating the parameters with a graph-based SLAM (Simultaneous Localisation And Mapping) algorithm. |
.
[89] |
2013. High-level Mission Specification and Planning for Collaborative Unmanned Aircraft Systems using Delegation. Automated specification, generation and execution of high level missions involving one or more heterogeneous unmanned aircraft systems is in its infancy. Much previous effort has been focused on the development of air vehicle platforms themselves together with the avionics and sensor subsystems that implement basic navigational skills. In order to increase the degree of autonomy in such systems so they can successfully participate in more complex mission scenarios such as those considered in emergency rescue that also include ongoing interactions with human operators, new architectural components and functionalities will be required to aid not only human operators in mission planning, but also the unmanned aircraft systems themselves in the automatic generation, execution and partial verification of mission plans to achieve mission goals. This article proposes a formal framework and architecture based on the unifying concept of delegation that can be used for the automated specification, generation and execution of high-level collaborative missions involving one or more air vehicle platforms and human operators. We describe an agent-based software architecture, a temporal logic based mission specification language, a distributed temporal planner and a task specification language that when integrated provide a basis for the generation, instantiation and execution of complex collaborative missions on heterogeneous air vehicle systems. A prototype of the framework is operational in a number of autonomous unmanned aircraft systems developed in our research lab. |
.
[88] |
2013. Stream-Based Hierarchical Anchoring. Autonomous systems situated in the real world often need to recognize, track, and reason about various types of physical objects. In order to allow reasoning at a symbolic level, one must create and continuously maintain a correlation between symbols denoting physical objects and sensor data being collected about them, a process called anchoring. In this paper we present a stream-based hierarchical anchoring framework. A classification hierarchy is associated with expressive conditions for hypothesizing the type and identity of an object given streams of temporally tagged sensor data. The anchoring process constructs and maintains a set of object linkage structures representing the best possible hypotheses at any time. Each hypothesis can be incrementally generalized or narrowed down as new sensor data arrives. Symbols can be associated with an object at any level of classification, permitting symbolic reasoning on different levels of abstraction. The approach is integrated in the DyKnow knowledge processing middleware and has been applied to an unmanned aerial vehicle traffic monitoring application. |
.
[87] |
2013. RoboCup Rescue Robot and Simulation Leagues. The AI Magazine, 34(1):??–??. AAAI Press. Link to journal: http://www.aaai.org/ojs/index.php/aimaga... fulltext:preprint: https://liu.diva-portal.org/smash/get/di... The RoboCup Rescue Robot and Simulation competitions have been held since 2000. The experience gained during these competitions has increased the maturity level of the field, which allowed deploying robots after real disasters (e.g. Fukushima Daiichi nuclear disaster). This article provides an overview of these competitions and highlights the state of the art and the lessons learned. |
.
|
[86] |
2013. Study of efficiency of USAR operations with assistive technologies. Advanced Robotics, 27(5):337–350. DOI: 10.1080/01691864.2013.763723. Note: Funding Agencies|German Federal Ministry of Education and Research|13N9759|German Federal Agency for Technical Relief (THW)||RIF e.V.||JT-electronic GmbH||carat robotic innovation GmbH||Berlin- Oberspree Sondermaschinenbau GmbH (BOS)||SEEBA|| fulltext:postprint: https://liu.diva-portal.org/smash/get/di... This paper presents a study on efficiency of Urban Search and Rescue (USAR) missions that has been carried out within the framework of the German research project I-LOV. After three years of development, first field tests were carried out in 2011 by professionals such as the Rapid Deployment Unit for Salvage Operations Abroad (SEEBA). We present results from evaluating search teams in simulated USAR scenarios equipped with newly developed technical search means and digital data input terminals developed in the I-LOV project. In particular, USAR missions assisted by the "bioradar", a ground-penetrating radar system for the detection of humanoid movements, a semi-active video probe of more than 10 m length for rubble pile exploration, a snake-like rescue robot, and the decision support system FRIEDAA were evaluated and compared with conventional USAR missions. Results of this evaluation indicate that the developed technologies represent an advantage for USAR missions; these advantages are discussed in this paper. |
.
[85] |
2013. A Frontier-Void-Based Approach for Autonomous Exploration in 3D. Advanced Robotics, 27(6):459–468. Taylor and Francis. DOI: 10.1080/01691864.2013.763720. Note: Funding Agencies|Deutsche Forschungsgemeinschaft in the Transregional Collaborative Research Center|SFB/TR8| fulltext:postprint: http://liu.diva-portal.org/smash/get/div... We consider the problem of an autonomous robot searching for objects in unknown 3D space. Similar to the well-known frontier-based exploration in 2D, the problem is to determine a minimal sequence of sensor viewpoints until the entire search space has been explored. We introduce a novel approach that combines the two concepts of voids, which are unexplored volumes in 3D, and frontiers, which are regions on the boundary between voids and explored space. Our approach has been evaluated on a mobile platform equipped with a manipulator searching for victims in a simulated USAR setup. First results indicate the real-world capability and search efficiency of the proposed method. |
.
2012 | ||
[84] |
2012. Modeling and inference for troubleshooting with interventions applied to a heavy truck auxiliary braking system. Engineering applications of artificial intelligence, 25(4):705–719. Elsevier. DOI: 10.1016/j.engappai.2011.02.018. Computer assisted troubleshooting with external interventions is considered. The work is motivated by the task of repairing an automotive vehicle at the lowest possible expected cost. The main contribution is a decision theoretic troubleshooting system that is developed to handle external interventions. In particular, practical issues in modeling for troubleshooting are discussed, the troubleshooting system is described, and a method for efficient probability computations is developed. The troubleshooting system consists of two parts: a planner that relies on AO* search and a diagnoser that utilizes Bayesian networks (BN). The work is based on a case study of an auxiliary braking system of a modern truck. Two main challenges in troubleshooting automotive vehicles are the need for disassembling the vehicle during troubleshooting to access parts to repair, and the difficulty of verifying that the vehicle is fault free. As a consequence, probabilities for faults and for future observations must be computed for a system that has been subject to external interventions that cause changes in the dependency structure. The probability computations are further complicated due to the mixture of instantaneous and non-instantaneous dependencies. To compute the probabilities, we develop a method based on an algorithm, updateBN, that updates a static BN to account for the external interventions. |
.
|
2011 | ||
[83] |
2011. Large scale multiple robot visual mapping with heterogeneous landmarks in semi-structured terrain. This paper addresses the cooperative localization and visual mapping problem with multiple heterogeneous robots. The approach is designed to deal with the challenging large semi-structured outdoors environments in which aerial/ground ensembles are to evolve. We propose the use of heterogeneous visual landmarks, points and line segments, to achieve effective cooperation in such environments. A large-scale SLAM algorithm is generalized to handle multiple robots, in which a global graph maintains the relative relationships between a series of local sub-maps built by the different robots. The key issue when dealing with multiple robots is to find the link between them, and to integrate these relations to maintain the overall geometric consistency; the events that introduce these links on the global graph are described in detail. Monocular cameras are considered as the primary exteroceptive sensor. In order to achieve the undelayed initialization required by the bearing-only observations, the well-known inverse-depth parametrization is adopted to estimate 3D points. Similarly, to estimate 3D line segments, we present a novel parametrization based on anchored Plücker coordinates, to which extensible endpoints are added. Extensive simulations show the proposed developments, and the overall approach is illustrated using real data taken with a helicopter and a ground rover. |
.
[82] |
2011. Converse-PDL with Regular Inclusion Axioms: A Framework for MAS Logics. <em>In this paper we study automated reasoning in the modal logic CPDLreg which is a combination of CPDL (Propositional Dynamic Logic with Converse) and REGc (Regular Grammar Logic with Converse). The logic CPDL is widely used in many areas, including program verification, theory of action and change, and knowledge representation. On the other hand, the logic REGc is applicable in reasoning about epistemic states and ontologies (via Description Logics). The modal logic CPDLreg can serve as a technical foundation for reasoning about agency. Even very rich multi-agent logics can be embedded into CPDLreg via a suitable translation. Therefore, CPDLreg can serve as a test bed to implement and possibly verify new ideas without providing specific automated reasoning techniques for the logic in question. This process can to a large extent be automated. The idea of embedding various logics into CPDLreg is illustrated on a rather advanced logic TEAMLOGK designed to specify teamwork in multi-agent systems. Apart from defining informational and motivational attitudes of groups of agents, TEAMLOGK allows for grading agents' beliefs, goals and intentions. The current paper is a companion to our paper (Dunin-Kęplicz et al., 2010a). The main contributions are the proofs of soundness and completeness of the tableau calculus for CPDLreg provided in (Dunin-Kęplicz et al., 2010a).</em> |
.
|
[81] |
2011. ExpTime Tableau Decision Procedures for Regular Grammar Logics with Converse. Studia Logica: An International Journal for Symbolic Logic, 98(3):387–428. Springer Berlin/Heidelberg. DOI: 10.1007/s11225-011-9341-3. Grammar logics were introduced by Fariñas del Cerro and Penttonen in 1988 and have been widely studied. In this paper we consider regular grammar logics with converse (<em>REG<sup>c</sup></em> logics) and present sound and complete tableau calculi for the general satisfiability problem of <em>REG<sup>c</sup></em> logics and the problem of checking consistency of an ABox w.r.t. a TBox in a <em>REG<sup>c</sup></em> logic. Using our calculi we develop ExpTime (optimal) tableau decision procedures for the mentioned problems, to which various optimization techniques can be applied. We also prove a new result that the data complexity of the instance checking problem in <em>REG<sup>c</sup></em> logics is coNP-complete. |
.
|
[80] |
2011. Logical Foundations and Complexity of 4QL, a Query Language with Unrestricted Negation. The paper discusses properties of 4QL, a DATALOG¬¬-like query language, originally outlined by Małuszyński and Szałas (Małuszyński & Szałas, 2011). 4QL allows one to use rules with negation in heads and bodies of rules. It is based on a simple and intuitive semantics and provides uniform tools for "lightweight" versions of known forms of nonmonotonic reasoning. Negated literals in heads of rules may naturally lead to inconsistencies. On the other hand, rules do not have to attach meaning to some literals. Therefore 4QL is founded on a four-valued semantics, employing the logic introduced in (Małuszyński et al., 2008; Vitória et al., 2009) with truth values: 'true', 'false', 'inconsistent' and 'unknown'. In addition, 4QL is tractable w.r.t. data complexity and captures PTIME queries. Even though DATALOG¬¬ has been known as a concept for the last 30 years, to the best of our knowledge no existing approach enjoys these properties. In the current paper we:<ul><li>investigate properties of well-supported models of 4QL</li><li>prove the correctness of the algorithm for computing well-supported models</li><li>show that 4QL has PTIME data complexity and captures PTIME.</li></ul> |
.
|
[79] |
2011. Reports of the AAAI 2011 Spring Symposia. The AI Magazine, 32(3):119–127. AAAI Press. The Association for the Advancement of Artificial Intelligence presented the 2011 Spring Symposium Series Monday through Wednesday, March 21-23, 2011, at Stanford University. This report summarizes the eight symposia. |
.
|
[78] |
2011. From systems to logic in the early development of nonmonotonic reasoning. This note describes how the notion of nonmonotonic reasoning emerged in Artificial Intelligence from the mid-1960s to 1980. It gives particular attention to the interplay between three kinds of activities: design of high-level programming systems for AI, design of truth-maintenance systems, and the development of nonmonotonic logics. This was not merely a development from logic to implementation: in several cases there was a development from a system design to a corresponding logic. The article concludes with some reflections on the roles and relationships between logicist theory and system design in AI, and in particular in Knowledge Representation. |
.
2010 | ||
[77] |
2010. Exercising Moral Copyright for Evolving Publications. ScieCom Info, 6(3):??–??. Svenskt Resurscentrum för Vetenskaplig Kommunikation. Link: http://www.sciecom.org/ojs/index.php/sci... |
.
[76] |
2010. Tableaux with Global Caching for Checking Satisfiability of a Knowledge Base in the Description Logic SH. Transactions on Computational Collective Intelligence, 1(1):21–38. Springer. ISBN: 978-3-642-15033-3. DOI: 10.1007/978-3-642-15034-0_2. Description logics (DLs) are a family of knowledge representation languages which can be used to represent the terminological knowledge of an application domain in a structured and formally well-understood way. DLs can be used, for example, for conceptual modeling or as ontology languages. In fact, OWL (Web Ontology Language), recommended by W3C, is based on description logics. In the current paper we give the first direct ExpTime (optimal) tableau decision procedure, which is not based on transformation or on the pre-completion technique, for checking satisfiability of a knowledge base in the description logic <em>SH</em>. Our procedure uses sound global caching and can be implemented as an extension of the highly optimized tableau prover TGC to obtain an efficient program for the mentioned satisfiability problem. |
.
|
[75] |
2010. Defeasible inheritance with doubt index and its axiomatic characterization. This article introduces and uses a representation of defeasible inheritance networks where links in the network are viewed as propositions, and where defeasible links are tagged with a quantitative indication of the proportion of exceptions, called the doubt index. This doubt index is used for restricting the length of the chains of inference. The representation also introduces the use of defeater literals that disable the chaining of subsumption links. The use of defeater literals replaces the use of negative defeasible inheritance links, expressing "most A are not B". The new representation improves the expressivity significantly. Inference in inheritance networks is defined by a combination of axioms that constrain the contents of network extensions, a heuristic restriction that also has that effect, and a nonmonotonic operation of minimizing the set of defeater literals while retaining consistency. We introduce an underlying semantics that defines the meaning of literals in a network, and prove that the axioms are sound with respect to this semantics. We also discuss the conditions for obtaining completeness. Traditional concepts, assumptions and issues in research on nonmonotonic or defeasible inheritance are reviewed in the perspective of this approach. |
.
[74] |
2010. Checking Consistency of an ABox w.r.t. Global Assumptions in PDL. We reformulate Pratt's tableau decision procedure of checking satisfiability of a set of formulas in PDL. Our formulation is simpler and its implementation is more direct. Extending the method, we give the first ExpTime (optimal) tableau decision procedure not based on transformation for checking consistency of an ABox w.r.t. a TBox in PDL (here, PDL is treated as a description logic). We also prove a new result that the data complexity of the instance checking problem in PDL is coNP-complete. |
.
|
[73] |
2010. Optimal placement of UV-based communications relay nodes. Journal of Global Optimization, 48(4):511–531. Springer. DOI: 10.1007/s10898-010-9526-8. Note: The original publication is available at www.springerlink.com: Oleg Burdakov, Patrick Doherty, Kaj Holmberg and Per-Magnus Olsson, Optimal placement of UV-based communications relay nodes, 2010, Journal of Global Optimization, (48), 4, 511-531. http://dx.doi.org/10.1007/s10898-010-9526-8 Copyright: Springer Science+Business Media. http://www.springerlink.com/ fulltext:postprint: http://liu.diva-portal.org/smash/get/div... We consider a constrained optimization problem with mixed integer and real variables. It models optimal placement of communications relay nodes in the presence of obstacles. This problem is widely encountered, for instance, in robotics, where it is required to survey some target located in one point and convey the gathered information back to a base station located in another point. One or more unmanned aerial or ground vehicles (UAVs or UGVs) can be used for this purpose as communications relays. The decision variables are the number of unmanned vehicles (UVs) and the UV positions. The objective function is assumed to assess the placement quality. We suggest one instance of such a function which is more suitable for assessing UAV placement. The constraints are determined by, firstly, a free line of sight requirement for every consecutive pair in the chain and, secondly, a limited communication range. Because of these requirements, our constrained optimization problem is a difficult multi-extremal problem for any fixed number of UVs. Moreover, the feasible set of real variables is typically disjoint. We present an approach that allows us to efficiently find a practically acceptable approximation to a global minimum in the problem of optimal placement of communications relay nodes. It is based on a spatial discretization with a subsequent reduction to a shortest path problem. 
The case of a restricted number of available UVs is also considered here. We introduce two label correcting algorithms which are able to take advantage of using some peculiarities of the resulting restricted shortest path problem. The algorithms produce a Pareto solution to the two-objective problem of minimizing the path cost and the number of hops. We justify their correctness. The presented results of numerical 3D experiments show that our algorithms are superior to the conventional Bellman-Ford algorithm tailored to solving this problem. |
.
[72] |
2010. FlexDx: A Reconfigurable Diagnosis Framework. Engineering applications of artificial intelligence, 23(8):1303–1313. Elsevier. DOI: 10.1016/j.engappai.2010.01.004. fulltext:postprint: http://liu.diva-portal.org/smash/get/div... Detecting and isolating multiple faults is a computationally expensive task. It typically consists of computing a set of tests and then computing the diagnoses based on the test results. This paper describes FlexDx, a reconfigurable diagnosis framework which reduces the computational burden while retaining the isolation performance by only running a subset of all tests that is sufficient to find new conflicts. Tests in FlexDx are thresholded residuals used to indicate conflicts in the monitored system. Special attention is given to the issues introduced by a reconfigurable diagnosis framework. For example, tests are added and removed dynamically, tests are partially performed on historic data, and synchronous and asynchronous processing are combined. To handle these issues FlexDx has been implemented using DyKnow, a stream-based knowledge processing middleware framework. Concrete methods for each component in the FlexDx framework are presented. The complete approach is exemplified on a dynamic system which clearly illustrates the complexity of the problem and the computational gain of the proposed approach. |
.
[71] |
2010. A Framework for Graded Beliefs, Goals and Intentions. In natural language we often use graded concepts, reflecting different intensity degrees of certain features. Whenever such concepts appear in a given real-life context, they need to be appropriately expressed in its models. In this paper, we provide a framework which allows for extending the BGI model of agency by grading beliefs, goals and intentions. We concentrate on TEAMLOG [6, 7, 8, 9, 12] and provide a complexity-optimal decision method for its graded version TEAMLOG(K) by translating it into CPDLreg (propositional dynamic logic with converse and "inclusion axioms" characterized by regular languages). We also develop a tableau calculus which leads to the first EXPTIME (optimal) tableau decision procedure for CPDLreg. As CPDLreg is suitable for expressing complex properties of graded operators, the procedure can also be used as a decision tool for other multiagent formalisms. |
.
|
[70] |
2010. A Layered Rule-Based Architecture for Approximate Knowledge Fusion. COMPUTER SCIENCE AND INFORMATION SYSTEMS, 7(3):617–642. COMSIS CONSORTIUM. DOI: 10.2298/CSIS100209015D. In this paper we present a framework for fusing approximate knowledge obtained from various distributed, heterogeneous knowledge sources. This issue is substantial in modeling multi-agent systems, where a group of loosely coupled heterogeneous agents cooperate in achieving a common goal. In [5] we focused on defining a general mechanism for knowledge fusion. Next, the techniques ensuring tractability of fusing knowledge expressed as a Horn subset of propositional dynamic logic were developed in [13,16]. Propositional logics may seem too weak to be useful in real-world applications. On the other hand, propositional languages may be viewed as sublanguages of first-order logics which serve as a natural tool to define concepts in the spirit of description logics [2]. These notions may be further used to define various ontologies, like, e.g., those applicable in the Semantic Web. Taking this step, we propose a framework in which our Horn subset of dynamic logic is combined with deductive database technology. This synthesis is formally implemented in the framework of the HSPDL architecture. The resulting knowledge fusion rules are naturally applicable to real-world data. |
.
|
[69] |
2010. Relay Positioning for Unmanned Aerial Vehicle Surveillance. The international journal of robotics research, 29(8):1069–1087. Sage Publications. DOI: 10.1177/0278364910369463. When unmanned aerial vehicles (UAVs) are used for surveillance, information must often be transmitted to a base station in real time. However, limited communication ranges and the common requirement of free line of sight may make direct transmissions from distant targets impossible. This problem can be solved using relay chains consisting of one or more intermediate relay UAVs. This leads to the problem of positioning such relays given known obstacles, while taking into account a possibly mission-specific quality measure. The maximum quality of a chain may depend strongly on the number of UAVs allocated. Therefore, it is desirable to either generate a chain of maximum quality given the available UAVs or allow a choice from a spectrum of Pareto-optimal chains corresponding to different trade-offs between the number of UAVs used and the resulting quality. In this article, we define several problem variations in a continuous three-dimensional setting. We show how sets of Pareto-optimal chains can be generated using graph search and present a new label-correcting algorithm generating such chains significantly more efficiently than the best-known algorithms in the literature. Finally, we present a new dual ascent algorithm with better performance for certain tasks and situations. |
.
[68] |
2010. Tractable approximate knowledge fusion using the Horn fragment of serial propositional dynamic logic. International Journal of Approximate Reasoning, 51(3):346–362. Elsevier. DOI: 10.1016/j.ijar.2009.11.002. In this paper we investigate a technique for fusing approximate knowledge obtained from distributed, heterogeneous information sources. This issue is substantial, e.g., in modeling multiagent systems, where a group of loosely coupled heterogeneous agents cooperate in achieving a common goal. Information exchange, leading ultimately to knowledge fusion, is a natural and vital ingredient of this process. We use a generalization of rough sets and relations [30], which depends on allowing arbitrary similarity relations. The starting point of this research is [6], where a framework for knowledge fusion in multiagent systems is introduced. Agents' individual perceptual capabilities are represented by similarity relations, further aggregated to express joint capabilities of teams. This aggregation, expressing a shift from the individual to the social level of agents' activity, has been formalized by means of dynamic logic. The approach of Doherty et al. (2007) [6] uses full propositional dynamic logic, which does not guarantee tractability of reasoning. Our idea is to adapt the techniques of Nguyen [26-28] to provide an engine for tractable approximate database querying restricted to a Horn fragment of serial dynamic logic. We also show that the obtained formalism is quite powerful in applications. |
.
|
[67] |
2010. A case-based approach to dialogue systems. Journal of experimental and theoretical artificial intelligence (Print), 22(1):23–51. Taylor & Francis. DOI: 10.1080/09528130902723708. We describe an approach to integrate dialogue management, machine-learning and action planning in a system for dialogue between a human and a robot. Case-based techniques are used because they permit life-long learning from experience and demand little prior knowledge and few static hand-written structures. This approach has been developed through the work on an experimental dialogue system, called CEDERIC, that is connected to an unmanned aerial vehicle (UAV). A single case base and case-based reasoning engine is used both for understanding and for planning actions by the UAV. Dialogue experiments both with experienced and novice users, where the users have solved tasks by dialogue with this system, showed very adequate success rates. |
.
|
[66] |
2010. Bridging the sense-reasoning gap: DyKnow - Stream-based middleware for knowledge processing. Engineering autonomous agents that display rational and goal-directed behavior in dynamic physical environments requires a steady flow of information from sensors to high-level reasoning components. However, while sensors tend to generate noisy and incomplete quantitative data, reasoning often requires crisp symbolic knowledge. The gap between sensing and reasoning is quite wide, and cannot in general be bridged in a single step. Instead, this task requires a more general approach to integrating and organizing multiple forms of information and knowledge processing on different levels of abstraction in a structured and principled manner. We propose knowledge processing middleware as a systematic approach to organizing such processing. Desirable properties are presented and motivated. We argue that a declarative stream-based system is appropriate for the required functionality and present DyKnow, a concrete implemented instantiation of stream-based knowledge processing middleware with a formal semantics. Several types of knowledge processes are defined and motivated in the context of a UAV traffic monitoring application. In the implemented application, DyKnow is used to incrementally bridge the sense-reasoning gap and generate partial logical models of the environment over which metric temporal logical formulas are evaluated. Using such formulas, hypotheses are formed and validated about the type of vehicles being observed. DyKnow is also used to generate event streams representing for example changes in qualitative spatial relations, which are used to detect traffic violations expressed as declarative chronicles. |
.
2009 | ||
[65] |
2009. Annotation Theories over Finite Graphs. Studia Logica: An International Journal for Symbolic Logic, 93(2-3):147–180. Springer. DOI: 10.1007/s11225-009-9220-3. In the current paper we consider theories with a vocabulary containing a number of binary and unary relation symbols. Binary relation symbols represent labeled edges of a graph and unary relations represent unique annotations of the graph’s nodes. Such theories, which we call <em>annotation theories</em>, can be used in many applications, including the formalization of argumentation, approximate reasoning, semantics of logic programs, graph coloring, etc. We address a number of problems related to annotation theories over finite models, including satisfiability, the querying problem, the specification of preferred models and the model checking problem. We show that most of the considered problems are NPTime- or co-NPTime-complete. In order to reduce the complexity for particular theories, we use second-order quantifier elimination. To the best of our knowledge, none of the existing methods works in the case of annotation theories. We then provide a new second-order quantifier elimination method for stratified theories, which is successful in the considered cases. The new result subsumes many other results, including those of [2, 28, 21]. |
.
|
[64] |
2009. Modelling and Reasoning with Paraconsistent Rough Sets. We present a language for defining paraconsistent rough sets and reasoning about them. Our framework relates and brings together two major fields: rough sets [23] and paraconsistent logic programming [9]. To model inconsistent and incomplete information we use a four-valued logic. The language discussed in this paper is based on ideas of our previous work [21, 32, 22] developing a four-valued framework for rough sets. In this approach, the membership function, set containment and set operations are four-valued, where the logical values are t (true), f (false), i (inconsistent) and u (unknown). We investigate properties of paraconsistent rough sets as well as develop a paraconsistent rule language, providing the basic computational machinery for our approach. |
.
|
[63] |
2009. A Temporal Logic-based Planning and Execution Monitoring Framework for Unmanned Aircraft Systems. Research with autonomous unmanned aircraft systems is reaching a new degree of sophistication where targeted missions require complex types of deliberative capability integrated in a practical manner in such systems. Due to these pragmatic constraints, integration is just as important as theoretical and applied work in developing the actual deliberative functionalities. In this article, we present a temporal logic-based task planning and execution monitoring framework and its integration into a fully deployed rotor-based unmanned aircraft system developed in our laboratory. We use a very challenging emergency services application involving body identification and supply delivery as a vehicle for showing the potential use of such a framework in real-world applications. TALplanner, a temporal logic-based task planner, is used to generate mission plans. Building further on the use of TAL (Temporal Action Logic), we show how knowledge gathered from the appropriate sensors during plan execution can be used to create state structures, incrementally building a partial logical model representing the actual development of the system and its environment over time. We then show how formulas in the same logic can be used to specify the desired behavior of the system and its environment and how violations of such formulas can be detected in a timely manner in an execution monitor subsystem. The pervasive use of logic throughout the higher level deliberative layers of the system architecture provides a solid shared declarative semantics that facilitates the transfer of knowledge between different modules. |
.
[62] |
2009. Voting by Eliminating Quantifiers. Studia Logica: An International Journal for Symbolic Logic, 92(3):365–379. Springer. DOI: 10.1007/s11225-009-9200-7. Mathematical theory of voting and social choice has attracted much attention. In the general setting one can view social choice as a method of aggregating individual, often conflicting preferences and making a choice that is the best compromise. How preferences are expressed and what is the “best compromise” varies and heavily depends on a particular situation. The method we propose in this paper depends on expressing individual preferences of voters and specifying properties of the resulting ranking by means of first-order formulas. Then, as a technical tool, we use methods of second-order quantifier elimination to analyze and compute results of voting. We show how to specify voting, how to compute resulting rankings and how to verify voting protocols. |
.
|
[61] |
2009. Vision-Based Unmanned Aerial Vehicle Navigation Using Geo-Referenced Information. EURASIP Journal on Advances in Signal Processing, 2009(387308):1–18. Hindawi Publishing Corporation. DOI: 10.1155/2009/387308. This paper investigates the possibility of augmenting an Unmanned Aerial Vehicle (UAV) navigation system with a passive video camera in order to cope with long-term GPS outages. The paper proposes a vision-based navigation architecture which combines inertial sensors, visual odometry, and registration of the on-board video to a geo-referenced aerial image. The vision-aided navigation system developed is capable of providing high-rate and drift-free state estimation for UAV autonomous navigation without the GPS system. Due to the use of image-to-map registration for absolute position calculation, drift-free position performance depends on the structural characteristics of the terrain. Experimental evaluation of the approach based on offline flight data is provided. In addition the architecture proposed has been implemented on-board an experimental UAV helicopter platform and tested during vision-based autonomous flights. |
.
2008 | ||
[60] |
2008. Extending the concept of publication: Factbases and knowledgebases. Learned Publishing, 21(2):123–131. Association of Learned and Professional Society Publishers. DOI: 10.1087/095315108X288893. The concept of a 'publication' no longer applies only to printed works; information technology has extended its application to several other types of works. This article describes a facility called the Common Knowledge Library that publishes modules of formally structured information representing facts and knowledge of various kinds. Publications of this new type have some characteristics in common with databases, and others in common with software modules; however, they also share some important characteristics with traditional publications. A framework for citation of previous work is important in order to provide an incentive for contributors of such modules. Peer review - the traditional method of quality assurance for scientific articles - must also be applied, although in a modified form, for fact and knowledge modules. The construction of the Common Knowledge Library is a cumulative process: new contributions are obtained by interpreting the contents of existing knowledge sources on the Internet, and the existing contents of the Library are an important resource for that interpretation process. |
.
[59] |
2008. Towards Incorporating Background Theories into Quantifier Elimination. Journal of applied non-classical logics, 18(2-3):325–340. Éditions Hermès-Lavoisier. DOI: 10.3166/jancl.18.325-340. In the paper we present a technique for eliminating quantifiers of arbitrary order, in particular of first-order. Such a uniform treatment of the elimination problem has been problematic up to now, since techniques for eliminating first-order quantifiers do not scale up to higher-order contexts and those for eliminating higher-order quantifiers are usually based on a form of monotonicity w.r.t. implication (set inclusion) and are not applicable to the first-order case. We make a shift to arbitrary relations "ordering" the underlying universe. This allows us to incorporate background theories into higher-order quantifier elimination methods which, up to now, has not been achieved. The technique we propose subsumes many other results, including Ackermann's lemma and various forms of fixpoint approaches when the "ordering" relations are interpreted as implication, and reveals the common principle behind these approaches. |
.
|
[58] |
2008. Automatic fitting procedures for EPR spectra of disordered systems: matrix diagonalization and perturbation methods applied to fluorocarbon radicals. Spectrochimica Acta Part A - Molecular and Biomolecular Spectroscopy, 69(5):1294–1300. Elsevier. DOI: 10.1016/j.saa.2007.09.040. Note: Original publication: A. Lund, P. Andersson, J. Eriksson, J. Hallin, T. Johansson, R. Jonsson, H. Löfgren, C. Paulin and A. Tell, Automatic fitting procedures for EPR spectra of disordered systems: matrix diagonalization and perturbation methods applied to fluorocarbon radicals, 2008, Spectrochimica Acta Part A, (69), 5, 1294-1300. http://dx.doi.org/10.1016/j.saa.2007.09.040. Copyright: Elsevier B.V., http://www.elsevier.com/ fulltext:postprint: http://liu.diva-portal.org/smash/get/div... Two types of automatic fitting procedures for EPR spectra of disordered systems have been developed, one based on matrix diagonalisation of a general spin Hamiltonian, the other on 2<sup>nd</sup> order perturbation theory. The first program is based on a previous Fortran code complemented with a newly written interface in Java to provide user-friendly input and output. The second is intended for the special case of free radicals with several relatively weakly interacting nuclei, in which case the general method becomes slow. A least-squares fitting procedure utilizing analytical or numerical derivatives of the theoretically calculated spectrum with respect to the g- and hyperfine structure (hfs) tensors was used to refine those parameters in both cases. ‘Rigid limit’ ESR spectra from radicals in organic matrices and in polymers, previously studied experimentally at low temperature, were analysed by both methods. Fluorocarbon anion radicals could be simulated quite accurately with the exact method, whereas automatic fitting on e.g. the c-C<sub>4</sub>F<sub>8</sub><sup>-</sup> anion radical is only feasible with the 2<sup>nd</sup> order approximate treatment.
Initial values for the <sup>19</sup>F hfs tensors estimated by DFT calculations were quite close to the final values. For neutral radicals of the type XCF<sub>2</sub>CF<sub>2</sub>• the refinement of the hfs tensors by the exact method worked better than the approximate one. The reasons are discussed. The ability of the fitting procedures to recover the correct magnetic parameters of disordered systems was investigated by fitting to synthetic spectra with known hfs tensors. The exact and the approximate methods are concluded to be complementary, one being general but limited to relatively small systems, the other being a special treatment suited for S=½ systems with several moderately large hfs. |
.
2007 | ||
[57] |
2007. Vision-Based SLAM: Stereo and Monocular Approaches. International Journal of Computer Vision, 74(3):343–364. Kluwer Academic Publishers. DOI: 10.1007/s11263-007-0042-3. Building a spatially consistent model is a key functionality to endow a mobile robot with autonomy. Without an initial map or an absolute localization means, this requires concurrently solving the localization and mapping problems. For this purpose, vision is a powerful sensor, because it provides data from which stable features can be extracted and matched as the robot moves. But it does not directly provide 3D information, which is a difficulty for estimating the geometry of the environment. This article presents two approaches to the SLAM problem using vision: one with stereovision, and one with monocular images. Both approaches rely on a robust interest point matching algorithm that works in very diverse environments. The stereovision based approach is a classic SLAM implementation, whereas the monocular approach introduces a new way to initialize landmarks. Both approaches are analyzed and compared with extensive experimental results, with a rover and a blimp. |
.
[56] |
2007. Communication between agents with heterogeneous perceptual capabilities. In real world applications robots and software agents often have to be equipped with higher level cognitive functions that enable them to reason, act and perceive in changing, incompletely known and unpredictable environments. One of the major tasks in such circumstances is to fuse information from various data sources. There are many levels of information fusion, ranging from the fusing of low level sensor signals to the fusing of high level, complex knowledge structures. In a dynamically changing environment even a single agent may have varying abilities to perceive its environment which are dependent on particular conditions. The situation becomes even more complex when different agents have different perceptual capabilities and need to communicate with each other. In this paper, we propose a framework that provides agents with the ability to fuse both low and high level approximate knowledge in the context of dynamically changing environments while taking account of heterogeneous and contextually limited perceptual capabilities. To model limitations on an agent's perceptual capabilities we introduce the idea of partial tolerance spaces. We assume that each agent has one or more approximate databases where approximate relations are represented using lower and upper approximations on sets. Approximate relations are generalizations of rough sets. It is shown how sensory and other limitations can be taken into account when constructing and querying approximate databases for each respective agent. Complex relations inherit the approximativeness of primitive relations used in their definitions. Agents then query these databases and receive answers through the filters of their perceptual limitations as represented by (partial) tolerance spaces and approximate queries. The techniques used are all tractable. |
.
[55] |
2007. Second-order quantifier elimination in higher-order contexts with applications to the semantical analysis of conditionals. Studia Logica: An International Journal for Symbolic Logic, 87(1):37–50. Springer. DOI: 10.1007/s11225-007-9075-4. Second-order quantifier elimination in the context of classical logic emerged as a powerful technique in many applications, including the correspondence theory, relational databases, deductive and knowledge databases, knowledge representation, commonsense reasoning and approximate reasoning. In the current paper we first generalize the result of Nonnengart and Szalas [17] by allowing second-order variables to appear within higher-order contexts. Then we focus on a semantical analysis of conditionals, using the introduced technique and Gabbay's semantics provided in [10] and substantially using a third-order accessibility relation. The analysis is done via finding correspondences between axioms involving conditionals and properties of the underlying third-order relation. |
.
[54] |
2007. A correspondence framework between three-valued logics and similarity-based approximate reasoning. Fundamenta Informaticae, 75(1-4):179–193. IOS Press. This paper focuses on approximate reasoning based on the use of similarity spaces. Similarity spaces and the approximated relations induced by them are a generalization of the rough set-based approximations of Pawlak [17, 18]. Similarity spaces are used to define neighborhoods around individuals and these in turn are used to define approximate sets and relations. In any of the approaches, one would like to embed such relations in an appropriate logic which can be used as a reasoning engine for specific applications with specific constraints. We propose a framework which permits a formal study of the relationship between approximate relations, similarity spaces and three-valued logics. Using ideas from correspondence theory for modal logics and constraints on an accessibility relation, we develop an analogous framework for three-valued logics and constraints on similarity relations. In this manner, we can provide a tool which helps in determining the proper three-valued logical reasoning engine to use for different classes of approximate relations generated via specific types of similarity spaces. Additionally, by choosing a three-valued logic first, the framework determines what constraints would be required on a similarity relation and the approximate relations induced by it. Such information would guide the generation of approximate relations for specific applications. |
.
2006 | ||
[53] |
2006. Systems - Opening up the process. n/a |
.
|
[52] |
2006. Improving heuristics through relaxed search - An analysis of TP4 and HSP*a in the 2004 planning competition. The journal of artificial intelligence research, 25(??):233–267. AAAI Press. DOI: 10.1613/jair.1885. The h(m) admissible heuristics for (sequential and temporal) regression planning are defined by a parameterized relaxation of the optimal cost function in the regression search space, where the parameter m offers a trade-off between the accuracy and computational cost of the heuristic. Existing methods for computing the h(m) heuristic require time exponential in m, limiting them to small values (m <= 2). The h(m) heuristic can also be viewed as the optimal cost function in a relaxation of the search space: this paper presents relaxed search, a method for computing this function partially by searching in the relaxed space. The relaxed search method, because it computes h(m) only partially, is computationally cheaper and therefore usable for higher values of m. The (complete) h(2) heuristic is combined with partial h(m) heuristics, for m = 3, ..., computed by relaxed search, resulting in a more accurate heuristic. This use of the relaxed search method to improve on the h(2) heuristic is evaluated by comparing two optimal temporal planners: TP4, which does not use it, and HSP*(a), which uses it but is otherwise identical to TP4. The comparison is made on the domains used in the 2004 International Planning Competition, in which both planners participated. Relaxed search is found to be cost effective in some of these domains, but not all. Analysis reveals a characterization of the domains in which relaxed search can be expected to be cost effective, in terms of two measures on the original and relaxed search spaces. In the domains where relaxed search is cost effective, expanding small states is computationally cheaper than expanding large states and small states tend to have small successor states. |
.
[51] |
2006. A flexible runtime system for image processing in a distributed computational environment for an unmanned aerial vehicle. International Journal of Pattern Recognition and Artificial Intelligence, 20(5):763–780. DOI: 10.1142/S0218001406004867. A runtime system for implementation of image processing operations is presented. It is designed for working in a flexible and distributed environment related to the software architecture of a newly developed UAV system. The software architecture can be characterized at a coarse scale as a layered system, with a deliberative layer at the top, a reactive layer in the middle, and a processing layer at the bottom. At a finer scale each of the three levels is decomposed into sets of modules which communicate using CORBA, allowing system development and deployment on the UAV to be made in a highly flexible way. Image processing takes place in a dedicated module located in the process layer, and is the main focus of the paper. This module has been designed as a runtime system for data flow graphs, allowing various processing operations to be created online and on demand by the higher levels of the system. The runtime system is implemented in Java, which allows development and deployment to be made on a wide range of hardware/software configurations. Optimizations for particular hardware platforms have been made using Java's native interface. |
.
|
[50] |
2006. Probabilistic roadmap based path planning for an autonomous unmanned helicopter. Journal of Intelligent & Fuzzy Systems, 17(4):395–405. IOS Press. The emerging area of intelligent unmanned aerial vehicle (UAV) research has shown rapid development in recent years and offers a great number of research challenges for artificial intelligence. For both military and civil applications, there is a desire to develop more sophisticated UAV platforms where the emphasis is placed on development of intelligent capabilities. Imagine a mission scenario where a UAV is supplied with a 3D model of a region containing buildings and road structures and is instructed to fly to an arbitrary number of building structures and collect video streams of each of the building's respective facades. In this article, we describe a fully operational UAV platform which can achieve such missions autonomously. We focus on the path planner integrated with the platform which can generate collision free paths autonomously during such missions. Both probabilistic roadmap-based (PRM) and rapidly exploring random trees-based (RRT) algorithms have been used with the platform. The PRM-based path planner has been tested together with the UAV platform in an urban environment used for UAV experimentation. |
.
|
[49] |
2006. A knowledge processing middleware framework and its relation to the JDL data fusion model. Journal of Intelligent & Fuzzy Systems, 17(4):335–351. IOS Press. Any autonomous system embedded in a dynamic and changing environment must be able to create qualitative knowledge and object structures representing aspects of its environment on the fly from raw or preprocessed sensor data in order to reason qualitatively about the environment and to supply such state information to other nodes in the distributed network in which it is embedded. These structures must be managed and made accessible to deliberative and reactive functionalities whose successful operation is dependent on being situationally aware of the changes in both the robotic agent's embedding and internal environments. DyKnow is a knowledge processing middleware framework which provides a set of functionalities for contextually creating, storing, accessing and processing such structures. The framework is implemented and has been deployed as part of a deliberative/reactive architecture for an autonomous unmanned aerial vehicle. The architecture itself is distributed and uses real-time CORBA as a communications infrastructure. We describe the system and show how it can be used to create more abstract entity and state representations of the world which can then be used for situation awareness by an unmanned aerial vehicle in achieving mission goals. We also show that the framework is a working instantiation of many aspects of the JDL data fusion model. |
.
[48] |
2006. Second-order Reasoning in Description Logics. Journal of applied non-classical logics, 16(3-4):517–530. Éditions Hermès-Lavoisier. DOI: 10.3166/jancl.16.517-530. Description logics refer to a family of formalisms concentrated around concepts, roles and individuals. They are among the most frequently used knowledge representation formalisms and provide a logical basis to a variety of well known paradigms. The main reasoning tasks considered in the area of description logics are those reducible to subsumption. On the other hand, any knowledge representation system should be equipped with a more advanced reasoning machinery. Therefore in the current paper we make a step towards integrating description logics with second-order reasoning. One of the important motivations behind introducing a second-order formalism follows from the fact that many forms of commonsense and nonmonotonic reasoning used in AI can be modelled within second-order logic. To achieve our goal we first extend description logics with the possibility to quantify over concepts. Since one of the main criticisms against the use of second-order formalisms is their complexity, we next propose second-order quantifier elimination techniques applicable to a large class of description logic formulas. Finally we show applications of the techniques, in particular in reasoning with circumscribed concepts and approximated terminological formulas. |
.
|
[47] |
2006. Approximate Databases: A support tool for approximate reasoning. Journal of applied non-classical logics, 16(1-2):87–118. Éditions Hermès-Lavoisier. DOI: 10.3166/jancl.16.87-117. Note: Special issue on implementation of logics This paper describes an experimental platform for approximate knowledge databases called the Approximate Knowledge Database (AKDB), based on a semantics inspired by rough sets. The implementation is based upon the use of a standard SQL database to store logical facts, augmented with several query interface layers implemented in JAVA through which extensional, intensional and local closed world nonmonotonic queries in the form of crisp or approximate logical formulas can be evaluated tractably. A graphical database design user interface is also provided which simplifies the design of databases, the entering of data and the construction of queries. The theory and semantics for AKDBs is presented in addition to application examples and details concerning the database implementation. |
.
2005 | ||
[46] |
2005. Enactive Cognitive Science: Part 1, Background and Research Themes. Enactive cognitive science is an approach to the study of mind that seeks to explain how the structures and mechanisms of autonomous cognitive systems can arise and participate in the generation and maintenance of viable perceiver-dependent worlds -- rather than more conventional cognitivist efforts, such as the attempt to explain cognition in terms of the "recovery" of (pre-given, timeless) features of The (objectively-existing and accessible) World. As such, enactive cognitive science is resonant with radical constructivism. And as with other scientific efforts conducted within a constructivist orientation, it is broadly "conventional" in its scientific methodology. That is, there is a strong emphasis on testable hypotheses, empirical observation, supportable mechanisms and models, rigorous experimental methods, acceptable criteria of validation, and the like. Nonetheless, this approach to cognitive science does also raise a number of specific questions about the scope of amenable phenomena (e.g. meaning, consciousness, etc.) -- and it also raises questions of whether such a perspective requires an expansion of what is typically considered within the purview of scientific method (e.g. the role of the observer/scientist). This paper is a brief introduction to enactive cognitive science: a description of some of the main research concerns; some examples of how such concerns have been realized in actual research; some of its research methods and proposed explanatory mechanisms and models; some of its potential as both a theoretical and applied science; and several of the major open research questions. |
.
|
2004 | ||
[45] |
2004. DyKnow: An approach to middleware for knowledge processing. Journal of Intelligent & Fuzzy Systems, 15(1):3–13. IOS Press. Any autonomous system embedded in a dynamic and changing environment must be able to create qualitative knowledge and object structures representing aspects of its environment on the fly from raw or preprocessed sensor data in order to reason qualitatively about the environment. These structures must be managed and made accessible to deliberative and reactive functionalities which are dependent on being situationally aware of the changes in both the robotic agent's embedding and internal environment. DyKnow is a software framework which provides a set of functionalities for contextually accessing, storing, creating and processing such structures. The system is implemented and has been deployed in a deliberative/reactive architecture for an autonomous unmanned aerial vehicle. The architecture itself is distributed and uses real-time CORBA as a communications infrastructure. We describe the system and show how it can be used in execution monitoring and chronicle recognition scenarios for UAV applications. |
.
[44] |
2004. Elaboration tolerance through object-orientation. Although many formalisms for reasoning about action and change have been proposed in the literature, any concrete examples provided in such articles have primarily consisted of tiny domains that highlight some particular aspect or problem. However, since some of the classical problems are now completely or partially solved and since powerful tools are becoming available, it is now necessary to start modeling more complex domains. This article presents a methodology for handling such domains in a systematic manner using an object-oriented framework and provides several examples of the elaboration tolerance exhibited by the resulting models. |
.
[43] |
2004. A fuzzy gain-scheduler for the attitude control of an unmanned helicopter. IEEE transactions on fuzzy systems, 12(4):502–515. IEEE Computer Society. DOI: 10.1109/TFUZZ.2004.832539. In this paper, we address the design of an attitude controller that achieves stable and robust aggressive maneuverability for an unmanned helicopter. The controller proposed is in the form of a fuzzy gain-scheduler, and is used for stable and robust altitude, roll, pitch, and yaw control. The controller is obtained from a realistic nonlinear multiple-input-multiple-output model of a real unmanned helicopter platform, the APID-MK3. The results of this work are illustrated by extensive simulation, showing that the objective of aggressive and robust maneuverability has been achieved. |
.
[42] |
2004. A Fuzzy Flight Controller Combining Linguistic and Model-based Fuzzy Control. In this paper we address the design of a fuzzy flight controller that achieves stable and robust "aggressive" manoeuvrability for an unmanned helicopter. The fuzzy flight controller proposed consists of a combination of a fuzzy gain scheduler and a linguistic (Mamdani-type) controller. The fuzzy gain scheduler is used for stable and robust altitude, roll, pitch, and yaw control. The linguistic controller is used to compute the inputs to the fuzzy gain scheduler, i.e., desired values for roll, pitch, and yaw for given desired altitude and horizontal velocities. The flight controller is obtained and tested in simulation using a realistic nonlinear MIMO model of a real unmanned helicopter platform, the APID-MK |
.
|
[41] |
2004. Issues in Designing Physical Agents for Dynamic Real-Time Environments: World Modeling, Planning, Learning, and Communicating. The AI Magazine, 25(2):137–138. AAAI Press. This article discusses a workshop held in conjunction with the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-03) in Acapulco, Mexico, on 11 August 2003. |
.
2003 | ||
[40] |
2003. TALplanner in the Third International Planning Competition: Extensions and control rules. The journal of artificial intelligence research, 20(??):343–377. AAAI Press. DOI: 10.1613/jair.1189. TALplanner is a forward-chaining planner that relies on domain knowledge in the shape of temporal logic formulas in order to prune irrelevant parts of the search space. TALplanner recently participated in the third International Planning Competition, which had a clear emphasis on increasing the complexity of the problem domains being used as benchmark tests and the expressivity required to represent these domains in a planning system. Like many other planners, TALplanner had support for some but not all aspects of this increase in expressivity, and a number of changes to the planner were required. After a short introduction to TALplanner, this article describes some of the changes that were made before and during the competition. We also describe the process of introducing suitable domain knowledge for several of the competition domains. |
.
[39] |
2003. Preface. Fundamenta Informaticae, 57(2-4):i–iii. IOS Press. |
.
|
[38] |
2003. Formalizing defeasible logic in CAKE. Fundamenta Informaticae, 57(2-3):193–213. IOS Press. Due to its efficiency, defeasible logic is one of the most interesting non-monotonic formalisms. Unfortunately, the logic has one major limitation: it does not properly deal with cyclic defeasible rules. In this paper, we provide a new variant of defeasible logic, using the CAKE method. The resulting formalism is tractable and properly deals with circular defeasible rules. |
.
[37] |
2003. Towards a framework for approximate ontologies. Fundamenta Informaticae, 57(2-4):147–165. IOS Press. Currently, there is a great deal of interest in developing tools for the generation and use of ontologies on the WWW. These knowledge structures are considered essential to the success of the semantic web, the next phase in the evolution of the WWW. Much recent work with ontologies assumes that the concepts used as building blocks are crisp as opposed to approximate. It is a premise of this paper that approximate concepts and ontologies will become increasingly important as the semantic web becomes a reality. We propose a framework for specifying, generating and using approximate ontologies. More specifically, (1) a formal framework for defining approximate concepts, ontologies and operations on approximate concepts and ontologies is presented; the framework is based on intuitions from rough set theory. (2) Algorithms for automatically generating approximate ontologies from traditional crisp ontologies or from large data sets together with additional knowledge are presented. The knowledge will generally be related to similarity measurements between individual objects in the data sets, or constraints of a logical nature which rule out particular constellations of concepts and dependencies in generated ontologies. The techniques for generating approximate ontologies are parameterizable. The paper provides specific instantiations and examples. |
.
[36] |
2003. 2003 AAAI Spring Symposium Series. The AI Magazine, 24(3):131–140. AAAI Press. The American Association for Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, presented the 2003 Spring Symposium Series, Monday through Wednesday, 24–26 March 2003, at Stanford University. The titles of the eight symposia were Agent-Mediated Knowledge Management; Computational Synthesis: From Basic Building Blocks to High-Level Functions; Foundations and Applications of Spatiotemporal Reasoning (FASTR); Human Interaction with Autonomous Systems in Complex Environments; Intelligent Multimedia Knowledge Management; Logical Formalization of Commonsense Reasoning; Natural Language Generation in Spoken and Written Dialogue; and New Directions in Question Answering. |
.
|
2001 | ||
[35] |
2001. On the Design of Software Individuals. Electronic Transactions on Artificial Intelligence, 5(??):??–??. Linköpings Universitet. |
.
[34] |
2001. TALPLANNER - A temporal logic-based planner. The AI Magazine, 22(3):95–102. AAAI Press. TALPLANNER is a forward-chaining planner that utilizes domain-dependent knowledge to control search in the state space generated by action invocation. The domain-dependent control knowledge, background knowledge, plans, and goals are all represented using formulas in a temporal logic called TAL, which has been developed independently as a formalism for specifying agent narratives and reasoning about them. In the Fifth International Artificial Intelligence Planning and Scheduling Conference planning competition, TALPLANNER exhibited impressive performance, winning the Outstanding Performance Award in the Domain-Dependent Planning Competition. In this article, we provide an overview of TALPLANNER. |
.
2000 | ||
[33] |
2000. Defining and Certifying Electronic Publication in Science. Learned Publishing, 13(4):251–258. Association of Learned and Professional Society Publishers. Link to article: http://www.ida.liu.se/ext/caisor/archive... |
.
[32] |
2000. Towards efficient universal planning: A randomized approach. One of the most widespread approaches to reactive planning is Schoppers' universal plans. We propose a stricter definition of universal plans which guarantees a weak notion of soundness, not present in the original definition, and isolate three different types of completeness that capture different behaviors exhibited by universal plans. We show that universal plans which run in polynomial time and are of polynomial size cannot satisfy even the weakest type of completeness unless the polynomial hierarchy collapses. By relaxing either the polynomial time or the polynomial space requirement, the construction of universal plans satisfying the strongest type of completeness becomes trivial. As an alternative approach, we study randomized universal planning. By considering a randomized version of completeness and a restricted (but nontrivial) class of problems, we show that there exist randomized universal plans running in polynomial time and using polynomial space which are sound and complete for the restricted class of problems. We also report experimental results on this approach to planning, showing that the performance of a randomized planner is not easily compared to that of a deterministic planner. |
.
[31] |
2000. TALplanner: A temporal logic based forward chaining planner. Annals of Mathematics and Artificial Intelligence, 30(1-4):119–169. Springer. DOI: 10.1023/A:1016619613658. We present TALplanner, a forward-chaining planner based on the use of domain-dependent search control knowledge represented as formulas in the Temporal Action Logic (TAL). TAL is a narrative based linear metric time logic used for reasoning about action and change in incompletely specified dynamic environments. TAL is used as the formal semantic basis for TALplanner, where a TAL goal narrative with control formulas is input to TALplanner which then generates a TAL narrative that entails the goal and control formulas. The sequential version of TALplanner is presented. The expressivity of plan operators is then extended to deal with an interesting class of resource types. An algorithm for generating concurrent plans, where operators have varying durations and internal state, is also presented. All versions of TALplanner have been implemented. The potential of these techniques is demonstrated by applying TALplanner to a number of standard planning benchmarks in the literature. |
.
[30] |
2000. The PMA and relativizing minimal change for action update. Fundamenta Informaticae, 44(1-2):95–131. IOS Press. Recently, a great deal of progress has been made using nonmonotonic temporal logics to formalize reasoning about action and change. In particular, much focus has been placed on the proper representation of non-deterministic actions and the indirect effects of actions. For the latter the use of causal or fluent dependency rule approaches has been dominant. Although much recent effort has also been spent applying the belief revision/update (BR/U) approach to the action and change domain, there has been less progress in dealing with nondeterministic update and indirect effects represented as integrity constraints. We demonstrate that much is to be gained by cross-fertilization between the two paradigms and we show this in the following manner. We first propose a generalization of the PMA, called the modified MPMA which uses intuitions from the TL paradigm to permit representation of nondeterministic update and the use of integrity constraints interpreted as causal or fluent dependency rules. We provide several syntactic characterizations of MPMA, one of which is in terms of a simple temporal logic and provide a representation theorem showing equivalence between the two. In constructing the MPMA, we discovered a syntactic anomaly which we call the redundant atom anomaly that many TL approaches suffer from. We provide a method for avoiding the problem which is equally applicable across paradigms. We also describe a syntactic characterization of MPMA in terms of Dijkstra semantics. We set up a framework for future generalization of the BR/U approach and conclude with a formal comparison of related approaches. |
.
|
[29] |
2000. Tackling the qualification problem using fluent dependency constraints. In the area of formal reasoning about action and change, one of the fundamental representation problems is providing concise, modular, and incremental specifications of action types and world models, where instantiations of action types are invoked by agents such as mobile robots. Provided the preconditions to the action are true, their invocation results in changes to the world model concomitant with the goal-directed behavior of the agent. One particularly difficult class of related problems, collectively called the qualification problem, deals with the need to find a concise, incremental, and modular means of characterizing the plethora of exceptional conditions that might qualify an action, but generally do not, without having to explicitly enumerate them in the preconditions to an action. We show how fluent dependency constraints together with the use of durational fluents can be used to deal with problems associated with action qualification using a temporal logic for action and change called TAL-Q. We demonstrate the approach using action scenarios that combine solutions to the frame, ramification, and qualification problems in the context of actions with duration, concurrent actions, nondeterministic actions, and the use of both Boolean and non-Boolean fluents. The circumscription policy used for the combined problems is reducible to the first-order case. |
.
1999 | ||
[28] |
1999. Reasoning about Concurrent Interaction. Journal of logic and computation (Print), 9(5):623–650. Oxford University Press. DOI: 10.1093/logcom/9.5.623. In this paper we present TAL-C, a logic of action and change for worlds with action concurrency. TAL-C has a first-order semantics and proof theory. It builds on an existing logic TAL, which includes the use of dependency laws for dealing with ramification. It is demonstrated how TAL-C can represent a number of phenomena related to action concurrency: action duration, how the effects of one action interfere with or enable another action, synergistic effects of concurrent actions, conflicting and cumulative effect interactions, and resource conflicts. A central idea is that actions are not described as having effects that directly alter the world state. Instead, actions produce influences, and the way these influences alter the world state is described in specialized influence laws. Finally, we address how TAL-C narratives can be written to support modularity. |
.
[27] |
1999. Reasoning about action in polynomial time. Although many formalisms for reasoning about action exist, surprisingly few approaches have taken computational complexity into consideration. The contributions of this article are the following: a temporal logic with a restriction for which deciding satisfiability is tractable, a tractable extension for reasoning about action, and NP-completeness results for the unrestricted problems. Many interesting reasoning problems can be modelled, involving nondeterminism, concurrency and memory of actions. The reasoning process is proved to be sound and complete. |
.
|
[26] |
1999. Meta-queries on deductive databases. We introduce the notion of a meta-query on relational databases and a technique which can be used to represent and solve a number of interesting problems from the area of knowledge representation using logic. The technique is based on the use of quantifier elimination and may also be used to query relational databases using a declarative query language called SHQL (Semi-Horn Query Language), introduced in [6]. SHQL is a fragment of classical first-order predicate logic and allows us to define a query without supplying its explicit definition. All SHQL queries to the database can be processed in polynomial time (in both the size of the input query and the size of the database). We demonstrate the use of the technique in problem solving by structuring logical puzzles from the Knights and Knaves domain as SHQL meta-queries on relational databases. We also provide additional examples demonstrating the flexibility of the technique. We conclude with a description of a newly developed software tool, The Logic Engineer, which aids in the description of algorithms using transformation and reduction techniques such as those applied in the meta-querying approach. |
.
[25] |
1999. Declarative PTIME queries for relational databases using quantifier elimination. Journal of logic and computation (Print), 9(5):737–758. Oxford University Press. DOI: 10.1093/logcom/9.5.737. In this paper, we consider the problem of expressing and computing queries on relational deductive databases in a purely declarative query language, called SHQL (Semi-Horn Query Language). Assuming the relational databases in question are ordered, we show that all SHQL queries are computable in PTIME (polynomial time) and the whole class of PTIME queries is expressible in SHQL. Although similar results have been proven for fixpoint languages and extensions to datalog, the claim is that SHQL has the advantage of being purely declarative, where the negation operator is interpreted as classical negation, mixed quantifiers may be used and a query is simply a restricted first-order theory not limited by the rule-based syntactic restrictions associated with logic programs in general. We describe the PTIME algorithm used to compute queries in SHQL which is based in part on quantifier elimination techniques and also consider extending the method to incomplete relational databases using intuitions related to circumscription techniques. |
.
1998 | ||
[24] |
1998. (TAL) temporal action logics: Language specification and tutorial. Electronic Transactions on Artificial Intelligence, 2(3-4):273–306. Link: http://www.ep.liu.se/ej/etai/1998/009/ The purpose of this article is to provide a uniform, lightweight language specification and tutorial for a class of temporal logics for reasoning about action and change that has been developed by our group during the period 1994-1998. The class of logics is collected under the name TAL, an acronym for Temporal Action Logics. TAL has its origins and inspiration in the work with Features and Fluents (FF) by Sandewall, but has diverged from the methodology and approach through the years. We first discuss distinctions and compatibility with FF, move on to the lightweight language specification, and then present a tutorial in terms of an excursion through the different parts of a relatively complex narrative defined using TAL. We conclude with an annotated list of published work from our group. The article tries to strike a reasonable balance between detail and readability, making a number of simplifications regarding narrative syntax and translation to a base logical language. Full details are available in numerous technical reports and articles which are listed in the final section of this article. |
.
[23] |
1998. General domain circumscription and its effective reductions. We first define general domain circumscription (GDC) and provide it with a semantics. GDC subsumes existing domain circumscription proposals in that it allows varying of arbitrary predicates, functions, or constants, to maximize the minimization of the domain of a theory. We then show that for the class of semi-universal theories without function symbols, the domain circumscription of such theories can be constructively reduced to logically equivalent first-order theories by using an extension of the DLS algorithm, previously proposed by the authors for reducing second-order formulas. We also show that for a certain class of domain circumscribed theories, any arbitrary second-order circumscription policy applied to these theories is guaranteed to be reducible to a logically equivalent first-order theory. In the case of semi-universal theories with functions and arbitrary theories which are not separated, we provide additional results, which although not guaranteed to provide reductions in all cases, do provide reductions in some cases. These results are based on the use of fixpoint reductions. |
.
|
1997 | ||
[22] |
1997. Rolling Bearing Simulation on MIMD Computers. The international journal of high performance computing applications, 11(4):299–313. Sage Publications. DOI: 10.1177/109434209701100404. Fulltext: https://doi.org/10.1177/1094342097011004... Rolling bearing simulations are very computationally intensive. Serial simulations may take weeks to execute, and there is a need to use the potential of parallel computing. The specific structure of the rolling bearing problem is used to develop suitable scheduling strategies. The authors discuss the system of stiff ordinary differential equations arising from a bearing model and show how to numerically solve these ordinary differential equations on parallel computers. Benchmarking results are presented for two test cases on three platforms. |
.
|
[21] |
1997. Computing circumscription revisited: A reduction algorithm. Journal of automated reasoning, 18(3):297–336. Kluwer Academic Publishers. DOI: 10.1023/A:1005722130532. In recent years, a great deal of attention has been devoted to logics of common-sense reasoning. Among the candidates proposed, circumscription has been perceived as an elegant mathematical technique for modeling nonmonotonic reasoning, but difficult to apply in practice. The major reason for this is the second-order nature of circumscription axioms and the difficulty in finding proper substitutions of predicate expressions for predicate variables. One solution to this problem is to compile, where possible, second-order formulas into equivalent first-order formulas. Although some progress has been made using this approach, the results are not as strong as one might desire and they are isolated in nature. In this article, we provide a general method that can be used in an algorithmic manner to reduce certain circumscription axioms to first-order formulas. The algorithm takes as input an arbitrary second-order formula and either returns as output an equivalent first-order formula, or terminates with failure. The class of second-order formulas, and analogously the class of circumscriptive theories that can be reduced, provably subsumes those covered by existing results. We demonstrate the generality of the algorithm using circumscriptive theories with mixed quantifiers (some involving Skolemization), variable constants, nonseparated formulas, and formulas with n-ary predicate variables. In addition, we analyze the strength of the algorithm, compare it with existing approaches, and provide formal subsumption results. |
.
1996 | ||
[20] |
1996. On Natural Deduction in First-Order Fixpoint Logics. In the current paper we present a powerful technique of obtaining natural deduction proof systems for first-order fixpoint logics. The term fixpoint logics refers collectively to a class of logics consisting of modal logics with modalities definable at meta-level by fixpoint equations on formulas. The class was found very interesting as it contains most logics of programs, with e.g. dynamic logic, temporal logic and the μ-calculus among them. In this paper we present a technique that allows us to derive automatically natural deduction systems for modal logics from fixpoint equations defining the modalities. |
.
|
[19] |
1996. A reduction result for circumscribed semi-horn formulas. Circumscription has been perceived as an elegant mathematical technique for modeling nonmonotonic and commonsense reasoning, but difficult to apply in practice due to the use of second-order formulas. One proposal for dealing with the computational problems is to identify classes of first-order formulas whose circumscription can be shown to be equivalent to a first-order formula. In previous work, we presented an algorithm which reduces certain classes of second-order circumscription axioms to logically equivalent first-order formulas. The basis for the algorithm is an elimination lemma due to Ackermann. In this paper, we capitalize on the use of a generalization of Ackermann's Lemma in order to deal with a subclass of universal formulas called semi-Horn formulas. Our results subsume previous results by Kolaitis and Papadimitriou regarding a characterization of circumscribed definite logic programs which are first-order expressible. The method for distinguishing which formulas are reducible is based on a boundedness criterion. The approach we use is to first reduce a circumscribed semi-Horn formula to a fixpoint formula which is reducible if the formula is bounded, otherwise not. In addition to a number of other extensions, we also present a fixpoint calculus which is shown to be sound and complete for bounded fixpoint formulas. |
.
1994 | ||
[18] |
1994. On an Automated Translation of Modal Proof Rules into Formulas of the Classical Logic. Journal of Applied Non-Classical Logics, 4(2):119–127. Éditions Hermès-Lavoisier. In this paper we study correspondences between modal proof rules and the classical logic. The method we apply is based on Ackermann's technique of eliminating second-order quantifiers from formulas. We show that the process of finding suitable correspondences can be reduced to a few simple steps. Moreover, the whole technique can be fully mechanized. We thus provide the reader with a powerful tool, useful in automated translations between modal logics and the classical logic. |
.
|
1993 | ||
[17] |
1993. On the Correspondence between Modal and Classical Logic: An Automated Approach. Journal of logic and computation (Print), 3(6):605–620. Oxford University Press. DOI: 10.1093/logcom/3.6.605. The current paper is devoted to automated techniques in correspondence theory. The theory we deal with concerns the problem of finding classical first-order axioms corresponding to propositional modal schemas. Given a modal schema and a semantics-based method of translating propositional modal formulae into classical first-order ones, we try to derive automatically classical first-order formulae characterizing precisely the class of frames validating the schema. The technique we consider can, in many cases, be easily applied even without computer support. Although we mainly concentrate on Kripke semantics, the technique we apply is much more general, as it is based on elimination of second-order quantifiers from formulae. We show many examples of application of the method. These can also serve as new, automated proofs of the considered correspondences. |
.
|
[16] |
1993. Fuzzy if-then-unless rules and their implementation. International Journal of Uncertainty Fuzziness and Knowledge-Based Systems, 1(2):167–182. World Scientific. DOI: 10.1142/S0218488593000097. We consider the possibility of generalizing the notion of a fuzzy If-Then rule to take into account its context dependent nature. We interpret fuzzy rules as modeling a forward directed causal relationship between the antecedent and the conclusion, which applies in most contexts, but on occasion breaks down in exceptional contexts. The default nature of the rule is modeled by augmenting the original If-Then rule with an exception part. We then consider the proper semantic correlate to such an addition and propose a ternary relation which satisfies a number of intuitive constraints described in terms of a number of inference rules. In the rest of the paper, we consider implementational issues arising from the unless extension and propose the use of reason maintenance systems, in particular TMS's, where a fuzzy If-Then-Unless rule is encoded into a dependency net. We verify that the net satisfies the constraints stated in the inference schemes and conclude with a discussion concerning the integration of qualitative IN-OUT labelings of the TMS with quantitative degree of membership labelings for the variables in question. |
.
|
1992 | ||
[15] |
1992. Axiomatizing Fixpoint Logics. |
.
|
[14] |
1992. NML-3 - A non-monotonic logic with explicit defaults. Journal of applied non-classical logics, 2(1):9–48. Éditions Hermès-Lavoisier. |
.
|
1991 | ||
[13] |
1991. On Strictly Arithmetical Completeness in Logics of Programs. We introduce and discuss a notion of strictly arithmetical completeness related to relative completeness of Cook (1978) and arithmetical completeness of Harel (1978). We present a powerful technique of obtaining strictly arithmetical axiomatizations of logics of programs. Given a model-theoretic semantics of a logic and a set of formulae defining (in a metalanguage) its nonclassical connectives, we automatically derive strictly arithmetically complete and sound proof systems for this logic. As examples of application of the technique we obtain new axiomatizations of algorithmic logic, (concurrent) dynamic logic and temporal logic. |
.
|
1989 | ||
[12] |
1989. On Temporal Logic for Distributed Systems and its Application to Processes Communicating by Interrupts. Fundamenta Informaticae, 12(2):191–204. IOS Press. |
.
|
1988 | ||
[11] |
1988. Towards the Temporal Approach to Abstract Data Types. Fundamenta Informaticae, 11(1):49–64. IOS Press. |
.
|
[10] |
1988. An Incompleteness Result in Process Algebra. |
.
|
[9] |
1988. Propositional Description of Finite Cause-Effect Structures. An alternative method of describing semantics of cause-effect structures is presented. It is based on a model of discrete dynamic systems. The model is similar to a condition-event Petri net, differing in the way restrictions on the alterability of actions are imposed. |
.
|
[8] |
1988. Incompleteness of First-Order Temporal Logic with Until. The results presented in this paper concern the axiomatizability problem of first-order temporal logic with linear and discrete time. We show that the logic is incomplete, i.e., it cannot be provided with a finitistic and complete proof system. We show two incompleteness theorems. Although the first one is weaker (it assumes some first-order signature), we decided to present it, for its proof is much simpler and contains an interesting fact that finite sets are characterizable by means of temporal formulas. The second theorem shows that the logic is incomplete independently of any particular signature. |
.
|
1987 | ||
[7] |
1987. A Complete Axiomatic Characterization of First-Order Temporal Logic of Linear Time. As shown in (Szalas, 1986, 1986, 1987) there is no finitistic and complete axiomatization of First-Order Temporal Logic of linear and discrete time. In this paper we give an infinitary proof system for the logic. We prove that the proof system is sound and complete. We also show that any syntactically consistent temporal theory has a model. As a corollary we obtain that the downward theorem of Skolem, Löwenheim and Tarski holds in the case of the considered logic. |
.
|
[6] |
1987. Arithmetical Axiomatization of First-Order Temporal Logic. |
.
|
1986 | ||
[5] |
1986. Concerning the Semantic Consequence Relation in First-Order Temporal Logic. In this paper we consider the first-order temporal logic with linear and discrete time. We prove that the set of tautologies of this logic is not arithmetical (i.e., it is neither Σ^0_n nor Π^0_n for any natural number n). Thus we show that there is no finitistic and complete axiomatization of the considered logic. |
.
|
1985 | ||
[4] |
1985. A Note on PCI: Distributed Processes Communicating by Interrupts. |
.
|
[3] |
1985. Exception Handling in Parallel Computations. |
.
|
1984 | ||
[2] |
1984. On an Application of Algorithmic Theory of Stacks. Fundamenta Informaticae, 7(3):378–388. IOS Press. |
.
|
1981 | ||
[1] |
1981. Algorithmic Logic with Recursive Functions. Fundamenta Informaticae, 4(4):975–995. IOS Press. |
.
Page responsible: Patrick Doherty
Last updated: 2014-04-30