
AIICS Publications: PhD and Licentiate Theses

2024
[43] Fredrik Präntare. 2024.
Dividing the Indivisible: Algorithms, Empirical Advances, and Complexity Results for Value-Maximizing Combinatorial Assignment Problems.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #2382. Linköping University Electronic Press. 167 pages. ISBN: 9789180756006, 9789180756013.
Note: Funding: MIRAI; the Wallenberg AI, Autonomous Systems and Software Program; and the Knut and Alice Wallenberg Foundation.

Allocating resources, goods, agents (e.g., humans), expertise, production, and assets is one of the most influential and enduring cornerstone challenges at the intersection of artificial intelligence, operations research, politics, and economics. At its core—as highlighted by a number of seminal works [181, 164, 125, 32, 128, 159, 109, 209, 129, 131]—is a timeless question: How can we best allocate indivisible entities—such as objects, items, commodities, jobs, or personnel—so that the outcome is as valuable as possible, be it in terms of expected utility, fairness, or overall societal welfare? This thesis confronts this inquiry from multiple algorithmic viewpoints, focusing on the value-maximizing combinatorial assignment problem: the optimization challenge of partitioning a set of indivisibles among alternatives to maximize a given notion of value. To exemplify, consider a scenario where an international aid organization is responsible for distributing medical resources, such as ventilators and vaccines, and allocating medical personnel, including doctors and nurses, to hospitals during a global health crisis. These resources and personnel—inherently indivisible and non-fragmentable—necessitate an allocation process designed to optimize utility and fairness. Rather than using manual interventions and ad-hoc methods, which often lack precision and scalability, a rigorously developed and demonstrably performant approach can often be more desirable. With this type of challenge in mind, our thesis begins through the lens of computational complexity theory, commencing with an initial insight: In general, under prevailing complexity-theoretic assumptions (P ≠ NP), it is impossible to develop an efficient method guaranteeing a value-maximizing allocation that is better than “arbitrarily bad”, even under severely constraining limitations and simplifications. This inapproximability result not only underscores the problem’s complexity but also sets the stage for our ensuing work, wherein we develop novel algorithms and concise representations for utilitarian, egalitarian, and Nash welfare maximization problems, aimed at maximizing average, equitable, and balanced utility, respectively. For example, we introduce the synergy hypergraph—a hypergraph-based characterization of utilitarian combinatorial assignment—which allows us to prove several new state-of-the-art complexity results to help us better understand how hard the problem is. We then provide efficient approximation algorithms and (non-trivial) exponential-time algorithms for many hard cases. In addition, we explore complexity bounds for generalizations with interdependent effects between allocations, known as externalities in economics. Natural applications in team formation, resource allocation, and combinatorial auctions are also discussed; and a novel “bootstrapped” dynamic-programming method is introduced. We then transition from theory to practice as we shift our focus to the utilitarian variant of the problem—an incarnation of the problem particularly applicable to many real-world scenarios. For this variation, we achieve substantial empirical algorithmic improvements over existing methods, including industry-grade solvers.
This work culminates in the development of a new hybrid algorithm, combining dynamic programming with branch-and-bound techniques, that is demonstrably faster than all competing methods in finding both optimal and near-optimal allocations across a wide range of experiments. For example, it solves one of our most challenging problem sets in just 0.25% of the time required by the previous best methods, representing an improvement of approximately 2.6 orders of magnitude in processing speed. Additionally, we successfully integrate and commercialize our algorithm into Europa Universalis IV—one of the world’s most popular strategy games, with a player base numbering in the millions. In this dynamic and challenging setting, our algorithm efficiently manages complex strategic agent interactions, highlighting its potential to improve computational efficiency and decision-making in real-time, multi-agent scenarios. This also represents one of the first instances where a combinatorial assignment algorithm has been applied in a commercial context. We then introduce and evaluate several highly efficient heuristic algorithms. These algorithms—while lacking provable quality guarantees—employ general-purpose heuristic and random-sampling techniques to significantly outperform existing methods in both speed and quality in large-input scenarios. For instance, in one of our most challenging problem sets, involving a thousand indivisibles, our best algorithm generates outcomes that are 99.5% of the expected optimal in just seconds. This performance is particularly noteworthy when compared to state-of-the-art industry-grade solvers, which struggle to produce any outcomes under similar conditions. Further advancing our work, we employ novel machine learning techniques to generate new heuristics that outperform the best hand-crafted ones. This approach not only showcases the potential of machine learning in combinatorial optimization but also sets a new standard for combinatorial assignment heuristics to be used in real-world scenarios demanding rapid, high-quality decisions, such as in logistics, real-time tactics, and finance. In summary, this thesis bridges many gaps between the theoretical and practical aspects of combinatorial assignment problems such as those found in coalition formation, combinatorial auctions, welfare-maximizing resource allocation, and assignment problems. It deepens the understanding of the computational complexities involved and provides effective and improved solutions for longstanding real-world challenges across various sectors—providing new algorithms applicable in fields ranging from artificial intelligence to logistics, finance, and digital entertainment, while simultaneously paving the way for future work in computational problem-solving and optimization.
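
As a deliberately naive illustration of the utilitarian variant discussed above, the sketch below enumerates every possible assignment of a few indivisible items to alternatives and keeps the most valuable one. The item names, the value function, and the exhaustive m^n search are invented for illustration; the thesis's dynamic-programming, branch-and-bound, and heuristic algorithms exist precisely because this brute-force approach collapses beyond toy sizes.

```python
# Minimal sketch: utilitarian combinatorial assignment by exhaustive search.
# The items, alternatives, and value function below are invented for
# illustration; realistic instances need the specialised algorithms in the thesis.
from itertools import product

items = ["ventilator", "vaccine_batch", "doctor", "nurse"]
alternatives = ["hospital_A", "hospital_B"]

def bundle_value(alternative, bundle):
    # Hypothetical additive-with-synergy values, just to make the example run.
    base = {"hospital_A": 1.0, "hospital_B": 1.5}[alternative] * len(bundle)
    synergy = 2.0 if {"doctor", "nurse"} <= set(bundle) else 0.0
    return base + synergy

best_value, best_assignment = float("-inf"), None
# Every mapping item -> alternative (m**n possibilities).
for choice in product(range(len(alternatives)), repeat=len(items)):
    bundles = {a: [] for a in alternatives}
    for item, idx in zip(items, choice):
        bundles[alternatives[idx]].append(item)
    value = sum(bundle_value(a, b) for a, b in bundles.items())
    if value > best_value:
        best_value, best_assignment = value, bundles

print(best_value, best_assignment)
```

With m alternatives and n indivisibles the loop visits m^n assignments, which is why the thesis's exact and heuristic methods are needed for anything beyond a handful of items.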

[42] Jenny Kunz. 2024.
Understanding Large Language Models: Towards Rigorous and Targeted Interpretability Using Probing Classifiers and Self-Rationalisation.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #2364. Linköping University Electronic Press. 81 pages. ISBN: 9789180754705, 9789180754712.
DOI: 10.3384/9789180754712.
Fulltext: https://doi.org/10.3384/9789180754712
preview image: https://liu.diva-portal.org/smash/get/di...

Large language models (LLMs) have become the base of many natural language processing (NLP) systems due to their performance and easy adaptability to various tasks. However, much about their inner workings is still unknown. LLMs have many millions or billions of parameters, and large parts of their training happen in a self-supervised fashion: They simply learn to predict the next word, or missing words, in a sequence. This is effective for picking up a wide range of linguistic, factual and relational information, but it implies that it is not trivial what exactly is learned, and how it is represented within the LLM. In this thesis, I present our work on methods contributing to better understanding LLMs. The work can be grouped into two approaches. The first lies within the field of interpretability, which is concerned with understanding the internal workings of the LLMs. Specifically, we analyse and refine a tool called probing classifiers that inspects the intermediate representations of LLMs, focusing on what roles the various layers of the neural model play. This helps us to get a global understanding of how information is structured in the model. I present our work on assessing and improving the probing methodologies. We developed a framework to clarify the limitations of past methods, showing that all common controls are insufficient. Based on this, we proposed more restrictive probing setups by creating artificial distribution shifts. We developed new metrics for the evaluation of probing classifiers that move the focus from the overall information that the layer contains to differences in information content across the LLM. The second approach is concerned with explainability, specifically with self-rationalising models that generate free-text explanations along with their predictions. This is an instance of local understandability: We obtain justifications for individual predictions. In this setup, however, the generation of the explanations is just as opaque as the generation of the predictions. Therefore, our work in this field focuses on better understanding the properties of the generated explanations. We evaluate the downstream performance of a classifier with explanations generated by different model pipelines and compare it to human ratings of the explanations. Our results indicate that the properties that increase the downstream performance differ from those that humans appreciate when evaluating an explanation. Finally, we annotate explanations generated by an LLM for properties that human explanations typically have and discuss the effects those properties have on different user groups. While a detailed understanding of the inner workings of LLMs is still unfeasible, I argue that the techniques and analyses presented in this work can help to better understand LLMs, the linguistic knowledge they encode and their decision-making process. Together with knowledge about the models’ architecture, training data and training objective, such techniques can help us develop a robust high-level understanding of LLMs that can guide decisions on their deployment and potential improvements.
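
To make the probing-classifier idea concrete, the minimal sketch below fits a linear probe per layer on frozen representations and compares layers by held-out accuracy. The random "representations" and the synthetic label signal are stand-ins for real LLM hidden states, and none of the stricter controls, distribution shifts, or metrics developed in the thesis are reproduced here.

```python
# Minimal probing-classifier sketch: fit a linear probe per layer on frozen
# representations and compare layers by held-out accuracy. The representations
# here are random stand-ins for real LLM hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_examples, n_layers, dim = 500, 4, 64
labels = rng.integers(0, 2, size=n_examples)

for layer in range(n_layers):
    # Pretend deeper layers encode the property more linearly (synthetic signal).
    signal = labels[:, None] * (layer / n_layers)
    reps = rng.normal(size=(n_examples, dim)) + signal
    X_train, X_test, y_train, y_test = train_test_split(
        reps, labels, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"layer {layer}: probe accuracy = {probe.score(X_test, y_test):.2f}")
```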

2023
[41] Johan Källström. 2023.
Reinforcement Learning for Improved Utility of Simulation-Based Training.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #2351. Linköping University Electronic Press. 168 pages. ISBN: 9789180753661, 9789180753678.
DOI: 10.3384/9789180753678.
Note: 2023-11-02: The thesis was first published online. The online published version reflects the printed version. 2023-11-15: The PDF-file has been replaced by a new file from LiU-Print to enable speech-to-text functionality and to allow text copying. Before this date the PDF has been downloaded 40 times. Funding: This work was partially supported by the Swedish Governmental Agency for Innovation Systems (grant NFFP7/2017-04885), and the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. The computations were enabled by the resources provided by the Swedish National Infrastructure for Computing (SNIC) at Tetralith/NSC partially funded by the Swedish Research Council through grant agreement no. 2020/5-230, as well as the supercomputing resource Berzelius provided by the National Supercomputer Centre at Linköping University and the Knut and Alice Wallenberg foundation.
Fulltext: https://doi.org/10.3384/9789180753678
preview image: https://liu.diva-portal.org/smash/get/di...

Team training in complex domains often requires a substantial number of resources, e.g. vehicles, machines, and role-players. For this reason, it may be difficult to realise efficient and effective training scenarios in a real-world setting. Instead, part of the training can be conducted in synthetic, computer-generated environments. In these environments trainees can operate simulators instead of real vehicles, while synthetic actors can replace human role-players to increase the complexity of the simulated scenario at low operating cost. However, constructing behaviour models for synthetic actors is challenging, especially for the end users, who typically do not have expertise in artificial intelligence. In this dissertation, we study how machine learning can be used to simplify the construction of intelligent agents for simulation-based training. A simulation-based air combat training system is used as a case study. The contributions of the dissertation are divided into two parts. The first part aims at improving the understanding of reinforcement learning in the domain of simulation-based training. First, a user study is conducted to identify important capabilities and characteristics of learning agents that are intended to support training of fighter pilots. It is identified that one of the most important capabilities of learning agents in the context of simulation-based training is that their behaviour can be adapted to different phases of training, as well as to the training needs of individual human trainees. Second, methods for learning how to coordinate with other agents are studied in simplified training scenarios, to investigate how the design of the agent’s observation space, action space, and reward signal affects the performance of learning. It is identified that temporal abstractions and hierarchical reinforcement learning can improve the efficiency of learning, while also providing support for modelling of doctrinal behaviour. In more complex settings, curriculum learning and related methods are expected to help find novel tactics even when sparse, abstract reward signals are used. Third, based on the results from the user study and the practical experiments, a system concept for a user-adaptive training system is developed to support further research. The second part of the contributions focuses on methods for utility-based multi-objective reinforcement learning, which incorporates knowledge of the user’s utility function in the search for policies that balance multiple conflicting objectives. Two new agents for multi-objective reinforcement learning are proposed: the Tunable Actor (T-Actor) and the Multi-Objective Dreamer (MO-Dreamer). T-Actor provides decision support to instructors by learning a set of Pareto optimal policies, represented by a single neural network conditioned on objective preferences. This enables tuning of the agent’s behaviour to fit trainees’ current training needs. Experimental evaluations in gridworlds and in the target system show that T-Actor reduces the number of training steps required for learning. MO-Dreamer adapts online to changes in users’ utility, e.g. changes in training needs. It does so by learning a model of the environment, which it can use for anticipatory rollouts with a diverse set of utility functions to explore which policy to follow to optimise the return for a given set of objective preferences.
An experimental evaluation shows that MO-Dreamer outperforms prior model-free approaches in terms of experienced regret, for frequent as well as sparse changes in utility. Overall, the research conducted in this dissertation contributes to improved knowledge about how to apply machine learning methods to the construction of simulation-based training environments. While our focus was on air combat training, the results are general enough to be applicable in other domains.
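
A minimal sketch of the utility-based idea behind agents such as T-Actor: per-objective returns of candidate behaviours are combined with a linear preference (utility) vector, and the preferences determine which behaviour to execute. The return matrix and preference vectors are invented; T-Actor itself conditions a single network on the preferences rather than selecting among separate policies.

```python
# Minimal sketch of utility-based policy selection over multiple objectives:
# each candidate policy has an estimated return per objective, and a linear
# utility (preference) vector picks the policy to execute. The numbers are
# invented for illustration.
import numpy as np

# Rows: candidate policies. Columns: objectives, e.g. (training value, realism, safety).
returns = np.array([
    [0.9, 0.2, 0.5],
    [0.5, 0.8, 0.6],
    [0.3, 0.4, 0.9],
])

def select_policy(preference_weights):
    utilities = returns @ np.asarray(preference_weights)
    return int(np.argmax(utilities)), utilities

for prefs in [(1.0, 0.0, 0.0), (0.2, 0.3, 0.5)]:
    idx, utils = select_policy(prefs)
    print(f"preferences {prefs}: choose policy {idx}, utilities {np.round(utils, 2)}")
```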

[40] Mariusz Wzorek. 2023.
Selected Functionalities for Autonomous Intelligent Systems in Public Safety Scenarios.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #2322. Linköping University Electronic Press. 69 pages. ISBN: 9789180751957, 9789180751964.
DOI: 10.3384/9789180751964.
Note: Funding: This work has been supported by the ELLIIT Network Organization for Information and Communication Technology, Sweden (Project B09), and Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, in addition to the sources already acknowledged in the individual papers.
Fulltext: https://doi.org/10.3384/9789180751964
preview image: https://liu.diva-portal.org/smash/get/di...

The public safety and security application domain is an important research area that provides great benefits to society. Within this application domain, governmental and non-governmental agencies, such as blue light organizations (e.g., police or firefighters), are often tasked with essential life-saving activities when responding to fallouts of natural or man-made disasters, such as earthquakes, floods, or hurricanes. Recent technological advances in artificial intelligence and robotics offer novel tools that first responder teams can use to shorten response times and improve the effectiveness of rescue efforts. Modern first responder teams are increasingly being supported by autonomous intelligent systems such as ground robots or Unmanned Aerial Vehicles (UAVs). However, even though many commercial systems are available and used in real deployments, many important research questions still need to be answered. These relate to both autonomous intelligent system design and development in addition to how such systems can be used in the context of public safety applications. This thesis presents a collection of functionalities for autonomous intelligent systems in public safety scenarios. Contributions in this thesis are divided into two parts. In Part 1, we focus on the design of navigation frameworks for UAVs for solving the problem of autonomous navigation in dynamic or changing environments. In particular, we present several novel ideas for integrating motion planning, control, and perception functionalities within robotic architectures to solve navigation tasks efficiently. In Part 2, we concentrate on an important service that autonomous intelligent systems can offer to first responder teams. Specifically, we focus on base functionalities required for UAV-based rapid ad hoc communication infrastructure deployment in the initial phases of rescue operations. The main idea is to use heterogeneous teams of UAVs to deploy communication nodes that include routers and are used to establish ad hoc Wireless Mesh Networks (WMNs). We consider fundamental problems related to WMN network design, such as calculating node placements, and propose efficient novel algorithms to solve these problems. Considerable effort has been put into applying the developed techniques in real systems and scenarios. Thus, the approaches presented in this thesis have been validated through extensive simulations and real-world experimentation with various UAV systems. Several contributions presented in the thesis are generic and can be adapted to other autonomous intelligent system types and application domains other than public safety and security.
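
As a back-of-the-envelope illustration of one node-placement subproblem mentioned above, the sketch below computes how many router-carrying UAVs are needed to bridge a gateway and a target when each wireless link has a maximum range, and spaces them evenly along the segment. The distances are invented and the straight-line, obstacle-free setting is a strong simplification of the WMN design problems treated in the thesis.

```python
# Back-of-the-envelope sketch: relay routers needed to bridge a gateway and a
# target when each wireless link has a maximum range (straight line, no
# obstacles). Real node placement must handle obstacles, link quality, and
# multiple targets, as discussed in the thesis.
import math

def relays_needed(distance_m, link_range_m):
    # ceil(d / r) links are required, hence that many minus one relay nodes.
    return max(0, math.ceil(distance_m / link_range_m) - 1)

def evenly_spaced_positions(distance_m, n_relays):
    step = distance_m / (n_relays + 1)
    return [round(step * (i + 1), 1) for i in range(n_relays)]

distance, link_range = 2300.0, 600.0           # metres (made-up values)
n = relays_needed(distance, link_range)        # ceil(2300/600) - 1 = 3 relays
print(n, evenly_spaced_positions(distance, n)) # 3 [575.0, 1150.0, 1725.0]
```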

[39] Filip Strömbäck. 2023.
Teaching and Learning Concurrent Programming in the Shared Memory Model.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #2280. Linköping University Electronic Press. 121 pages. ISBN: 9789180750004, 9789180750011.
DOI: 10.3384/9789180750011.
Note: Funding agencies: The work in this thesis is partly funded by the Graduate School in Computer Science (CUGS).
Fulltext: https://doi.org/10.3384/9789180750011
preview image: http://liu.diva-portal.org/smash/get/div...

The performance of computational devices is steadily increasing. Recently, the main contributor to the increasing performance has been an increasing number of cores rather than increased performance for individual cores. This trend is not only visible in high-end devices, such as powerful workstations, but also in low-end devices such as smartphones and many embedded devices. Due to this trend, and to the ubiquity of multicore systems, it is increasingly important for computer science students to have at least some familiarity with working with concurrent programs, as they are likely to encounter such programs in some shape or form in their future professional careers. In this thesis, we use the term concurrent programming to emphasize the focus on concurrency in a programming context, as opposed to concurrency in isolation. Prior work has shown that students struggle with learning concurrent programming. This is not surprising in and of itself, as concurrency is generally considered to be a difficult topic. To help students learn concurrent programming, it is important to consider why they find the topic difficult. As such, the first part of this thesis aims to gain better insights into these difficulties. This is done partly by studying prior work in the literature and partly by conducting new research. The results show that the struggles are not only due to the difficulty involved in understanding the new concepts related to concurrency. Rather, a large portion of students’ struggles can be attributed to the fact that the non-determinism in concurrent programs requires a more formal approach to programming. The transition from an informal approach, where students are able to rely on ad-hoc testing, to a more formal approach is difficult in and of itself, but it also highlights problems in students’ understanding of fundamental programming skills. For example, without a detailed enough understanding of scope, aliasing and references it is difficult to reason about what data is shared between threads. In light of these results, the remainder of this thesis describes ways of helping students learn concurrent programming. This is achieved by developing a program visualization tool, called Progvis, which is capable of visualizing the behavior of concurrent programs written in the C language. Progvis is unique in the sense that it focuses not only on concurrency, but also illustrates how concurrency interacts with the fundamental concepts that were found to be problematic. As such, it is able to give students a complete and accurate picture of the visualized program’s behavior. To further help students, Progvis also includes a model-checker that can automatically find concurrency issues, thereby helping students to easily see whether their reasoning about the program’s behavior was correct or not. The results suggest that the visualizations generated by Progvis have the potential to help students learn concurrent programming. A small pilot study showed that students who used Progvis were able to solve concurrency problems faster and arrived at better solutions compared to students who only used traditional tools. To evaluate long-term effects on learning, Progvis was also integrated into a course on concurrency and operating systems. Comparing the students’ performance on the final exam showed that the students who had used Progvis were more likely to correctly associate their synchronization with the associated data compared to students who had not used Progvis during the course.
Even though no significant increase was observed in students’ overall performance, these results suggest that using Progvis does indeed have a positive impact on students’ learning.
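
The lost-update situation that tools like Progvis are built to expose can be shown in a few lines; the sketch below uses Python threads rather than the C programs the tool visualizes. Two threads increment a shared counter: without the lock the read-modify-write steps may interleave and drop updates, while the mutex-protected version is correct.

```python
# Classic shared-memory example: two threads increment a shared counter.
# Without the lock, the read-modify-write sequence can interleave and lose
# updates; Progvis visualises the analogous situation in C programs.
import threading

counter = 0
lock = threading.Lock()

def worker(use_lock, iterations=100_000):
    global counter
    for _ in range(iterations):
        if use_lock:
            with lock:
                counter += 1      # protected read-modify-write
        else:
            counter += 1          # unprotected: updates may be lost

for use_lock in (False, True):
    counter = 0
    threads = [threading.Thread(target=worker, args=(use_lock,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"lock={use_lock}: counter = {counter} (expected 200000)")
```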

2022
[38] Mattias Tiger. 2022.
Safety-Aware Autonomous Systems: Preparing Robots for Life in the Real World.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #2262. Linköping University Electronic Press. 244 pages. ISBN: 9789179295011, 9789179295028.
DOI: 10.3384/9789179295028.
Fulltext: https://doi.org/10.3384/9789179295028
preview image: http://liu.diva-portal.org/smash/get/div...

Real-world autonomous systems are expected to be increasingly deployed and operating in real-world environments over the coming decades. Autonomous systems such as AI-enabled robotic systems and intelligent transportation systems will alleviate mundane human work, provide new services, and facilitate a smarter and more flexible infrastructure. The real-world environments affected include workplaces, public spaces, and homes. To ensure safe operations, in, for example, the vicinity of people, it is paramount that the autonomous systems are explainable, behave predictably, and can handle that the real world is ever changing and only partially observable. To deal with a dynamic and changing environment, consistently and safely, it is necessary to have sound uncertainty management. Explicit uncertainty quantification is fundamental to providing probabilistic safety guarantees that can also be monitored during runtime to ensure safety in new situations. It is further necessary for well-grounded prediction and classification uncertainty, for achieving task effectiveness with high robustness and for dealing with unknown unknowns, such as world model divergence, using anomaly detection. This dissertation focuses on the notion of motion in terms of trajectories, from recognizing – to anticipating – to generating – to monitoring that it fulfills expectations such as predictability or other safety constraints during runtime. Efficiency, effectiveness, and safety are competing qualities, and in safety-critical applications the required degree of safety makes it very challenging to reach useful levels of efficiency and effectiveness. To this end, a holistic perspective on agent motion in complex and dynamic environments is investigated. This work leverages synergies in well-founded formalized interactions and integration between learning, reasoning, and interaction, and demonstrates jointly efficient, effective, and safe capabilities for autonomous systems in safety-critical situations.
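
One very small instance of runtime safety monitoring with explicit uncertainty, in the spirit of the trajectory monitoring described above: sampled predictions of a future trajectory are used to estimate the probability of violating a clearance constraint, and the estimate is compared with an acceptable risk bound. The dynamics, noise model, obstacle, and threshold are all illustrative assumptions.

```python
# Minimal sketch of probabilistic runtime monitoring: estimate, from sampled
# trajectory predictions, the probability of violating a clearance constraint
# and compare it with an acceptable risk bound. Dynamics and numbers are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
obstacle, clearance, risk_bound = np.array([4.0, 0.0]), 1.0, 0.05

def sample_trajectory(start, velocity, steps=20, noise=0.15):
    positions, pos = [], np.array(start, dtype=float)
    for _ in range(steps):
        pos = pos + velocity + rng.normal(scale=noise, size=2)
        positions.append(pos.copy())
    return np.array(positions)

samples = [sample_trajectory(start=(0.0, 0.5), velocity=(0.3, 0.0)) for _ in range(1000)]
violations = [np.min(np.linalg.norm(traj - obstacle, axis=1)) < clearance for traj in samples]
p_violation = float(np.mean(violations))
print(f"P(clearance violated) = {p_violation:.3f}, "
      f"{'UNSAFE' if p_violation > risk_bound else 'acceptable'} at bound {risk_bound}")
```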

2020
[37] Olov Andersson. 2020.
Learning to Make Safe Real-Time Decisions Under Uncertainty for Autonomous Robots.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #2051. Linköping University Electronic Press. 55 pages. ISBN: 9789179298890.
DOI: 10.3384/diss.diva-163419.
Fulltext: https://doi.org/10.3384/diss.diva-163419
preview image: http://liu.diva-portal.org/smash/get/div...

Robots are increasingly expected to go beyond controlled environments in laboratories and factories, to act autonomously in real-world workplaces and public spaces. Autonomous robots navigating the real world have to contend with a great deal of uncertainty, which poses additional challenges. Uncertainty in the real world accrues from several sources. Some of it may originate from imperfect internal models of reality. Other uncertainty is inherent, a direct side effect of partial observability induced by sensor limitations and occlusions. Regardless of the source, the resulting decision problem is unfortunately computationally intractable under uncertainty. This poses a great challenge as the real world is also dynamic. It will not pause while the robot computes a solution. Autonomous robots navigating among people, for example in traffic, need to be able to make split-second decisions. Uncertainty is therefore often neglected in practice, with potentially catastrophic consequences when something unexpected happens. The aim of this thesis is to leverage recent advances in machine learning to compute safe real-time approximations to decision-making under uncertainty for real-world robots. We explore a range of methods, from probabilistic to deep learning, as well as different combinations with optimization-based methods from robotics, planning and control. Driven by applications in robot navigation, and grounded in experiments with real autonomous quadcopters, we address several parts of this problem. From reducing uncertainty by learning better models, to directly approximating the decision problem itself, all the while attempting to satisfy both the safety and real-time requirements of real-world autonomy.

2019
[36] Daniel de Leng. 2019.
Robust Stream Reasoning Under Uncertainty.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #2006. Linköping University Electronic Press. 207 pages. ISBN: 9789176850138.
DOI: 10.3384/diss.diva-157633.
Fulltext: https://doi.org/10.3384/diss.diva-157633
preview image: http://liu.diva-portal.org/smash/get/div...

Vast amounts of data are continually being generated by a wide variety of data producers. This data ranges from quantitative sensor observations produced by robot systems to complex unstructured human-generated texts on social media. With data being so abundant, the ability to make sense of these streams of data through reasoning is of great importance. Reasoning over streams is particularly relevant for autonomous robotic systems that operate in physical environments. They commonly observe this environment through incremental observations, gradually refining information about their surroundings. This makes robust management of streaming data and their refinement an important problem. Many contemporary approaches to stream reasoning focus on the issue of querying data streams in order to generate higher-level information by relying on well-known database approaches. Other approaches apply logic-based reasoning techniques, which rarely consider the provenance of their symbolic interpretations. In this work, we integrate techniques for logic-based stream reasoning with the adaptive generation of the state streams needed to do the reasoning over. This combination deals with both the challenge of reasoning over uncertain streaming data and the problem of robustly managing streaming data and their refinement. The main contributions of this work are (1) a logic-based temporal reasoning technique based on path checking under uncertainty that combines temporal reasoning with qualitative spatial reasoning; (2) an adaptive reconfiguration procedure for generating and maintaining a data stream required to perform spatio-temporal stream reasoning over; and (3) integration of these two techniques into a stream reasoning framework. The proposed spatio-temporal stream reasoning technique is able to reason with intertemporal spatial relations by leveraging landmarks. Adaptive state stream generation allows the framework to adapt to situations in which the set of available streaming resources changes. Management of streaming resources is formalised in the DyKnow model, which introduces a configuration life-cycle to adaptively generate state streams. The DyKnow-ROS stream reasoning framework is a concrete realisation of this model that extends the Robot Operating System (ROS). DyKnow-ROS has been deployed on the SoftBank Robotics NAO platform to demonstrate the system's capabilities in a case study on run-time adaptive reconfiguration. The results show that the proposed system - by combining reasoning over and reasoning about streams - can robustly perform stream reasoning, even when the availability of streaming resources changes.
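
A toy version of the progression idea that underlies this kind of logic-based stream reasoning: a temporal formula is rewritten against each incoming state, so monitoring only ever inspects the newest state. Only two operators over propositional atoms are supported, and the state stream is invented; the thesis treats a metric temporal logic, spatial relations, and uncertainty.

```python
# Toy formula progression over a state stream: each new state rewrites the
# formula, so only the latest state is ever inspected. Only 'always' and
# 'eventually' over propositional atoms are supported here.
def progress(formula, state):
    if formula in (True, False):
        return formula
    kind = formula[0]
    if kind == "atom":
        return bool(state.get(formula[1], False))
    if kind == "always":                      # []p  ->  p and []p
        now = progress(formula[1], state)
        return False if now is False else formula
    if kind == "eventually":                  # <>p  ->  p or <>p
        now = progress(formula[1], state)
        return True if now is True else formula
    raise ValueError(f"unknown operator: {kind}")

safety = ("always", ("atom", "altitude_ok"))
stream = [{"altitude_ok": True}, {"altitude_ok": True}, {"altitude_ok": False}]

for t, state in enumerate(stream):
    safety = progress(safety, state)
    print(f"t={t}: formula is now {safety}")
    if safety is False:
        print("safety requirement violated")
        break
```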

2017
[35] Daniel de Leng. 2017.
Spatio-Temporal Stream Reasoning with Adaptive State Stream Generation.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1783. Linköping University Electronic Press. 133 pages. ISBN: 9789176854761.
DOI: 10.3384/lic.diva-138645.
Note: The series name Linköping Studies in Science and Technology Licentiate Thesis is incorrect. The correct series name is Linköping Studies in Science and Technology Thesis.
cover: http://liu.diva-portal.org/smash/get/div...
preview image: http://liu.diva-portal.org/smash/get/div...

A lot of today's data is generated incrementally over time by a large variety of producers. This data ranges from quantitative sensor observations produced by robot systems to complex unstructured human-generated texts on social media. With data being so abundant, making sense of these streams of data through reasoning is challenging. Reasoning over streams is particularly relevant for autonomous robotic systems that operate in a physical environment. They commonly observe this environment through incremental observations, gradually refining information about their surroundings. This makes robust management of streaming data and its refinement an important problem. Many contemporary approaches to stream reasoning focus on the issue of querying data streams in order to generate higher-level information by relying on well-known database approaches. Other approaches apply logic-based reasoning techniques, which rarely consider the provenance of their symbolic interpretations. In this thesis, we integrate techniques for logic-based spatio-temporal stream reasoning with the adaptive generation of the state streams needed to do the reasoning over. This combination deals with both the challenge of reasoning over streaming data and the problem of robustly managing streaming data and its refinement. The main contributions of this thesis are (1) a logic-based spatio-temporal reasoning technique that combines temporal reasoning with qualitative spatial reasoning; (2) an adaptive reconfiguration procedure for generating and maintaining a data stream required to perform spatio-temporal stream reasoning over; and (3) integration of these two techniques into a stream reasoning framework. The proposed spatio-temporal stream reasoning technique is able to reason with intertemporal spatial relations by leveraging landmarks. Adaptive state stream generation allows the framework to adapt in situations in which the set of available streaming resources changes. Management of streaming resources is formalised in the DyKnow model, which introduces a configuration life-cycle to adaptively generate state streams. The DyKnow-ROS stream reasoning framework is a concrete realisation of this model that extends the Robot Operating System (ROS). DyKnow-ROS has been deployed on the SoftBank Robotics NAO platform to demonstrate the system's capabilities in the context of a case study on run-time adaptive reconfiguration. The results show that the proposed system – by combining reasoning over and reasoning about streams – can robustly perform spatio-temporal stream reasoning, even when the availability of streaming resources changes.

[34] Olov Andersson. 2017.
Methods for Scalable and Safe Robot Learning.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1780. Linköping University Electronic Press. 37 pages. ISBN: 9789176854907.
DOI: 10.3384/lic.diva-138398.
Fulltext: https://doi.org/10.3384/lic.diva-138398
cover: http://liu.diva-portal.org/smash/get/div...
preview image: http://liu.diva-portal.org/smash/get/div...

Robots are increasingly expected to go beyond controlled environments in laboratories and factories, to enter real-world public spaces and homes. However, robot behavior is still usually engineered for narrowly defined scenarios. To manually encode robot behavior that works within complex real world environments, such as busy work places or cluttered homes, can be a daunting task. In addition, such robots may require a high degree of autonomy to be practical, which imposes stringent requirements on safety and robustness. The aim of this thesis is to examine methods for automatically learning safe robot behavior, lowering the costs of synthesizing behavior for complex real-world situations. To avoid task-specific assumptions, we approach this from a data-driven machine learning perspective. The strength of machine learning is its generality, given sufficient data it can learn to approximate any task. However, being embodied agents in the real-world, robots pose a number of difficulties for machine learning. These include real-time requirements with limited computational resources, the cost and effort of operating and collecting data with real robots, as well as safety issues for both the robot and human bystanders. While machine learning is general by nature, overcoming the difficulties with real-world robots outlined above remains a challenge. In this thesis we look for a middle ground on robot learning, leveraging the strengths of both data-driven machine learning, as well as engineering techniques from robotics and control. This includes combining data-driven world models with fast techniques for planning motions under safety constraints, using machine learning to generalize such techniques to problems with high uncertainty, as well as using machine learning to find computationally efficient approximations for use on small embedded systems. We demonstrate such behavior synthesis techniques with real robots, solving a class of difficult dynamic collision avoidance problems under uncertainty, such as induced by the presence of humans without prior coordination. Initially using online planning offloaded to a desktop CPU, and ultimately as a deep neural network policy embedded on board a quadcopter.
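
One ingredient mentioned above, reduced to a toy: supervised learning replaces an expensive decision procedure with a cheap function approximator suitable for limited onboard hardware. The "expensive_policy" function is a stand-in for an online planner, and the network size and data are arbitrary choices for illustration.

```python
# Toy sketch of approximating an expensive decision procedure with a small
# neural network for use on limited hardware. The "expensive_policy" below is
# a stand-in for an online planner/optimiser.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_policy(state):
    # Placeholder for a costly optimisation; here just a smooth nonlinear map.
    return np.tanh(state[:, :2]) - 0.1 * state[:, 2:]

states = rng.uniform(-2.0, 2.0, size=(5000, 4))        # e.g. position + velocity
actions = expensive_policy(states)                      # "planner" labels

approx = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
approx.fit(states, actions)

test = rng.uniform(-2.0, 2.0, size=(200, 4))
err = np.mean(np.abs(approx.predict(test) - expensive_policy(test)))
print(f"mean absolute action error of the distilled policy: {err:.3f}")
```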

2015
[33] Håkan Warnquist. 2015.
Troubleshooting Trucks: Automated Planning and Diagnosis.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #1691. Linköping University Electronic Press. 79 pages. ISBN: 978-91-7685-993-3.
DOI: 10.3384/diss.diva-119445.
cover: http://liu.diva-portal.org/smash/get/div...
preview image: http://liu.diva-portal.org/smash/get/div...

This thesis considers computer-assisted troubleshooting of heavy vehicles such as trucks and buses. In this setting, the person that is troubleshooting a vehicle problem is assisted by a computer that is capable of listing possible faults that can explain the problem and gives recommendations of which actions to take in order to solve the problem such that the expected cost of restoring the vehicle is low. To achieve this, such a system must be capable of solving two problems: the diagnosis problem of finding which faults are possible and the decision problem of deciding which action should be taken. The diagnosis problem has been approached using Bayesian network models. Frameworks have been developed for the case when the vehicle is in the workshop only and for remote diagnosis when the vehicle is monitored during longer periods of time. The decision problem has been solved by creating planners that select actions such that the expected cost of repairing the vehicle is minimized. New methods, algorithms, and models have been developed for improving the performance of the planner. The theory developed has been evaluated on models of an auxiliary braking system, a fuel injection system, and an engine temperature control and monitoring system.
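
A textbook special case of the decision problem described above: if exactly one of a set of candidate repairs is needed, and each candidate has a fault probability and a repair-and-verify cost, working through candidates in decreasing probability-to-cost ratio minimises the expected cost. The numbers are invented, and the thesis's Bayesian-network diagnosis and planners go well beyond this single-fault setting.

```python
# Textbook special case of cost-efficient troubleshooting: with a single fault
# among independent candidates, attending to candidates in decreasing
# probability/cost order minimises expected cost. Numbers are invented.
candidates = {                      # fault probability, repair-and-verify cost
    "brake_valve":    (0.50, 4.0),
    "wiring_harness": (0.30, 1.0),
    "control_unit":   (0.20, 9.0),
}

def expected_cost(order):
    remaining_prob, total = 1.0, 0.0
    for name in order:
        p, c = candidates[name]
        total += c * remaining_prob     # we pay c only if the fault is not yet found
        remaining_prob -= p
    return total

greedy = sorted(candidates, key=lambda n: candidates[n][0] / candidates[n][1], reverse=True)
print("greedy order:", greedy, "expected cost:", expected_cost(greedy))
print("naive order: ", list(candidates), "expected cost:", expected_cost(list(candidates)))
```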

[32] Mikael Nilsson. 2015.
Efficient Temporal Reasoning with Uncertainty.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1722. Linköping University Electronic Press. 116 pages. ISBN: 9789176859919.
DOI: 10.3384/lic.diva-119409.
cover: http://liu.diva-portal.org/smash/get/div...
preview image: http://liu.diva-portal.org/smash/get/div...

Automated Planning is an active area within Artificial Intelligence. With the help of computers we can quickly find good plans in complicated problem domains, such as planning for search and rescue after a natural disaster. When planning in realistic domains the exact duration of an action generally cannot be predicted in advance. Temporal planning therefore tends to use upper bounds on durations, with the explicit or implicit assumption that if an action happens to be executed more quickly, the plan will still succeed. However, this assumption is often false. If we finish cooking too early, the dinner will be cold before everyone is at home and can eat. Simple Temporal Networks with Uncertainty (STNUs) allow us to model such situations. An STNU-based planner must verify that the temporal problems it generates are executable, which is captured by the property of dynamic controllability (DC). If a plan is not dynamically controllable, adding actions cannot restore controllability. Therefore a planner should verify after each action addition whether the plan remains DC, and if not, backtrack. Verifying dynamic controllability of a full STNU is computationally intensive. Therefore, incremental DC verification algorithms are needed. We start by discussing two existing algorithms relevant to the thesis. These are the very first DC verification algorithm called MMV (by Morris, Muscettola and Vidal) and the incremental DC verification algorithm called FastIDC, which is based on MMV. We then show that FastIDC is not sound, sometimes labeling networks as dynamically controllable when they are not. We analyze the algorithm to pinpoint the cause and show how the algorithm can be modified to correctly and efficiently detect uncontrollable networks. In the next part we use insights from this work to re-analyze the MMV algorithm. This algorithm is pseudo-polynomial and was later subsumed by first an n⁵ algorithm and then an n⁴ algorithm. We show that the basic techniques used by MMV can in fact be used to create an n⁴ algorithm for verifying dynamic controllability, with a new termination criterion based on a deeper analysis of MMV. This means that there is now a comparatively easy way of implementing a highly efficient dynamic controllability verification algorithm. From a theoretical viewpoint, understanding MMV is important since it acts as a building block for all subsequent algorithms that verify dynamic controllability. In our analysis we also discuss a change in MMV which reduces the amount of regression needed in the network substantially. In the final part of the thesis we show that the FastIDC method can result in traversing part of a temporal network multiple times, with constraints slowly tightening towards their final values. As a result of our analysis we then present a new algorithm with an improved traversal strategy that avoids this behavior. The new algorithm, EfficientIDC, has a time complexity which is lower than that of FastIDC. We prove that it is sound and complete.
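
For readers unfamiliar with the underlying formalism, the sketch below handles the plain Simple Temporal Network case without uncertainty: each constraint becomes an edge in a distance graph, and the network is consistent exactly when that graph has no negative cycle, which a Bellman-Ford pass detects. The cooking-flavoured constraints are made up, and checking dynamic controllability of STNUs requires the considerably more involved algorithms analysed in the thesis.

```python
# Consistency of a plain Simple Temporal Network (no uncertainty): encode each
# constraint  t_j - t_i <= w  as edge (i, j, w) of a distance graph and run
# Bellman-Ford; a remaining relaxable edge means a negative cycle, i.e. an
# inconsistent network. STNU dynamic controllability needs stronger algorithms.
def stn_consistent(n_timepoints, edges):
    dist = [0.0] * n_timepoints          # virtual source at distance 0 to all
    for _ in range(n_timepoints - 1):
        for i, j, w in edges:
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
    return all(dist[i] + w >= dist[j] for i, j, w in edges)

# Timepoints: 0 = start cooking, 1 = dinner ready, 2 = everyone home.
edges = [
    (0, 1, 40),   # dinner ready at most 40 min after starting
    (1, 0, -20),  # ...and at least 20 min after starting
    (2, 1, 10),   # dinner ready no more than 10 min after everyone is home
    (0, 2, 45),   # everyone home within 45 min
]
print("consistent:", stn_consistent(3, edges))
```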

2011
[31] Piotr Rudol. 2011.
Increasing Autonomy of Unmanned Aircraft Systems Through the Use of Imaging Sensors.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1510. Linköping University Electronic Press. 96 pages. ISBN: 9789173930345.
cover: http://liu.diva-portal.org/smash/get/div...

The range of missions performed by Unmanned Aircraft Systems (UAS) has been steadily growing in the past decades thanks to continued development in several disciplines. The goal of increasing the autonomy of UASs is widening the range of tasks which can be carried out without, or with minimal, external help. This thesis presents methods for increasing specific aspects of autonomy of UASs operating both in outdoor and indoor environments where cameras are used as the primary sensors. First, a method for fusing color and thermal images for object detection, geolocation and tracking for UASs operating primarily outdoors is presented. Specifically, a method for building saliency maps where human body locations are marked as points of interest is described. Such maps can be used in emergency situations to increase the situational awareness of first responders or a robotic system itself. Additionally, the same method is applied to the problem of vehicle tracking. A generated stream of geographical locations of tracked vehicles increases situational awareness by allowing for qualitative reasoning about, for example, vehicles overtaking, entering or leaving crossings. Second, two approaches to the UAS indoor localization problem in the absence of GPS-based positioning are presented. Both use cameras as the main sensors and enable autonomous indoor flight and navigation. The first approach takes advantage of cooperation with a ground robot to provide a UAS with its localization information. The second approach uses marker-based visual pose estimation where all computations are done onboard a small-scale aircraft which additionally increases its autonomy by not relying on external computational power.
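
The colour/thermal fusion idea in miniature: a thermal mask covering human body temperatures is intersected with a colour-based cue, and the combined mask acts as a saliency map of points of interest. The arrays stand in for registered thermal and colour images, and the thresholds and "colour classifier" are invented placeholders.

```python
# Miniature sketch of colour/thermal fusion for a saliency map: pixels that are
# both in a human-temperature range and flagged by a (here trivial) colour cue
# are marked as points of interest. The arrays stand in for registered images.
import numpy as np

rng = np.random.default_rng(2)
thermal = rng.uniform(10.0, 25.0, size=(60, 80))                 # degrees Celsius
thermal[20:30, 40:50] = rng.uniform(30.0, 37.0, size=(10, 10))   # a warm region

colour_cue = np.zeros((60, 80), dtype=bool)                      # stand-in colour classifier
colour_cue[18:32, 38:52] = True

body_temperature = (thermal > 28.0) & (thermal < 40.0)
saliency = body_temperature & colour_cue                         # fused point-of-interest mask

ys, xs = np.nonzero(saliency)
print(f"{saliency.sum()} salient pixels, centroid = ({ys.mean():.1f}, {xs.mean():.1f})")
```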

[30] Mariusz Wzorek. 2011.
Selected Aspects of Navigation and Path Planning in Unmanned Aircraft Systems.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1509. Linköping University Electronic Press. 108 pages. ISBN: 9789173930376.
cover: http://liu.diva-portal.org/smash/get/div...

Unmanned aircraft systems (UASs) are an important future technology with early generations already being used in many areas of application encompassing both military and civilian domains. This thesis proposes a number of integration techniques for combining control-based navigation with more abstract path planning functionality for UASs. These techniques are empirically tested and validated using an RMAX helicopter platform used in the UASTechLab at Linköping University. Although the thesis focuses on helicopter platforms, the techniques are generic in nature and can be used in other robotic systems. At the control level a navigation task is executed by a set of control modes. A framework based on the abstraction of hierarchical concurrent state machines for the design and development of hybrid control systems is presented. The framework is used to specify reactive behaviors and for sequentialisation of control modes. Selected examples of control systems deployed on UASs are presented. Collision-free paths executed at the control level are generated by path planning algorithms. We propose a path replanning framework extending the existing path planners to allow dynamic repair of flight paths when new obstacles or no-fly zones obstructing the current flight path are detected. Additionally, a novel approach to selecting the best path repair strategy based on a machine learning technique is presented. A prerequisite for a safe navigation in a real-world environment is an accurate geometrical model. As a step towards building accurate 3D models onboard UASs initial work on the integration of a laser range finder with a helicopter platform is also presented. The combination of the techniques presented provides another step towards building comprehensive and robust navigation systems for future UASs.

[29] David Landén. 2011.
Complex Task Allocation for Delegation: From Theory to Practice.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1506. Linköping University Electronic Press. 140 pages. ISBN: 9789173930482.
cover: http://liu.diva-portal.org/smash/get/div...

The problem of determining who should do what given a set of tasks and a set of agents is called the task allocation problem. The problem occurs in many multi-agent system applications where a workload of tasks should be shared by a number of agents. In our case, the task allocation problem occurs as an integral part of a larger problem of determining if a task can be delegated from one agent to another. Delegation is the act of handing over the responsibility for something to someone. Previously, a theory for delegation including a delegation speech act has been specified. The speech act specifies the preconditions that must be fulfilled before the delegation can be carried out, and the postconditions that will be true afterward. To actually use the speech act in a multi-agent system, there must be a practical way of determining if the preconditions are true. This can be done by a process that includes solving a complex task allocation problem by the agents involved in the delegation. In this thesis a constraint-based task specification formalism, a complex task allocation algorithm for allocating tasks to unmanned aerial vehicles and a generic collaborative system shell for robotic systems are developed. The three components are used as the basis for a collaborative unmanned aircraft system that uses delegation for distributing and coordinating the agents' execution of complex tasks.
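
The allocation question in its simplest form, as a point of reference for the constraint-based machinery described above: enumerate assignments of tasks to agents, discard those violating capability constraints, and keep the cheapest; the existence of any feasible allocation is exactly the kind of precondition a delegation check needs. Tasks, capabilities, and costs are invented, and the thesis handles far richer complex-task specifications.

```python
# Simplest form of the task allocation question behind delegation: enumerate
# assignments of tasks to agents, keep only those satisfying capability
# constraints, and pick the cheapest. Tasks, capabilities, and costs are invented.
from itertools import product

tasks = {"scan_area": "camera", "deliver_kit": "cargo", "relay_comms": "radio"}
agents = {
    "uav1": {"capabilities": {"camera", "radio"}, "cost": 1.0},
    "uav2": {"capabilities": {"cargo", "radio"}, "cost": 2.0},
}

best = None
for assignment in product(agents, repeat=len(tasks)):
    allocation = dict(zip(tasks, assignment))
    if all(tasks[t] in agents[a]["capabilities"] for t, a in allocation.items()):
        cost = sum(agents[a]["cost"] for a in allocation.values())
        if best is None or cost < best[0]:
            best = (cost, allocation)

print("no feasible allocation" if best is None else f"cost {best[0]}: {best[1]}")
```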

[28] Håkan Warnquist. 2011.
Computer-Assisted Troubleshooting for Efficient Off-board Diagnosis.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1490. Linköping University Electronic Press. 169 pages. ISBN: 9789173931519.
cover: http://liu.diva-portal.org/smash/get/div...

This licentiate thesis considers computer-assisted troubleshooting of complex products such as heavy trucks. The troubleshooting task is to find and repair all faulty components in a malfunctioning system. This is done by performing actions to gather more information regarding which faults may be present or to repair components that are suspected to be faulty. The expected cost of the performed actions should be as low as possible. The work described in this thesis contributes to solving the troubleshooting task in such a way that a good trade-off between computation time and solution quality can be made. A framework for troubleshooting is developed where the system is diagnosed using non-stationary dynamic Bayesian networks and the decisions of which actions to perform are made using a new planning algorithm for Stochastic Shortest Path Problems called Iterative Bounding LAO*. It is shown how the troubleshooting problem can be converted into a Stochastic Shortest Path problem so that it can be efficiently solved using general algorithms such as Iterative Bounding LAO*. New and improved search heuristics for solving the troubleshooting problem by searching are also presented in this thesis. The methods presented in this thesis are evaluated in a case study of an auxiliary hydraulic braking system of a modern truck. The evaluation shows that the new algorithm Iterative Bounding LAO* creates troubleshooting plans with a lower expected cost faster than existing state-of-the-art algorithms in the literature. The case study shows that the troubleshooting framework can be applied to systems from the heavy vehicles domain.
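
The Stochastic Shortest Path formulation in miniature: states with action costs and stochastic transitions, an absorbing zero-cost goal, and value iteration converging to the minimal expected cost-to-go. The tiny troubleshooting-flavoured model is invented, and the thesis solves far larger instances with Iterative Bounding LAO* and informed heuristics rather than exhaustive value iteration.

```python
# Stochastic Shortest Path in miniature: value iteration computes the minimal
# expected cost-to-go to an absorbing goal state. The tiny troubleshooting-
# flavoured model is invented for illustration.
# transitions[state][action] = (cost, [(next_state, probability), ...])
transitions = {
    "fault_unknown": {
        "run_test":   (1.0, [("fault_located", 0.7), ("fault_unknown", 0.3)]),
        "swap_parts": (6.0, [("repaired", 1.0)]),
    },
    "fault_located": {
        "repair":     (2.0, [("repaired", 1.0)]),
    },
    "repaired": {},                               # absorbing goal, cost 0
}

V = {s: 0.0 for s in transitions}
for _ in range(100):                              # enough sweeps to converge here
    for s, actions in transitions.items():
        if actions:
            V[s] = min(cost + sum(p * V[ns] for ns, p in outcomes)
                       for cost, outcomes in actions.values())

for s, v in V.items():
    print(f"expected cost-to-go from {s}: {v:.2f}")
```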

[27] Per-Magnus Olsson. 2011.
Positioning Algorithms for Surveillance Using Unmanned Aerial Vehicles.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1476. Linköping University Electronic Press. 140 pages. ISBN: 9789173932004.
cover: http://liu.diva-portal.org/smash/get/div...

Surveillance is an important application for unmanned aerial vehicles (UAVs). The sensed information often has high priority and it must be made available to human operators as quickly as possible. Due to obstacles and limited communication range, it is not always possible to transmit the information directly to the base station. In this case, other UAVs can form a relay chain between the surveillance UAV and the base station. Determining suitable positions for such UAVs is a complex optimization problem in and of itself, and is made even more difficult by communication and surveillance constraints. To solve different variations of finding positions for UAVs for surveillance of one target, two new algorithms have been developed. One of the algorithms is developed especially for finding a set of relay chains offering different trade-offs between the number of UAVs and the quality of the chain. The other algorithm is tailored towards finding the highest quality chain possible, given a limited number of available UAVs. Finding the optimal positions for surveillance of several targets is more difficult. A study has been performed, in order to determine how the problems of interest can be solved. It turns out that very few of the existing algorithms can be used due to the characteristics of our specific problem. For this reason, an algorithm for quickly calculating positions for surveillance of multiple targets has been developed. This enables calculation of an initial chain that is immediately made available to the user, and the chain is then incrementally optimized according to the user’s desire.
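
A toy reduction of the single-target relay chain problem: candidate positions, the base station, and the target's surveillance position become graph vertices, vertices within communication range are connected, and a breadth-first search returns a chain using the fewest UAVs. The coordinates and range are invented, and the chain-quality trade-offs central to the thesis are ignored.

```python
# Toy relay-chain computation for one target: connect candidate positions that
# are within communication range and take a shortest (fewest-UAV) path from the
# base station to the target's surveillance position with BFS.
from collections import deque
from math import dist

comm_range = 5.0
positions = {
    "base": (0, 0), "target": (12, 1),
    "p1": (4, 0), "p2": (8, 0), "p3": (4, 4), "p4": (9, 3),
}

def neighbours(node):
    return [other for other in positions
            if other != node and dist(positions[node], positions[other]) <= comm_range]

def shortest_chain(start, goal):
    parent, queue = {start: None}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in neighbours(node):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

chain = shortest_chain("base", "target")
print("relay chain:", chain, "-> relays needed:", chain[1:-1] if chain else "unreachable")
```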

2009
[26] Cyrille Berger. 2009.
Perception de la géométrie de l'environment pour la navigation autonome.
PhD Thesis. Université de Toulouse. 164 pages.

The goal of mobile robotics research is to give robots the ability to accomplish missions in an environment that is not perfectly known. Such a mission consists of executing a number of elementary actions (moving, manipulating objects, ...) and requires precise localisation as well as the construction of a good geometric model of the environment, built by exploiting the robot's own sensors, external sensors, information coming from other robots, and existing models, for example from a geographic information system. The common element is the geometry of the environment. The first part of the manuscript covers the different methods for extracting geometric information. The second part presents the construction of a geometric model using a graph, as well as a method for extracting information from the graph and allowing the robot to localise itself in the environment.

[25] Gianpaolo Conte. 2009.
Vision-Based Localization and Guidance for Unmanned Aerial Vehicles.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #1260. Linköping University Electronic Press. 174 pages. ISBN: 978-91-7393-603-3.
cover: http://liu.diva-portal.org/smash/get/div...

The thesis has been developed as part of the requirements for a PhD degree at the Artificial Intelligence and Integrated Computer System division (AIICS) in the Department of Computer and Information Sciences at Linköping University. The work focuses on issues related to Unmanned Aerial Vehicle (UAV) navigation, in particular in the areas of guidance and vision-based autonomous flight in situations of short and long term GPS outage. The thesis is divided into two parts. The first part presents a helicopter simulator and a path following control mode developed and implemented on an experimental helicopter platform. The second part presents an approach to the problem of vision-based state estimation for autonomous aerial platforms which makes use of geo-referenced images for localization purposes. The problem of vision-based landing is also addressed with emphasis on fusion between inertial sensors and video camera using an artificial landing pad as reference pattern. In the last chapter, a solution to a vision-based ground object geo-location problem using a fixed-wing micro aerial vehicle platform is presented. The helicopter guidance and vision-based navigation methods developed in the thesis have been implemented and tested in real flight-tests using a Yamaha Rmax helicopter. Extensive experimental flight-test results are presented.

[24] Fredrik Heintz. 2009.
DyKnow: A Stream-Based Knowledge Processing Middleware Framework.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #1240. Linköping University Electronic Press. 258 pages. ISBN: 9789173936965.
cover: http://liu.diva-portal.org/smash/get/div...

As robotic systems become more and more advanced the need to integrate existing deliberative functionalities such as chronicle recognition, motion planning, task planning, and execution monitoring increases. To integrate such functionalities into a coherent system it is necessary to reconcile the different formalisms used by the functionalities to represent information and knowledge about the world. To construct and integrate these representations and maintain a correlation between them and the environment it is necessary to extract information and knowledge from data collected by sensors. However, deliberative functionalities tend to assume symbolic and crisp knowledge about the current state of the world while the information extracted from sensors often is noisy and incomplete quantitative data on a much lower level of abstraction. There is a wide gap between the information about the world normally acquired through sensing and the information that is assumed to be available for reasoning about the world. As physical autonomous systems grow in scope and complexity, bridging the gap in an ad-hoc manner becomes impractical and inefficient. Instead a principled and systematic approach to closing the sense-reasoning gap is needed. At the same time, a systematic solution has to be sufficiently flexible to accommodate a wide range of components with highly varying demands. We therefore introduce the concept of knowledge processing middleware for a principled and systematic software framework for bridging the gap between sensing and reasoning in a physical agent. A set of requirements that all such middleware should satisfy is also described. A stream-based knowledge processing middleware framework called DyKnow is then presented. Due to the need for incremental refinement of information at different levels of abstraction, computations and processes within the stream-based knowledge processing framework are modeled as active and sustained knowledge processes working on and producing streams. DyKnow supports the generation of partial and context dependent stream-based representations of past, current, and potential future states at many levels of abstraction in a timely manner. To show the versatility and utility of DyKnow two symbolic reasoning engines are integrated into DyKnow. The first reasoning engine is a metric temporal logical progression engine. Its integration is made possible by extending DyKnow with a state generation mechanism to generate state sequences over which temporal logical formulas can be progressed. The second reasoning engine is a chronicle recognition engine for recognizing complex events such as traffic situations. The integration is facilitated by extending DyKnow with support for anchoring symbolic object identifiers to sensor data in order to collect information about physical objects using the available sensors. By integrating these reasoning engines into DyKnow, they can be used by any knowledge processing application. Each integration therefore extends the capability of DyKnow and increases its applicability. To show that DyKnow also has a potential for multi-agent knowledge processing, an extension is presented which allows agents to federate parts of their local DyKnow instances to share information and knowledge. Finally, it is shown how DyKnow provides support for the functionalities on the different levels in the JDL Data Fusion Model, which is the de facto standard functional model for fusion applications.
The focus is not on individual fusion techniques, but rather on an infrastructure that permits the use of many different fusion techniques in a unified framework.The main conclusion of this thesis is that the DyKnow knowledge processing middleware framework provides appropriate support for bridging the sense-reasoning gap in a physical agent. This conclusion is drawn from the fact that DyKnow has successfully been used to integrate different reasoning engines into complex unmanned aerial vehicle (UAV) applications and that it satisfies all the stated requirements for knowledge processing middleware to a significant degree.
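
To make the stream-based processing model concrete, the following minimal sketch (with hypothetical names such as raw_altitude_stream and smoothed; it is not DyKnow's actual API) models knowledge processes as generators that consume timestamped samples and produce progressively more abstract streams, ending in a boolean state stream of the kind a progression engine could consume.

# Minimal sketch of stream-based knowledge processing (hypothetical names,
# not the actual DyKnow API): knowledge processes consume timestamped
# samples and produce derived, more abstract streams.
from dataclasses import dataclass

@dataclass
class Sample:
    time: float   # valid time of the sample
    value: float  # payload, e.g. a raw sensor reading

def raw_altitude_stream():
    """Source process: raw, noisy sensor samples."""
    data = [(0.0, 101.2), (0.5, 99.8), (1.0, 100.5), (1.5, 100.1)]
    for t, v in data:
        yield Sample(t, v)

def smoothed(stream, alpha=0.5):
    """Refinement process: exponential smoothing of an input stream."""
    est = None
    for s in stream:
        est = s.value if est is None else alpha * s.value + (1 - alpha) * est
        yield Sample(s.time, est)

def above_threshold(stream, limit=100.0):
    """Symbolic-level process: boolean state stream for a logic engine."""
    for s in stream:
        yield Sample(s.time, s.value > limit)

for s in above_threshold(smoothed(raw_altitude_stream())):
    print(s.time, s.value)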

2008
[23] Full text  Per Nyblom. 2008.
Dynamic Abstraction for Interleaved Task Planning and Execution.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1363. Institutionen för datavetenskap. 94 pages. ISBN: 9789173939058.
Note: Report code: LiU-Tek-Lic-2008:21.
cover: http://liu.diva-portal.org/smash/get/div...

It is often beneficial for an autonomous agent that operates in a complex environment to make use of different types of mathematical models to keep track of unobservable parts of the world or to perform prediction, planning and other types of reasoning. Since a model is always a simplification of something else, there always exists a tradeoff between the model’s accuracy and feasibility when it is used within a certain application, due to the limited available computational resources. Currently, this tradeoff is to a large extent balanced by humans, for model construction in general and for autonomous agents in particular. This thesis investigates different solutions where such agents are more responsible for balancing the tradeoff for models themselves, in the context of interleaved task planning and plan execution. The necessary components for an autonomous agent that performs its abstractions and constructs planning models dynamically during task planning and execution are investigated, and a method called DARE is developed that serves as a template for handling the possible situations that can occur, such as the emergence of unsuitable abstractions and the need for dynamic construction of abstraction levels. Implementations of DARE are presented in two case studies where both a fully and a partially observable stochastic domain are used, motivated by research with Unmanned Aircraft Systems. The case studies also demonstrate possible ways to perform dynamic abstraction and problem model construction in practice.
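
The interleaved loop of planning, executing, and adjusting the abstraction level can be sketched roughly as follows; the interfaces and the toy line-world domain are invented for illustration and do not reproduce the DARE method itself.

# Skeleton of interleaved planning and execution with dynamic abstraction
# (illustrative only; the interfaces and the toy domain are invented).
def interleaved_solve(state, goal, plan_at_level, execute, max_level=2):
    level = max_level                          # start with the coarsest model
    while state != goal:
        plan = plan_at_level(state, goal, level)
        if plan is None:                       # abstraction unsuitable here
            if level == 0:
                return None                    # no finer model available
            level -= 1                         # refine the abstraction
            continue
        for action in plan:
            state, ok = execute(state, action)
            if not ok:                         # outcome contradicts the model
                break                          # replan from the observed state
    return state

# Toy domain: move along a line; the coarse level plans jumps of 4, fine level 1.
def plan_at_level(state, goal, level):
    step = 4 if level > 0 else 1
    if (goal - state) % step != 0:
        return None                            # coarse model cannot hit the goal
    return [step] * ((goal - state) // step)

def execute(state, action):
    return state + action, True                # idealised execution

print(interleaved_solve(0, 7, plan_at_level, execute))   # 7, after refining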

[22] Full text  Heike Joe Steinhauer. 2008.
A Representation Scheme for Description and Reconstruction of Object Configurations Based on Qualitative Relations.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #1204. Linköping University Electronic Press. 178 pages. ISBN: 978-91-7393-823-5.
cover: http://liu.diva-portal.org/smash/get/div...

One reason Qualitative Spatial Reasoning (QSR) is becoming increasingly important to Artificial Intelligence (AI) is the need for smooth ‘human-like’ communication between autonomous agents and people. The selected, yet general, task motivating the work presented here is the scenario of an object configuration that has to be described by an observer on the ground using only relational object positions. The description provided should enable a second agent to create a map-like picture of the described configuration in order to recognize the configuration on a representation from the survey perspective, for instance on a geographic map or in the landscape itself while observing it from an aerial vehicle. Either agent might be an autonomous system or a person. Therefore, the particular focus of this work lies on the necessity to develop description and reconstruction methods that are cognitively easy for a person to apply. This thesis presents the representation scheme QuaDRO (Qualitative Description and Reconstruction of Object configurations). Its main contributions are a specification and qualitative classification of information available from different local viewpoints into nine qualitative equivalence classes. This classification allows the preservation of information needed for reconstruction into a global frame of reference. The reconstruction takes place in an underlying qualitative grid with adjustable granularity. A novel approach for representing objects of eight different orientations by two different frames of reference is used. A substantial contribution towards easing the reconstruction process is that new objects can be inserted anywhere within the reconstruction without the need for backtracking or re-reconstruction. In addition, an approach to reconstruct configurations from underspecified descriptions using conceptual neighbourhood-based reasoning and coarse object relations is presented.
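
As a generic illustration of mapping metric viewpoint data onto coarse qualitative relations, the sketch below classifies the bearing from an observer to an object into eight illustrative sectors; these classes and names are examples only, not QuaDRO's actual nine equivalence classes.

# Illustrative only: map a metric bearing from an observer to an object onto
# coarse qualitative position relations (example classes, not QuaDRO's).
import math

def qualitative_relation(observer, obj):
    dx, dy = obj[0] - observer[0], obj[1] - observer[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    sectors = ["right", "front-right", "front", "front-left",
               "left", "back-left", "back", "back-right"]
    return sectors[int(((angle + 22.5) % 360) // 45)]

print(qualitative_relation((0, 0), (1, 1)))   # -> 'front-right'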

2007
[21] Björn Hägglund. 2007.
A framework for designing constraint stores.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1302. Linköpings universitet. 118 pages. ISBN: 9789185715701.

A constraint solver based on concurrent search and propagation provides a well-defined component model for propagators by enforcing a strict two-level architecture. This makes it straightforward for third parties to invent, implement and deploy new kinds of propagators. The most critical components of such solvers are the constraint stores through which propagators communicate with each other. Introducing stores supporting new kinds of stored constraints can potentially increase the solving power by several orders of magnitude. This thesis presents a theoretical framework for designing stores achieving this without loss of propagator interoperability.
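
The two-level idea, propagators that communicate only through a shared store of variable domains, can be sketched as follows; the interfaces are hypothetical and merely illustrate the architecture, not the framework developed in the thesis.

# Minimal sketch of a finite-domain constraint store with propagators that
# communicate only through the store (illustrative, not the thesis framework).
class Store:
    def __init__(self, domains):
        self.domains = {v: set(d) for v, d in domains.items()}

    def narrow(self, var, allowed):
        """Intersect a variable's domain; report whether it changed."""
        old = self.domains[var]
        new = old & set(allowed)
        self.domains[var] = new
        return new != old

def less_than(store, x, y):
    """Propagator for x < y over integer domains."""
    changed = False
    max_y, min_x = max(store.domains[y]), min(store.domains[x])
    changed |= store.narrow(x, {v for v in store.domains[x] if v < max_y})
    changed |= store.narrow(y, {v for v in store.domains[y] if v > min_x})
    return changed

store = Store({"x": range(1, 10), "y": range(1, 10)})
while less_than(store, "x", "y"):   # run the propagator to a fixpoint
    pass
print(store.domains)                # x: 1..8, y: 2..9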

[20] Full text  Gianpaolo Conte. 2007.
Navigation Functionalities for an Autonomous UAV Helicopter.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1307. Linköping University Electronic Press. 107 pages. ISBN: 9789185715350.

This thesis was written during the WITAS UAV Project where one of the goals has been the development of a software/hardware architecture for an unmanned autonomous helicopter, in addition to autonomous functionalities required for complex mission scenarios. The algorithms developed here have been tested on an unmanned helicopter platform developed by Yamaha Motor Company called the RMAX. The character of the thesis is primarily experimental and it should be viewed as developing navigational functionality to support autonomous flight during complex real world mission scenarios. This task is multidisciplinary since it requires competence in aeronautics, computer science and electronics. The focus of the thesis has been on the development of a control method to enable the helicopter to follow 3D paths. Additionally, a helicopter simulation tool has been developed in order to test the control system before flight-tests. The thesis also presents an implementation and experimental evaluation of a sensor fusion technique based on a Kalman filter applied to a vision based autonomous landing problem. Extensive experimental flight-test results are presented.
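
The sensor-fusion component can be illustrated by a standard one-dimensional Kalman filter that blends a motion-model prediction with a noisy, e.g. vision-based, measurement; this is a textbook sketch with made-up numbers, not the thesis implementation.

# One-dimensional Kalman filter step: fuse a motion-model prediction with a
# noisy (e.g. vision-based) measurement. Generic sketch, not the thesis code.
def kalman_step(x, P, u, z, Q=0.1, R=0.5):
    # Predict: simple motion model x' = x + u, with process noise Q
    x_pred = x + u
    P_pred = P + Q                 # predicted estimate variance
    # Update: blend prediction and measurement by the Kalman gain
    K = P_pred / (P_pred + R)      # gain in [0, 1]
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 10.0, 1.0                   # initial altitude estimate and variance
for u, z in [(-0.5, 9.4), (-0.5, 9.1), (-0.5, 8.4)]:
    x, P = kalman_step(x, P, u, z)
    print(round(x, 2), round(P, 3))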

[19] Full text  Martin Magnusson. 2007.
Deductive Planning and Composite Actions in Temporal Action Logic.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1329. Institutionen för datavetenskap. 86 pages. ISBN: 9789185895939.
cover: http://liu.diva-portal.org/smash/get/div...

Temporal Action Logic is a well established logical formalism for reasoning about action and change that has long been used as a formal specification language. Its first-order characterization and explicit time representation make it a suitable target for automated theorem proving and the application of temporal constraint solvers. We introduce a translation from a subset of Temporal Action Logic to constraint logic programs that takes advantage of these characteristics to make the logic applicable, not just as a formal specification language, but in solving practical reasoning problems. Extensions are introduced that enable the generation of action sequences, thus paving the road for interesting applications in deductive planning. The use of qualitative temporal constraints makes it possible to follow a least commitment strategy and construct partially ordered plans. Furthermore, the logical language and logic program translation are extended with the notion of composite actions that can be used to formulate and execute scripted plans with conditional actions, non-deterministic choices, and loops. The resulting planner and reasoner is integrated with a graphical user interface in our autonomous helicopter research system and applied to logistics problems. Solution plans are synthesized together with monitoring constraints that trigger the generation of recovery actions in cases of execution failures.
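
The least-commitment use of qualitative temporal constraints can be illustrated by keeping a plan only partially ordered and checking that the precedence constraints stay acyclic; this is a toy sketch, not the TAL-to-constraint-logic-program translation itself.

# Illustrative sketch: maintain a partially ordered set of plan steps under
# qualitative precedence constraints and check consistency (no cycles).
def consistent(steps, before):
    """before: set of (a, b) pairs meaning 'a must precede b'."""
    remaining = set(steps)
    edges = set(before)
    while remaining:
        free = [s for s in remaining
                if not any(b == s for _, b in edges)]   # no predecessor left
        if not free:
            return False                                # cycle -> inconsistent
        s = free[0]
        remaining.remove(s)
        edges = {(a, b) for a, b in edges if a != s}
    return True

steps = ["takeoff", "fly_to_A", "deliver", "fly_home"]
order = {("takeoff", "fly_to_A"), ("fly_to_A", "deliver"),
         ("deliver", "fly_home")}
print(consistent(steps, order))                               # True
print(consistent(steps, order | {("fly_home", "takeoff")}))   # False (cycle)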

2006
[18] Full text  Karolina Eliasson. 2006.
The Use of Case-Based Reasoning in a Human-Robot Dialog System.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1248. Institutionen för datavetenskap. 130 pages. ISBN: 918552378X.
Note: Report code: LiU-Tek-Lic-2006:29.

As long as there have been computers, one goal has been to be able to communicate with them using natural language. It has turned out to be very hard to implement a dialog system that performs as well as a human being in an unrestricted domain, hence most dialog systems today work in small, restricted domains where the permitted dialog is fully controlled by the system. In this thesis we present two dialog systems for communicating with an autonomous agent. The first system, the WITAS RDE, focuses on constructing a simple and failsafe dialog system including a graphical user interface with multimodality features, a dialog manager, a simulator, and development infrastructures that provide the services needed for the development, demonstration, and validation of the dialog system. The system has been tested during an actual flight connected to an unmanned aerial vehicle. The second system, CEDERIC, is a successor of the dialog manager in the WITAS RDE. It is equipped with a built-in machine learning algorithm to be able to learn new phrases and dialogs over time using past experiences, hence the dialog is not necessarily fully controlled by the system. It also includes a discourse model to be able to keep track of the dialog history and topics, to resolve references and to maintain subdialogs. CEDERIC has been evaluated through simulation tests and user tests with good results.
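
The case-based component can be illustrated by a minimal retrieval step that picks the stored case most similar to the current utterance; the word-overlap similarity and the example cases are stand-ins, not CEDERIC's actual representation.

# Minimal case-based retrieval: return the stored case most similar to the
# current utterance, using word overlap as a stand-in similarity measure.
def similarity(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)   # Jaccard overlap

case_base = {
    "fly to the red building": "plan_flight(target=red_building)",
    "take a picture of the car": "capture_image(target=car)",
    "land at the home base": "land(site=home_base)",
}

def retrieve(utterance):
    return max(case_base.items(), key=lambda kv: similarity(utterance, kv[0]))

print(retrieve("please fly to the red house"))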

[17] Full text  Patrik Haslum. 2006.
Admissible Heuristics for Automated Planning.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #1004. Institutionen för datavetenskap. 164 pages. ISBN: 91-85497-28-2.
errata: https://liu.diva-portal.org/smash/get/di...

The problem of domain-independent automated planning has been a topic of research in Artificial Intelligence since the very beginnings of the field. Due to the desire not to rely on vast quantities of problem-specific knowledge, the most widely adopted approach to automated planning is search. The topic of this thesis is the development of methods for achieving effective search control for domain-independent optimal planning through the construction of admissible heuristics. The particular planning problem considered is the so-called “classical” AI planning problem, which makes several restricting assumptions. Optimality with respect to two measures of plan cost is considered: in planning with additive cost, the cost of a plan is the sum of the costs of the actions that make up the plan, which are assumed independent, while in planning with time, the cost of a plan is the total execution time – makespan – of the plan. The makespan optimization objective cannot, in general, be formulated as a sum of independent action costs and therefore necessitates a problem model slightly different from the classical one. A further small extension to the classical model is made with the introduction of two forms of capacitated resources. Heuristics are developed mainly for regression planning, but based on principles general enough that heuristics for other planning search spaces can be derived on the same basis. The thesis describes a collection of methods, including the h<sup>m</sup>, additive h<sup>m</sup> and improved pattern database heuristics, and the relaxed search and boosting techniques for improving heuristics through limited search, and presents two extended experimental analyses of the developed methods: one comparing heuristics for planning with additive cost and the other concerning the relaxed search technique in the context of planning with time, aimed at discovering the characteristics of problem domains that determine the relative effectiveness of the compared methods. Results indicate that some plausible such characteristics have been found, but are not entirely conclusive.
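
The heuristics discussed above generalize the simple h<sup>1</sup> (h_max) estimate; the sketch below computes h<sup>1</sup> for a tiny STRIPS-like problem as a fixpoint over atom costs. It is illustrative only; the thesis develops the more general h<sup>m</sup>, additive h<sup>m</sup> and pattern database heuristics.

# Sketch of the admissible h^1 (h_max) heuristic for a STRIPS-like problem:
# the cost of an atom is the cheapest way to achieve it, and the cost of a
# set of atoms is the cost of its most expensive member (hence admissible).
def h1(atoms, init, goal, actions):
    INF = float("inf")
    cost = {a: (0 if a in init else INF) for a in atoms}
    changed = True
    while changed:
        changed = False
        for pre, add, c in actions:           # (preconditions, effects, cost)
            pre_cost = max((cost[p] for p in pre), default=0)
            if pre_cost == INF:
                continue
            for a in add:
                if pre_cost + c < cost[a]:
                    cost[a] = pre_cost + c
                    changed = True
    return max((cost[g] for g in goal), default=0)

atoms = {"at_A", "at_B", "have_pkg", "delivered"}
actions = [({"at_A"}, {"at_B"}, 1),           # move A -> B
           ({"at_A"}, {"have_pkg"}, 1),       # pick up the package at A
           ({"at_B", "have_pkg"}, {"delivered"}, 1)]
print(h1(atoms, {"at_A"}, {"delivered"}, actions))   # 2, an admissible lower bound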

[16] Full text  Per Olof Pettersson. 2006.
Sampling-based Path Planning for an Autonomous Helicopter.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1229. Institutionen för datavetenskap. 141 pages. ISBN: 9185497150.
Note: Report code: LiU–Tek–Lic–2006:10.

Many of the applications that have been proposed for future small unmanned aerial vehicles (UAVs) are at low altitude in areas with many obstacles. A vital component for successful navigation in such environments is a path planner that can find collision-free paths for the UAV. Two popular path planning algorithms are the probabilistic roadmap algorithm (PRM) and the rapidly-exploring random tree algorithm (RRT). Adaptations of these algorithms to an unmanned autonomous helicopter are presented in this thesis, together with a number of extensions for handling constraints at different stages of the planning process. The result of this work is twofold. First, the described planners and extensions have been implemented and integrated into the software architecture of a UAV. A number of flight tests with these algorithms have been performed on a physical helicopter and the results from some of them are presented in this thesis. Second, an empirical study has been conducted, comparing the performance of the different algorithms and extensions in this planning domain. It is shown that with the environment known in advance, the PRM algorithm generally performs better than the RRT algorithm due to its precompiled roadmaps, but that the latter is also usable as long as the environment is not too complex. The study also shows that simple geometric constraints can be added in the runtime phase of the PRM algorithm without a big impact on performance. It is also shown that postponing the motion constraints to the runtime phase can improve the performance of the planner in some cases.
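
A minimal 2D RRT conveys the basic idea of the second algorithm family; this is a generic textbook version with circular obstacles, ignoring the helicopter's kinematic constraints and the extensions studied in the thesis.

# Minimal 2D RRT: repeatedly sample a point, extend the tree from its nearest
# node by a fixed step, and reject extensions that hit an obstacle.
import math, random

def collision_free(p, obstacles):
    return all(math.dist(p, c) > r for c, r in obstacles)   # circular obstacles

def rrt(start, goal, obstacles, step=0.5, iters=2000, goal_tol=0.5):
    tree = {start: None}                       # node -> parent
    for _ in range(iters):
        q = goal if random.random() < 0.05 else (random.uniform(0, 10),
                                                 random.uniform(0, 10))
        near = min(tree, key=lambda n: math.dist(n, q))
        d = math.dist(near, q) or 1e-9
        new = (near[0] + step * (q[0] - near[0]) / d,
               near[1] + step * (q[1] - near[1]) / d)
        if collision_free(new, obstacles):
            tree[new] = near
            if math.dist(new, goal) < goal_tol:
                path = [new]
                while tree[path[-1]] is not None:
                    path.append(tree[path[-1]])
                return list(reversed(path))
    return None

print(rrt((0.0, 0.0), (9.0, 9.0), [((5.0, 5.0), 1.5)]))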

2005
[15] Anders Arpteg. 2005.
Intelligent semi-structured information extraction: a user-driven approach to information extraction.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #946. Linköping University Electronic Press. 139 pages. ISBN: 91-85297-98-4.
Note: This work has been supported by University of Kalmar and the Knowledge Foundation.

The number of domains and tasks where information extraction tools can be used needs to be increased. One way to reach this goal is to design user-driven information extraction systems where non-expert users are able to adapt them to new domains and tasks. It is difficult to design general extraction systems that do not require expert skills or a large amount of work from the user. Therefore, it is difficult to increase the number of domains and tasks. A possible alternative is to design user-driven systems, which solve that problem by letting a large number of non-expert users adapt the systems themselves. To accomplish this goal, the systems need to become more intelligent and able to learn to extract with as little given information as possible. The type of information extraction system that is in focus for this thesis is semi-structured information extraction. The term semi-structured refers to documents that not only contain natural language text but also additional structural information. The typical application is information extraction from World Wide Web hypertext documents. By making effective use of not only the link structure but also the structural information within each such document, user-driven extraction systems with high performance can be built. There are two different approaches presented in this thesis to solve the user-driven extraction problem. The first takes a machine learning approach and tries to solve the problem using a modified Q(λ) reinforcement learning algorithm. A problem with the first approach was that it was difficult to handle extraction from the hidden Web. Since the hidden Web is about 500 times larger than the visible Web, it would be very useful to be able to extract information from that part of the Web as well. The second approach is called the hidden observation approach and tries to also solve the problem of extracting from the hidden Web. The goal is to have a user-driven information extraction system that is also able to handle the hidden Web. The second approach uses a large part of the system developed for the first approach, but the additional information that is silently obtained from the user presents other problems and possibilities. An agent-oriented system was designed to evaluate the approaches presented in this thesis. A set of experiments was conducted and the results indicate that a user-driven information extraction system is possible and no longer just a concept. However, additional work and research is necessary before a fully-fledged user-driven system can be designed.
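
The first approach builds on Q(λ) reinforcement learning; the core tabular update with eligibility traces can be sketched as below (a standard Watkins-style formulation with hypothetical page/action names, not the modified algorithm developed in the thesis).

# Standard tabular Q(lambda) update with eligibility traces: the sketch shows
# the core update rule only, not the thesis's modified algorithm.
from collections import defaultdict

def q_lambda_update(Q, E, s, a, r, s_next, actions,
                    alpha=0.1, gamma=0.9, lam=0.8):
    best_next = max(Q[(s_next, b)] for b in actions)
    delta = r + gamma * best_next - Q[(s, a)]   # TD error
    E[(s, a)] += 1.0                            # accumulate the trace
    for key in list(E):
        Q[key] += alpha * delta * E[key]        # credit recent state-actions
        E[key] *= gamma * lam                   # decay the traces
    return Q, E

Q, E = defaultdict(float), defaultdict(float)
actions = ["follow_link", "extract", "back"]
Q, E = q_lambda_update(Q, E, s="page_1", a="follow_link",
                       r=1.0, s_next="page_2", actions=actions)
print(Q[("page_1", "follow_link")])   # 0.1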

[14] Bourhane Kadmiry. 2005.
Fuzzy gain scheduled visual servoing for an unmanned helicopter.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #938. Linköpings universitet. 148 pages. ISBN: 91-85297-76-3.

The overall objective of the Wallenberg Laboratory for Information Technology and Autonomous Systems (WITAS) at Linköping University is the development of an intelligent command and control system, containing active-vision sensors, which supports the operation of an unmanned air vehicle (UAV). One of the UAV platforms of choice is the R50 unmanned helicopter, by Yamaha. The present version of the UAV platform is augmented with a camera system. This is enough for performing missions like site mapping and terrain exploration, in which the helicopter motion can be rather slow. But in tracking missions and obstacle avoidance scenarios, involving high-speed helicopter motion, robust performance for the visual-servoing scheme is desired. Robustness in this case is twofold: 1) w.r.t. time delays introduced by the image processing; and 2) w.r.t. disturbances, parameter uncertainties and unmodeled dynamics which reflect on the feature position in the image, and the camera pose. With this goal in mind, we propose to explore the possibilities for the design of fuzzy controllers, achieving stability, robust and minimal-cost performance w.r.t. time delays and unstructured uncertainties for image feature tracking, and test a control solution both in simulation and on real camera platforms. Common to both are model-based design and the use of nonlinear control approaches. The performance of these controllers is tested in simulation using the nonlinear geometric model of a pin-hole camera. Then we implement and test the resulting controller on the camera platform mounted on the UAV.
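
The gain-scheduling idea can be illustrated by blending locally tuned gains with fuzzy membership weights over an operating variable such as speed; the membership functions and gains below are invented, and the sketch is not one of the controllers designed in the thesis.

# Illustrative fuzzy gain scheduling: interpolate between locally tuned
# proportional gains using triangular membership functions over speed.
def tri(x, left, peak, right):
    """Triangular membership function."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

def scheduled_gain(speed):
    rules = [                                 # (membership over speed, local gain)
        (tri(speed, -1.0, 0.0, 5.0), 2.0),    # "slow"   -> aggressive gain
        (tri(speed,  0.0, 5.0, 10.0), 1.2),   # "medium"
        (tri(speed,  5.0, 10.0, 16.0), 0.6),  # "fast"   -> gentle gain
    ]
    w = sum(m for m, _ in rules)
    return sum(m * k for m, k in rules) / w if w else 1.0

error = 0.3                                        # e.g. image-feature error
for v in [1.0, 5.0, 8.0]:
    print(v, round(scheduled_gain(v) * error, 3))  # control action u = K(v) * e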

[13] Full text  Jonas Kvarnström. 2005.
TALplanner and other extensions to Temporal Action Logic.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #937. Linköpings universitet. 302 pages. ISBN: 91-85297-75-5.

Though the exact definition of the boundary between intelligent and non-intelligent artifacts has been a subject of much debate, one aspect of intelligence that many would deem essential is <em>deliberation</em>: Rather than reacting "instinctively" to its environment, an intelligent system should also be capable of <em>reasoning</em> about it, reasoning about the effects of actions performed by itself and others, and creating and executing plans, that is, determining which actions to perform in order to achieve certain goals. True deliberation is a complex topic, requiring support from several different sub-fields of artificial intelligence. The work presented in this thesis spans two of these partially overlapping fields, beginning with <em>reasoning about action and change</em> and eventually moving over towards <em>planning</em>. The qualification problem relates to the difficulties inherent in providing, for each action available to an agent, an exhaustive list of all qualifications to the action, that is, all the conditions that may prevent the action from being executed in the intended manner. The first contribution of this thesis is a framework for <em>modeling qualifications</em> in Temporal Action Logic (TAL). As research on reasoning about action and change proceeds, increasingly complex and interconnected domains are modeled in increasingly greater detail. Unless the resulting models are structured consistently and coherently, they will be prohibitively difficult to maintain. The second contribution is a framework for <em>structuring TAL domains</em> using object-oriented concepts. Finally, the second half of the thesis is dedicated to the task of <em>planning</em>. TLplan pioneered the idea of using domain-specific control knowledge in a temporal logic to constrain the search space of a forward-chaining planner. We develop a new planner called TALplanner, based on the same idea but with fundamental differences in the way the planner verifies that a plan satisfies control formulas. TALplanner generates concurrent plans and can take resource constraints into account. The planner also applies several new automated domain analysis techniques to control formulas, further increasing performance by orders of magnitude for many problem domains.
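
The central idea of pruning forward-chaining search with control knowledge can be sketched as follows; a hand-written predicate stands in for a temporal-logic control formula, so this is a drastically simplified illustration rather than TALplanner itself.

# Drastically simplified sketch of control-rule pruning in forward search:
# successor states violating a control condition are cut before expansion.
from collections import deque

def forward_search(start, goal, successors, control_ok):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal(state):
            return plan
        for action, nxt in successors(state):
            if nxt in seen or not control_ok(nxt):
                continue                      # pruned by the control rule
            seen.add(nxt)
            frontier.append((nxt, plan + [action]))
    return None

# Toy domain: move a package from A to C, with a rule forbidding location D.
def successors(loc):
    edges = {"A": ["B", "D"], "B": ["C", "D"], "D": ["C"]}
    return [(f"move_{loc}_{n}", n) for n in edges.get(loc, [])]

plan = forward_search("A", lambda s: s == "C", successors,
                      control_ok=lambda s: s != "D")
print(plan)    # ['move_A_B', 'move_B_C']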

2003
[12] Full text  Anders Arpteg. 2003.
Adaptive Semi-structured Information Extraction.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #1000. Institutionen för datavetenskap. 85 pages. ISBN: 9173735892.
Note: Report code: LiU-Tek-Lic-2002:73.

The number of domains and tasks where information extraction tools can be used needs to be increased. One way to reach this goal is to construct user-driven information extraction systems where novice users are able to adapt them to new domains and tasks. To accomplish this goal, the systems need to become more intelligent and able to learn to extract information without need of expert skills or time-consuming work from the user. The type of information extraction system that is in focus for this thesis is semi-structural information extraction. The term semi-structural refers to documents that not only contain natural language text but also additional structural information. The typical application is information extraction from World Wide Web hypertext documents. By making effective use of not only the link structure but also the structural information within each such document, user-driven extraction systems with high performance can be built. The extraction process contains several steps where different types of techniques are used. Examples of such types of techniques are those that take advantage of structural, pure syntactic, linguistic, and semantic information. The first step that is in focus for this thesis is the navigation step that takes advantage of the structural information. It is only one part of a complete extraction system, but it is an important part. The use of reinforcement learning algorithms for the navigation step can make the adaptation of the system to new tasks and domains more user-driven. The advantage of using reinforcement learning techniques is that the extraction agent can efficiently learn from its own experience without need for intensive user interactions. An agent-oriented system was designed to evaluate the approach suggested in this thesis. Initial experiments showed that the training of the navigation step and the approach of the system was promising. However, additional components need to be included in the system before it becomes a fully-fledged user-driven system.

2002
[11] Full text  Patrik Haslum. 2002.
Prediction as a Knowledge Representation Problem: A Case Study in Model Design.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #942. Institutionen för datavetenskap. 106 pages. ISBN: 9173733318.
Note: Report code: LiU-Tek-Lic-2002:15.

The WITAS project aims to develop technologies to enable an Unmanned Aerial Vehicle (UAV) to operate autonomously and intelligently, in applications such as traffic surveillance and remote photogrammetry. Many of the necessary control and reasoning tasks, e.g. state estimation, reidentification, planning and diagnosis, involve prediction as an important component. Prediction relies on models, and such models can take a variety of forms. Model design involves many choices with many alternatives for each choice, and each alternative carries advantages and disadvantages that may be far from obvious. In spite of this, and of the important role of prediction in so many areas, the problem of predictive model design is rarely studied on its own. In this thesis, we examine a range of applications involving prediction and try to extract a set of choices and alternatives for model design. As a case study, we then develop, evaluate and compare two different model designs for a specific prediction problem encountered in the WITAS UAV project. The problem is to predict the movements of a vehicle travelling in a traffic network. The main difficulty is that uncertainty in predictions is very high, due to two factors: predictions have to be made on a relatively large time scale, and we have very little information about the specific vehicle in question. To counter uncertainty, as much use as possible must be made of knowledge about traffic in general, which puts emphasis on the knowledge representation aspect of predictive model design. The two model designs we develop differ mainly in how they represent uncertainty: the first uses a coarse, schema-based representation of likelihood, while the second, a Markov model, uses probability. Preliminary experiments indicate that the second design has better computational properties, but also some drawbacks: model construction is data intensive and the resulting models are somewhat opaque.
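
The second, Markov-model design can be illustrated by propagating a probability distribution over road segments through a transition matrix; the segments and probabilities below are made up, and the sketch is not one of the models built in the thesis.

# Generic sketch of Markov-model prediction: propagate a distribution over
# road segments k steps through a transition matrix (made-up probabilities).
segments = ["A", "B", "C"]
P = {                     # P[s][t] = probability of moving from s to t per step
    "A": {"A": 0.2, "B": 0.6, "C": 0.2},
    "B": {"A": 0.1, "B": 0.3, "C": 0.6},
    "C": {"A": 0.0, "B": 0.2, "C": 0.8},
}

def predict(belief, steps):
    for _ in range(steps):
        belief = {t: sum(belief[s] * P[s][t] for s in segments)
                  for t in segments}
    return belief

# Vehicle last observed on segment A; where is it likely to be in 3 steps?
print(predict({"A": 1.0, "B": 0.0, "C": 0.0}, 3))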

[10] Full text  Bourhane Kadmiry. 2002.
Fuzzy Control for an Unmanned Helicopter.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #938. Institutionen för datavetenskap. 108 pages. ISBN: 917373313X.
Note: Report code: LiU-Tek-Lic-2002:11. The format of the electronic version of this thesis differs slightly from the printed one: this is due mainly to font compatibility. The figures and body of the thesis remain unchanged.

The overall objective of the Wallenberg Laboratory for Information Technology and Autonomous Systems (WITAS) at Linköping University is the development of an intelligent command and control system, containing vision sensors, which supports the operation of an unmanned air vehicle (UAV) in both semi- and full-autonomy modes. One of the UAV platforms of choice is the APID-MK3 unmanned helicopter, by Scandicraft Systems AB. The intended operational environment is over widely varying geographical terrain with traffic networks and vehicle interaction of variable complexity, speed, and density. The present version of APID-MK3 is capable of autonomous take-off, landing, and hovering, as well as of autonomously executing pre-defined, point-to-point flight where the latter is executed at low speed. This is enough for performing missions like site mapping, surveillance, and communications, but for the above-mentioned operational environment higher speeds are desired. In this context, the goal of this thesis is to explore the possibilities for achieving stable “aggressive” manoeuvrability at high speeds, and to test a variety of control solutions in the APID-MK3 simulation environment. The objective of achieving “aggressive” manoeuvrability concerns the design of attitude/velocity/position controllers which act on much larger ranges of the body attitude angles, by utilizing the full range of the rotor attitude angles. In this context, a flight controller should achieve tracking of curvilinear trajectories at relatively high speeds in a manner that is robust w.r.t. external disturbances. Take-off and landing are not considered here since APID-MK3 already has dedicated control modules that realize these flight modes. With this goal in mind, we present the design of two different types of flight controllers: a fuzzy controller and a gradient descent method based controller. Common to both are model-based design, the use of nonlinear control approaches, and an inner- and outer-loop control scheme. The performance of these controllers is tested in simulation using the nonlinear model of APID-MK3.
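
The inner- and outer-loop structure can be sketched as a cascade in which an outer position loop produces a pitch-angle reference that a faster inner loop tracks; the plain proportional gains and toy dynamics below are illustrative, not the fuzzy or gradient-descent controllers of the thesis.

# Generic cascade-control sketch: the outer loop turns position error into a
# pitch-angle reference; the inner loop turns pitch error into an actuator
# command. Plain proportional gains and crude toy dynamics only.
def outer_loop(pos_err, k_pos=0.08, pitch_limit=0.3):
    pitch_ref = k_pos * pos_err                       # [rad]
    return max(-pitch_limit, min(pitch_limit, pitch_ref))

def inner_loop(pitch_ref, pitch, k_att=2.0):
    return k_att * (pitch_ref - pitch)                # rotor command (arbitrary units)

pos, pitch = 0.0, 0.0
target = 10.0
for step in range(5):
    cmd = inner_loop(outer_loop(target - pos), pitch)
    pitch += 0.1 * cmd                                # crude attitude response
    pos += 2.0 * pitch                                # crude translational response
    print(step, round(pitch, 3), round(pos, 2))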

2001
[9] Full text  Marcus Bjäreland. 2001.
Model-based execution monitoring.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #688. Linköpings universitet. 153 pages. ISBN: 91-7373-016-5.

The task of monitoring the execution of a software-based controller in order to detect, classify, and recover from discrepancies between the actual effects of control actions and the effects predicted by a model is the topic of this thesis. Model-based execution monitoring is proposed as a technique for increasing the safety and optimality of operation of large and complex industrial process controllers, and of controllers operating in complex and unpredictable environments (such as unmanned aerial vehicles). In this thesis we study various aspects of model-based execution monitoring, including the following: The relation between previous approaches to execution monitoring in Control Theory, Artificial Intelligence and Computer Science is studied and a common conceptual framework for design and analysis is proposed. An existing execution monitoring paradigm, <em>ontological control</em>, is generalized and extended. We also present a prototype implementation of ontological control with a first set of experimental results, where the prototype is applied to an actual industrial process control system: the ABB STRESSOMETER cold mill flatness control system. A second execution monitoring paradigm, <em>stability-based execution monitoring</em>, is introduced, inspired by the vast amount of work on the "stability" notion in Control Theory and Computer Science. Finally, the two paradigms are applied in two different frameworks: first, in the "hybrid automata" framework, which is a state-of-the-art formal modeling framework for hybrid (that is, discrete+continuous) systems, and secondly, in the logical framework of GOLOG and the Situation Calculus.
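
The basic loop of comparing model predictions with observed effects and classifying discrepancies can be sketched as follows; the thresholds and example actions are invented, and the sketch does not reproduce the ontological-control or stability-based formulations developed in the thesis.

# Generic sketch of model-based execution monitoring: predict the effect of
# each control action, compare with the observation, and classify the result.
def monitor(actions, predict, observe, tolerance=0.5):
    report = []
    for action in actions:
        expected = predict(action)
        actual = observe(action)
        error = abs(actual - expected)
        if error <= tolerance:
            status = "nominal"
        elif error <= 5 * tolerance:
            status = "degraded"        # e.g. trigger re-planning or re-tuning
        else:
            status = "fault"           # e.g. trigger a recovery behaviour
        report.append((action, expected, actual, status))
    return report

predicted = {"climb": 10.0, "cruise": 0.0, "descend": -10.0}.get
observed = {"climb": 9.8, "cruise": 1.2, "descend": -3.0}.get
for row in monitor(["climb", "cruise", "descend"], predicted, observed):
    print(row)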

[8] Full text  Joakim Gustafsson. 2001.
Extending temporal action logic.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #689. Linköpings universitet. 218 pages. ISBN: 91-7373-017-3.

An autonomous agent operating in a dynamical environment must be able to perform several "intelligent" tasks, such as learning about the environment, planning its actions and reasoning about the effects of the chosen actions. For this purpose, it is vital that the agent has a coherent, expressive, and well understood means of representing its knowledge about the world. Traditionally, all knowledge about the dynamics of the modeled world has been represented in complex and detailed action descriptions. The first contribution of this thesis is the introduction of domain constraints in TAL, allowing a more modular representation of certain kinds of knowledge. The second contribution is a systematic method of modeling different types of conflict handling that can arise in the context of concurrent actions. A new type of fluent, called influence, is introduced as a carrier from cause to actual effect. Domain constraints govern how influences interact with ordinary fluents. Conflicts can be modeled in a number of different ways depending on the nature of the interaction. A fundamental property of many dynamical systems is that the effects of actions can occur with some delay. We discuss how delayed effects can be modeled in TAL using the mechanisms previously used for concurrent actions, and consider a range of possible interactions between the delayed effects of an action and later occurring actions. In order to model larger and more complex domains, a sound modeling methodology is essential. We demonstrate how many ideas from the object-oriented paradigm can be used when reasoning about action and change. These ideas are used both to construct a framework for high level control objects and to illustrate how complex domains can be modeled in an elaboration tolerant manner.

1999
[7] Silvia Coradeschi. 1999.
Anchoring symbols to sensory data.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #611. Linköpings universitet. 136 pages. ISBN: 91-7219-623-8.

Intelligent agents embedded in physical environments need the ability to connect, or <em>anchor</em>, the symbols used to perform abstract reasoning to the physical entities which these symbols refer to. Anchoring must deal with indexical and objective references, definite and indefinite identifiers, and the temporary impossibility of perceiving physical entities. Furthermore, it needs to rely on sensor data which is inherently affected by uncertainty, and to deal with ambiguities. In this thesis, we outline the concept of anchoring and its functionalities. Moreover, we define the general structure of an anchoring module and we present an implementation of the anchoring functionalities in two different domains: an autonomous airborne vehicle for traffic surveillance and a mobile ground vehicle performing navigation.
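
The anchoring functionality can be illustrated as matching a symbolic description against candidate percepts and keeping the association with the best match; the attribute-overlap score and the example percepts are toy assumptions, not the anchoring framework defined in the thesis.

# Toy sketch of anchoring: associate a symbolic description with the percept
# that best matches it, if the match is good enough. Illustrative only.
def match_score(description, percept):
    shared = set(description) & set(percept)
    return sum(1.0 for k in shared if description[k] == percept[k]) / max(len(description), 1)

def anchor(symbol, description, percepts, threshold=0.5):
    best = max(percepts, key=lambda p: match_score(description, p), default=None)
    if best is None or match_score(description, best) < threshold:
        return None                       # nothing to anchor to right now
    return (symbol, best["id"])           # symbol <-> perceptual object id

description = {"colour": "red", "type": "car"}      # "the red car"
percepts = [{"id": 17, "colour": "red", "type": "car", "x": 4.2},
            {"id": 18, "colour": "blue", "type": "car", "x": 9.1}]
print(anchor("car-1", description, percepts))        # ('car-1', 17)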

1998
[6] Joakim Gustafsson. 1998.
Extending temporal action logic for ramification and concurrency.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #719. Univ.. 121 pages. ISBN: 9172192879.

[5] Full text  Marcus Bjäreland. 1998.
Two Aspects of Automating Logics of Action and Change: Regression and Tractability.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #674. Linköpings universitet. 83 pages. ISBN: 9172191732.
Note: Thesis No 674. LiU-Tek-Lic 1998:09

The autonomy of an artificial agent (e.g. a robot) will certainly depend on its ability to perform "intelligent" tasks, such as learning, planning, and reasoning about its own actions and their effects on the environment, for example predicting the consequences of its own behaviour. To be able to perform these (and many more) tasks, the agent will have to represent its knowledge about the world. The research field "Logics of Action and Change" is concerned with the modelling of agents and dynamical, changing environments with logics. In this thesis we study two aspects of automation of logics of action and change. The first aspect, regression, is used to "reason backwards", i.e. to start with the last time point in a description of a course of events and move backwards through these events, taking the effects of all actions into consideration. We discuss the consequences for regression of introducing nondeterministic actions, and provide the logic PMON with pre- and postdiction procedures. We employ the classical computer science tool, the weakest liberal precondition operator (wlp), for this, and show that logical entailment in PMON is equivalent to wlp computations. The second aspect is the computational complexity of logics of action and change, which has been virtually neglected by the research community. We present a new and expressive logic, capable of expressing continuous time, nondeterministic actions, concurrency, and memory of actions. We show that satisfiability of a theory in this logic is NP-complete. Furthermore, we identify a tractable subset of the logic, and provide a sound, complete, and polynomial algorithm for satisfiability of the subset.
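
Regression-style "reasoning backwards" can be illustrated on simple STRIPS-like action descriptions: regressing a goal through an action yields the condition that must have held before it. This propositional sketch is only an analogue of the wlp-based PMON procedures.

# Simplified propositional regression through STRIPS-like actions: compute the
# condition that must hold before an action for a goal to hold after it.
def regress(goal, action):
    pre, add, dele = action
    if goal & dele:
        return None                       # the action destroys part of the goal
    return (goal - add) | pre

unload = ({"at_B", "loaded"}, {"delivered"}, {"loaded"})   # (pre, add, delete)
drive = ({"at_A"}, {"at_B"}, {"at_A"})

g = {"delivered"}
g = regress(g, unload)
print(g)                                  # at_B and loaded must hold before unload
g = regress(g, drive)
print(g)                                  # at_A and loaded must hold before drive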

1997
[4] Silvia Coradeschi. 1997.
A decision-mechanism for reactive and coordinated agents.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #615. Univ.. 87 pages. ISBN: 9178719410.

In this thesis we present preliminary results in the development of LIBRA (LInköping Behavior Representation for Agents). LIBRA is a rule-based system for specifying the behavior of automated agents that combine reactivity to an uncertain and rapidly changing environment with coordination. A central requirement is that the behavior specification should be made by users who are not computer and AI specialists. Two application domains are considered: air-combat simulation and a simulated soccer domain from the perspective of the RoboCup competition. The behavior of an agent is specified in terms of prioritized production rules organized in a decision tree. Coordinated behaviors are encoded in the decision trees of the individual agents. The agents initiate tactics depending on the situation and recognize the tactics that the other team members have initiated in order to take their part in them. What links individual behavior descriptions together are explicit communication and common means to describe the situation the agents find themselves in.
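
The prioritized-rule mechanism can be sketched as selecting the first applicable rule in priority order; the conditions and actions below are invented air-combat-flavoured examples, not the LIBRA rule language.

# Illustrative skeleton of prioritized production rules for agent behaviour:
# the first rule whose condition holds in the current situation fires.
rules = [   # highest priority first: condition -> action
    (lambda s: s["threat_close"],        "evade"),
    (lambda s: s["teammate_needs_help"], "support_teammate"),
    (lambda s: s["target_visible"],      "engage_target"),
    (lambda s: True,                     "patrol"),        # default behaviour
]

def decide(situation):
    for condition, action in rules:
        if condition(situation):
            return action

situation = {"threat_close": False, "teammate_needs_help": True,
             "target_visible": True}
print(decide(situation))    # 'support_teammate'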

1991
[3] Patrick Doherty. 1991.
NML3: a non-monotonic formalism with explicit defaults.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #258. Linköpings tekniska högskola. 290 pages. ISBN: 91-7870-816-8.

The thesis is a study of a particular approach to defeasible reasoning based on the notion of an information state consisting of a set of partial interpretations constrained by an information ordering. The formalism proposed, called NML3, is a non-monotonic logic with explicit defaults and is characterized by the following features: (1) The use of the strong Kleene three-valued logic as a basis. (2) The addition of an explicit default operator which enables distinguishing tentative conclusions from ordinary conclusions in the object language. (3) The use of the technique of preferential entailment to generate non-monotonic behavior. The central feature of the formalism, the use of an explicit default operator with a model theoretic semantics based on the notion of a partial interpretation, distinguishes NML3 from the existing formalisms. By capitalizing on the distinction between tentative and ordinary conclusions, NML3 provides increased expressibility in comparison to many of the standard non-monotonic formalisms and greater flexibility in the representation of subtle aspects of default reasoning. In addition to NML3, a novel extension of the tableau-based proof technique is presented where a signed formula is tagged with a set of truth values rather than a single truth value. This is useful if the tableau-based proof technique is to be generalized to apply to the class of multi-valued logics. A refutation proof procedure may then be used to check logical consequence for the base logic used in NML3 and to provide a decision procedure for the propositional case of NML3. A survey of a number of non-standard logics used in knowledge representation is also provided. Various formalisms are analyzed in terms of persistence properties of formulas and their use of information structures.

1990
[2] Patrick Doherty. 1990.
A three-valued approach to non-monotonic reasoning.
Licentiate Thesis. In series: Linköping Studies in Science and Technology. Thesis #230. Linköping University. 117 pages. ISBN: 9178706726.

The subject of this thesis is the formalization of a type of non-monotonic reasoning using a three-valued logic based on the strong definitions of Kleene. Non-monotonic reasoning is the rule rather than the exception when agents, human or machine, must act where information about the environment is uncertain or incomplete. Information about the environment is subject to change due to external causes, or may simply become outdated. This implies that inferences previously made may no longer hold and in turn must be retracted, along with the revision of other information dependent on the retractions. This is the variety of reasoning we would like to find formal models for. We start by extending Kleene's three-valued logic with an "external negation" connective, where ~a is true when a is false or unknown. In addition, a default operator D is added, where D a is interpreted as "a is true by default". The addition of the default operator increases the expressivity of the language, where statements such as "a is not a default" are directly representable. The logic has an intuitive model-theoretic semantics without any appeal to the use of a fixpoint semantics for the default operator. The semantics is based on the notion of preferential entailment, where a set of sentences G preferentially entails a sentence a if and only if a preferred set of the models of G are models of a. We also show that one version of the logic belongs to the class of cumulative non-monotonic formalisms which are a subject of current interest. A decision procedure for the propositional case, based on the semantic tableaux proof method, is described and serves as a basis for a QA-system where it can be determined if a sentence a is preferentially entailed by a set of premises G. The procedure is implemented.
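
The strong Kleene connectives and the external negation ~ described above are truth-functional and easy to tabulate; the sketch below does so. The default operator D is deliberately omitted, since its semantics is preferential and model-theoretic rather than truth-functional.

# Strong Kleene three-valued connectives plus the "external negation" ~a,
# which is true exactly when a is false or unknown (as described above).
T, F, U = "t", "f", "u"

def neg(a):                         # strong Kleene negation
    return {T: F, F: T, U: U}[a]

def conj(a, b):                     # strong Kleene conjunction
    if a == F or b == F:
        return F
    if a == T and b == T:
        return T
    return U

def ext_neg(a):                     # ~a: true iff a is false or unknown
    return T if a in (F, U) else F

for v in (T, F, U):
    print(v, neg(v), ext_neg(v))
print(conj(T, U), conj(F, U))        # u f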

1977
[1] Anders Haraldsson. 1977.
A program manipulation system based on partial evaluation.
PhD Thesis. In series: Linköping Studies in Science and Technology. Dissertations #14. Linköpings universitet. 264 pages. ISBN: 9173721441.
preview image: http://liu.diva-portal.org/smash/get/div...
cover: http://liu.diva-portal.org/smash/get/div...

Program manipulation is the task of performing transformations on program code, normally done in order to optimize the code with respect to the utilization of some computer resource. Partial evaluation is the task of performing parts of a program's computations before the program is actually executed. If a parameter to a procedure is constant, a specialized version of that procedure can be generated by inserting the constant in place of the parameter in the procedure body and performing as many of the computations in the code as possible. A system is described which works on programs written in INTERLISP, and which performs partial evaluation together with other transformations such as beta-expansion and certain other optimization operations. The system works on full LISP and not only on a "pure" LISP dialect, and deals with problems occurring there involving side-effects, variable assignments etc. An analysis of a previous system, REDFUN, results in a list of problems, desired extensions and new features. This is used as a basis for a new design, resulting in a new implementation, REDFUN-2. This implementation, design considerations, constraints in the system, remaining problems, and other experience from the development and experiments with the system are reported in this paper.
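
The idea of specializing a procedure with respect to a constant argument can be illustrated with the classic power-function example; this toy sketch generates a residual Python function, whereas REDFUN-2 performs such transformations on INTERLISP programs.

# Classic partial-evaluation illustration: specialize power(x, n) for a known
# constant exponent by unfolding the recursion at specialization time.
def specialize_power(n):
    """Return source code for a power function specialized to exponent n."""
    expr = "1.0"
    for _ in range(n):
        expr = f"x * {expr}"        # unfold one multiplication per exponent step
    return f"def power_{n}(x):\n    return {expr}\n"

code = specialize_power(3)
print(code)                          # the residual program specialized for n = 3
namespace = {}
exec(code, namespace)                # compile the residual (specialized) program
print(namespace["power_3"](2.0))     # 8.0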

