Abstracts of Ph.D. theses published since 1983
Linköping Studies in Science and Technology
No 14
A PROGRAM MANIPULATION SYSTEM BASED ON PARTIAL EVALUATION
Anders Haraldsson
Program manipulation is the task of performing transformations on program code, and is normally done in order to optimize the code with respect to the utilization of some computer resource. Partial evaluation is the task of performing, ahead of actual execution, those computations in a program that can already be carried out. If a parameter to a procedure is constant, a specialized version of that procedure can be generated by inserting the constant in place of the parameter in the procedure body and performing as many computations in the code as possible. A system is described which works on programs written in INTERLISP, and which performs partial evaluation together with other transformations such as beta-expansion and certain other optimization operations. The system works on full LISP, not only on a "pure" LISP dialect, and deals with the problems occurring there involving side-effects, variable assignments etc. An analysis of a previous system, REDFUN, results in a list of problems, desired extensions and new features. This is used as a basis for a new design, resulting in a new implementation, REDFUN-2. This implementation, design considerations, constraints in the system, remaining problems, and other experience from the development of and experiments with the system are reported in this paper.
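As a rough illustration of the idea (a minimal Python sketch, not the REDFUN-2 system, which works on INTERLISP): a procedure is specialized on a constant parameter by performing, at specialization time, every computation that depends only on that parameter.

# Hypothetical illustration (not REDFUN-2): specializing a procedure on a
# constant argument by unfolding all computations that depend only on it.

def power(x, n):
    """General procedure: x raised to the non-negative integer n."""
    result = 1
    for _ in range(n):
        result = result * x
    return result

def specialize_power(n):
    """Partial evaluator for 'power' with n known at specialization time.

    The loop over n is executed now; only the multiplications by the
    still-unknown x remain in the residual program, built here as source text.
    """
    body = "1"
    for _ in range(n):
        body = f"({body} * x)"
    src = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)            # compile the residual (specialized) procedure
    return namespace[f"power_{n}"], src

if __name__ == "__main__":
    power_3, src = specialize_power(3)
    print(src)                      # def power_3(x): return (((1 * x) * x) * x)
    print(power_3(5), power(5, 3))  # 125 125

The residual procedure contains no loop and no reference to n, which is, in spirit, the kind of specialization described above for LISP procedures.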
No 17
PROBABILITY BASED VERIFICATION OF TIME MARGINS IN DIGITAL DESIGNS
Bengt Magnhagen
Switching time characteristics for digital elements specify ranges in which
the elements can respond and when inputs are permitted to change for predictable
output performance. Whether the switching time requirements are met or not is
verified by calculating the probability of conflicts between transitions
propagated in the digital network. To accurately perform this verification it is
necessary to handle the influence of time variable distribution, time
correlations within a chip, reconvergent fanout, loading factors, operating
temperature, etc.
The project included both the construction of a simulation system employing the above principles and experience with its use in constructed as well as actual design examples. Experience showed that assuming normally distributed time variables gives simulation results in better agreement with physical observations than results from design verification systems based on other principles. Application of the system to real design problems has shown that
such a system can be a valuable design tool.
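A minimal sketch of the kind of calculation involved, assuming normally distributed delays as the abstract suggests (illustrative only; this is not the verification system described, and all figures are invented):

# Hypothetical sketch: estimating the probability that a data transition
# violates a setup-time requirement, with normally distributed delays.
import random

def violation_probability(clock_mean, clock_sd, data_mean, data_sd,
                          setup_time, trials=100_000):
    """Monte Carlo estimate of P(data arrives later than clock edge - setup)."""
    violations = 0
    for _ in range(trials):
        clock_edge = random.gauss(clock_mean, clock_sd)
        data_edge = random.gauss(data_mean, data_sd)
        if data_edge > clock_edge - setup_time:
            violations += 1
    return violations / trials

if __name__ == "__main__":
    # Delays in nanoseconds; all figures are illustrative only.
    print(violation_probability(clock_mean=10.0, clock_sd=0.3,
                                data_mean=8.5, data_sd=0.5, setup_time=1.0))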
No 18
SEMANTISK ANALYS AV PROCESSBESKRIVNINGAR I NATURLIGT SPRÅK (Semantic Analysis of Process Descriptions in Natural Language)
Mats Cedvall
The purpose of the project described in this report is to study control structures in Natural Swedish, especially those occurring when tasks of an algorithmic nature are described, and how to transform these specifications into programs, which can then be executed.
The report describes and discusses the solutions that are used in an implemented
system which can read and comprehend descriptions of patience (solitaire) games.
The results are partly language dependent, but are not restricted to this
specific problem environment.
The system is divided into four modules. The syntactic module splits the sentence roughly into its component parts. In addition to the standard component categories, such as subject and predicate, every preposition is regarded as a component category of the sentence. The semantic analysis within a sentence works with a set of internalisation rules, one for each combination of
a verb and a component part. The third module deals with the semantics on text
level and integrates the representation of a sentence into the program code that
is built up. The last module is an interpreter which can execute the programs
representing patience games.
No 22
A MACHINE INDEPENDENT LISP COMPILER AND ITS IMPLICATIONS FOR IDEAL HARDWARE
Jaak Urmi
A LISP compiler is constructed without any a priori assumptions about the
target machine. In parallel with the compiler a LISP oriented instruction set is
developed. The instruction set can be seen either as an intermediary language for a traditional computer or as the instruction set for a special-purpose LISP machine. The code produced by the compiler is evaluated with regard to its static and dynamic properties. Finally, some architectural aspects of LISP
oriented hardware are discussed. The notion of segments with different word
lengths, under program control, is developed and a proposed implementation of
this is described.
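To fix ideas, here is a hypothetical sketch (in Python, not the thesis compiler or its instruction set) of compiling LISP-style expressions into code for a small stack machine, illustrating what a LISP-oriented intermediate instruction set might look like:

# Hypothetical sketch only: a toy compiler from nested arithmetic expressions
# to a small stack-machine code, plus an interpreter for that code.

def compile_expr(expr):
    """Compile a nested tuple such as ('+', 1, ('*', 2, 3)) to stack code."""
    if isinstance(expr, (int, float)):
        return [("PUSH", expr)]
    op, left, right = expr
    return compile_expr(left) + compile_expr(right) + [(op, None)]

def run(code):
    """Interpret the stack code; the top of the stack holds the result."""
    stack = []
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for instr, arg in code:
        if instr == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[instr](a, b))
    return stack[-1]

if __name__ == "__main__":
    code = compile_expr(("+", 1, ("*", 2, 3)))
    print(code)       # [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('*', None), ('+', None)]
    print(run(code))  # 7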
No 33
COMPILATION OF MULTIPLE FILE QUERIES IN A META-DATABASE SYSTEM
Tore Risch
A meta-database system is constructed for describing the contents of very
large databases. The meta-database is implemented as data structures in a symbol
manipulation language, separate from the underlying database system. A number of
programs are built around the meta-database. The most important program module
is a query compiler, which translates a non-procedural query language called LRL
into a lower level language (COBOL). LRL permits the specification of database
retrievals without stating which files are to be used in the search, or how they
shall be connected. This is decided automatically by the query compiler. A major
feature of the system is a method, the Focus method, for compile-time
optimization of these choices. Other facilities include the definition of
"views" of the database; data directory services; authority codes; and
meta-database entry and update.
Design issues discussed include the decision to compile rather than interpret
non-procedural query languages; the decision to separate the meta-database from
the underlying database system; and the problem of achieving an architecture
convertible to any underlying database system. Experience with one such
conversion is reported.
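As a loose illustration of automatic file selection (a plain greedy sketch only, not the Focus method; all file and field names are invented):

# Illustrative sketch: a greedy compile-time choice of which files could
# answer a query, in the spirit of deciding file usage automatically.

FILES = {
    "EMP":  {"emp_no", "name", "dept_no", "salary"},
    "DEPT": {"dept_no", "dept_name", "location"},
    "PROJ": {"proj_no", "proj_name", "dept_no"},
}

def choose_files(requested_fields):
    """Greedily pick files until all requested fields are covered."""
    uncovered = set(requested_fields)
    chosen = []
    while uncovered:
        best = max(FILES, key=lambda f: len(FILES[f] & uncovered))
        if not FILES[best] & uncovered:
            raise ValueError(f"fields {uncovered} not available in any file")
        chosen.append(best)
        uncovered -= FILES[best]
    return chosen

if __name__ == "__main__":
    # A non-procedural query asks for fields without naming files:
    print(choose_files({"name", "dept_name"}))   # ['EMP', 'DEPT']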
No 51
SYNTHESIZING DATABASE STRUCTURES FROM A USER ORIENTED DATA MODEL
Erland Jungert
In data processing, a form (document) is normally a medium for data entry. Forms are mainly user oriented; however, their use can also be motivated for other reasons. In this study, different properties of forms and motives for their use are discussed.
It will be demonstrated how user defined form types can be used to generate
definitions for consistent and non-redundant databases and also how form types
can be interfaced to such databases and provide a query language. It will also
be shown that form types can constitute application programs written in a
special purpose language. An important feature here is that form types can be
used as input to a process for automatic program generation. Also discussed are the extraction of efficient access paths from the user-defined form types and the architecture of a form-oriented system which makes full use of the properties mentioned.
No 54
CONTRIBUTIONS TO THE DEVELOPMENT OF METHODS AND TOOLS FOR INTERACTIVE DESIGN
OF APPLICATIONS SOFTWARE
Sture Hägglund
This thesis consists of five research reports dealing with different aspects of
the design of interactive application oriented software. A generalized framework
for dialogue design is presented, and the implementation of customized programming environments supporting computer utilization for specific applications is discussed. Highlights of our presentation are:
- Uniform treatment of different kinds of end-user dialogues, especially with respect to irregular or unexpected terminal inputs
- Emphasis on programming environments instead of language design, promoting the view of programming as a specification process performed with a data editor.
- Introduction of an intermediate system level, where a general-purpose programming system is specialized for a given class of applications, through the support of specialized conceptual frameworks and default mechanisms.
- Promotion of control independence in the sense that the specification of program execution particulars, such as end-user interactions etc., is postponed as long as possible and liable to subsequent change without reprogramming.
No 55
PERFORMANCE ENHANCEMENT IN A WELL-STRUCTURED PATTERN MATCHER THROUGH PARTIAL
EVALUATION
Pär Emanuelson
Partial evaluation is a technique which can be utilized for the generation
of compiled code from the corresponding interpreter. In this work the partial
evaluation technique is applied to a pattern match interpreter, in order to
achieve the simultaneous goals of a general, well-structured program which is extendible and which still makes high performance at execution possible. A formal
definition of pattern matching is presented, which is the basis for the
interpreter. The partial evaluation technique is evaluated with respect to other
techniques for implementation of pattern matchers. Control structures for
pattern matching such as backtracking, generators, and recursion are presented,
and the appropriateness of these for use in partial evaluation is discussed.
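A minimal sketch of the underlying idea, assuming a toy pattern language of literals and single-token wildcards (not the thesis's pattern matcher):

# Hedged sketch: a tiny pattern-match interpreter and a partial evaluator that
# specializes it for a fixed pattern, removing the interpretive overhead.

def match(pattern, tokens):
    """General interpreter: '?' matches any single token, literals match themselves."""
    if len(pattern) != len(tokens):
        return False
    return all(p == "?" or p == t for p, t in zip(pattern, tokens))

def specialize(pattern):
    """Partially evaluate 'match' with the pattern known: emit residual code."""
    checks = [f"len(tokens) == {len(pattern)}"]
    checks += [f"tokens[{i}] == {p!r}" for i, p in enumerate(pattern) if p != "?"]
    src = "def matcher(tokens):\n    return " + " and ".join(checks) + "\n"
    namespace = {}
    exec(src, namespace)
    return namespace["matcher"], src

if __name__ == "__main__":
    matcher, src = specialize(["move", "?", "to", "?"])
    print(src)                                     # the residual, pattern-specific code
    print(matcher(["move", "ace", "to", "pile"]))  # True
    print(match(["move", "?", "to", "?"], ["move", "ace", "to", "pile"]))  # True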
No 77
MECHANISMS OF MODIFIABILITY IN LARGE SOFTWARE SYSTEMS
Östen Oskarsson
Large software systems are often characterized by a continuing evolution,
where a large number of people are involved in maintaining and extending the
system. Software modifiability is a critical issue in such system evolution. It
is desirable that the basic design is modifiable, and that subsequent evolution
maintains this modifiability. This thesis is an investigation of the mechanisms
behind the exhibited modifiability and lack of modifiability in a large
commercial software system during a part of its evolution.
First, the relation between modifiability and different types of modularization is discussed, and a dichotomy of software modularizations is proposed. As a measure of modifiability at the system level, i.e. disregarding the
internal modifiability of modules, we use the number of modules which are
influenced by the implementation of a certain system change. The implementation
of each requirement in one release of the system is examined, and the underlying
causes of good and bad modifiability are explained. This results in a list of factors which were found to influence modifiability.
No 94
CODE GENERATOR WRITING SYSTEMS
Hans Lunell
Abstract: This work studies Code Generator Writing Systems (CGWS), that is,
software systems aimed at facilitating or automating the development of software
for the synthesizing part of the compilation process. The development of such
systems is reviewed and analyzed.
Part I lays a basis for the review. General models of compilation and compiler
development are presented, as well as of Software Writing Systems, a companion
concept to CGWSs. Furthermore, a number of code generation issues are discussed
in some detail.
Part II contains the review, system by system. A decade of development is
presented and analyzed, with an almost complete coverage of previous systems,
including the systems and designs of Elson and Rake, Wilcox, Miller, Donegan,
Wasilew, Weingart, Fraser, Newcomer, Cattell, Glanville and Ganapathi.
Part III is organized thematically, focusing on different aspects of the
reviewed systems. For each aspect, common and different traditions, ideals and
solutions are brought to light.
The epilogue, finally, indicates questions and areas which deserve further
research. It is found that few conclusive results exist, and that most of the previous work has centred on a minor part of code synthesis, characterized in this thesis as the aspect of code selection. Largely ignored are register handling, storage handling, code formatting and implementation decision support.
Throughout this work principal methodological issues involved in surveying and
referring to such varied work are raised and discussed. In particular, we
discuss the appropriateness of adopting certain techniques usually found in the
humanities.
No 97
ADVANCES IN MINIMUM WEIGHT TRIANGULATION
Andrzej Lingas
Abstract: A triangulation of a planar point set is a maximal set of
non-intersecting straight-line segments between points in this set. Any
triangulation of a planar point set partitions the convex hull of the set into
triangles. A minimum weight triangulation (MWT) is a triangulation achieving the smallest possible total edge length. The problem of finding an MWT was raised several years ago by an application in interpolating values of two-argument functions. It remains one of the most intriguing specific problems of
unknown status belonging to the intersection of Theoretical Computer Science and
Discrete Mathematics. One could neither prove its decision version to be
NP-complete nor offer a heuristic producing a solution within a non-trivial
factor of the optimum.
We propose a novel heuristic for MWT running in cubic time. Its idea is
simple. First we find the convex hull of the input point set. Next, we construct
a specific planar forest connecting the convex hull with the remaining points in
the input set. The convex hull plus the forest result in a simply-connected
polygon. By dynamic programming we find a minimum weight triangulation of the
polygon. The union of the polygon triangulation with the polygon yields a
triangulation of the input set. In consequence, we are able to derive the first,
non-trivial upper bound on the worst case performance (i.e. factor) of a
polynomial-time heuristic for MWT. Moreover, under the assumption of uniform point distribution we prove that the novel heuristic, as well as the known Delaunay and greedy heuristics for MWT, almost certainly yields solutions within a logarithmic factor of the optimum.
We also prove the NP-completeness of Minimum Weight Geometric Triangulation for multi-connected polygons. Moreover, we give evidence that Minimum Weight Geometric Triangulation for planar point sets, which is more closely related to MWT, is NP-complete.
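The dynamic-programming step can be illustrated for the simplified case of a convex polygon, where every diagonal is internal (a sketch only; the thesis applies dynamic programming to the simply-connected polygon formed by the hull plus the forest):

# Hedged illustration: minimum weight triangulation of a CONVEX polygon by
# dynamic programming over sub-polygons (cubic time: O(n^2) subproblems, O(n)
# choices each).
from math import dist
from functools import lru_cache

def mwt_convex(points):
    """Total edge length of a minimum weight triangulation of a convex polygon.

    points: vertices in convex position, listed in order around the polygon.
    """
    n = len(points)

    @lru_cache(maxsize=None)
    def diagonals(i, j):
        """Minimum diagonal length needed inside the sub-polygon i..j."""
        if j - i <= 1:
            return 0.0
        best = float("inf")
        for k in range(i + 1, j):
            cost = diagonals(i, k) + diagonals(k, j)
            if k > i + 1:               # (i, k) is a diagonal, not a boundary edge
                cost += dist(points[i], points[k])
            if k < j - 1:               # (k, j) is a diagonal, not a boundary edge
                cost += dist(points[k], points[j])
            best = min(best, cost)
        return best

    perimeter = sum(dist(points[i], points[(i + 1) % n]) for i in range(n))
    return perimeter + diagonals(0, n - 1)

if __name__ == "__main__":
    unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(mwt_convex(unit_square))   # 4 + sqrt(2) ~= 5.414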
No 109
TOWARDS A DISTRIBUTED PROGRAMMING ENVIRONMENT BASED ON INCREMENTAL COMPILATION
Peter Fritzson
Abstract: A Programming Environment is a system that provides computer
assistance during software development and maintenance. The primary objective of
this work concerns practically usable methods and tools in the construction of
incremental and integrated programming environments that provide good support
for debugging and testing of programs in a distributed context, in our case a
host-target configuration. Such a system, called DICE - Distributed Incremental
Compiling Environment, has been constructed and currently supports development
of PASCAL programs. Three of the papers in this volume are concerned with this
topic.
It is demonstrated how powerful symbolic debuggers may be implemented with the
aid of an incremental compiler. Methods for statement-level incremental
compilation are described. Strategies suitable for implementing programming
environments are discussed and exemplified by the DICE system. Some preliminary
experience from the use of the prototype version of the DICE system is given.
The concept of Consistent Incremental Compilation is defined, both informally
and by algebraic methods. A semi-formal description of the architecture of the
DICE system is presented. Many aspects of this system description are relevant
for a large class of programming environments of this kind. Problems that arise
from allowing mixed execution and program editing are also considered.
One of the tools in a programming environment is the prettyprinter. The topic
of the fourth paper is the automatic generation of prettyprinters. A
language-independent algorithm for adaptive prettyprinting is described together
with its application to ADA and PASCAL. Problems associated with the
representation and prettyprinting of comments in abstract syntax trees are
discussed together with some solutions.
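The core idea of adaptive prettyprinting can be sketched as follows (a minimal illustration, not the DICE prettyprinter): a subtree is printed on one line if it fits within the right margin, and is otherwise broken across indented lines.

# Hedged sketch of adaptive prettyprinting over an abstract syntax tree.

def pretty(node, indent=0, margin=40):
    """node is either a string (leaf) or a (head, [children]) tuple."""
    if isinstance(node, str):
        return " " * indent + node
    flat = flatten(node)
    if indent + len(flat) <= margin:        # fits: keep the subtree on one line
        return " " * indent + flat
    head, children = node                   # does not fit: break and indent
    lines = [" " * indent + head + "("]
    lines += [pretty(child, indent + 2, margin) for child in children]
    lines.append(" " * indent + ")")
    return "\n".join(lines)

def flatten(node):
    if isinstance(node, str):
        return node
    head, children = node
    return head + "(" + ", ".join(flatten(c) for c in children) + ")"

if __name__ == "__main__":
    tree = ("while", [("less", ["i", "n"]),
                      ("assign", ["sum", ("plus", ["sum", ("index", ["a", "i"])])])])
    print(pretty(tree, margin=30))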
No 111
THE DESIGN OF EXPERT PLANNING SYSTEMS. AN EXPERIMENTAL OPERATIONS PLANNING
SYSTEM FOR TURNING
Erik Tengvald
Abstract: The goal of this thesis work is to gather experience from a
practical application of AI technology. More precisely, we are interested in how to build expert planning systems, i.e. systems that make plans requiring expertise.
The practical application we have chosen is operations planning for turning, the
making of plans for the machining of blanks into parts. The thesis can be seen
as composed of three main parts.
The first part is introductory. In this we describe our practical domain and
also give a short description of the methods which we initially believed to be
of interest for building expert planning systems.
The second part contains a description of our experimental work. This part
contains a sketch of an operations planning system for turning. The operations
planning task does indeed require a fair amount of expertise and common sense,
and because of resource limitations we have not been able to make a complete
operations planning system.
The third part, finally, contains our experiences. The main result is a major reorientation of our viewpoint, from an interest in the structure of expert systems towards the structure of the expert-system building process. In this section we list what we consider to be the main methods for facilitating expert system building, namely: the designer's ability to avoid introducing unnecessary control knowledge by the judicious use of search, the designer's ability to take over ready-made concept structures from the domain experts, and the designer's ability to use the system-ware as an intellectual tool. This part also contains notes on the practical use of our experiences and some guidelines for further work.
No 155
HEURISTICS FOR MINIMUM DECOMPOSITIONS OF POLYGONS
Christos Levcopoulos
Abstract: The following problems of minimally decomposing polygons are
considered: (1) decompose a polygon into a minimum number of rectangles, (2)
partition a polygon into rectangles by inserting edges of minimum total length
and (3) partition a polygon into triangles by inserting a maximal set of non-intersecting diagonals such that their total length is minimized.
The first problem has an application in fabricating masks for integrated
circuits. Tight upper and lower bounds are shown for the maximal number of
rectangles which may be required to cover any polygon. Also, a fast heuristic
which achieves these upper bounds is presented.
The second problem has an application in VLSI design, in dividing routing
regions into channels. Several heuristics are proposed, which produce solutions
within moderate constant factors from the optimum. Also, by employing an unusual
divide-and-conquer method, the time performance of a known heuristic is
substantially reduced.
The third problem has an application in numerical analysis and in constructing
optimal search trees. Here, the contribution of the thesis concerns analysis of
the so called greedy triangulation. Previous upper and lower bounds on the
length of the greedy triangulation are improved. Also, a linear-time algorithm
computing greedy triangulations for an interesting class of polygons is
presented.
No 165
A THEORY AND SYSTEM FOR NON-MONOTONIC REASONING
James W. Goodwin
Abstract: Logical Process Theory (LPT) is a formal theory of non-monotonic
reasoning, inspired by dependency nets and reason maintenance. Whereas logics
focus on fixpoints of a complete inference rule, LPT focuses on fixpoints of
finite subsets of the inference rule, representing the set of inferences made so
far by the reasoning process in a given state. LPT is thus both a logical
meta-theory and a process meta-theory of non-monotonic reasoning.
WATSON is an implementation of a non-monotonic LPT. DIAGNOSE is a simple diagnostic reasoner written in WATSON. They show that LPT is implementable and adequate for some non-monotonic reasoning. A new algorithm for non-monotonic
reason maintenance and proofs of its total correctness and complexity are given.
Part II investigates "reasoned control of reasoning": the reasoner reasons
about its own state, and decides what to reason about next. LPT and WATSON are
extended to support control assertions which determine which formulas are
"active", i.e. eligible to be forward chained. A map-coloring example
demonstrates reasoned control.
No 170
A FORMAL METHODOLOGY FOR AUTOMATED SYNTHESIS OF VLSI SYSTEMS
Zebo Peng
Abstract: Automated synthesis of VLSI systems deals with the problem of
automatically transforming a VLSI system from an abstract specification into a
detailed implementation. This dissertation describes a formal design methodology
and an integrated set of automatic as well as computer aided design tools for
the synthesis problem.
Four major tasks of the synthesis process have been treated in this research:
first the automatic transformation of a high level behavioral description which
specifies only what the system should be able to do into a structural
description which specifies the physical components and their connections;
second the partitioning of a structural description into a set of modules so
that each module can be implemented independently and operated asynchronously;
third the optimization of the system implementation in terms of cost and
performance; finally, the automatic generation of microprograms to implement the
control structures of VLSI circuits.
To address these four synthesis problems, a formal design representation
model, the extended timed Petri net (ETPN), has been developed. This design
representation consists of separate but related models of control and data path.
It can be used to capture the structures and behaviors of VLSI systems as well as the intermediate results of the synthesis process. As such, the
synthesis tasks can be carried out by a sequence of small step transformations.
The selection of these transformations is guided by an optimization algorithm
which makes design decisions concerning operation scheduling, data path
allocation, and control allocation simultaneously. This integrated approach results in a better chance of reaching the globally optimal solution.
The use of such a formal representation model also leads to the efficient use
of CAD and automatic tools in the synthesis process and the possibility of
verifying some aspects of a design before it is completed. An integrated design
environment, the CAMAD design aid system, has been implemented based on the ETPN
model. Several examples have been run on CAMAD to test the performance of the
synthesis algorithms. Experimental results show that CAMAD can efficiently
generate designs of VLSI systems for a wide class of applications from
microprocessor based architectures to special hardware.
No 174
A PARADIGM AND SYSTEM FOR DESIGN OF DISTRIBUTED SYSTEMS
Johan Fagerström
Abstract: Design and implementation of software for distributed systems
involve many difficult tasks. Designers and programmers not only face
traditional problems, they must also deal with new issues such as
non-deterministic language constructs, and complex timing of events.
This thesis presents a conceptual framework for design of distributed
applications. It will help the designer to cope with complexities introduced by
the nature of distribution. The framework consists of a set of structuring
rules. Resulting designs are modular and hierarchical.
Also presented is a programming environment that takes advantage of structures
introduced by the framework. In particular, we show how the structures can be
used for improving testing methodologies for distributed applications.
No 192
TOWARDS A MANY-VALUED LOGIC OF QUANTIFIED BELIEF
Dimiter Driankov
Abstract: We consider a logic whose "truth-values" are represented as quantified belief/disbelief pairs, thus integrating reports on how strongly the truth of a proposition is believed and how strongly it is disbelieved. In this context, a major motive for the proposed logic is that it should not lead (as classical logic does) to irrelevant conclusions when contradictory beliefs are encountered. The logical machinery is built around the notion of the so-called
logical lattice: a particular partial order on belief/disbelief pairs and fuzzy
set-theoretic operators representing meet and join. A set of principles
(semantically valid and complete) to be used in making inferences is proposed,
and it is shown that they are a many-valued variant of the tautological
entailment of relevance logic.
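For illustration only, the standard fuzzy (min/max) meet and join on belief/disbelief pairs look as follows; these are not necessarily the exact operators of the thesis's logical lattice:

# Loosely illustrative sketch of connectives on belief/disbelief pairs,
# using the usual fuzzy min/max operators (an assumption, not the thesis text).

def conj(p, q):
    """Believe a conjunction as much as the weaker conjunct, disbelieve as the stronger."""
    (b1, d1), (b2, d2) = p, q
    return (min(b1, b2), max(d1, d2))

def disj(p, q):
    (b1, d1), (b2, d2) = p, q
    return (max(b1, b2), min(d1, d2))

def neg(p):
    b, d = p
    return (d, b)   # swap belief and disbelief

if __name__ == "__main__":
    a = (0.8, 0.3)   # strongly believed, weakly disbelieved (partly contradictory evidence)
    b = (0.4, 0.1)
    print(conj(a, b))        # (0.4, 0.3)
    print(disj(a, neg(b)))   # (0.8, 0.3)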
To treat non-truth-functional aspects of knowledge we also introduce the notion of the information lattice together with particular meet and join
operators. These are used to provide answers to three fundamental questions: how
to represent knowledge about belief/disbelief in the constituents of a formula
when supplied with belief/disbelief about the formula as a whole; how to
determine the amount of belief/disbelief to be assigned to formulas in an
epistemic state (or a state of knowledge), that is, a collection of partial
interpretations, and finally, how to change the present belief/disbelief in the
truth of formulas, when provided with an input bringing in new belief/disbelief
in the truth of these formulas. The answer to all these questions is given by defining a formula as a mapping from one epistemic state to a new state. Such a mapping is constructed as the minimum mutilation of the given epistemic state which makes the formula believed true (or false) in the new one. The
entailment between formulas is also given the meaning of an input and its
properties are studied.
We also study if-then inference rules that are not pure tautological
entailment, but rather express the causal relationship between the beliefs held
with respect to the truth and falsity of the antecedent and the conclusion.
Detachment operators are proposed to be used in cases when: (i) it is firmly
believed that belief/disbelief in the validity of the conclusion follows from
belief and/or disbelief in the validity of the antecedent, and (ii) it is
believed, but only to a degree, that belief/disbelief in the validity of the
conclusion follows from belief/disbelief in the validity of the antecedent. It
is shown that the following four modes of inference are legitimated within the
setting of these rules: modus ponens, modus tollens, denial, and confirmation.
We also consider inference rules augmented with the so-called exception condition: if /A/ then /B/ unless /C/. The if-then part of the rule expresses
the major relationship between A and B, i.e., it is believed (up to a degree)
that belief and/or disbelief in B follows from belief and/or disbelief in A.
Then the unless part acts as a switch that transforms the belief/disbelief pair
of B from one expressing belief in its validity to one indicating disbelief in
the validity of B, whenever there is a meaningful enough belief in the exception
condition C.
We also give a meaning to the inference rules proposed as mappings from
epistemic states to epistemic states, thus using them as a tool for changing
already existing beliefs as well as for deriving new ones.
No 213
NON-MONOTONIC INHERITANCE FOR AN OBJECT ORIENTED KNOWLEDGE BASE
Lin Padgham
Abstract: This thesis is a collection of reports presenting an object-oriented
database which has been developed as the basis for an intelligent information
system, LINCKS, and a theory for default reasoning regarding the objects in the
database w.r.t. one or more type schemas, also to be available within the
database.
We describe NODE, the object-oriented knowledge repository developed, and
those characteristics which make it a suitable base for an intelligent system.
NODE uses a basic data structure suitable for building complex structured
objects. It also maintains history information regarding its objects, and
establishes a basis for developing the notion of parallel work on the same
complex object.
We introduce a model where object types are defined by two sets of
characteristics called the type core and the type default. The type core
contains those characteristics considered necessary for all members of the type,
while the type default describes the prototypical member of the type. We
introduce a default assumption operator E which allows us to assume that a given
object has as many of the typical characteristics as is reasonable to believe,
given the information we have. We describe and define the process of assumption
modification which eventually leads to a consistent set of assumptions. These
can then be used to draw conclusions. We also introduce the idea of a negative
default assumption, N, which allows us to draw extra negative conclusions. This
opens up the possibility for dealing with contraposition in a controlled manner.
We develop the notion of a revision function which establishes preferences for
some default assumptions over others. We define a basic revision function which,
amongst other things, prefers specificity. We then use that as a basis for
definition of further revision functions which give different styles of
inheritance reasoning.
Finally we give a detailed technical description of a possible implementation
of the theory described, using matrix representations. The algorithms are shown
to have approximately linear efficiency given a large number of types where most
of the relationships are of a default nature.
No 214
A FORMAL HARDWARE DESCRIPTION AND VERIFICATION METHOD
Tony Larsson
Abstract: Design of correctly working hardware systems involves the
description of functional, structural and temporal aspects at different levels
of abstraction and the verification of the requested equivalence between these
descriptions. This process is usually very time-consuming and its simplification
is a desirable aim.
To provide for this it is important that the description language and the
verification method can be used at as many abstraction levels as possible and
that previous results can be reused. Further, in order to support formal reasoning about hardware circuits and their correctness, it is preferable if the
description method is based on a well-founded formalism.
As one goal of this thesis we will illustrate that, by extending predicate logic with a temporal reference operator, it is possible to specify functional, temporal and structural properties of hardware circuits. The
temporal reference operator makes it possible to describe and reason about
relationships between the streams of values observable at the ports of a
hardware circuit. We specify the intended meaning of this temporal operator by a
set of axioms and by giving it an interpretation vis-a-vis a temporal model.
Based on these axioms it is possible to formally reason about temporal
relationships.
This leads to the major goal, i.e. to provide support for a further mechanized
verification of hardware based on transformation of boolean, arithmetic,
relational and temporal constructs expressed in the description language.
Important contributions of the thesis are methods for multi-level hardware
description and methods for mechanized verification including functional,
structural and temporal aspects that can be used as a complement to existing
theorem proving systems. A prototype implementation of the description language, based on the generalized (untyped) predicate logic presented, and an implementation of a verification system have been part of the research underlying this thesis.
No 221
FUNDAMENTALS AND LOGICAL FOUNDATIONS OF TRUTH MAINTENANCE
Michael Reinfrank
Abstract: Despite their importance in AI problem solving, nonmonotonic truth
maintenance systems (TMSs) still lack sufficiently well-understood logical
foundations. In this thesis, I present a rigorous logical theory of TMSs. I
pursue a two-step, bottom-up approach. First, I specify a direct, but
implementation-independent, theory of truth maintenance. This theory, then, is used to
- draw a connection between TMSs and Autoepistemic Logic, thus closing a gap between theory and implementation in Nonmonotonic Reasoning,
- provide a correctness proof for an encoding of nonmonotonic justifications in an essentially monotonic assumption-based TMS,
- design a uniform framework for truth maintenance and nonmonotonic inference based on the concept of justification-schemata,
- discuss a model theory of TMSs in terms of stable, maximally preferred model sets.
At the time of writing, no comprehensive introductory readings on truth
maintenance are available. Therefore, the present thesis begins with a set of
lecture notes which provide the necessary background information for the
subsequent formal treatment of foundational issues.
No 239
KNOWLEDGE-BASED DESIGN SUPPORT AND DISCOURSE MANAGEMENT IN USER INTERFACE
MANAGEMENT SYSTEMS
Jonas Löwgren
This dissertation is about User Interface Management Systems (UIMSs), and more
specifically about new ways to extend the scope and the functionality of these
systems.
I define a UIMS as an interactive tool or set of tools intended to facilitate
the design, development and delivery of user interfaces. The assumption
underlying the application of UIMS techniques to software development is that
the user interface can to some extent be separated from the underlying
functionality of the application. Current UIMS technology is, however, not
capable of coping with this separation in the case of conversational expert
systems. In the first part of the dissertation, I present a new UIMS
architecture, based on planning techniques and a representation of the beliefs
of the user and the system, and show by means of an example that dialogue
independence can be achieved for a task-oriented expert system by using this new
architecture.
The second part is concerned with support for the user of the UIMS---the user
interface designer. The approach I advocate is to enhance the design and
development environment with knowledge of user interface design, knowledge which
is used to generate comments on the user interface designer’s work. A prototype
expert critiquing system was built to test the feasibility of knowledge-based
evaluation of user interface designs. The results were encouraging and also
demonstrated that the level of user interface representation is crucial for the
quality of the evaluation. I propose an architecture where this kind of
knowledge-based support is integrated with a UIMS and argue that the
requirements on a high-level user interface representation can be relaxed if the
system also analyses data from empirical tests of the user interface prototype.
No 244
META-TOOL SUPPORT FOR KNOWLEDGE ACQUISITION
Henrik Eriksson
Knowledge acquisition is a major bottleneck in expert system development.
Specialized, or domain-oriented, knowledge acquisition tools can provide
efficient support in restricted domains. However, the principal drawback with
specialized knowledge acquisition tools is that they are domain-dependent. This
means that the cost of implementing, and thus applying, such tools is high.
Meta-level environments are an approach to supporting knowledge engineers in developing such knowledge acquisition tools. Meta-tools, i.e. tools for creating
knowledge acquisition tools, can be used to specify and automatically generate
knowledge acquisition tools for single domains and even single applications.
This thesis presents an abstract architecture approach to the specification of
knowledge acquisition tools. In this framework knowledge acquisition tools can
be specified according to an abstract model of the target tool architecture.
DOTS is a meta-tool that supports the abstract-architecture specification
scheme. Knowledge engineers can use DOTS to specify and generate domain-oriented
knowledge acquisition tools that can be used by domain experts directly.
Two implementations of knowledge acquisition tools for different domains are
presented in this thesis. These tools are representative of the kind of knowledge acquisition tools that one would like to generate from meta-tools. One of them
was hand-crafted and specialized to the domain of protein purification planning.
The other emerged from an evaluation of DOTS by developing a knowledge
acquisition tool in a different domain (troubleshooting laboratory equipment).
Results from this evaluation are also reported.
No 252
AN EPISTEMIC APPROACH TO INTERACTIVE DESIGN IN MULTIPLE INHERITANCE
HIERARCHIES
Peter Eklund
The thesis explores the advantages of a marriage between a “mixed dialogue” interaction metaphor and belief logics, and in particular how the two can be used for multiple inheritance hierarchy design. The result is a design aid which
produces critiques of multiple inheritance hierarchies in terms of their logical
consequences. The work draws on a number of theoretical issues in artificial
intelligence, namely belief logics and multiple inheritance reasoning, applying
“belief sets” to dialogue and using multiple inheritance hierarchy design as a
specific application.
The work identifies three design modes for the interface which reflect the
intuitions of multiple inheritance hierarchy design and conform to an existing
user modeling framework. A major survey of multiple inheritance hierarchies
leads to the allocation of a precise inheritance semantics for each of these
design modes. The semantics enable a definition of entailment in each, and are
in turn used to determine the translation from inheritance networks to belief
sets.
The formal properties of belief sets imply that when an ambiguous inheritance network is encountered, more than one belief set must be created. Each belief set provides an alternative interpretation of the logical consequences of the inheritance hierarchy. A “situations matrix” provides the basic referent
data structure for the system we describe. Detailed examples of multiple
inheritance construction demonstrate that a significant design aid results from
an explicit representation of operator beliefs and their internalization using
an epistemic logic.
No 258
NML3 - A NON-MONOTONIC FORMALISM WITH EXPLICIT DEFAULTS
Patrick Doherty
The thesis is a study of a particular approach to defeasible reasoning based
on the notion of an information state consisting of a set of partial
interpretations constrained by an information ordering. The formalism proposed, called NML3, is a non-monotonic logic with explicit defaults and is characterized by the following features: (1) The use of the strong Kleene three-valued logic as a basis. (2) The addition of an explicit default operator
which enables distinguishing tentative conclusions from ordinary conclusions in
the object language. (3) The use of the technique of preferential entailment to
generate non-monotonic behavior. The central feature of the formalism, the use
of an explicit default operator with a model theoretic semantics based on the
notion of a partial interpretation, distinguishes NML3 from the existing
formalisms. By capitalizing on the distinction between tentative and ordinary
conclusions, NML3 provides increased expressibility in comparison to many of the
standard non-monotonic formalisms and greater flexibility in the representation
of subtle aspects of default reasoning.
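For reference, the strong Kleene three-valued connectives named above as the basis of NML3 can be sketched as follows (only the standard connectives; the default operator and preferential entailment of NML3 are not modelled here):

# Minimal sketch of strong Kleene three-valued logic with values F < U < T.
T, U, F = 1.0, 0.5, 0.0     # true, undefined, false

def k_not(a):
    return 1.0 - a

def k_and(a, b):            # strong Kleene conjunction = minimum
    return min(a, b)

def k_or(a, b):             # strong Kleene disjunction = maximum
    return max(a, b)

def k_implies(a, b):        # material implication: not a, or b
    return k_or(k_not(a), b)

if __name__ == "__main__":
    # U propagates unless the other argument already decides the result:
    print(k_and(U, F))      # 0.0  (false)
    print(k_or(U, T))       # 1.0  (true)
    print(k_implies(U, U))  # 0.5  (undefined)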
In addition to NML3, a novel extension of the tableau-based proof technique is
presented where a signed formula is tagged with a set of truth values rather
than a single truth value. This is useful if the tableau-based proof technique
is to be generalized to apply to the class of multi-valued logics. A refutation
proof procedure may then be used to check logical consequence for the base logic used in NML3 and to provide a decision procedure for the propositional case of NML3.
A survey of a number of non-standard logics used in knowledge representation
is also provided. Various formalisms are analyzed in terms of persistence
properties of formulas and their use of information structures.
No 260
GENERALIZED ALGORITHMIC DEBUGGING TECHNIQUE
Nahid Shahmehri
This thesis presents a novel method for semi-automatic program debugging: the
Generalized Algorithmic Debugging Technique, GADT. The notion of declarative
algorithmic debugging was first introduced for logic programming. However, this
is the first algorithmic debugging method based on the principle of declarative
debugging, which can handle debugging of programs written in an imperative
language including loops and side-effects. In order to localize a bug, the
debugging algorithm incrementally acquires knowledge about the debugged program.
This knowledge is supplied by the user. The algorithm terminates when the bug
has been localized to within the body of a procedure or an explicit loop.
The generalized algorithmic debugging method uses program transformation and
program flow analysis techniques to transform the subject program to a largely
side-effect free internal form, which is used for bug localization. Thus, this method defines two views of a program: (1) the user view, which is the original
program with side-effects and (2) the transformed view which is the transformed
side-effect free version of the original program. Transparent program debugging
is supported by keeping a mapping between these two views. The bug localization
algorithm works on the transformed version, whereas user interactions are
defined in terms of the user view.
We have presented a general technique which is not based on any ad hoc assumptions about the subject program. The flexibility of this method has made
it possible to further improve the bug localization algorithm by employing a
number of other techniques, i.e. program slicing and test database lookup, thus
increasing the degree of automation provided by GADT. These extensions are
topics for ongoing research projects and further work.
A survey and evaluation of a number of automated debugging systems and the
techniques behind these systems is also presented. We have introduced several
criteria for comparing these techniques with GADT. A prototype implementation of
the generalized algorithmic debugging technique has been done to verify its
feasibility, and to provide feedback for further refinement of the method.
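The basic principle of declarative (algorithmic) debugging that GADT builds on can be sketched as follows (a simplified illustration, not GADT itself: in GADT the oracle is the user and imperative programs are first transformed to a side-effect free form, whereas here the oracle is a reference specification so that the example runs unattended):

# Hedged sketch: record an execution tree, query an oracle about each call;
# a call with a wrong result but only correct child calls localizes the bug.

class Call:
    def __init__(self, name, args):
        self.name, self.args, self.result, self.children = name, args, None, []

trace_stack = [Call("root", ())]

def traced(fn):
    def wrapper(*args):
        node = Call(fn.__name__, args)
        trace_stack[-1].children.append(node)
        trace_stack.append(node)
        node.result = fn(*args)
        trace_stack.pop()
        return node.result
    return wrapper

@traced
def buggy_sum(xs):
    """Intended to sum a tuple of numbers, but the base case is wrong."""
    if not xs:
        return 1                       # BUG: should be 0
    return xs[0] + buggy_sum(xs[1:])

def oracle(node):
    """Is the recorded result the intended one? (Here: a reference spec.)"""
    return node.result == sum(node.args[0])

def locate_bug(node):
    if oracle(node):
        return None                    # this call behaved correctly
    for child in node.children:
        found = locate_bug(child)
        if found is not None:
            return found
    return node                        # wrong result, all children correct

if __name__ == "__main__":
    buggy_sum((3, 4))
    bug = locate_bug(trace_stack[0].children[0])
    print(f"bug localized to call {bug.name}{bug.args} = {bug.result}")
    # -> bug localized to call buggy_sum((),) = 1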
No 264
REPRESENTATIONS OF DISCOURSE: COGNITIVE AND COMPUTATIONAL ASPECTS
Nils Dahlbäck
This work is concerned with empirical studies of cognitive and computational
aspects of discourse representations. A more specific aim is to contribute to the development of natural language interfaces for interaction with computers, especially the development of representations that make possible a continuous interactive dialogue between user and system.
General issues concerning the relationship between human cognitive and
computational aspects of discourse representations were studied through an
empirical and theoretical analysis of a psychological theory of discourse
coherence, Johnson-Laird’s theory of mental models. The effects of prior background knowledge of the domain of discourse on the processing of the types of texts used in previous work were demonstrated. It was argued that this
demonstration does not invalidate any of the basic assumptions of the theory,
but should rather be seen as a modification or clarification. This analysis also
suggested that there are principled limitations on what workers in computational
linguistics can learn from psychological work on discourse processing. While
there is much to be learned from empirical investigations concerning what kinds of knowledge are used during participation in dialogues and in the processing of other kinds of connected discourse, there is less to be learned concerning
how this is represented in detail. One specific consequence of this position is
the claim that computational theories of discourse are in principle theories
only of the processing of discourse in computers, as far as the detailed
representational account is concerned.
Another set of studies used the so-called Wizard of Oz-method, i.e. dialogues
with simulated natural language interfaces. The focus was on the dialogue
structure and the use of referring and anaphoric expressions. The analysis
showed that it is possible to describe the structure of these dialogues using the LINDA-model, the basic feature of which is the partitioning of the dialogues into a number of initiative-response (IR) units. The structure can be described using a simple context-free grammar. The analysis of the referring expressions also shows a lack of some of the complexities encountered in human dialogues. The results suggest that it is possible to use computationally simpler methods than have hitherto been assumed, both for dialogue management and for the resolution of anaphoric references.
No 265
ABSTRACT INTERPRETATION AND ABSTRACT MACHINES: CONTRIBUTIONS TO A METHODOLOGY
FOR THE IMPLEMENTATION OF LOGIC PROGRAMS
Ulf Nilsson
Abstract: Because of the conceptual gap between high-level programming
languages like logic programming and existing hardware, the problem of
compilation often becomes quite hard. This thesis addresses two ways of
narrowing this gap --- program analysis through abstract interpretation and the
introduction of intermediate languages and abstract machines. By means of
abstract interpretations it is possible to infer program properties which are
not explicitly present in the program --- properties which can be used by a
compiler to generate specialized code. We describe a framework for constructing
and computing abstract interpretations of logic programs with equality. The core
of the framework is an abstract interpretation called the base interpretation
which provides a model of the run-time behaviour of the program. The model
characterized by the base interpretation consists of the set of all reachable
computation states of a transition system specifying an operational semantics
reminiscent of SLD-resolution. This model is in general not effectively computable; however, the base interpretation can be used for constructing new abstract interpretations which approximate this model. Our base interpretation combines a simple and concise formulation with the ability to infer a wide range of program properties. In addition, the framework supports efficient computation of approximate models using a chaotic iteration strategy, as well as other computation strategies.
We also show that abstract interpretations may form a basis for implementation
of deductive databases. We relate the magic templates approach to bottom-up
evaluation of deductive databases with the base interpretation of C. Mellish and
prove that they not only specify isomorphic models but also that the
computations which lead up to those models are isomorphic. This implies that
methods (for instance, evaluation and transformation techniques) which are
applicable in one of the fields are also applicable in the other. As a
side-effect we are also able to relate so-called "top-down" and "bottom-up" abstract interpretations.
Abstract machines and intermediate languages are often used to bridge the
conceptual gap between language and hardware. Unfortunately --- because of the way they are presented --- it is often difficult to see the relationship between the high-level and the intermediate language. In the final part of the thesis we
propose a methodology for designing abstract machines of logic programming
languages in such a way that much of the relationship is preserved all through
the process. Using partial deduction and other transformation techniques, a source program and an interpreter are "compiled" into a new program consisting of "machine code" for the source program and an abstract machine for the machine code. Based upon the appearance of the abstract machine the user may
choose to modify the interpreter and repeat the process until the abstract
machine reaches a suitable level of abstraction. We demonstrate how these
techniques can be applied to derive several of the control instructions of
Warren’s Abstract Machine, thus complementing previous work by P. Kursawe who
reconstructed several of the unification instructions using similar techniques.
No 270
THEORY AND PRACTICE OF TENSE-BOUND OBJECT REFERENCES
Ralph Rönnquist
Abstract: The work presented in this thesis is a study of a formal method for representation of time and development. It constitutes a formalisation of the conception that change and development are attributed to objects, which then occur in time structures of versions. This conception is taken as the foundation for a formal temporal logic, LITE, which is then defined in syntax, semantics and interpretation.
The resulting logic is studied with respect to how it captures temporal
aspects of developments. In particular the way apparently atemporal formulas
convey implicit synchronisations between object versions is studied. This
includes the temporal implications of reification, of specifying database
invariances, and the intuitions regarding propagation of change for composite
objects.
The logic is also applied and discussed for a few particular process
characterisation tasks. In this logic, processes are generally characterised in
terms of how data changes rather than which actions are performed. As a result,
the same characterisation can be used for both sequential and parallel execution
environments.
The conceptualisation of development and the formal semantics is further
utilised for introducing temporal qualifications in a terminological logic. The
theoretical issues in terminological logics are relatively well understood. They
therefore provide an excellent testbed for experimenting with the usefulness of
the LITE temporal logic.
No 273
PIPELINE EXTRACTION FOR VLSI DATA PATH SYNTHESIS
Björn Fjellborg
Abstract: An important concern in VLSI design is how to exploit any inherent
concurrency in the designed system. By applying pipelining, a high degree of
concurrency and efficiency can be obtained. Current design tools for automatic
pipeline synthesis exploit this by pipelining loops in the design. However, they
lack the ability to automatically select the parts of the design that can
benefit from pipelining. Pipeline extraction performs this task as a first step
of pipeline synthesis. This thesis addresses the problem of pipeline extraction
from a general perspective, in that the search for pipelines is based on
detecting potential for hardware sharing and temporal overlap between the
individual tasks in a design. Thus loops appear as an important special case,
not as the central concept. A formalism for reasoning about the properties
underlying pipelinability from this perspective has been developed. Using that, a series of results is proven on exactly which mutual dependencies between operations allow a pipelined schedule with a static control sequence to be constructed. Furthermore, an evaluation model for designs with mixed pipelined and
non-pipelined parts has been formulated. This model and the formalism’s concept
of pipelinability form the basis for a heuristics-guided branch and bound
algorithm that extracts an optimal set of pipelines from a high-level
algorithmic design specification. This is implemented in the pipeline extraction
tool PiX, which operates as a preprocessor to the CAMAD VLSI design system. The
extraction is realized as transformations on CAMAD’s Petri net design
representation. For this purpose, a new model for representing pipeline
constraints by Petri nets has been developed. Preliminary results from PiX are
competitive with those from existing pipeline synthesis tools and also verify a capability to extract cost-efficient pipelines from designs without apparent pipelining properties.
No 276
A FORMAL BASIS FOR HORN CLAUSE LOGIC WITH EXTERNAL POLYMORPHIC FUNCTIONS
Staffan Bonnier
Abstract: Horn clause logic has certain properties which limit its usefulness
as a programming language. In this thesis we concentrate on two such
limitations:
(P1) Horn clause logic has no support for the (re-) use of external software
modules. Thus, procedures which are more easily solved in other kinds of
languages still have to be encoded as Horn Clauses.
(P2) To work with a predefined structure like integer arithmetic, one has to
axiomatize it by a Horn clause program. Thus functions of the structure are to
be represented as predicates of the program.
When extending the Horn clause formalism, there is always a trade-off between
general applicability and purity of the resulting system. There have been many
suggestions for solving one or both of these problems. Most of the solutions are
based on one of the following two strategies:
(a) To allow new operational features, such as access to low-level constructs of other languages.
(b) To introduce new language constructs, and to support them by a formal
semantics.
In this thesis a solution to problems (P1) and (P2) is suggested. It combines
the strategies of (a) and (b) by limiting their generality: We allow Horn clause
programs to call procedures written in arbitrary languages. It is assumed
however that these procedures compute typed first-order functions. A clean
declarative semantics is obtained by viewing the procedures as a set c of equations. This set is completely determined by two parameters: the types of the procedures, and the input-output relationship they induce. As a first step towards an operational semantics, we show how the computation of correct answers can be reduced to solving equations modulo c. For the purpose of solving such equations, a type-driven narrowing algorithm (TDN) is developed and proved complete. TDN furthermore benefits from the assumption that polymorphic functions are parametric. Still, TDN is impractical since it may create infinitely branching search trees. Therefore a finitely terminating version of TDN (FTDN) is considered. Any unification procedure satisfying the operational restrictions imposed on FTDN is necessarily incomplete. When only monomorphic
types of infinite size are present, we prove however that FTDN generates a
complete set of answers whenever such a set is generated by some procedures
satisfying the restrictions. A necessary condition for TDN and FTDN to work
properly is that the set of equations to be solved is well-typed. We therefore
give a sufficient condition on programs and goals which ensures that only
well-typed sets of equations are generated.
No 277
DEVELOPING KNOWLEDGE MANAGEMENT SYSTEMS WITH AN ACTIVE EXPERT METHODOLOGY
Kristian Sandahl
Knowledge Management, understood as the ability to store, distribute and
utilize human knowledge in an organization, is the subject of this dissertation.
In particular we have studied the design of methods and supporting software for
this process. Detailed and systematic descriptions of the design and development processes of three case-study implementations of Knowledge Management software are provided. The outcome of the projects is explained in terms of an Active
Expert development methodology, which is centered around support for a domain
expert to take a substantial responsibility for the design and maintenance of a
Knowledge Management system in a given area of application.
Based on the experiences from the case studies and the resulting methodology,
an environment for automatically supporting Knowledge Management was designed in
the KNOWLEDGE-LINKER research project. The vital part of this architecture is a
knowledge acquisition tool, used directly by the experts in creating and
maintaining a knowledge base. An elaborated version of the Active Expert
development methodology was then formulated as the result of applying the
KNOWLEDGE-LINKER approach in a fourth case study. This version of the
methodology is also accounted for and evaluated together with the supporting
KNOWLEDGE-LINKER architecture.
No 281
COMPUTATIONAL COMPLEXITY OF REASONING ABOUT PLANS
Christer Bäckström
The artificial intelligence (AI) planning problem is known to be very hard in
the general case. Propositional planning is PSPACE-complete and first-order
planning is undecidable. Many planning researchers claim that all this
expressiveness is needed to solve real problems and some of them have abandoned
theory-based planning methods in favour of seemingly more efficient methods.
These methods usually lack a theoretical foundation, so not much is known about their correctness and computational complexity. There are, however,
many applications where both provable correctness and efficiency are of major
concern, for instance, within automatic control.
We suggest in this thesis that it might be possible to stay within a
well-founded theoretical framework and still solve many interesting problems
tractably. This should be done by identifying restrictions on the planning
problem that improve the complexity figure while still allowing for interesting
problems to be modelled. Finding such restrictions may be a non-trivial task,
though. As a first attempt at finding such restrictions we present a variant of
the traditional STRIPS formalism, the SAS+ formalism. The SAS+ formalism has
made it possible to identify certain restrictions which define a computationally
tractable planning problem, the SAS+-PUS problem, and which would not have been
easily identified using the traditional STRIPS formalism. We also present a
polynomial-time, sound and complete algorithm for the SAS+-PUS problem.
We further prove that the SAS+ formalism in its unrestricted form is as
expressive as some other well-known formalisms for propositional planning.
Hence, it is possible to compare the SAS+ formalism with these other formalisms
and the complexity results carry over in both directions.
Furthermore, we analyse the computational complexity of various subproblems
lying between unrestricted SAS+ planning and the SAS+-PUS problem. We find that
most planning problems (not only in the SAS+ formalism) allow instances having
exponentially-sized minimal solutions and we argue that such instances are not
realistic in practice.
We conclude the thesis with a brief investigation into the relationship
between the temporal projection problem and the planning and plan validation
problems.
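For illustration, the following sketch (editorial Python, a simplified reading
of a SAS+-style instance rather than the formalism as defined in the thesis)
represents operators over multi-valued state variables with pre-, post- and
prevail-conditions, and validates a plan by executing it.

    from dataclasses import dataclass, field

    @dataclass
    class Operator:
        name: str
        pre: dict      # variable -> required value before execution
        post: dict     # variable -> value after execution
        prevail: dict = field(default_factory=dict)  # values that must hold and stay unchanged

    def apply(state, op):
        """Return the successor state, or None if op is inapplicable in state."""
        for var, val in {**op.pre, **op.prevail}.items():
            if state.get(var) != val:
                return None
        new_state = dict(state)
        new_state.update(op.post)
        return new_state

    def validate(initial, goal, plan):
        """Check that a sequence of operators transforms initial into a goal state."""
        state = dict(initial)
        for op in plan:
            state = apply(state, op)
            if state is None:
                return False
        return all(state.get(var) == val for var, val in goal.items())

    # Hypothetical two-variable example: move a workpiece, then clamp it.
    move = Operator("move", pre={"pos": 0}, post={"pos": 1})
    clamp = Operator("clamp", pre={"clamped": False}, post={"clamped": True},
                     prevail={"pos": 1})
    print(validate({"pos": 0, "clamped": False},
                   {"pos": 1, "clamped": True}, [move, clamp]))  # True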
No 292
STUDIES IN INCREMENTAL NATURAL-LANGUAGE ANALYSIS
Mats Wirén
This thesis explores the problem of incremental analysis of natural-language
text. Incrementality can be motivated on psychological grounds, but is becoming
increasingly important from an engineering perspective as well. A major reason
for this is the growing importance of highly interactive, “immediate” and
real-time systems, in which sequences of small changes must be handled
efficiently.
The main technical contribution of the thesis is an incremental parsing
algorithm that analyses arbitrary changes (insertions, deletions and
replacements) of a text. The algorithm is grounded in a general chart-parsing
architecture, which allows different control strategies and grammar formalisms
to be used. The basic idea is to analyse changes by keeping track of
dependencies between partial analyses (chart edges) of the text. The algorithm
has also been adapted to interactive processing under a text editor, thus
providing a system that parses a text as it is being entered and edited. By
adopting a compositional and dynamic model of semantics, the
framework can be extended to incremental interpretation, both with respect to a
discourse context (induced by a connected, multisentential text) and a
non-linguistic context (induced by a model of the world).
The notion of keeping track of dependencies between partial analyses is
similar to reason maintenance, in which dependencies are used as a basis for
(incremental) handling of belief changes. The connections with this area and
prospects for cross-fertilization are discussed. In particular, chart parsing
with dependencies is closely related to assumption-based reason maintenance.
Both of these frameworks allow competing analyses to be developed in parallel.
It is argued that for the purpose of natural-language analysis, they are
superior to previously proposed, justification-based approaches, in which only a
single, consistent analysis can be handled at a time.
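The dependency idea can be illustrated by the following sketch (editorial
Python, not the thesis algorithm): each derived chart edge records the edges it
was built from, so a change to the text invalidates exactly the edges that
transitively depend on the changed positions.

    class Edge:
        def __init__(self, label, start, end, sources=()):
            self.label, self.start, self.end = label, start, end
            self.sources = list(sources)   # edges this edge was combined from

    def invalidate(chart, changed_positions):
        """Remove edges spanning a changed position, plus everything built on them."""
        dead = {e for e in chart
                if any(e.start <= p < e.end for p in changed_positions)}
        changed = True
        while changed:
            changed = False
            for e in chart:
                if e not in dead and any(s in dead for s in e.sources):
                    dead.add(e)
                    changed = True
        return [e for e in chart if e not in dead]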
No 297
INTERPROCEDURAL DYNAMIC SLICING WITH APPLICATIONS TO DEBUGGING AND TESTING
Mariam Kamkar
The need for maintenance and modification demands that large programs be
decomposed into manageable parts. Program slicing is one method for such
decomposition. A program slice with respect to a specified variable at some
program point consists of those parts of the program that may directly or
indirectly affect the value of that variable at the particular program point.
This is useful for understanding dependences within programs. A static program
slice is computed using static data and control flow analysis and is valid for
all possible executions of the program. Static slices are often imprecise, i.e.,
they contain unnecessarily large parts of the program. Dynamic slices, however,
are precise but are valid only for a single execution of the program.
Interprocedural dynamic slices can be computed for programs with procedures, and
these slices consist of all executed call statements which are relevant for the
computation of the specified variable at the specified program point.
This thesis presents the first technique for interprocedural dynamic slicing
which deals with procedures/functions at the abstract level. This technique
first generates summary information for each procedure call (or function
application), then represents a program as a summary graph of dynamic
dependences. A slice on this graph consists of vertices for all procedure calls
of the program that affect the value of a given variable at the specified
program point. The amount of information saved by this method is considerably
less than what is needed by previous methods for dynamic slicing, since it only
depends on the size of the program’s execution tree, i.e., the number of
executed procedure calls, which is smaller than a trace of all executed
statements.
The interprocedural dynamic slicing method is applicable in at least two
areas, program debugging and data flow testing. Both of these applications can
be made more effective when using dynamic dependence information collected
during program execution. We conclude that the interprocedural dynamic slicing
method is superior to other slicing methods when precise dependence information
for a specific set of input data values at the procedural abstraction level is
relevant.
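As a rough illustration (editorial Python, not the thesis implementation), a
slice on a summary graph of dynamic dependences is simply the set of executed
calls reachable from the call that computed the variable of interest.

    def dynamic_slice(dependences, start_call):
        """dependences: dict mapping a call id to the call ids it depends on."""
        slice_set, stack = set(), [start_call]
        while stack:
            call = stack.pop()
            if call in slice_set:
                continue
            slice_set.add(call)
            stack.extend(dependences.get(call, ()))
        return slice_set

    # Hypothetical execution tree: call 3 used results of calls 1 and 2; call 2 used call 1.
    print(sorted(dynamic_slice({3: [1, 2], 2: [1]}, 3)))  # [1, 2, 3]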
No 302
A STUDY IN DIAGNOSIS USING CLASSIFICATION AND DEFAULTS
Tingting Zhang
This dissertation reports on the development of a model and system for medical
diagnosis based on the use of general purpose reasoning methods and a knowledge
base which can be built almost entirely from existing medical texts. The
resulting system is evaluated empirically by running 63 patient protocols
collected from a community health centre on the system, and comparing the
diagnoses with those given by medical experts.
It is often the case in Artificial Intelligence that general purpose reasoning
methods (such as default reasoning, classification, planning, inductive
learning) are developed at a theoretical level but are not used in real
applications. One possible reason for this is that real applications typically
need several reasoning strategies to solve a problem. Combining reasoning
strategies, each of which uses a different representation of knowledge, is
non-trivial. This thesis addresses the issue of combining strategies in a real
application. Each of the strategies used required some modification, either as a
result of the representation chosen, or as a result of the application demands.
These modifications can indicate fruitful directions for future research.
One well-known problem in building A.I. systems is the construction of the
knowledge base. This study examines the use of a representation and method which
allowed for the knowledge base to be built from standard medical texts with only
minimal input from a medical expert.
The evaluation of the resulting system indicated that in cases where medical
experts were in agreement, the system almost always reached the same diagnosis.
In cases where medical doctors themselves disagreed, the system behaved within
the range of the medical doctors in the study.
No 312
DIALOGUE MANAGEMENT FOR NATURAL LANGUAGE INTERFACES - AN EMPIRICAL APPROACH
Arne Jönsson
Natural language interfaces are computer programs that allow a person to
communicate with a computer system in his own language. This thesis deals with
management of coherent dialogue in natural language interfaces, which involves
addressing the issues of focus structure and dialogue structure. Focus structure
concerns the recording of entities mentioned in the discourse to allow a user to
refer to them in the course of the interaction, while dialogue structure
involves handling the relationships between segments in the dialogue.
In a theoretical investigation two approaches to dialogue management are
compared: one is based on recognizing the user’s plan from his goals and
intentions, and the other on modelling the possible actions of the user in a
dialogue grammar. To establish a sound foundation for the design of the dialogue
manager, empirical studies were carried out in the form of Wizard of Oz
experiments. In such studies users interact with what they think is a natural
language interface, but in fact there is a human intermediary. Conducting
experiments of this kind requires careful design and a powerful simulation
environment. Such an environment is presented together with guidelines for the
design of Wizard of Oz experiments. The empirical investigations indicate that
dialogue in natural language interfaces lacks many of the complicated features
characterizing human dialogue. Furthermore, the kind of language employed by the
user is dependent to some extent on the application, resulting in different
sublanguages.
The results from the empirical investigations have been subsequently used in
the design of a dialogue manager for natural language interfaces which can be
used in a variety of applications. The dialogue manager utilizes the restricted
but more computationally feasible approach of modelling dialogue structure in a
dialogue grammar. Focus structure is handled via dialogue objects modelled in a
dialogue tree. The dialogue manager is designed to facilitate customization to
the sublanguage utilized in various applications. In the thesis I discuss how
the dialogue manager is customized to account for the dialogue behaviour in
three applications. The results demonstrate the feasibility of the proposed
approach to building application-specific dialogue managers for various
applications.
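As an illustration of the grammar-based approach (an editorial sketch in Python,
not taken from the thesis; all names are hypothetical), a dialogue grammar can
constrain which move may follow the current one, while a tree of dialogue
objects records the segments so that entities introduced earlier can be found
for reference resolution.

    DIALOGUE_GRAMMAR = {          # current move -> moves allowed next
        "question": ["answer", "clarification_request"],
        "clarification_request": ["clarification"],
        "clarification": ["answer"],
        "answer": ["question"],
    }

    class DialogueNode:
        """A dialogue object: one segment in the dialogue tree."""
        def __init__(self, move, content, parent=None):
            self.move, self.content, self.parent = move, content, parent
            self.children = []
            if parent:
                parent.children.append(self)

    def accept(current, move):
        """Check whether the dialogue grammar licenses move after current."""
        return move in DIALOGUE_GRAMMAR.get(current.move, [])

    root = DialogueNode("question", "Which employees work in sales?")
    if accept(root, "answer"):
        DialogueNode("answer", "Smith and Jones.", parent=root)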
No 338
REACTIVE SYSTEMS IN PHYSICAL ENVIRONMENTS: COMPOSITIONAL MODELLING AND
FRAMEWORK FOR VERIFICATION
Simin Nadjm-Tehrani
This thesis addresses the question of correctness of reactive programs which
are embedded in physical environments, and which perform a combination of
symbolic and numeric computations. Such hybrid systems are of growing interest
in the areas of artificial intelligence, control engineering, and software
verification. The verification of hybrid systems requires the use of modular
models and a combination of discrete and continuous modelling techniques. The
thesis proposes new methods which serve these goals. The proposed methods are
closely related to a layered software architecture hosting both synchronous and
asynchronous computations. The architecture has been used for the development of
prototype automobile co-driver systems.
We consider the adequacy of representational formalisms for hybrid systems. To
this end, modular models for verification of an anti-collision device are
studied at three levels of abstraction. First, dynamic transition systems (DTS)
and their timed version TDTS are proposed for the discrete models of the
physical environment and the controller. Using the detailed example, the
derivation of discrete environment models from physical models is discussed — a
point of emphasis being the association of discrete modes with regions in the
continuous state space. Next, the models are compared with a hybrid transition
system in which the continuous changes are represented explicitly.
We show that if strict modularity with respect to the sequence of control
actions is required, a physically motivated timed (discrete) model for the
environment cannot be obtained by simply adding timing constraints to the
untimed model. The
iterative method used for the derivation of untimed models is then extended by
inclusion of memory modes. In the hybrid model, this complete separation of the
plant and the controller can be achieved with minimal effort.
The thesis presents formal definitions, operational semantics, and parallel
composition operators for the three types of transition systems. Two novel
features of the hybrid formalism enable a convenient interface to physical
models of mode switching systems: first, the separation of state and input
variables, and second, the use of algebraic equations to describe change in
some continuous variables. A variant of metric temporal logic is also
presented for description of complex transformations of quantitative to
qualitative values.
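A minimal sketch (editorial Python; it does not reproduce the DTS/TDTS
definitions of the thesis) of the discrete view: modes correspond to regions of
the continuous state space, and a transition fires when a guard on the measured
inputs holds.

    class TransitionSystem:
        def __init__(self, initial_mode):
            self.mode = initial_mode
            self.transitions = []            # (source, guard, target)

        def add_transition(self, source, guard, target):
            self.transitions.append((source, guard, target))

        def step(self, inputs):
            """Take the first enabled transition, if any, and return the new mode."""
            for source, guard, target in self.transitions:
                if source == self.mode and guard(inputs):
                    self.mode = target
                    break
            return self.mode

    # Hypothetical anti-collision controller: brake when the measured distance is small.
    ts = TransitionSystem("cruise")
    ts.add_transition("cruise", lambda x: x["distance"] < 10.0, "brake")
    print(ts.step({"distance": 8.0}))   # 'brake'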
No 371
BUSINESS MODELS FOR DECISION SUPPORT AND LEARNING. A STUDY OF DISCRETE-EVENT
MANUFACTURING SIMULATION AT ASEA/ABB 1968-1993.
Bengt Savén
This thesis describes the planning, execution and results of an embedded case
study of discrete-event manufacturing simulation at Asea Brown Boveri’s (ABB)
operations in Sweden from 1968 through 1993. The main aim of the study has been
to explore and learn more about the values created from manufacturing
simulations. This is done by describing and analyzing the context of
manufacturing simulation within one company group for a long period of time. The
work has focused on four issues: the manufacturing systems, the simulation
software tools, the application projects and the developments over the 26-year
time horizon. The study is based on personal interviews, questionnaires and
documents, such as project reports, meeting minutes etc.
Two in-house manufacturing simulators are described and compared with the two
most frequently used standard software tools during the 26-year period. The most
important differences between the tools were found in the ease of learning and
use of the tools, the modeling flexibility and the model visibility. 61 projects
in which this software has been applied are described and analyzed. The majority
of the projects involve capacity planning and/or evaluation of control rules.
Three recent projects within one division are described and analyzed in
detail. The values created are more diverse than expected. The literature
generally presents simulation as a tool for evaluating alternatives in a
decision process. However, the study shows that this is just
one of twelve possible motives for using simulation. A model is suggested that
distinguishes these possible motives along three dimensions: focus on process,
focus on phase and focus on actors.
Different hypotheses as to why the use of simulation has changed over the
26-year period are discussed. One reason is found to be the level of investment and
the software capabilities. However, management’s interest in manufacturing in
general and organizational learning through simulation in particular seem to be
of greater importance. Trends in the manufacturing industry and their impact on
the demand for simulation are also discussed in the text, as well as a
comparison between discrete-event simulation and some alternatives for capacity
planning.
No 375
CONCEPTUAL MODELLING OF MODE SWITCHING PHYSICAL SYSTEMS
Ulf Söderman
This thesis deals with fundamental issues underlying the systematic
construction of behaviour models of mode switching engineering systems, i.e.
systems constructed by engineers involving continuous as well as discrete
behavioural changes. The aim of this work is to advance the design and
development of effective computer aided modelling systems providing high-level
support for the difficult and intellectually demanding task of model
construction. In particular, the thesis is about conceptual modelling of
engineering systems, i.e. modelling characterized by the explicit use of well
defined abstract physical concepts. A comprehensive review of conceptual
modelling is presented, discussing modelling in its own right and forming a
reference for the development of computer aided modelling systems.
The main contribution of this work is the extension of the conceptual
modelling framework by an abstract and generic concept referred to as the ideal
switch concept. This concept enables a uniform and systematic treatment of mode
switching engineering systems. In the discussion of the switch concept and its
usage, the energy based bond graph approach is employed as a specific example of
a conceptual modelling approach. The bond graph version of the switch concept is
presented. This switch element complies with the classical bond graph modelling
formalism and hence the extended formalism, here referred to as switched bond
graphs, preserves all the essential properties of classical bond graphs. The
systematic method for construction of bond graphs can be applied. Component
models can remain context independent through acausal modelling and causal
analysis can be performed automatically at the bond graph level.
Furthermore, for the representation of overall computational models of mode
switching systems a mathematical structure related to state automata is
introduced. This structure is referred to as mode transition systems. For the
mathematical characterization of individual switches a simplified version of
this structure, referred to as a switch transition system, is introduced. The
systematic construction of computational models is discussed and a systematic
method is proposed. For this purpose a transition system composition operator
for parallel composition is presented.
No 383
EXPLOITING GROUNDNESS IN LOGIC PROGRAMS
Andreas Kågedal
The logical variable is one of the distinguishing features of logic
programming, but it has been noticed that it is often used quite restrictively.
Program predicates are often used in a “directional” way, where
argument positions are partitioned into input and output positions. At every
call of a given predicate, input arguments are bound to ground terms and at
success of the call the output arguments will also have been instantiated to
ground terms. This thesis addresses two aspects related to this kind of
directionality in logic programming.
The first part of the thesis is a continuation and implementation of the work
of Bonnier and Maluszynski. They give a theoretical framework for how external
procedures written in another programming language can be integrated into a
logic programming framework without sacrificing a declarative reading. In many
Prolog systems, one is allowed to call an external procedure as a directional
predicate from a program clause. With nonground arguments this may cause
unpredictable effects and often leads to a run-time error. Instead,
Bonnier/Maluszynski view external procedures as functions which will not be
evaluated until all arguments are ground. The thesis defines a language GAPLog,
a superset of Prolog, using this kind of external procedures. Systematic
development of its implementation by transformation techniques is one of the
contributions of this thesis. The result is a compiler from GAPLog to (SICStus)
Prolog.
The second part of the thesis is a continuation of Kluzniak’s work concerning
data flow analysis of programs written in Ground Prolog. In Ground Prolog,
argument positions of all predicates must be user-defined as either input or
output positions. Input values are required to be ground at call time, and
output values at success. This restriction enabled Kluzniak to develop a specialized
method for data flow analysis which can be used for inferring liveness
information. An interesting feature of this approach is that it provides a
conceptual model for the analysis of data flow between individual program
variables. However, it is presented in a rather informal way. This makes it
difficult to understand the mechanisms of approximations and to ensure the
correctness of the method. The main contribution of the second part is a
theoretical framework designed for Kluzniak’s method and based on abstract
interpretation. A concept of dependency graph between program variables is
systematically derived from a formal semantics based on the notion of proof
tree. The derivation steps clearly indicate the design decisions taken. This
allows for a better understanding of the method and a more precise approximation
of the program’s data flow. Kluzniak’s work on liveness analysis for Ground
Prolog is also extended and improved.
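The following toy sketch (editorial Python, not the GAPLog implementation)
illustrates the underlying idea of treating an external procedure as a function
whose call is suspended until all of its arguments are ground, rather than
failing or raising a run-time error.

    class Var:
        """An unbound logical variable."""
        def __init__(self, name):
            self.name = name
            self.binding = None

    def is_ground(term):
        if isinstance(term, Var):
            return term.binding is not None and is_ground(term.binding)
        if isinstance(term, (list, tuple)):
            return all(is_ground(t) for t in term)
        return True

    def call_external(fn, args, suspended):
        """Call fn now if every argument is ground, otherwise suspend the call."""
        if all(is_ground(a) for a in args):
            return fn(*args)
        suspended.append((fn, args))       # re-tried later, when bindings arrive
        return None

    suspended = []
    x = Var("X")
    call_external(lambda a, b: a + b, [1, x], suspended)   # suspended: X unbound
    x.binding = 2
    fn, args = suspended.pop()
    print(fn(*[a.binding if isinstance(a, Var) else a for a in args]))  # 3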
No 396
ONTOLOGICAL CONTROL: DESCRIPTION, IDENTIFICATION AND RECOVERY FROM PROBLEMATIC
CONTROL SITUATIONS
George Fodor
This thesis is an introduction to the main principles, operations and
architecture involved in the design of a novel type of supervisory controller
called an ontological controller. An ontological controller supervises a
programmable controller in order to:
- Detect dynamically when the programmable controller is in a problematic
control situation due to a violation of ontological assumptions and is thus
unable to achieve a pre-specified control goal (i.e. the identification
operation), and
- When possible, move the programmable controller into a state from which it
can regain control and eventually achieve the pre-specified control goal in
spite of the previous violation of ontological assumptions (i.e. the recovery
operation).
The ontological assumptions are essential for the correctness of the control
algorithm of the programmable controller, but are implicit in it. A programmable
controller succeeds in achieving a pre-specified control goal only if the
ontological assumptions are not violated during the execution of its control
algorithm. Since the ontological assumptions are not explicitly represented in
the control algorithm, the programmable controller itself is not ”aware” of them
and violations of these cannot be detected by it.
A control paradigm which can be used to provide a proof that the ontological
assumptions are violated during the execution of the control algorithm, or that
they were simply incorrect already during its design, is called ontological
control.
No 413
COMPILING NATURAL SEMANTICS
Mikael Pettersson
Natural semantics has become a popular tool among programming language
researchers. It is used for specifying many aspects of programming languages,
including type systems, dynamic semantics, translations between representations,
and static analyses. The formalism has so far largely been limited to
theoretical applications, due to the absence of practical tools for its
implementation. Those who try to use it in applications have had to translate
their specifications by hand into existing programming languages, which can be
tedious and error-prone. Hence, natural semantics is rarely used in
applications.
Compiling high-level languages to correct and efficient code is non-trivial,
hence implementing compilers is difficult and time-consuming. It has become
customary to specify parts of compilers using special-purpose specification
languages, and to compile these specifications to executable code. While this
has simplified the construction of compiler front-ends, and to some extent their
back-ends, little is available to help construct those parts that deal with
semantics and translations between higher-level and lower-level representations.
This is especially true for the Natural Semantics formalism.
In this thesis, we introduce the Relational Meta-Language, RML, which is
intended as a practical language for natural semantics specifications. Runtime
efficiency is a prerequisite if natural semantics is to be generally accepted as
a practical tool. Hence, the main parts of this thesis deal with the problem of
compiling natural semantics, actually RML, to highly efficient code.
We have designed and implemented a compiler, rml2c, that translates RML to
efficient low-level C code. The compilation phases are described in detail.
High-level transformations are applied to reduce the usually enormous amount of
non-determinism present in specifications. The resulting forms are often
completely deterministic. Pattern-matching constructs are expanded using a
pattern-match compiler, and a translation is made into a continuation-passing
style intermediate representation. Intermediate-level CPS optimizations are
applied before low-level C code is emitted. A new and efficient technique for
mapping tailcalls to C has been developed.
We have compared our code with other alternative implementations. Our
benchmarking results show that our code is much faster, sometimes by orders of
magnitude. This supports our thesis that the given compilation strategy is
suitable for a significant class of specifications.
A natural semantics specification for RML itself is given in the appendix.
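As a rough illustration of the general problem of mapping tailcalls onto a host
language without proper tail calls, the following sketch (editorial Python; a
generic trampoline, not necessarily the technique developed for rml2c) keeps the
stack flat by returning continuations to a small dispatch loop.

    def even(n):
        return (lambda: odd(n - 1)) if n > 0 else True

    def odd(n):
        return (lambda: even(n - 1)) if n > 0 else False

    def trampoline(thunk):
        """Run a chain of tailcalls: keep calling until a non-callable result appears."""
        result = thunk
        while callable(result):
            result = result()
        return result

    print(trampoline(lambda: even(100000)))   # True, without deep recursion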
No 414
RT LEVEL TESTABILITY IMPROVEMENT BY TESTABILITY ANALYSIS AND TRANSFORMATIONS
Xinli Gu
An important concern in VLSI design is how to make the manufactured circuits
more testable. Current design tools exploit existing design for testability
(DFT) techniques to improve design testability in the post design phase. Since
the testability improvement may affect design performance and area, re-designing
is often required when performance and area constraints are not satisfied. Thus
design costs and the time to bring a product to market are both increased. This
dissertation presents an approach to improving design testability during an
early design stage, at register-transfer (RT) level, to overcome these
disadvantages. It improves testability by three methods under the guidance of a
testability analysis algorithm.
- The proposed RT level testability analysis algorithm detects hard-to-test
design parts by taking into account the structures of a design, the depth from
I/O ports and the testability characteristics of the components used. It
reflects the test generation complexity and test application time for achieving
high test quality.
- The first testability improvement method uses the partial scan technique to
transform hard-to-test registers and lines to scan registers and test modules.
Design testability is increased by direct access to these hard-to-test parts.
- The second method uses DFT techniques to transform hard-to-test registers
and lines into partitioning boundaries, so that a design is partitioned into
several sub-designs and their boundaries become directly accessible. Since test
generation can be carried out for each partition independently, test generation
complexity is significantly reduced. Also the test generation results can be
shared among the partitions.
- The third method improves the testability by enhancing the state
reachability of the control part of a design. It analyzes the state reachability
for each state in the control part. The state reachability enhancements are
motivated by 1) controlling the termination of feedback loops, 2) increasing the
ability of setting and initializing registers and the control of test starting
points, and 3) enabling arbitrary selection of conditional branches.
- Experiments using commercial tools and test benchmarks are performed to
verify our approaches. Results show the efficiency of the test quality
improvement by using our testability improvement approaches.
No 416
DISTRIBUTED DEFAULT REASONING
Hua Shu
This thesis is concerned with the logical accounts of default reasoning
reflecting the idea of reasoning by cases. In a multi-agent setting, for
example, given that any agent who believes A will derive C, and any agent who
believes B will derive C, a default reasoner that is capable of reasoning by
cases should be able to derive that any agent who believes A ∨ B (read ”A or B”)
will derive C. The idea of reasoning by cases lies behind a formal property of
default logics, called distribution. Although to human beings reasoning by cases
is a very basic and natural pattern of reasoning, relatively few formalisms of
default reasoning satisfy the condition of distribution. This has a lot to do
with the lack of adequate logical accounts of defaults that bear explicit
relation to the idea of reasoning by cases.
This thesis provides a model of what we call distributed default reasoning
which approximates the idea of reasoning by cases. Basically, we interpret the
premises in a propositional language by a collection of coherent sets of
literals and model default reasoning as the process of extending the coherent
sets in the collection. Each coherent set can be regarded as a description of a
case. Different from the previous approach, we apply defaults independently and
in parallel to extend the individual, coherent sets. This distributive manner of
applying defaults enables us to naturally model the pattern of reasoning by
cases.
Based on that model of distributed default reasoning, we are able to study
some variants of default conclusions with regard to normal defaults. One of the
variants captures the notion of informational approximations. It turns out to
possess a rich combination of desirable properties: semi-monotonicity,
cumulativity and distribution.
When non-normal defaults are used, it is desirable to have a logic satisfying
the condition of commitment to assumptions. In order to achieve that, we extend
the model of distributed default reasoning by keeping track of justifications of
applied defaults. This modification enables us to define a new variant of
default logic that satisfies not only semi-monotonicity, cumulativity and
distribution, but also commitment to assumptions.
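A minimal sketch (editorial Python, a strong simplification of the thesis model)
of the distributive idea: the premises are a collection of coherent sets of
literals, one per case, and a normal default prerequisite:consequent is applied
independently to every set in which it is applicable.

    def applicable(default, case):
        prereq, consequent = default
        negation = consequent[1:] if consequent.startswith("-") else "-" + consequent
        return prereq in case and negation not in case

    def extend(cases, defaults):
        """Extend every case with the consequents of all applicable normal defaults."""
        result = []
        for case in cases:
            case = set(case)
            changed = True
            while changed:
                changed = False
                for default in defaults:
                    prereq, consequent = default
                    if applicable(default, case) and consequent not in case:
                        case.add(consequent)
                        changed = True
            result.append(case)
        return result

    # Two cases for the premise "A or B"; the defaults A:C and B:C both yield C,
    # so C holds in every case, mirroring reasoning by cases.
    print(extend([{"A"}, {"B"}], [("A", "C"), ("B", "C")]))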
No 429
SIMULATION SUPPORTED INDUSTRIAL TRAINING FROM AN ORGANISATIONAL LEARNING
PERSPECTIVE - DEVELOPMENT AND EVALUATION OF THE SSIT METHOD
Jaime Villegas
This thesis deals with the problem of training people in companies, from
workers to managers, who are in need of a good understanding of the problem
situation at work in order to reach appropriate, effective decisions. This
research has had two main goals since it started in 1992: (i) To develop a
training method which might be used to facilitate the individual and
organisational learning process of its participants, and (ii) To test and
evaluate this method through several empirical case studies in different
companies.
The method is known as SSIT - Simulation Supported Industrial Training. The
main idea behind this training method is to help participants to better
understand their own problems at the company with the help of computer-based
simulation games. The main characteristics which make this training method
unique are the following:
The simulation games are tailor-made to the participants' specific problems.
The training is carried out directly at the work place.
The training is based on the execution of a number of simulation games which
successively illustrate the problems of the company.
The training method combines the work on the simulation games with other
traditional types of learning techniques such as theoretical instruction and
group discussions.
The training promotes not only the participants' individual learning, but also
the organisational learning process.
Other theoretical and practical contributions of this research include:
The description and evaluation of four case studies of implementations of the
SSIT method.
For these training projects 18 simulation games have been developed and 32
participants have taken advantage of them.
The case studies have reported positive training effects on the participants
and on the company.
The cost-effectiveness analysis has revealed significant advantages of using
the SSIT method in comparison to other commercial production courses.
No 431
STUDIES IN ACTION PLANNING: ALGORITHM AND COMPLEXITY
Peter Jonsson
The action planning problem is known to be computationally hard in the general
case. Propositional planning is PSPACE-complete and first-order planning is
undecidable. Consequently, several methods to reduce the computational
complexity of planning have been suggested in the literature. This thesis
contributes to the advance and understanding of some of these methods.
One proposed method is to identify restrictions on the planning problem to
ensure tractability. We propose a method using a state-variable model for
planning and define structural restrictions on the state-transition graph. We
present a planning algorithm that is correct and tractable under these
restrictions and present a map over the complexity results for planning under
our new restrictions and certain previously studied restrictions. The algorithm
is further extended to apply to a miniature assembly line.
Another method that has been studied is state abstraction. The idea is to
first plan for the most important goals and then successively refine the plan to
also achieve the less important goals. It is known that this method can speed up
planning exponentially under ideal conditions. We show that state abstraction
may likewise slow down planning exponentially and even result in generating an
exponentially longer solution than necessary.
Reactive planning has been proposed as an alternative to classical planning.
While a classical planner first generates the whole plan and then executes it, a
reactive planner generates and executes one action at a time, based on the
current state. One of the approaches to reactive planning is universal plans. We
show that polynomial-time universal plans satisfying even a very weak notion of
completeness must be of exponential size.
A trade-off between classical and reactive planning is incremental planning,
i.e. a planner that can output valid prefixes of the final plan before it has
finished planning. We present a correct incremental planner for a restricted
class of planning problems. The plan existence problem is tractable for this
class despite the fact that the plan generation problem is provably exponential.
Hence, by first testing whether an instance is solvable or not, we can avoid
starting to generate prefixes of invalid plans.
No 437
DIRECTIONAL TYPES IN LOGIC PROGRAMMING
Johan Boye
This thesis presents results concerning verification and analysis of logic
programs, especially Prolog programs. In particular we study a verification
framework based on a class of simple specifications, called directional types.
Unlike many earlier proposed approaches to logic program verification, we aim at
automation and simplicity rather than completeness.
The idea of directional types is to describe the computational behaviour of
Prolog programs by associating an input and an output assertion to every
predicate. In our approach a directional type of a predicate is understood as an
implication: whenever the call of a predicate satisfies the input assertion,
then the call instantiated by any computed answer substitution satisfies the
output assertion.
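The following sketch (editorial Python, not the thesis framework; the append
example and predicate names are hypothetical) reads a directional type exactly
as such an implication, checking the input assertion at the call and the output
assertion on the computed answer.

    directional_types = {
        # hypothetical predicate append/3: first two arguments are lists on call,
        # third argument is a list on success
        "append": {
            "input":  lambda a, b, c: isinstance(a, list) and isinstance(b, list),
            "output": lambda a, b, c: isinstance(c, list),
        },
    }

    def check_call(pred, args):
        spec = directional_types[pred]
        if spec["input"](*args):
            # ... run the predicate; here we fake one computed answer for append
            answer = (args[0], args[1], args[0] + args[1])
            assert spec["output"](*answer), pred + " violates its output assertion"
            return answer
        return None   # input assertion not satisfied: the directional type says nothing

    print(check_call("append", ([1, 2], [3], None)))   # ([1, 2], [3], [1, 2, 3])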
Prolog programmers often use programming techniques that involve so-called
incomplete data structures, like open trees, difference-lists, etc. Furthermore,
many modern Prolog systems offer the possibility of using a dynamic computation
rule (delay declarations). Verification and analysis of programs using these
facilities is a notoriously hard problem. However, the methods presented in this
thesis can, despite their simplicity, normally handle also such programs.
The main contributions of the thesis are:
- A new verification condition for directional types, making it possible to also
prove correctness of programs using incomplete data structures and/or dynamic
computation rules. The verification condition is extended to polymorphic
assertions.
- Results concerning the limits of automatic verification. We give conditions on
the assertion language for decidability of the verification condition. We also
study some interesting special cases of assertion languages where automatic
verification is possible.
- A study in inference of directional types. We give a simple algorithm to infer
directions (input/output) from a logic program (note that this is not equivalent
to mode analysis). We further discuss how to infer directional types from a
directed program (possibly with help from the user).
- Results on the use of directional types for controlling execution of
(functional) logic programs.
No 439
ACTIVITIES, VOICES AND ARENAS: PARTICIPATORY DESIGN IN PRACTICE
Cecilia Sjöberg
The aim of the thesis is to explore participatory design of information
systems in theory and practice. The focus lies on the early phases in the design
process. It integrates perspectives from the “Scandinavian” approaches to
systems development and software engineering. The design process studied was
situated in the public service field and, in particular, in primary health care.
A qualitative research approach was used to develop a local and a small-scale
theory. The analysis of the data was derived from critical theory within the
sociology of change.
The resulting local theoretical framework is based on three dimensions: theme,
voice and arena. The content of the participatory process proved to be complex,
in that it ranged from work practice to technology and from structure to process
issues. Having a multi-professional composition, it was affected by the
different orientations towards action. Agreement had to be reached within the
group on the forms of how to proceed with the design. The object of each voice
in the design discourse was reflected in how the voice was used in the themes of
the design. The design activities and practice proved to be influenced by
workplace, organizational and societal structures. The discourse was mainly
situated at the workplace arena with the focus on systems development in a work
context.
The participatory design process required more resources and skills than a
traditional project. The design group was also restrained by social structures,
possibly due to the multi-professional character of participatory design. On the
other hand, the visibility of these structures opened up the design norms, and
hence the design was less likely to evolve incorrectly. This study provides a
basis for
development of methodological support in participatory design and points out
issues for future study on the power structures influencing design.
No 448
PART-WHOLE REASONING IN DESCRIPTION LOGICS
Patrick Lambrix
In many application areas natural models of the domains require the ability to
express knowledge about the following two important relations: is-a and part-of.
The is-a relation allows us to organize objects with similar properties in the
domain into classes. Part-of allows us to organize the objects in terms of
composite objects. The is-a relation has received a lot of attention and is
well-understood, while part-of has not been studied as extensively. Also the
interaction between these two relations has not been studied in any detail.
In this work we propose a framework for representation and reasoning about
composite objects based on description logics. Description logics are a family
of object-centered knowledge representation languages tailored for describing
knowledge about concepts and is-a hierarchies of these concepts. This gives us
the possibility to study the interaction between part-of and is-a. We present a
language where we can distinguish among different kinds of parts and where we
can express domain restrictions, number restrictions and different kinds of
constraints between the parts of composite objects. We also introduce some
reasoning services targeted to part-of. By introducing specialized
representation and reasoning facilities, we have given part-of first-class
status in the framework.
We have explored the use of our description logic for composite objects for a
number of application areas. In our first prototype application we re-modeled
the Reaction Control System of NASA's space shuttle. We discuss the advantages
that our approach provided. Secondly, we investigated the use of our description
logic in the modeling of a document management system. We discuss the needs of
the application with respect to representation and reasoning. This model is then
extended to a model for information retrieval that deals with structured
documents. Finally, we sketch how our description logic for composite objects
can be used in a machine learning setting to learn composite concepts.
No 452
ON EXTENSIBLE AND OBJECT-RELATIONAL DATABASE TECHNOLOGY FOR FINITE ELEMENT
ANALYSIS APPLICATIONS
Kjell Orsborn
Future database technology must be able to meet the requirements of scientific
and engineering applications. Efficient data management is becoming a strategic
issue in both industrial and research activities. Compared to traditional
administrative database applications, emerging scientific and engineering
database applications usually involve models of higher complexity that call for
extensions of existing database technology. The present thesis investigates the
potential benefits of, and the requirements on, computational database
technology, i.e. database technology to support applications that involve
complex models and analysis methods in combination with high requirements on
computational efficiency.
More specifically, database technology is used to model finite element
analysis (FEA) within the field of computational mechanics. FEA is a general
numerical method for solving partial differential equations and is a demanding
representative for these new database applications that usually involve a high
volume of complex data exposed to complex algorithms that require high execution
efficiency. Furthermore, we work with extensible and object-relational (OR)
database technology. OR database technology is an integration of object-oriented
(OO) and relational database technology that combines OO modelling capabilities
with extensible query language facilities. The term OR presumes the existence of
an OR query language, i.e. a relationally complete query language with OO
capabilities. Furthermore, it is expected that the database management system
(DBMS) can treat extensibility at both the query and storage management level.
The extensible technology allows the design of domain models, that is database
representations of concepts, relationships, and operators extracted from the
application domain. Furthermore, the extensible storage manager allows efficient
implementation of FEA-specific data structures (e.g. matrix packages), within
the DBMS itself that can be made transparently available in the query language.
The discussions in the thesis are based on an initial implementation of a
system called FEAMOS, which is an integration of a main-memory resident OR DBMS,
AMOS, and an existing FEA program, TRINITAS. The FEAMOS architecture is
presented where the FEA application is equipped with a local embedded DBMS
linked together with the application. By this approach the application
internally gains access to general database capabilities, tightly coupled to the
application itself. On the external level, this approach supports mediation of
data and processing among subsystems in an engineering information system
environment. To be able to express matrix operations efficiently, AMOS has been
extended with data representations and operations for numerical linear matrix
algebra that handles overloaded and multi-directional foreign functions.
Performance measures and comparisons between the original TRINITAS system and
the integrated FEAMOS system show that the integrated system can provide
competitive performance. The added DBMS functionality can be supplied without
any major performance loss. In fact, for certain conditions the integrated
system outperforms the original system and in general the DBMS provides better
scaling performance. It is the author's opinion that the suggested approach can
provide a competitive alternative for developing future FEA applications.
No 459
DEVELOPMENT ENVIRONMENTS FOR COMPLEX PRODUCT MODELS
Olof Johansson
The complexity in developing high-tech industrial artifacts such as power
plants, aircraft etc. is huge. Typical of these advanced products is that
they are hybrids of various technologies and contain several types of
engineering models that are related in a complex fashion. For power plant
design, there are functional models, mechanical models, electrical models etc.
To efficiently meet new demands for environmentally friendly technology, models of
product life cycles and environmental calculations must be brought into the
product design stage. The complexity and evolution of software systems for such
advanced product models will require new approaches to software engineering and
maintenance.
This thesis provides an object-oriented architectural framework, based on a
firm theoretical core on which efficient software development environments for
complex product modeling systems can be built.
The main feature of the theory presented in the thesis, is that the software
engineering models of the engineering application domain (e.g. power plant
design) are separated from software implementation technology, and that source
code for the basic functionality for object management and user interaction with
the objects in the product modeling system is generated automatically from the
software engineering models.
This software engineering technique has been successfully used for developing
a product modeling system for turbine- and power plant system design at ABB,
using state of the art database technology.
When software products of the next generation of engineering database and user
interface technology are made commercially available, a product modeling system
developed according to the theory presented in the thesis can be re-implemented
within a small fraction of the effort invested in developing the first system.
The product modeling system at ABB was put into production in 1993. It is now
regularly used by about 50 engineers. More than 80 steam and gas turbine plants
and several PFBC power plants have been designed using the system.
No 461
USER-DEFINED CONSTRUCTIONS IN UNIFICATION-BASED FORMALISMS
Lena Strömbäck
Unification-based formalisms have been part of the state-of-the-art within
linguistics and natural language processing for the past fifteen years. A number
of such formalisms have been developed, all providing partially different
constructions for representing linguistic knowledge. This development has been a
benefit for the linguist and language engineer who want to develop a general
natural language grammar, but the variety of formalisms makes it hard to find
the most suitable formalism for a particular problem.
The goal of this thesis is to investigate the possibility of developing a
platform for experimenting with different constructions within unification-based
formalisms. To this end a meta-formalism, FLUF (FLexible Unification Formalism),
has been created that allows the user to define his own constructions. This
possibility is a further development of user-defined constructions as used in
other formalisms.
While developing FLUF, the two properties of flexibility and predictability
have been important as goals for the formalism. The property of flexibility
allows the user to adjust the constructions within FLUF to the needs of his
current problem, while predictability concerns the computational behaviour and
enables the user to adjust it to the current expressive power of the formalism.
The FLUF formalism consists mainly of three parts. The first part allows the
user to define datatypes and functions on datatypes. This is similar to
user-defined constructions in other formalisms, but here the user is allowed to
affect the performance of the unification algorithm in several ways. The second
part adds typing and inheritance to FLUF. Also here the main idea has been to
provide variants of typing for the user to choose from. The last part, which
allows for the definition of nonmonotonic constructions, is a feature that is
not provided in other formalisms.
The thesis concludes with a description of a pilot implementation of a tool
based on FLUF and some possible applications where this tool can be used. This
implementation suggests that it would be possible to build a future tool based
on FLUF, provided predefined modules can be used to achieve better efficiency
for the system.
No 462
TABULATION-BASED LOGIC PROGRAMMING: A MULTI-LEVEL VIEW OF QUERY ANSWERING
Lars Degerstedt
This thesis is devoted to query answering of logic programs and deductive
databases. The main theme in the work is to characterize the techniques studied
on several levels of abstraction in order to obtain simple but (mathematically)
accurate models.
In the thesis we suggest the use of notions of partial deduction (i.e. partial
evaluation of logic programs) as a unifying framework for query answering. A
procedural schema called the partial deduction procedure is introduced. It
covers a spectrum of existing query answering techniques. In particular, the
procedural schema subsumes the standard notions of ”top-down” and ”bottom-up”
resolution. The partial deduction framework is especially well adapted for
deductive databases where both top-down and bottom-up oriented processing can be
applied.
In the thesis we concentrate mainly on an instance of the partial deduction
procedure, tabulated resolution. The technique is perhaps the most important
instance of the framework since it blends the goal-directedness of the top-down
method with the saturation technique used by the bottom-up approach. The
relation between the partial deduction procedure and tabulated resolution is
similar to the relation between chart parsing and the Earley parsing method in
computational linguistics.
In the thesis we present a new declarative framework, the search forest, that
separates the search space from search strategies for tabulated resolution. We
show how the new framework is related to earlier suggested methods such as
OLDT-resolution and the magic templates approach.
Furthermore, we investigate how the partial deduction procedure can be
extended to programs with negation by failure using the well-founded semantics.
As a first instance of the framework, a new bottom-up method for the
well-founded semantics is suggested, based on an existing fixed point
characterization. The method also provides the first step in the extension of
the search forest to the well-founded semantics. The search forest is extended
to programs with negation by failure, and is proven sound and complete for a
broad class of programs. Moreover, we extend the magic templates approach to the
well-founded semantics and show how the technique provides a way to implement
the search forest framework.
Finally, we suggest a way to develop abstract machines for query answering
systems through stepwise refinements of the high-level descriptions of the
techniques discussed above. We stress that the machines are to be modeled in a
declarative way in order to keep them simple; in each model we separate the
logical component from the control. To achieve this separation the abstract
machines are modeled by means of fixed point equations where the fixed point
operations play the role of ”instructions” of the machine. We suggest the use of
the non-deterministic framework of chaotic iteration as a basis for computation
of these models. The ordinary bottom-up method is used as an illustrative
example in this discussion.
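As an illustration of the bottom-up method used as the running example (an
editorial sketch in Python, not the abstract machines of the thesis), query
answers can be computed as a fixed point by repeatedly deriving new facts from
rules until nothing changes.

    def bottom_up(facts, rules):
        """rules: functions mapping the current set of facts to derivable facts."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for derive in rules:
                for new_fact in derive(known):
                    if new_fact not in known:
                        known.add(new_fact)
                        changed = True
        return known

    # Hypothetical program: path(X,Y) <- edge(X,Y).  path(X,Z) <- edge(X,Y), path(Y,Z).
    edges = {("edge", "a", "b"), ("edge", "b", "c")}
    rules = [
        lambda kb: {("path", x, y) for (p, x, y) in kb if p == "edge"},
        lambda kb: {("path", x, z)
                    for (p, x, y) in kb if p == "edge"
                    for (q, y2, z) in kb if q == "path" and y2 == y},
    ]
    print(sorted(f for f in bottom_up(edges, rules) if f[0] == "path"))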
No 475
STRATEGI OCH EKONOMISK STYRNING - EN STUDIE AV HUR EKONOMISKA STYRSYSTEM
UTFORMAS OCH ANVÄNDS EFTER FÖRETAGSFÖRVÄRV
Fredrik Nilsson
Corporate acquisitions have a tendency to fail. Common explanations are the
absence of a strategic analysis, inadequate planning, and problems during the
integration phase. International studies have also shown that the design and
use of management control systems in the acquired company affect the outcome.
The overall purpose of this study is to increase knowledge and understanding of
how management control systems are designed and used after corporate
acquisitions. An important secondary aim is to develop a conceptual framework
that describes and explains the phenomenon studied.
The starting point of the investigation is a previously conducted pilot study
(Nilsson, 1994). In the present study the conceptual framework is developed
further through literature studies and three case studies. The acquiring and
acquired companies are described and analysed along the dimensions of strategy,
organization and management control systems. A large number of standardized
measurement instruments are used to capture these dimensions, which improves
both the validity and the reliability of the study. The dimensions are studied
at two points in time, before and after the acquisition. In addition, the
change process itself is studied.
The results of the study point to two driving forces that can explain the
design and use of the acquired company's control systems: (a) the corporate
strategy of the acquiring company, and (b) the business strategy of the
acquired company. The case studies also show that these driving forces,
together and in certain cases, can place demands on the acquired company's
control systems that are difficult to reconcile. In the acquiring company, the
control system is an important aid in corporate management's efforts to achieve
a high degree of operational integration. To facilitate this work it is an
advantage if reports, concepts and models are designed and used in a uniform
way, and to achieve such uniformity the control systems of the acquiring and
acquired companies are commonly coordinated. These demands can sometimes be
difficult to reconcile with how the acquired company's management and employees
want to design and use the control system: they use it as an aid in developing
their own business, and from that point of view the control system should be
adapted to the acquired company's situation, in particular its business
strategy. One result of the study is to show when it is possible to achieve
simultaneous coordination and situational adaptation of the acquired company's
control systems. The importance of the acquiring company developing a control
philosophy for handling the different demands on the acquired company's control
systems is emphasized.
No 480
AN EMPIRICAL STUDY OF REQUIREMENTS-DRIVEN IMPACT ANALYSIS IN OBJECT-ORIENTED
SOFTWARE EVOLUTION
Mikael Lindvall
Requirements-driven impact analysis (RDIA) identifies the set of software
entities that need to be changed to implement a new requirement in an existing
system. RDIA thus involves a transition from requirements to software entities
or to a representative model of the implemented system. RDIA is performed during
the release planning phase. Input is a set of requirements and the existing
system. Output is, for each requirement, a set of software entities that have to
be changed. The output is used as input to many project-planning activities, for
example cost estimation based on change volume.
The overall goal of this thesis has been to gather knowledge about RDIA and
how to improve this crucial activity. The overall means has been an empirical
study of RDIA in the industrial object-oriented PMR-project. RDIA has been
carried out in two releases, R4 and R6, of this project as a normal part of
project developers’ work. This in-depth case-study has been carried out over
four years and in close contact with project developers.
Problems with underprediction have been identified — many more entities than
predicted are changed. We have also found that project developers are unaware of
their own positive and negative capabilities in predicting change. We have found
patterns that indicate that certain characteristics among software entities,
such as size, relations and inheritance, may be used together with complementary
strategies for finding candidates for change. Techniques and methods for data
collection and data analysis are provided as well as a thorough description of
the context under which this research project was conducted. Simple and robust
methods and tools such as SCCS, Cohen’s kappa, median tests and graphical
techniques facilitate future replications in other projects than PMR.
No 485
OPINION-BASED SYSTEMS - THE COOPERATIVE PERSPECTIVE ON KNOWLEDGE-BASED
DECISION SUPPORT
Göran Forslund
During the last fifteen years expert systems have successfully been applied to
a number of difficult problems in a variety of different application domains.
Still, the task of actually developing these systems has been much harder than
was predicted, and many of the systems delivered have failed to gain user
acceptance.
The view taken in this thesis is that many of these problems can be explained
in terms of a discrepancy between the tasks expert systems have been intended
for and the kind of situations where they typically have been used. Following
recent trends toward more cooperative systems, our analysis shows the need for a
shift in research focus from autonomous problem solvers to cooperative
advice-giving systems intended to support joint human-computer decision making.
The focus of this thesis is on the technical problems involved in realizing the
more cooperative form of expert systems.
This thesis examines the task of designing and implementing expert systems
that are to be utilised as cooperative decision support and advice-giving
systems at workplaces. To this purpose, several commercial case-studies
performed over a 10-year period have been utilised together with reported
shortcomings of existing expert systems techniques and a review of relevant
research in decision-making theory. Desiderata - concerning issues such as
cooperation, flexibility, explicitness, knowledge acquisition and maintenance -
for an architecture intended to support the implementation of the desired
behaviour of cooperative advice-giving systems are formulated, and a system
architecture and a knowledge representation intended to meet the requirements of
these desiderata are proposed.
The properties of the suggested architecture, as compared to the desiderata,
are finally examined, both theoretically and in practice. For the latter purpose
the architecture is implemented as the medium-sized system, CONSIDER. The
studies performed indicate that the proposed architecture actually possesses the
properties claimed. Since the desiderata have been formulated to facilitate the
task of building cooperative advice-giving systems for real-world situations,
this knowledge should both be useful for practitioners and of enough interest
for other researchers to motivate further studies.
No 494
ACTIVE DATABASE MANAGEMENT SYSTEMS FOR MONITORING AND CONTROL
Martin Sköld
Active Database Management Systems (ADBMSs) have been developed to support
applications in detecting changes in databases. This includes support for
specifying active rules that monitor changes to data and rules that perform some
control tasks for the applications. Active rules can also be used for
specifying constraints that must be met to maintain the integrity of the data,
for maintaining long-running transactions, and for authorization control.
This thesis begins by presenting case studies on using ADBMSs for monitoring
and control. The areas of Computer Integrated Manufacturing (CIM) and
Telecommunication Networks have been studied as possible applications that can
use active database technology. These case studies have served as requirements
on the functionality that has later been developed in an ADBMS. After an
introduction to the area of active database systems it is exemplified how active
rules can be used by the applications studied. Several requirements are
identified such as the need for efficient execution of rules with complex
conditions and support for accessing and monitoring external data in a
transparent manner.
The main body of work presented is a theory for incremental evaluation, named
partial differencing. It is shown how the theory is used for
implementing efficient rule condition monitoring in the AMOS ADBMS. The
condition monitoring is based on a functional model where changes to
rule conditions are defined as changes to functions. External data is introduced
as foreign functions to provide transparency between access and
monitoring of changes to local data and external data.
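To give a rough feel for incremental rule condition monitoring, the following Python sketch (an invented simplification, not the AMOS implementation of partial differencing) re-evaluates a condition only against the inserted and deleted tuples of a transaction instead of rescanning the whole table.

```python
# Hypothetical sensor table and a rule condition "temperature > 100".
class MonitoredCondition:
    def __init__(self, predicate):
        self.predicate = predicate
        self.matching = set()          # tuples currently satisfying the condition

    def apply_delta(self, inserted=(), deleted=()):
        """Update the match set from the changes only; return newly triggered tuples."""
        newly_triggered = {t for t in inserted if self.predicate(t)}
        self.matching -= set(deleted)
        self.matching |= newly_triggered
        return newly_triggered

cond = MonitoredCondition(lambda row: row[1] > 100)   # row = (sensor_id, temperature)

# Transaction 1: two insertions, one of which triggers the condition.
print(cond.apply_delta(inserted=[("s1", 95), ("s2", 120)]))                 # {('s2', 120)}
# Transaction 2: only the delta (one deletion, one insertion) is examined.
print(cond.apply_delta(inserted=[("s3", 130)], deleted=[("s2", 120)]))      # {('s3', 130)}
print(cond.matching)                                                        # {('s3', 130)}
```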
The thesis includes several publications from both international journals and
international conferences. The papers and the thesis deal with issues such as a
system architecture for a CIM system using active database technology, extending
a query language with active rules, using active rules in the studied
applications, grouping rules into modules (rule contexts), efficient
implementation of active rules by using incremental evaluation techniques,
introducing foreign data into databases, and temporal support in active database
systems for storing events monitored by active rules. The papers are
complemented with background information and work done after the papers were
published, both by the author and by colleagues.
No 495
AUTOMATIC VERIFICATION OF PETRI NETS IN A CLP FRAMEWORK
Hans Olsén
This thesis presents an approach to automatic verification of Petri Nets. The
method is formulated in a CLP framework and the class of systems we consider is
characterized syntactically as a special class of Constraint Logic Programs. The
state space of the system in question coincides with the least fixpoint of the
program. The method presented can therefore equivalently be viewed as a
construction of a fixpoint computation scheme, for the programs under
consideration. The main motivation is to synthesize invariants for verification.
The approach to verifying a program consists of two parts:
- Computing a finite representation of the fixpoint as a formula in some given theory.
- Checking that the fixpoint entails the specification, also expressed as a formula in the theory.
A CLP program is considered as an inductive definition of a set and the idea
is to find the minimal solution by constructing a non-recursive formula defining
the same set in a (decidable) theory. In the case of Petri Nets, the method
proposed will, when successful, generate a set of linear Diophantine equations
whose solutions are exactly the markings reachable in the Petri Net. Actually,
the base clause of the recursive program, which specifies the initial marking in
the case of Petri Nets, can be parametric. Thus, a generic formula can be
computed that characterizes the fixpoint for every instance of the parameters.
Using this facility, a kind of liveness property can also be proved.
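As background, the following Python sketch illustrates how reachable markings relate to linear Diophantine constraints through the classical state equation m = m0 + C·x over nonnegative firing counts x; the net is invented, and unlike the method of the thesis the plain state equation only gives a necessary condition for reachability.

```python
from itertools import product

# Incidence matrix C[p][t]: net effect of firing transition t on place p.
# A hypothetical net with three places and two transitions.
C = [[-1,  0],   # p0: t0 consumes a token
     [ 1, -1],   # p1: t0 produces, t1 consumes
     [ 0,  1]]   # p2: t1 produces

def marking_after(m0, counts):
    """State equation: m = m0 + C * counts (ignores firing order and enabledness)."""
    return [m0[p] + sum(C[p][t] * counts[t] for t in range(len(counts)))
            for p in range(len(m0))]

def state_equation_allows(m0, target, bound=10):
    """Is there a nonnegative integer solution x with m0 + C*x == target?"""
    return any(marking_after(m0, x) == list(target)
               for x in product(range(bound + 1), repeat=len(C[0])))

m0 = [2, 0, 0]
print(state_equation_allows(m0, (0, 1, 1)))   # True: x = (2, 1)
print(state_equation_allows(m0, (0, 0, 3)))   # False: the token count cannot grow
```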
If the theory is decidable, the second phase is automatic. The first phase
will fail if the theory is too weak for expressing the fixpoint. Even if the
fixpoint is definable in the theory, the first phase may fail. The programs we
study include programs with the expressive power of universal Turing machines.
Whether the fixpoint is expressible in a restricted theory is itself undecidable
for such programs. Therefore the method is inherently incomplete. We have
identified a non-trivial class of Petri Nets for which the method is guaranteed
to succeed.
The approach to computing a finite representation of the fixpoint is based on
the idea of describing a possibly infinite bottom-up fixpoint computation by the
language of all possible firing sequences of the recursive clauses of the
program. Each element in the fixpoint is generated by some sequence of clause
applications. Usually, several sequences may generate the same element so that a
sublanguage may be sufficient for generating the fixpoint. This is equivalent to
saying that a restricted computation strategy is complete. For a particular
class of firing languages, called flat languages, the associated set of
reachable elements can be described by a non-recursive formula in the theory
used. The task is therefore to find a computation strategy defined by a flat
language that is sufficient for generating the fixpoint. We define a number of
rewrite rules for expressions defining languages. The computation proceeds by
repeatedly rewriting expressions with the objective of reaching an expression
defining a flat language. The computation is guaranteed to terminate, since each
rewriting rule results in a language expression which is smaller according to a
well-founded ordering, but it may fail to generate a flat language: eventually,
either a flat language is constructed or no rewriting rule can
be applied. There may exist a flat language by which the fixpoint can be
generated although it may not be possible to construct by the rewriting rules
presented in this thesis.
Partial correctness is verified by checking the entailment of a property by
the fixpoint. Since entailment of the fixpoint by a property may equally well be
checked, completeness can also be verified. For checking entailment we apply the
proof procedure of Presburger arithmetic introduced by Boudet and Comon.
The main contributions of the thesis are:
- A method for computing finite representations of a certain class of inductively defined sets.
- The identification of a class of Petri Nets, closely related to so called Basic Parallel Processes, for which the method is guaranteed to succeed.
- An experimental system that implements the method proposed and a detailed report on the automatic verification of several non-trivial examples taken from the literature.
No 498
ALGORITHMS AND COMPLEXITY FOR TEMPORAL AND SPATIAL FORMALISMS
Thomas Drakengren
The problem of computing with temporal information was recognised early within
the area of artificial intelligence; most notably, the temporal interval
algebra by Allen has become a widely used formalism for representing and
computing with qualitative knowledge about relations between temporal intervals.
However, the computational properties of the algebra and related formalisms are
known to be bad: most problems (like satisfiability) are NP-hard. This
thesis contributes to finding restrictions (as weak as possible) on Allen's
algebra and related temporal formalisms (the point-interval algebra
and extensions of Allen's algebra for metric time) for which the
satisfiability problem can be solved in polynomial time.
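For readers unfamiliar with qualitative constraint reasoning, the following Python sketch shows path consistency for the simple point algebra over the basic relations <, = and >; Allen's interval algebra is handled analogously but with thirteen basic relations and a larger composition table. The example network is invented.

```python
from itertools import product

BASIC = ("<", "=", ">")
INV = {"<": ">", ">": "<", "=": "="}

def compose(r, s):
    """Composition of two basic point relations, as a set of basic relations."""
    if r == "=": return {s}
    if s == "=": return {r}
    if r == s:   return {r}        # "<;<" = "<" and ">;>" = ">"
    return set(BASIC)              # "<;>" and ">;<" give no information

def compose_sets(R, S):
    return set().union(*(compose(r, s) for r, s in product(R, S)))

def path_consistency(n, constraints):
    """constraints[(i, j)] is the set of allowed basic relations between points i and j."""
    net = {(i, j): set(BASIC) for i in range(n) for j in range(n) if i != j}
    for (i, j), rel in constraints.items():
        net[(i, j)] = set(rel)
        net[(j, i)] = {INV[r] for r in rel}
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            tightened = net[(i, j)] & compose_sets(net[(i, k)], net[(k, j)])
            if not tightened:
                return None        # inconsistent network
            if tightened != net[(i, j)]:
                net[(i, j)] = tightened
                net[(j, i)] = {INV[r] for r in tightened}
                changed = True
    return net

# x < y, y < z, and z < x together are inconsistent.
print(path_consistency(3, {(0, 1): {"<"}, (1, 2): {"<"}, (2, 0): {"<"}}))  # None
# Dropping the cycle makes it consistent; x < z is inferred.
print(path_consistency(3, {(0, 1): {"<"}, (1, 2): {"<"}})[(0, 2)])         # {'<'}
```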
Another research area utilising temporal information is that of reasoning
about action, which treats the problem of drawing conclusions based on the
knowledge about actions having been performed at certain time points (this
amounts to solving the infamous frame problem). One paper of this
thesis attacks the computational side of this problem, a side that has not been
treated in the literature (research in the area has focused on modelling
only). A nontrivial class of problems for which satisfiability is solvable in
polynomial time is isolated; this class is able to express phenomena such as
concurrency, conditional actions and continuous time.
Similar to temporal reasoning is the field of spatial reasoning,
where spatial instead of temporal objects are the field of study. In two papers,
the formalism RCC-5 for spatial reasoning, very similar to Allen's
algebra, is analysed with respect to tractable subclasses, using techniques from
temporal reasoning.
Finally, as a spin-off effect from the papers on spatial reasoning, a
technique employed therein is used for finding a class of intuitionistic
logic
for which computing inference is tractable.
No 502
ANALYSIS AND SYNTHESIS OF HETEROGENEOUS REAL-TIME SYSTEMS
Jakob Axelsson
During the development of a real-time system the main goal is to find an
implementation that satisfies the specified timing constraints. Often, it is
most cost-effective to use a heterogeneous solution based on a mixture of
different microprocessors and application-specific integrated circuits. There is
however a lack of techniques to handle the development of heterogeneously
implemented systems, and this thesis therefore presents a novel approach
inspired by research in the area of hardware/software codesign. The behaviour of
the entire system is specified in a high-level, homogeneous description,
independently of how different parts will later be implemented, and a thorough
design space exploration is performed at the system level using automatic or
semi-automatic synthesis tools which operate on virtual prototypes of the
implementation.
The objective of the synthesis is to find the least costly implementation
which meets all timing constraints, and in order to predict these
characteristics of the final system, different analysis methods are needed. The
thesis presents an intrinsic analysis which estimates the hardware resource
usage of individual tasks, and an extrinsic analysis for determining the effects
of resource sharing between several concurrent tasks. The latter is similar to
the fixed-priority schedulability analysis used for single-processor systems,
but extended to heterogeneous architectures. Since these analysis procedures are
applied early in the design process, there are always some discrepancies between
the estimated data and the actual characteristics of the final system, and
constructive ways of dealing with these inaccuracies are therefore also
presented.
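As background to the extrinsic analysis, the following Python sketch shows classical fixed-priority response-time analysis for independent periodic tasks on a single processor; the task set is invented and the heterogeneous extensions of the thesis are not reproduced.

```python
import math

def response_time(task_index, tasks):
    """Worst-case response time R_i = C_i + sum over higher-priority j of ceil(R_i/T_j)*C_j.

    tasks: list of (C, T) pairs sorted by decreasing priority; deadline = period.
    Returns None if the task misses its deadline.
    """
    C_i, T_i = tasks[task_index]
    r = C_i
    while True:
        interference = sum(math.ceil(r / T_j) * C_j for C_j, T_j in tasks[:task_index])
        r_next = C_i + interference
        if r_next == r:
            return r if r <= T_i else None   # fixed point reached
        if r_next > T_i:
            return None                      # deadline miss
        r = r_next

# Hypothetical task set: (worst-case execution time C, period T), highest priority first.
tasks = [(1, 4), (2, 6), (3, 13)]
print([response_time(i, tasks) for i in range(len(tasks))])   # [1, 3, 10]
```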
Several synthesis algorithms are proposed for different aspects of the design.
The hardware architecture is assembled from a component library using heuristic
search techniques, and three alternative algorithms are evaluated in the thesis.
The optimal partitioning of the functionality on an architecture is found using
a branch-and-bound algorithm. Finally, a fixed-priority scheduler is
instantiated by assigning priorities to the concurrent tasks of the behaviour.
Together, the proposed analysis and synthesis methods provide a solid basis for
systematic engineering of heterogeneous real-time systems.
No 503
COMPILER GENERATION FOR DATA-PARALLEL PROGRAMMING LANGUAGES FROM TWO-LEVEL
SEMANTICS SPECIFICATIONS
Johan Ringström
This thesis is an empirical study of compiler generation for data-parallel
languages from denotational-semantics-based formal specifications. We
investigate whether compiler generation from such specifications is practical,
not only with respect to generation of practical compilers, but also with
respect to compilation of programs into efficient code and execution of the
compiled programs on massively parallel SIMD (Single Instruction Multiple Data)
architectures. Efficient compilers have been generated for Predula Nouveau, a
small Pascal-like data-parallel language with embedded data-parallel primitives.
To demonstrate the practicality and generality of the approach, experimental
studies have been made for two SIMD target architectures. Compilers can
currently be generated which emit code for the MasPar MP-1, which is an
architecture for large multi-user systems, and the RVIP, which is an
architecture for embedded systems. Performance studies have been made on the
compiler generator system, the compilers it generates, and in particular the
code generated from these compilers.
Compiler generation systems are becoming increasingly common. Most such
systems use attribute grammars as specification formalism, but systems exist
which use other types of formalisms. However, few systems use denotational
semantics based formalisms. Furthermore, these systems generate compilers for
sequential programming languages. Compiler generator systems for parallel, and
in particular data-parallel languages, are very rare. Thus, this work is one of
the first case studies of generation of efficient compilers for such languages.
The formal semantics specification approach uses two abstraction levels. The
higher level uses denotational semantics, including a set of auxiliary
data-parallel functions. These functions serve as an intermediate form which
defines the interface to and is exported from the low-level specification.
Internally, this lower level uses several specification methods, where a
target-architecture-specific part uses methods which may vary between different
architectures. The architecture-independent part of the lower level uses a fixed
operational semantics based specification in the form of a general internal
representation which includes data-parallel operations.
No 512
NÄRHET OCH DISTANS - STUDIER AV KOMMUNIKATIONSMÖNSTER I SATELLITKONTOR OCH
FLEXIBLA KONTOR
Anna Moberg
Today two major directions can be seen in organisation and work forms as far
as distance and proximity are concerned. One trend involves geographic
distribution of the company's operations, where information technology is the
facilitator of maintaining contacts. The other trend involves achieving
proximity between individuals who are expected to communicate with each other to
perform their work effectively. Examples of this kind of organising can be seen
in the form of teams and the design of environments intended to support cooperation.
The overall theme in the thesis is communication patterns in new organisational
forms. It consists of two studies with separate results.
The first study takes up satellite offices which are an organisational form
with a geographic distribution of operations within a section. The study
resulted in a licentiate thesis (Moberg, 1993). The aim was to identify
similarities and differences in communication patterns between the satellite
office and the corresponding operations at the main office. Data was gathered
using a communication diary and interviews. Three companies were discussed in
the study, all with customer service as a function and distributed at different
geographical locations while belonging to the same section. Communication
between people at the main office and at the satellite office was performed
mainly through the use of information technology. The study showed no great
differences in communication patterns between main office and satellite offices.
There was frequent exchange of information between the units, but also within
the respective groups. Telework in this form seems to suit the type of
operations that was studied, i.e. relatively simple work duties where much of
the information exists in electronic form and where a large part of the work
tasks consist of telephone contact with customers.
The other study deals with flexible offices, i.e. open space approaches where
several employees share office space. The concept is intimately connected with
information technology and flexible work for the individuals. The aim of the
study was to create an understanding of the effect of the flexible office on
communication and cooperation and also of how individuals experience their
working environment. A large part of the data gathering was in the form of
questionnaire studies before and after the introduction of the new form of
office. The effects of introducing the flexible office were perceived both
positively and negatively. The open space approach both facilitates and impairs
communication and a number of paradoxical effects were identified as regards
communication and work performance. The effects on the work of groups were for
the most part positive, while the effects for the individual were both positive
and negative. The flexible office seems to be a suitable concept for a
team-oriented way of working, when work tasks are relatively complex and include
a large proportion of contact with colleagues. However, the work should not
demand too much concentration.
No 520
DESIGN AND MODELLING OF A PARALLEL DATA SERVER FOR TELECOM APPLICATIONS
Mikael Ronström
Telecom databases are databases used in the operation of the telecom network
and as parts of applications in the telecom network. The first telecom databases
were Service Control Points (SCP) in Intelligent Networks. These provided mostly
number translations for various services, such as Freephone. Also databases that
kept track of the mobile phones (Home Location Registers, HLR) for mobile
telecommunications were early starters. SCPs and HLRs are now becoming the
platforms for service execution of telecommunication services. Other telecom
databases are used for management of the network, especially for real-time
charging information. Many information servers, such as Web Servers, Cache
Servers, Mail Servers and File Servers are also becoming part of the telecom network.
These servers have in common that they all have to answer massive amounts of
rather simple queries, that they have to be very reliable, and that they have
requirements on short response times. Some of them also need large storage and
some need to send large amounts of data to the users.
Given the requirements of telecom applications an architecture of a Parallel
Data Server has been developed. This architecture contains new ideas on a
replication architecture, two-phase commit protocols, and an extension on the
nWAL concept writing into two or more main memories instead of writing to disk
at commit. The two-phase commit protocol has been integrated with a protocol
that supports network redundancy (replication between clusters).
Some ideas are also described on linear hashing and B-trees, and a data
structure for tuple storage that provides efficient logging. It is shown how the
data server can handle all types of reconfiguration and recovery activities with
the system on-line. Finally, advanced support for on-line schema change has been
developed. This includes support of splitting and merging tables without any
service interruption.
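As an illustration of the linear hashing idea mentioned above (an invented textbook-style sketch, not the data structure of the data server), the table below grows one bucket at a time by splitting the bucket indicated by a split pointer:

```python
class LinearHashTable:
    """Textbook linear hashing: buckets split one at a time as the table grows."""

    def __init__(self, initial_buckets=2, max_load=2.0):
        self.n0 = initial_buckets       # buckets at the start of a doubling round
        self.level = 0                  # current doubling round
        self.split = 0                  # next bucket to split
        self.buckets = [[] for _ in range(initial_buckets)]
        self.max_load = max_load
        self.count = 0

    def _bucket_index(self, key):
        i = hash(key) % (self.n0 * (2 ** self.level))
        if i < self.split:              # bucket already split in this round
            i = hash(key) % (self.n0 * (2 ** (self.level + 1)))
        return i

    def insert(self, key, value):
        self.buckets[self._bucket_index(key)].append((key, value))
        self.count += 1
        if self.count / len(self.buckets) > self.max_load:
            self._split_one_bucket()

    def _split_one_bucket(self):
        self.buckets.append([])
        old = self.buckets[self.split]
        self.buckets[self.split] = []
        self.split += 1
        if self.split == self.n0 * (2 ** self.level):   # round finished
            self.level += 1
            self.split = 0
        for key, value in old:          # redistribute only the split bucket
            self.buckets[self._bucket_index(key)].append((key, value))

    def lookup(self, key):
        return [v for k, v in self.buckets[self._bucket_index(key)] if k == key]

table = LinearHashTable()
for i in range(20):
    table.insert(f"subscriber-{i}", i)
print(len(table.buckets), table.lookup("subscriber-7"))   # 10 [7]
```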
Together these ideas represent an architecture of a Parallel Data Server that
provides non-stop operation. The distribution is transparent to the application
and this will be important when designing load control algorithms of the
applications using the data server. This Parallel Data Server opens up a new
usage area for databases. Telecom applications have traditionally been seen as
an area of proprietary solutions. Given the achieved performance, reliability
and response time of the data server presented in this thesis it should be
possible to use databases in many new telecom applications.
No 522
TOWARDS EFFECTIVE FAULT PREVENTION - AN EMPIRICAL STUDY IN SOFTWARE
ENGINEERING
Niclas Ohlsson
Quality improvement in terms of lower costs, shorter development times and
increased reliability is not only important to most organisations, but also
demanded by the customers. This thesis aims at providing a better understanding
of which factors affect the number of faults introduced in different development
phases and how this knowledge can be used to improve the development of
large-scale software. In particular, models that enable identification of
fault-prone modules are desirable. Such prediction models enable management to
reduce costs by taking special measures, e.g. additional inspection (fault
detection), and assigning more experienced developers to support the development
of critical components (fault avoidance). This thesis is a result of studying
real projects for developing switching systems at Ericsson. The thesis
demonstrates how software metrics can form the basis for reducing development
costs by early identification, i.e. at the completion of design, of the most
fault-prone software modules. Several exploratory analyses of potential
explanatory factors for fault-proneness in different phases are presented. An
integrated fault analysis process is described that facilitates and was used in
the collection of more refined fault data. The thesis also introduces a new
approach to evaluate the accuracy of prediction models, Alberg diagrams,
suggests a strategy for how variables can be combined, and evaluates and
improves strategies by replicating analyses suggested by others.
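To illustrate the kind of evaluation an Alberg diagram supports, the following Python sketch ranks modules by predicted fault-proneness and accumulates the share of actual faults covered; the numbers are invented and not data from the Ericsson projects.

```python
# (module, predicted fault count, actual fault count); hypothetical values.
modules = [
    ("m1", 12, 9), ("m2", 10, 2), ("m3", 7, 8), ("m4", 5, 0),
    ("m5", 3, 4),  ("m6", 2, 1),  ("m7", 1, 0), ("m8", 0, 1),
]

def alberg_points(modules):
    """Cumulative share of actual faults when modules are ranked by prediction."""
    ranked = sorted(modules, key=lambda m: m[1], reverse=True)
    total_actual = sum(m[2] for m in ranked)
    points, cumulative = [], 0
    for i, (name, _, actual) in enumerate(ranked, start=1):
        cumulative += actual
        points.append((i / len(ranked), cumulative / total_actual))
    return points

for frac_modules, frac_faults in alberg_points(modules):
    print(f"top {frac_modules:4.0%} of modules -> {frac_faults:5.1%} of faults")
```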
No 526
A SYSTEMATIC APPROACH FOR PRIORITIZING SOFTWARE REQUIREMENTS
Joachim Karlsson
In most commercial development projects, there are more candidate requirements
subject to implementation than available time and resources allow for. A
carefully chosen set of requirements must therefore be selected for
implementation. A systematic approach for prioritizing candidate requirements is
a very useful means to provide necessary and useful input for the crucial
selection decision.
This thesis provides results from the development and applications of
different approaches for prioritizing requirements in close collaboration with
Ericsson Radio Systems AB. A pairwise comparison approach for prioritizing
requirements according to multiple criteria has been developed and applied. To
overcome the high number of comparisons that the approach often required in
projects with many requirements, different candidate approaches have been
investigated and applied for reducing the required effort. An approach for
managing requirement interdependencies and their implications for the
prioritizing approach has been developed. A support tool packaging the
prioritizing approach and automating much of the manual work in the approach has
been developed and evaluated in practice.
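As a rough illustration of deriving priorities from pairwise comparisons, the following Python sketch uses the common geometric-mean approximation of the principal eigenvector; this is a standard AHP-style technique and not necessarily the exact procedure developed in the thesis, and the judgement matrix is invented.

```python
import math

# judgements[i][j]: how much more valuable requirement i is than requirement j
# (1 = equal, 3 = moderately more, 5 = strongly more); reciprocals below the diagonal.
requirements = ["R1", "R2", "R3"]
judgements = [
    [1,   3,   5],
    [1/3, 1,   3],
    [1/5, 1/3, 1],
]

def priorities(matrix):
    """Geometric-mean-of-rows approximation of the principal eigenvector."""
    geo_means = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

for name, weight in zip(requirements, priorities(judgements)):
    print(f"{name}: {weight:.2f}")   # R1: 0.64, R2: 0.26, R3: 0.10 (approximately)
```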
Qualitative results indicate that the proposed approach is an effective means
for selecting among candidate requirements, for allocating resources to them and
for negotiating requirements. The approach further enables knowledge transfer
and visualization, helps to establish consensus among project members and
creates a good basis for decisions. Quantitative results indicate that the
requirements actually selected for implementation have a profound impact on the
final product. In several projects where requirements were prioritized according
to the criteria value for customer and cost of implementation, implementing the
requirements which optimize the relation of value for customer to cost of
implementation would reduce the development cost and development time. Software
systems with substantially the same value for customer can consequently be
delivered with a reduction in cost and lead-time when the proposed prioritizing
approach is deployed carefully.
No 530
DECLARATIVE DEBUGGING FOR LAZY FUNCTIONAL LANGUAGES
Henrik Nilsson
Lazy functional languages are declarative and allow the programmer to write
programs where operational issues such as the evaluation order are left
implicit. It is desirable to maintain a declarative view also during debugging
so as to avoid burdening the programmer with operational details, for example
concerning the actual evaluation order which tends to be difficult to follow.
Conventional debugging techniques focus on the operational behaviour of a
program and thus do not constitute a suitable foundation for a general-purpose
debugger for lazy functional languages. Yet, the only readily available,
general-purpose debugging tools for this class of languages are simple,
operational tracers.
This thesis presents a technique for debugging lazy functional programs
declaratively and an efficient implementation of a declarative debugger for a
large subset of Haskell. As far as we know, this is the first implementation of
such a debugger which is sufficiently efficient to be useful in practice. Our
approach is to construct a declarative trace which hides the operational
details, and then use this as the input to a declarative (in our case
algorithmic) debugger.
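To make the idea of algorithmic debugging concrete, the following Python sketch (invented and far simpler than an EDT for Haskell) walks a tree of recorded reductions, asks an oracle whether each result is correct, and blames the topmost node whose result is wrong while all its children are correct.

```python
class Node:
    def __init__(self, call, result, children=()):
        self.call, self.result, self.children = call, result, list(children)

def algorithmic_debug(node, oracle):
    """Return the node blamed for a wrong result, or None if the result is correct.

    oracle(call, result) answers the question "is this reduction correct?".
    """
    if oracle(node.call, node.result):
        return None
    for child in node.children:
        blamed = algorithmic_debug(child, oracle)
        if blamed is not None:
            return blamed
    return node            # wrong result, but all children correct: the bug is here

# A tiny invented trace of "average [1,2,3]" computed with a buggy length function.
trace = Node("average [1,2,3]", 3.0, [
    Node("sum [1,2,3]", 6),
    Node("length [1,2,3]", 2),
])
correct = {"average [1,2,3]": 2.0, "sum [1,2,3]": 6, "length [1,2,3]": 3}
oracle = lambda call, result: correct[call] == result

blamed = algorithmic_debug(trace, oracle)
print(blamed.call, "=", blamed.result)    # length [1,2,3] = 2
```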
The main contributions of this thesis are:
- A basis for declarative debugging of lazy functional programs is developed in the form of a trace which hides operational details. We call this kind of trace the Evaluation Dependence Tree (EDT).
- We show how to construct EDTs efficiently in the context of implementations of lazy functional languages based on graph reduction. Our implementation shows that the time penalty for tracing is modest, and that the space cost can be kept below a user definable limit by storing one portion of the EDT at a time.
- Techniques for reducing the size of the EDT are developed based on declaring modules to be trusted and designating certain functions as starting-points for tracing.
- We show how to support source-level debugging within our framework. A large subset of Haskell is handled, including list comprehensions.
- Language implementations are discussed from a debugging perspective, in particular what kind of support a debugger needs from the compiler and the run-time system.
- We present a working reference implementation consisting of a compiler for a large subset of Haskell and an algorithmic debugger. The compiler generates fairly good code, even when a program is compiled for debugging, and the resource consumption during debugging is modest. The system thus demonstrates the feasibility of our approach.
No 555
TIMING ISSUES IN HIGH-LEVEL SYNTHESIS
Jonas Hallberg
High-level synthesis transforms a behavioral specification into a
register-transfer level implementation of a digital system. Much research has
been put into automating this demanding and error-prone task. Much of the
effort has been directed towards finding techniques which minimize the length of
the operation schedule and/or the implementation cost. As the techniques have
matured and found their way into commercial applications, new problems have
emerged such as the need to be able to specify not only the functional but also
the timing behavior, and the difficulty to generate implementations with this
timing behavior.
This thesis addresses the timing-related problems in high-level synthesis by
modeling the timing of a design at three different levels. In the high-level
model, timing is expressed by constraints on the execution time of sequences of
operations. At the middle level the timing is given by the selected clock
period and the operation schedule. Finally, the low-level model is used to
estimate the delay of each individual operation, taking into account the effects
given by functional and storage units, multiplexors, interconnections, and the
controller. This elaborated low-level timing model provides the basis for
deciding the middle-level timing in such a way that the possibility of reaching
a final implementation with this timing behavior is maximized. The middle
level timing, in turn, is used to verify the timing constraints given by the
high-level model.
A set of design transformations has been developed to enable an integrated
high-level synthesis algorithm performing automatic clock period selection,
multicycle scheduling, resource allocation, and resource binding. The task of
finding a sequence of transformations which leads to a (near) optimal solution
yields a combinatorial optimization problem. To solve this problem an
optimization algorithm based on the tabu search heuristic is proposed.
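As a generic illustration of the tabu search heuristic referred to above, the following Python sketch shows the basic search loop; the neighbourhood and cost function are invented toy examples and not the design transformations used in the thesis.

```python
def tabu_search(start, neighbours, cost, iterations=200, tabu_size=10):
    """Generic tabu search: always move to the best non-tabu neighbour,
    even if it is worse, and remember recently visited solutions."""
    current = best = start
    tabu = [start]
    for _ in range(iterations):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)                       # short-term memory only
        if cost(current) < cost(best):
            best = current
    return best

# Toy problem: choose a clock period (integer ns) minimising an invented cost
# function with several local minima.
def cost(period):
    return (period - 17) ** 2 + 8 * (period % 5)

def neighbours(period):
    return [max(1, period + d) for d in (-3, -2, -1, 1, 2, 3)]

print(tabu_search(start=40, neighbours=neighbours, cost=cost))   # 15
```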
The resulting high-level synthesis system has been applied to standard
benchmarks and an example from the operation and maintenance (OAM) functionality of
an asynchronous transfer mode (ATM) switch. The results motivate the usage of
the proposed low-level and high-level timing models and demonstrate the
efficiency of the implemented high-level synthesis system.
No 561
MANAGEMENT OF 1-D SEQUENCE DATA - FROM DISCRETE TO CONTINUOUS
Ling Li
Data over ordered domains such as time or linear positions are termed
sequence data. Sequence data require special treatments which are not
provided by traditional DBMSs. Modelling sequence data in traditional
(relational) database systems often results in awkward query expressions and bad
performance. For this reason, considerable research has been dedicated to
supporting sequence data in DBMSs in the last decade. Unfortunately, some
important requirements from applications are neglected, i.e., how to support
sequence data viewed as
continuous under arbitrary user-defined interpolation assumptions,
and how to perform sub-sequence extraction efficiently based on the conditions
on the value domain. We term this kind of query a value query
(in contrast to shape queries, which look for general patterns in
sequences).
This thesis presents pioneering work on supporting value queries on 1-D
sequence data based on arbitrary user-defined interpolation functions. An
innovative indexing technique, termed the IP-index, is proposed. The
motivation for the IP-index is to support efficient calculation of implicit
values of sequence data under user-defined interpolation functions. The IP-index
can be implemented on top of any existing ordered indexing structure such as a
B+-tree. We have implemented the IP-index in both a disk-resident database
system (SHORE) and a main-memory database system (AMOS). The highlights of the
IP-index - fast insertion, fast search, and space efficiency - are verified by
experiments. These properties of the IP-index make it particularly suitable for
large
sequence data.
Based on the work on the IP-index, we introduce an extended SELECT operator,
σ*, for sequence data. The σ* operator, σ*_cond(TS),
retrieves sub-sequences (time intervals) where the values inside those
intervals satisfy the condition cond. Experiments made on SHORE using
both synthetic and real-life time sequences show that the σ* operator
(supported by the IP-index) dramatically improves the performance of value
queries. A cost model for the σ* operator is developed in order to be able
to optimize complex queries. Optimizations of time window queries and sequence
joins are investigated and verified by experiments.
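As a simplified, invented illustration of a value query under a linear interpolation assumption, the following Python sketch scans a time sequence and returns the time intervals where the interpolated value exceeds a threshold; it does not reproduce the IP-index itself, which avoids such full scans.

```python
def intervals_above(sequence, threshold):
    """Time intervals where the linearly interpolated value exceeds `threshold`.

    sequence: list of (time, value) pairs sorted by time.
    """
    def crossing(p, q):
        (t1, v1), (t2, v2) = p, q
        return t1 + (threshold - v1) * (t2 - t1) / (v2 - v1)   # linear interpolation

    intervals, start = [], None
    if sequence[0][1] > threshold:
        start = sequence[0][0]
    for p, q in zip(sequence, sequence[1:]):
        below_p, below_q = p[1] <= threshold, q[1] <= threshold
        if below_p and not below_q:          # upward crossing: an interval starts
            start = crossing(p, q)
        elif not below_p and below_q:        # downward crossing: the interval ends
            intervals.append((start, crossing(p, q)))
            start = None
    if start is not None:
        intervals.append((start, sequence[-1][0]))
    return intervals

# Hypothetical time sequence of (time, value) pairs.
ts = [(0, 1.0), (10, 3.0), (20, 2.0), (30, 6.0), (40, 1.0)]
print(intervals_above(ts, 2.5))   # [(7.5, 15.0), (21.25, 37.0)]
```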
Another contribution of this thesis is on physical organization of sequence
data. We propose a multi-level dynamic array structure for dynamic,
irregular
time sequences. This data structure is highly space efficient and meets the
challenge of supporting both efficient random access and fast appending.
Other relevant issues such as management of large objects in DBMS, physical
organization of secondary indexes, and the impact of main-memory or
disk-resident DBMS on sequence data structures are also investigated.
A thorough application study on "terrain-aided navigation" is presented to
show that the IP-index is applicable to other application domains.
No 563
STUDENT MODELLING BASED ON COLLABORATIVE DIALOGUE WITH A LEARNING COMPANION
Eva L. Ragnemalm
When using computers to support learning, one significant problem is how to
find out what the student understands and knows with respect to the knowledge
the computer system is designed to help him to learn (the system's content
goal). This analysis of the student is based on the input he provides to the system
and it is evaluated with respect to the content goals of the system. This
process is called student modelling. In essence this problem can be seen as that
of bridging a gap between the input to the system and its content goals.
It is difficult to study the student's reasoning because it is not directly
observable. With respect to the gap, this is a problem of paucity of student
input. One possible solution, explored in this dissertation, is to have the
student work collaboratively with a computer agent, a Learning Companion, and
eavesdrop on the emerging dialogue.
This dissertation explores the feasibility of this idea through a series of
studies. Examples of naturally occurring collaborative dialogue from two
different domains are examined as to their informativeness for a student
modelling procedure. Spoken as well as written dialogue is studied. The problem
of information extraction from collaborative dialogue is briefly explored
through prototyping. Prototyping is also used to study the design of a Learning
Companion, whose behavior is based on observations from the dialogues in the
informativeness study. It is concluded that for certain types of student models,
collaborative dialogue with a Learning Companion is indeed a useful source of
information, and it appears technically feasible. Further research is, however,
needed on the design of both information extraction and the Learning Companion.
No 567
DOES DISTANCE MATTER?: ON GEOGRAPHICAL DISPERSION IN ORGANISATIONS
Jörgen Lindström
In the discussion on organisations and organisational form, several concepts
have appeared to denote what is said to be new organisational forms. These
concepts often imply a geographical dispersion of organisations. The
changes to organisational structure—and notably geographical dispersion—are
often seen as enabled by developments in information and communication
technology (ICT), developments providing us with tools that make it possible to
communicate and handle information over geographical distances "better" and more
"efficiently" than ever before. Thus, it is implied that distance is dead or at
least losing
in importance for organisations.
In this thesis, however, it is contended that distance is still an important
concept and the aim of the thesis is to gain an understanding of the possible
importance of geographical distance for the design and management of
organisations. More specifically, it focuses on how different communication
modes—basically face-to-face as compared to technology-mediated
communication—affect the process of organising. This is discussed both on a
general level and with a special focus on the role and work of managers.
It is concluded that distance is still a very important fact in organisational
life. Basically, this is because social interaction through technology differs
in fundamental ways from social interaction face-to-face. Even if many tasks can
be handled through technology-mediated communication if considered in isolation,
the picture changes when all tasks are considered simultaneously and over time.
Then the necessity of having shared frames and a common set of significant
symbols and the difficulties involved in creating, recreating, and maintaining
these via technology impose a lower limit on the amount of face-to-face
interaction necessary.
No 582
DESIGN, IMPLEMENTATION AND EVALUATION OF A DISTRIBUTED MEDIATOR SYSTEM FOR
DATA INTEGRATION
Vanja Josifovski
An important factor of the strength of a modern enterprise is its capability
to effectively store and process information. As a legacy of the mainframe
computing trend in recent decades, large enterprises often have many isolated
data repositories used only within portions of the organization. The methodology
used in the development of such systems, also known as legacy systems,
is tailored according to the application, without concern for the rest of the
organization. For organizational reasons, such isolated systems still emerge
within different portions of the enterprises. While these systems improve the
efficiency of the individual enterprise units, their inability to interoperate
and provide the user with a unified information picture of the whole enterprise
is a "speed bump" in taking the corporate structures to the next level of
efficiency.
Several technical obstacles arise in the design and implementation of a system
for integration of such data repositories (sources), most notably distribution,
autonomy, and data heterogeneity. This thesis presents a data integration system
based on the wrapper-mediator approach. In particular, it describes the
facilities for passive data mediation in the AMOS II system. These facilities
consist of: (i) object-oriented (OO) database views for reconciliation of data
and schema heterogeneities among the sources, and (ii) a multidatabase query
processing engine for processing and executing queries over data in several
data sources with different processing capabilities. Some of the major data
integration features of AMOS II are:
- A distributed mediator architecture where query plans are generated using a distributed compilation in several communicating mediator and wrapper servers.
- Data integration by reconciled OO views spanning over multiple mediators and specified through declarative OO queries. These views are capacity augmenting views, i.e. locally stored attributes can be associated with them.
- Processing and optimization of queries to the reconciled views using OO concepts such as overloading, late binding, and type-aware query rewrites.
- Query optimization strategies for efficient processing of queries over a combination of locally stored and reconciled data from external data sources.
The AMOS II system is implemented on a Windows NT/95 platform.
No 589
MODELING AND SIMULATING INHIBITORY MECHANISMS IN MENTAL IMAGE
REINTERPRETATION - TOWARDS COOPERATIVE HUMAN-COMPUTER CREATIVITY
Rita Kovordányi
With the accelerating development of computer and software technology,
human-computer cooperation issues are becoming more and more centered on the
human user's abilities and weaknesses. The cognitive characteristics of visual
communication and reasoning, and how these affect the way users take advantage
of the richness of visually represented information comprise one area which
needs to be further explored within this context.
The work reported in this thesis aims to identify cognitive mechanisms which
might inhibit the creative interpretation of visual information, and thereby
indicate which aspects of visual creativity may benefit from support in a
cooperative human-computer system.
We approached this problem by initially focusing on one central mechanism,
selective attention, with an analysis of its constraining role in mental image
reinterpretation. From this kernel, a partial model of mental image
reinterpretation was developed. Given this framework, a family of related, yet
at a detailed level contradictory, cognitive models was simulated to determine
which model components contributed in what way to overall model performance.
Model performance was evaluated with regard to empirical data on human
reinterpretation performance.
Our work contributes an integrated theory for selective attention and a
simulation-based investigation of its role in mental image reinterpretation. We
have developed and evaluated a method for investigating the causal structure of
cognitive models using interactive activation modeling and systematic computer
simulations. Also, we account for our experience in combining computer science
methods with the cognitive modeling paradigm.
No 592
SUPPORTING THE USE OF DESIGN KNOWLEDGE - AN ASSESSMENT OF COMMENTING AGENTS
Mikael Ericsson
This thesis contributes to an understanding of the usefulness of and effects
from using commenting agents for supporting the use of design knowledge in user
interface design. In two empirical studies, we have explored and investigated
commenting agents from the aspects of usefulness, appropriateness of different
tool behaviour and forms of comments. Our results show a potential value of the
commenting approach, but also raises several questions concerning the cost and
actual effects.
The use of formalized design knowledge is considered valuable, yet problematic. Such
knowledge is valuable in order to achieve reuse, quality assurance, and design
training, but hard to use due to the large volumes, complex structures and weak
reference to the design context. The use of knowledge-based tools, capable of
generating comments on an evolving design, has been seen as a promising approach
to providing user interface designers with formalized design knowledge in the
design situation. However, there is a lack of empirical explorations of the
idea.
In our research, we have conducted a three-part study of the usefulness of
commenting tools. First, a Wizard-of-Oz study with 16 subjects was performed to
investigate designers' perceptions of the usefulness of a commenting tool, along
with the appropriateness of different tool behaviors and forms of comment. We
focus on tool mode (active/passive support) and mood (imperative/declarative
comments). Secondly, eight professional designers participated in an interview
about support needs. Thirdly, a conceptual design prototype was tested by 7
designers, using cooperative evaluation. A broad set of qualitative and
quantitative methods have been used to collect and analyse data.
Our results show that a commenting tool is seen as disturbing but useful (since
it affects the user's work situation). Using a commenting tool affects the
designer's evaluation behaviour, i.e., there is an indication of some form of
knowledge transfer. The short-term result is an increased consciousness in terms
of design reflection and guideline usage. In terms of preferred tool behaviour,
our results show that imperative presentation, i.e. pointing out ways of
overcoming identified design problems, is the easiest to understand. A high
perceived mental workload relates to problems detecting comments when using a
commenting tool; this means that comments from an active agent risk being
overlooked.
In addition, a large part of this thesis can be described as a report of our
experiences from using Wizard-of-Oz techniques to study user interface design
support tools. We present our experience and advice for future research.
No 593
ACTIONS, INTERACTIONS AND NARRATIVES
Lars Karlsson
The area of reasoning about action and change is concerned with the
formalization of actions and their effects as well as other aspects of inhabited
dynamical systems. The representation is typically done in some logical
language. Although there has been substantial progress recently regarding the
frame problem and the ramification problem, many problems still remain. One of
these problems is the representation of concurrent actions and their effects. In
particular, the effects of two or more actions executed concurrently may be
different from the union of the effects of the individual actions had they been
executed in isolation. This thesis presents a language, TAL-C, which supports
detailed and flexible yet modular descriptions of concurrent interactions. Two
related topics, which both require a solution to the concurrency problem, are
also addressed: the representation of effects of actions that occur with some
delay, and the representation of actions that are caused by other actions.
Another aspect of reasoning about action and change is how to describe
higher-level reasoning tasks such as planning and explanation. In such cases, it
is important not to just be able to reason about a specific narrative (course of
action), but to reason about alternative narratives and their properties, to
compare and manipulate narratives, and to reason about alternative results of a
specific narrative. This subject is addressed in the context of the situation
calculus, where it is shown how the standard version provides insufficient
support for reasoning about alternative results, and an alternative version is
proposed. The narrative logic NL is also presented; it is based on the temporal
action logic TAL, where narratives are represented as first-order terms. NL
supports reasoning about (I) metric time, (II) alternative ways the world can
develop relative to a specific choice of actions, and (III) alternative choices
of actions.
No 594
SOCIAL AND ORGANIZATIONAL ASPECTS OF REQUIREMENTS ENGINEERING METHODS
C. G. Mikael Johansson
Improving requirements engineering has been recognized as critical for the
1990s. Reported differences between theoretical reasoning and the practice of how
methods are used suggest a need for further research. A way to proceed is to
investigate how real-life system design situations can best be
supported. An important area is to investigate social and organizational aspects of
the design of requirements engineering methods. Also increasing the knowledge of
what is regarded as important by the users of methods for requirements
engineering is essential for progress and development of new knowledge in the
field.
The aim of this thesis is to develop knowledge of what social and organizational
issues are important for the use and development of requirements engineering
methods. The research is based on a qualitative research approach using
different qualitative methods and instruments for data gathering and data
analysis.
The results include an outline of a preliminary method for requirements
engineering (Action Design). A "handbook evaluation" shows motives and needs for
requirements engineering methods. Quality characteristics recognized in
requirements engineering as important by the participants are established and
prioritized. Thereafter, the value of visualization of internal functions for an
enterprise in participatory design projects is presented. Finally, an
integration of techniques for enterprise modeling and prioritization of
requirements is performed to analyze what value such integration has and how
improvements can be achieved.
This research suggests an alternative approach to requirements engineering where
support and methods are grounded in the prerequisites for each situation. The
results are mainly applicable to situations where multi-professional
participation is desired. It is concluded that the organizational context has to
be taken into account in the improvement of methods used in requirements
engineering.
No 595
VALUE-DRIVEN MULTI-CLASS OVERLOAD MANAGEMENT IN REAL-TIME DATABASE SYSTEMS
Jörgen Hansson
In complex real-time applications, real-time systems handle significant amounts
of information that must be managed efficiently, motivating the need for
incorporating real-time database management into real-time systems. However,
resource management in real-time database systems is a complex issue. Since
these systems often operate in environments of imminent and transient overloads,
efficient overload handling is crucial to the performance of a real-time
database system.
In this thesis, we focus on dynamic overload management in real-time database
systems. The multi-class workload consists of transaction classes having
critical transactions with contingency transactions and non-critical
transactions. Non-critical transaction classes may have additional requirements
specifying the minimum acceptable completion ratios that should be met in order
to maintain system correctness. We propose a framework which has been
implemented and evaluated for resolving transient overloads in such workloads.
The contributions of our work are fourfold as the framework consists of (i) a
new scheduling architecture and (ii) a strategy for resolving transient
overloads by re-allocating resources, (iii) a value-driven overload management
algorithm (OR-ULD) that supports the strategy, running in O(n log n)
time (where n is the number of transactions), and (iv) a bias control mechanism
(OR-ULD/BC). The performance of OR-ULD and OR-ULD/BC is evaluated by extensive
simulations. Results show that, within a specified operational envelope, OR-ULD
enforces critical time constraints for multi-class transaction workloads and
OR-ULD/BC further enforces minimum class completion ratio requirements.
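To give a rough feel for value-driven overload resolution, the following Python sketch (a generic illustration; OR-ULD, its contingency transactions and bias control are not reproduced) sheds non-critical transactions in order of increasing value density until the remaining load fits the capacity, which is dominated by an O(n log n) sort.

```python
def resolve_overload(transactions, capacity):
    """Keep all critical transactions; shed non-critical ones with the lowest
    value per unit of execution time until the load fits the capacity.

    transactions: list of dicts with keys name, exec_time, value, critical.
    """
    admitted = [t for t in transactions if t["critical"]]
    load = sum(t["exec_time"] for t in admitted)
    optional = sorted((t for t in transactions if not t["critical"]),
                      key=lambda t: t["value"] / t["exec_time"],
                      reverse=True)                      # O(n log n)
    rejected = []
    for t in optional:
        if load + t["exec_time"] <= capacity:
            admitted.append(t)
            load += t["exec_time"]
        else:
            rejected.append(t)
    return admitted, rejected

# Hypothetical transient overload: total demand 14 time units, capacity 10.
workload = [
    {"name": "T1", "exec_time": 4, "value": 10, "critical": True},
    {"name": "T2", "exec_time": 3, "value": 9,  "critical": False},
    {"name": "T3", "exec_time": 2, "value": 2,  "critical": False},
    {"name": "T4", "exec_time": 5, "value": 20, "critical": False},
]
admitted, rejected = resolve_overload(workload, capacity=10)
print([t["name"] for t in admitted], [t["name"] for t in rejected])   # ['T1', 'T4'] ['T2', 'T3']
```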
No 596
INCORPORATING USER VALUES IN THE DESIGN OF INFORMATION SYSTEMS AND SERVICES IN
THE PUBLIC SECTOR: A METHODS APPROACH
Niklas Hallberg
This thesis is motivated by the aim of public-sector organizations to become
more efficient by quality improvement efforts and the introduction of
information systems. The main objective is to explore methods for the design of
information systems and information-system-supported services in the public
sector, which meet the users' needs.
The thesis is based on six connected studies. The first study was to describe
the structure of how the staff at public-service units seek advice. Based on
data collected through interviews, a quantitative analysis was performed at
primary healthcare centers. In the second study, the use of Quality Function
Deployment (QFD) for orientation of public services to a quasi-market situation
was investigated. The study displayed how clinical-social-medical services can
be orientated to better suit the referral institutions' needs. The third study
was performed to adjust a QFD model to a method for the design of information
systems in the public sector. The development of the model was performed in a
blocked-case study. In the fourth study, the model was extended and applied in a
case study where it was used for participatory design of
information-system-supported services. In the fifth study, the possibility of
integrating the QFD model with process graph notations was investigated. The
study was performed according to a participatory action research methodology. In
the final study, an information system was designed using the QFD model
developed and implemented for a public sector profession, occupational
therapists.
The main contribution of the thesis is the QFD model, called Medical Software
Quality Deployment (MSQD), for the design of information systems and
information-system-supported services in the public sector. The advantages of
MSQD are that it focuses the design work on the users' needs and provides support
for active participation of users. Further advantages are that the requirements
are traceable and the design features are prioritized.
As a support for the efforts being made in the public sector to increase
efficiency, MSQD can be used to design appropriate information systems. The
prototype implementation illustrated several ways in which this support
can be implemented using low-cost technology. MSQD can further be used to
develop services to better match the users' needs. Hence, it can be used for
inter-organizational information systems design and, thereby, positive gains can
be made in the collaboration between different public service organizations.
No 597
AN ECONOMIC PERSPECTIVE ON THE ANALYSIS OF IMPACTS OF INFORMATION TECHNOLOGY:
FROM CASE STUDIES IN HEALTH-CARE TOWARDS GENERAL MODELS AND THEORIES
Vivian Wimarlund
Organizations of all types want to have individuals utilize the Information
Technology (IT) they purchase. For this reason, the identification of factors
that cause individuals to use IT, factors that are important when developing IT,
and factors that influence organizations’ performance when IT is implemented,
provides helpful guidelines for decision-makers.
The empirical studies included in this thesis refer to health care organizations
and cover a span from the presentation of the economic effects of the
implementation of computer based patient records, and the perceived risk that
can arise during the development of IT, to the importance of stimulating direct
user participation in system development processes.
In the theoretical studies, basic techniques are suggested for the analysis of
the economic effects of the use of methods that stimulate users' involvement,
e.g., Participatory Design. Furthermore, this part also proposes an IT-maturity
indicator that can be used to analyze the fulfilment of integration and
sophistication in the use of IT in contemporary organizations.
The results emphasize the interaction between IT, human, and economic aspects,
indicating the need to include measures of user preferences in systems
development and implementation processes. They also suggest that
successful IT strategies almost inevitably involve simultaneous investment in
organizational change, innovative business strategies and employees’ human
capital. The findings provide new insights into problems that forced
organizations to re-examine criteria for investing resources when choices
related to the development, introduction and use of IT are made, or when it is
necessary to select approaches to system development. They also raise questions
regarding resource scarcity and alternative use of invested resources.
No 607
UNDERSTANDING AND ENHANCING TRANSLATION BY PARALLEL TEXT PROCESSING
Magnus Merkel
In recent years the fields of translation studies, natural language processing
and corpus linguistics have come to share one object of study, namely parallel
text corpora, and more specifically translation corpora. In this thesis it is
shown how all three fields can benefit from each other, and, in particular, that
a prerequisite for making better translations (whether by humans or with the aid
of computer-assisted tools) is to understand features and relationships that
exist in a translation corpus. The Linköping Translation Corpus (LTC) is the
empirical foundation for this work. LTC is comprised of translations from three
different domains and translated with different degrees of computer support.
Results in the form of tools, measures and analyses of translations in LTC are
presented.
In the translation industry, the use of translation memories, which are based
on the concept of reusability, has been increasing steadily in recent years. In
an empirical study, the notion of reusability in technical translation is
investigated as well as translators’ attitudes towards translation tools.
A toolbox for creating and analysing parallel corpora is also presented. The
tools are then used for uncovering relationships between the originals and their
corresponding translations. The Linköping Word Aligner (LWA) is a portable tool
for linking words and expressions between a source and target text. LWA is
evaluated with the aid of reference data compiled before the system evaluation.
The reference data are created and evaluated automatically with the help of an
annotation tool, called the PLUG Link Annotator.
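As a minimal illustration of word linking from a sentence-aligned corpus, the following Python sketch scores candidate word pairs with the well-known Dice coefficient over co-occurrence counts; this is a generic technique and not necessarily the method implemented in LWA, and the tiny bitext is invented.

```python
from collections import Counter
from itertools import product

def dice_links(bitext, threshold=0.5):
    """Link source/target word pairs whose Dice co-occurrence score exceeds a threshold."""
    src_freq, tgt_freq, pair_freq = Counter(), Counter(), Counter()
    for src_sentence, tgt_sentence in bitext:
        src_words, tgt_words = set(src_sentence.split()), set(tgt_sentence.split())
        src_freq.update(src_words)
        tgt_freq.update(tgt_words)
        pair_freq.update(product(src_words, tgt_words))
    return {(s, t): 2 * c / (src_freq[s] + tgt_freq[t])
            for (s, t), c in pair_freq.items()
            if 2 * c / (src_freq[s] + tgt_freq[t]) >= threshold}

# A tiny invented Swedish-English bitext; note that spurious links (e.g. to "the")
# survive on such small data, which is why real aligners use more evidence.
bitext = [
    ("tryck på knappen", "press the button"),
    ("knappen lyser", "the button lights up"),
    ("tryck igen", "press again"),
]
links = dice_links(bitext, threshold=0.8)
for (s, t), score in sorted(links.items(), key=lambda kv: -kv[1]):
    print(f"{s} ~ {t}: {score:.2f}")
```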
Finally, a model for describing correspondences between a source text and a
target text is introduced. The model uncovers voluntary shifts concerning
structure and content. The correspondence model is then applied to the LTC.
No 598
METHODS AND TOOLS IN COMPUTER-SUPPORTED TASKFORCE TRAINING
Johan Jenvald
Efficient training methods are important for establishing, maintaining and
developing taskforces that are organised to manage complex and dangerous
situations in order to serve and protect our society. Furthermore, the technical
sophistication of various systems in these organisations, for example command,
control and communication systems, is growing, while the resources available for
training are being reduced due to budget cuts and environmental restrictions.
Realism in the training situation is important so that the actual training
prepares the trainees for, and improves the performance in, real situations. The
ability to observe and review the training course of events is crucial if we
want to identify the strengths and shortcomings of the trained unit, in the
overall effort to improve taskforce performance.
This thesis describes and characterises methods and tools in computer-supported
training of multiple teams organised in taskforces, which carry out complex and
time-critical missions in hazardous environments. We present a framework that
consists of a training methodology together with a system architecture for an
instrumentation system which can provide different levels of computer support
during the different training phases. In addition, we use two case studies to
describe the application of our methods and tools in the military force-on-force
battle-training domain and the emergency management and response domain.
Our approach is to use an observable realistic training environment to improve
the training of teams and taskforces. There are three major factors in our
approach to taskforce training that provide the necessary realism and the
ability to make unbiased observations of the training situations. The first
factor is the modelling and simulation of systems and factors that have
a decisive effect on the training situation and that contribute in creating a
realistic training environment. The second factor is the data collection
that supports unbiased recording of the activities of the trained taskforce when
solving a relevant task. The data come both from technical systems and from
reports based on manual observations. The third factor is the visualisation
of compiled exercise data that provides the participants and others with a
coherent view of the exercise.
The main contribution of this thesis is the systematic description of the
combination of a training methodology and a system architecture for an
instrumentation system for computer-supported taskforce training. The
description characterises the properties and features of our computer-supported
taskforce-training approach, applied in two domains.
No 611
ANCHORING SYMBOLS TO SENSORY DATA
Silvia Coradeschi
Intelligent agents embedded in physical environments need the ability to
connect, or anchor, the symbols used to perform abstract reasoning to the
physical entities which these symbols refer to. Anchoring must deal with
indexical and objective references, definite and indefinite identifiers, and the temporary impossibility of perceiving physical entities. Furthermore, it needs to rely on sensor data, which is inherently affected by uncertainty, and to deal
with ambiguities. In this thesis, we outline the concept of anchoring and its
functionalities. Moreover we show examples of uses of anchoring techniques in
two domains: an autonomous airborne vehicle for traffic surveillance and a
mobile ground vehicle performing navigation tasks.
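The following minimal Python sketch is one way to picture the anchoring functionality outlined above: a symbolic description is matched against uncertain percepts, and the anchor keeps track of the most recent matching percept. The attribute matching, tolerance and example percepts are assumptions for illustration, not the thesis implementation.

```python
# Minimal sketch of symbol-to-percept anchoring (illustrative only).
def matches(description, percept, tolerance=0.2):
    """True if every attribute in the symbolic description fits the percept."""
    for attribute, value in description.items():
        observed = percept.get(attribute)
        if observed is None:
            return False
        if isinstance(value, (int, float)):
            if abs(observed - value) > tolerance * max(abs(value), 1.0):
                return False
        elif observed != value:
            return False
    return True

class Anchor:
    def __init__(self, symbol, description):
        self.symbol = symbol            # e.g. "car-3" in an abstract plan
        self.description = description  # description of the intended referent
        self.percept = None             # last percept bound to the symbol

    def update(self, percepts):
        """Re-acquire the referent among current percepts, if visible."""
        candidates = [p for p in percepts if matches(self.description, p)]
        if candidates:
            self.percept = candidates[0]  # ambiguity handling omitted
        return self.percept

anchor = Anchor("car-3", {"colour": "red", "width": 1.8})
print(anchor.update([{"colour": "red", "width": 1.75}, {"colour": "blue", "width": 1.8}]))
```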
No 613
ANALYSIS AND SYNTHESIS OF REACTIVE SYSTEMS: A GENERIC LAYERED ARCHITECTURE
PERSPECTIVE
Man Lin
This thesis studies methods and tools for the development of reactive
real-time control systems. The development framework is called Generic Layered
Architecture (GLA). The work focuses on analysis and synthesis of software
residing in the lowest two layers of GLA, namely, the Process Layer and the Rule
Layer. The Process Layer controls cyclic computation and the Rule Layer produces
responses by reacting to discrete events. For both layers there exist earlier
defined languages suitable for describing applications. The programs in the
Process Layer and the Rule Layer are called PL and RL programs, respectively.
Several issues are studied. First of all, we study the semantics and correctness
of RL programs. This includes providing semantics for desired responses and
correctness criteria for RL programs and introducing operational semantics and
static checkers together with some soundness results. The combination of rules
and reactive behavior, together with a formal analysis of this behavior, is the
main contribution of this work. The second issue is the estimation of the
worst-case execution time (WCET) of PL and RL programs. This work allows one to
check if the computation resource of the system is adequate and aims at the
predictability of GLA systems. It contributes to the real-time systems area by
performing WCET analysis on different execution models and language constructs
from those studied in the literature. Finally, we deal with the synthesis of GLA
software from a high-level specification. More specifically, we motivate GLA as
the framework to develop hybrid controllers and present a semi-automatic tool to
generate control software in GLA from a specification expressed in terms of
hybrid automata.
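As a hedged illustration of the kind of worst-case reasoning involved in WCET estimation (not the actual PL/RL analysis developed in the thesis), the sketch below composes worst-case costs structurally over a small program tree, assuming known per-block costs and loop bounds.

```python
# A minimal sketch of structural WCET estimation over a program tree, assuming
# known per-block costs and loop bounds. It only illustrates worst-case
# composition, not the analysis of PL and RL programs described above.

def wcet(node):
    kind = node["kind"]
    if kind == "block":                       # straight-line code
        return node["cost"]
    if kind == "seq":                         # sequential composition
        return sum(wcet(child) for child in node["body"])
    if kind == "if":                          # take the more expensive branch
        return node["test_cost"] + max(wcet(node["then"]), wcet(node["else"]))
    if kind == "loop":                        # bounded iteration
        return node["bound"] * wcet(node["body"])
    raise ValueError(f"unknown node kind: {kind}")

program = {"kind": "seq", "body": [
    {"kind": "block", "cost": 5},
    {"kind": "loop", "bound": 10,
     "body": {"kind": "if", "test_cost": 1,
              "then": {"kind": "block", "cost": 4},
              "else": {"kind": "block", "cost": 7}}},
]}
print(wcet(program))   # 5 + 10 * (1 + 7) = 85
```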
These methods provide formal grounds for analysis and synthesis of software in
GLA. Together with the language and tools developed previously, they ease the
process of developing real-time control systems.
No 618
SYSTEMIMPLEMENTERING I PRAKTIKEN: EN STUDIE AV LOGIKER I FYRA PROJEKT
Jimmy Tjäder
Managing information system implementations successfully is a question of
enabling learning processes and controlling project performance. However, there
are many reported cases where one or both of these demands are neglected. One
reason for this might be that learning and controlling put different demands on
the way a project manager manages a project. This thesis explores the logic a
project manager uses to describe his or her actions. The aim of this exploration
is to understand the consequences of different types of logic for information
system implementation.
This thesis is based on two studies. The first study focuses on the relationship
between the project manager's logic and the project process. This study is based
on three implementation projects at ABB Infosystems: projects that aimed to
implement an ERP-system, a CAM-system, and an in-house developed system
respectively. The second study focuses on the relationship between the project
manager's logic and the social context. It is based on one large implementation
of an ERP-system conducted by ABB Infosystems at ABB Industrial Systems.
Research methods used in these studies were document analysis, interviews,
participation in meetings, and analysis of e-mail traffic.
A control logic is dependent on previous experience in order to be successful.
Furthermore, it might create an information overload for the project manager and
hold back an important transfer of knowledge between client and consultants. A
learning logic is hard to accomplish in a project setting due to the common use
of project management theory. However, there were episodes during two of the
projects where the project manager described the project based on a learning
logic. During these episodes the focus was on creating arenas where different
participants' point of views could meet. Finally, the most interesting
observation is that there is no single example of a change from a control logic
to a learning logic in any project. The main reason for this is that there is no
external actor that has the influence and ability to introduce a conflicting
point of view, which might enable the introduction of a learning logic.
No 627
TOOLS FOR DESIGN, INTERACTIVE SIMULATION, AND VISUALIZATION OF OBJECT-ORIENTED
MODELS IN SCIENTIFIC COMPUTING
Vadim Engelson
Mathematical models used in scientific computing are becoming large and
complex. In order to handle the size and complexity, the models should be better
structured (using object-orientation) and visualized (using advanced user
interfaces). Visualization is a difficult task, requiring a great deal of effort
from scientific computing specialists.
Currently, the visualization of a model is tightly coupled with the structure
of the model itself. This has the effect that any changes to the model require
that the visualization be redesigned as well. Our vision is to automate the
generation of visualizations from mathematical models. In other words, every
time the model changes, its visualization is automatically updated without any
programming efforts.
The innovation of this thesis is demonstrating this approach in a number of
different situations, e.g. for input and output data, and for two- and
three-dimensional visualizations. We show that this approach works best for
object-oriented languages (ObjectMath, C++, and Modelica).
In the thesis, we describe the design of several programming environments and
tools supporting the idea of automatic generation of visualizations. Tools for
two-dimensional visualization include an editor for class hierarchies and a tool
that generates graphical user interfaces from data structures. The editor for
class hierarchies has been designed for the ObjectMath language, an
object-oriented extension of the Mathematica language, used for scientific
computing. Diagrams showing inheritance, part-of relations, and instantiation of
classes can be created, edited, or automatically generated from a model
structure.
A graphical user interface, as well as routines for loading and saving data,
can be automatically generated from class declarations in C++ or ObjectMath.
This interface can be customized using scripts written in Tcl/Tk.
In three-dimensional visualization we use parametric surfaces defined by
object-oriented mathematical models, as well as results from mechanical
simulation of assemblies created by CAD tools.
Mathematica includes highly flexible tools for visualization of models, but
their performance is not sufficient, since Mathematica is an interpreted
language. We use a novel approach where Mathematica objects are translated to
C++, and used both for simulation and for visualization of 3D scenes (including,
in particular, plots of parametric functions).
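The sketch below illustrates, in simplified form, the step of turning a parametric surface into triangles for 3D rendering; the torus function and sampling resolution are hypothetical, and the ObjectMath-to-C++ translation itself is not shown.

```python
# Minimal sketch of sampling a parametric surface into triangles for 3D
# visualization (illustrative; not the ObjectMath-to-C++ translator).
import math

def torus(u, v, R=2.0, r=0.5):
    """Hypothetical parametric surface: a torus, (u, v) in [0, 1] x [0, 1]."""
    a, b = 2 * math.pi * u, 2 * math.pi * v
    return ((R + r * math.cos(b)) * math.cos(a),
            (R + r * math.cos(b)) * math.sin(a),
            r * math.sin(b))

def tessellate(surface, nu=32, nv=16):
    """Return a list of triangles, each a tuple of three (x, y, z) points."""
    triangles = []
    for i in range(nu):
        for j in range(nv):
            p00 = surface(i / nu, j / nv)
            p10 = surface((i + 1) / nu, j / nv)
            p01 = surface(i / nu, (j + 1) / nv)
            p11 = surface((i + 1) / nu, (j + 1) / nv)
            triangles.append((p00, p10, p11))
            triangles.append((p00, p11, p01))
    return triangles

print(len(tessellate(torus)))   # 32 * 16 * 2 = 1024 triangles
```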
Traditional solutions to simulations of CAD models are not customizable and
the visualizations are not interactive. Mathematical models for mechanical
multi-body simulation can be described in an object-oriented way in Modelica.
However, the geometry, visual appearance, and assembly structure of mechanical
systems are most conveniently designed using interactive CAD tools. Therefore we
have developed a tool that automatically translates CAD models to visual
representations and Modelica objects which are then simulated, and the results
of the simulations are dynamically visualized. We have designed a high
performance OpenGL-based 3D-visualization environment for assessing the models
created in Modelica. These visualizations are interactive (simulation can be
controlled by the user) and can be accessed via the Internet, using VRML or
Cult3D technology. Two applications (helicopter flight and robot simulation) are
discussed in detail.
The thesis also contains a section on integration of collision detection and
collision response with Modelica models in order to enhance the realism of
simulations and visualizations. We compared several collision response
approaches, and ultimately developed a new penalty-based collision response
method, which we then integrated with the Modelica multibody simulation library
and a separate collision detection library.
We also present a new method to compress simulation results in order to reuse
them for animations or further simulations. This method uses predictive coding
and delivers high compression quality for results from ordinary differential
equation solvers with varying time step.
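A minimal sketch of predictive coding for simulation traces is given below, assuming linear extrapolation from the two previous samples and a fixed quantisation tolerance; the thesis's actual predictor and its handling of varying time steps are not reproduced.

```python
# Minimal sketch of predictive coding for simulation traces: predict each value
# from the two previously reconstructed samples and store the quantized
# residual. Small residuals compress well with any entropy coder (not shown).

def encode(samples, tolerance=1e-3):
    residuals, reconstructed = [], []
    for i, x in enumerate(samples):
        prediction = 0.0 if i < 2 else 2 * reconstructed[i - 1] - reconstructed[i - 2]
        q = round((x - prediction) / tolerance)          # quantized residual
        residuals.append(q)
        reconstructed.append(prediction + q * tolerance)  # what the decoder sees
    return residuals

def decode(residuals, tolerance=1e-3):
    samples = []
    for i, q in enumerate(residuals):
        prediction = 0.0 if i < 2 else 2 * samples[i - 1] - samples[i - 2]
        samples.append(prediction + q * tolerance)
    return samples

trace = [0.0, 0.1, 0.21, 0.33, 0.46]    # hypothetical solver output
coded = encode(trace)
print(coded, decode(coded))             # residuals are small integers
```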
No 637
DATABASE TECHNOLOGY FOR CONTROL AND SIMULATION
Esa Falkenroth
This thesis shows how modern database technology can improve data management in
engineering applications. It is divided into four parts. The first part reviews
modern database technology with respect to engineering applications. The second
part addresses data management in control applications. It describes how active
database systems can monitor and control manufacturing processes. A
database-centred architecture is presented along with a compiler technique that
transforms manufacturing operations into queries and deterministic terminating
rule-sets. The database-centred approach was evaluated through a case study
involving a medium-sized production cell. The third part focuses on data
management in industrial simulators. More precisely, it shows how main-memory
database systems can support modelling, collection, and analysis of simulation
data. To handle update streams from high-performance simulators, the database
system was extended with a real-time storage structure for simulation data. Fast
retrieval is achieved using a computational indexing method based on a
super-linear equation-solving algorithm. The fourth and final part compares the
two database-centred approaches.
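To make the active-database idea above concrete, the following sketch runs hypothetical event-condition-action rules over a stream of process updates; the rule set, signal names and thresholds are invented for illustration and do not reflect the thesis's compiler or rule semantics.

```python
# Minimal sketch of active-database-style monitoring with event-condition-action
# (ECA) rules over a stream of process updates.

rules = [
    {   # event: a temperature update; condition: above limit; action: alarm
        "event": "temperature",
        "condition": lambda value, state: value > 80.0,
        "action": lambda value, state: state.setdefault("alarms", []).append(
            f"overheat: {value:.1f} C"),
    },
    {   # event: a part-count update; condition: batch complete; action: log
        "event": "parts_done",
        "condition": lambda value, state: value >= state.get("batch_size", 100),
        "action": lambda value, state: state.setdefault("log", []).append("batch complete"),
    },
]

def process_update(event, value, state):
    """Fire every rule whose event matches and whose condition holds."""
    state[event] = value                      # keep the database state current
    for rule in rules:
        if rule["event"] == event and rule["condition"](value, state):
            rule["action"](value, state)

state = {"batch_size": 3}
for event, value in [("temperature", 75.0), ("parts_done", 3), ("temperature", 85.5)]:
    process_update(event, value, state)
print(state.get("alarms"), state.get("log"))
```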
No 639
BRINGING POWER AND KNOWLEDGE TOGETHER: INFORMATION SYSTEMS DESIGN FOR AUTONOMY
AND CONTROL IN COMMAND WORK
Per Arne Persson
THIS THESIS PRESENTS an empirical ethnographic study that has been
conducted as fieldwork within army command organizations, leading to a
qualitative analysis of data. The title of the thesis captures the contents of
both command work and research, both domains being affected by new technologies
during a period of drastic changes in the military institution. The overriding
research question was why efforts to implement modern information technology are so slow and costly, and why their contribution to higher control efficiency is so uncertain. Two cases are described and analysed.
One is a meeting and the other is the development of a computer artefact. Based
on these two cases, the study suggests that social value and not only rational
control efficiency defines what is applied, both in the development process and
in practice. Knowledge and power, expertise and authority, represented by
experts and formal leaders have to be brought together if the work is to be
efficient. Both knowledge from research and information technology will be
rejected, if considered irrelevant. I have called this applying a rationality of
practice.
From the case analysis it can be said that command work is not ordinary
managerial work. Rather, it is a kind of design work, dynamic and hard to define
and control. Command work is knowledge-intensive; it designs and produces
symbols. Therefore it is very flexible and involves interpretation and
negotiation of both its content and products. The most important symbol is the
Army, which must be visible and credible, built from real components.
Command work is pragmatic and opportunistic, conducted by experts in the
modern military command structure who transform the operational environment, and
control it through controlling actions. In that respect, autonomy, a prerequisite for meeting evolving events (frictions), and power become core issues: interchangeable goals and means for flexible social control, or in cybernetic terms, variety. Key
concepts are social value, function and visibility. Actors must be visible in
the command work, and make work visible. Consequently, when designing control
tools, such as information systems, the design challenge is to reconcile dynamic
and pragmatic demands for power, autonomy and control with demands for
stability. Such an organization becomes a viable system, one that can survive,
because there is no conflict between its mind and physical resources. In
operational terms, this means having freedom of action. The prerequisite to
achieve this is one perspective on knowledge and information and that
information systems match the needs growing from within the work because work
builds the organization.
No 660
AN INTEGRATED SYSTEM-LEVEL DESIGN FOR TESTABILITY METHODOLOGY
Erik Larsson
Hardware testing is commonly used to check whether faults exist in a digital
system. Much research has been devoted to the development of advanced hardware
testing techniques and methods to support design for testability (DFT). However,
most existing DFT methods deal only with testability issues at low abstraction
levels, while new modelling and design techniques have been developed for design
at high abstraction levels due to the increasing complexity of digital systems.
The main objective of this thesis is to address test problems faced by the
designer at the system level. Considering the testability issues at early design
stages can reduce the test problems at lower abstraction levels and lead to the
reduction of the total test cost. The objective is achieved by developing
several new methods to help the designers to analyze the testability and improve
it as well as to perform test scheduling and test access mechanism design. The
developed methods have been integrated into a systematic methodology for the
testing of system-on-chip. The methodology consists of several efficient
techniques to support test scheduling, test access mechanism design, test set
selection, test parallelisation and test resource placement. An optimization
strategy has also been developed which minimizes test application time and test
access mechanism cost, while considering constraints on tests, power consumption
and test resources. Several novel approaches to analyzing the testability of a
system at behavioral level and register-transfer level have also been developed.
Based on the analysis results, difficult-to-test parts of a design are
identified and modified by transformations to improve testability of the whole
system. Extensive experiments, based on benchmark examples and industrial
designs, have been carried out to demonstrate the usefulness and efficiency of
the proposed methodology and techniques. The experimental results show clearly
the advantages of considering testability in the early design stages at the
system level.
No 688
MODEL-BASED EXECUTION MONITORING
Marcus Bjäreland
The task of monitoring the execution of a software-based controller
in order to detect, classify, and recover from discrepancies between the actual
effects of control actions and the effects predicted by a model, is the topic of
this thesis. Model-based execution monitoring is proposed as a technique for
increasing the safety and optimality of operation of large and complex
industrial process controllers, and of controllers operating in complex and
unpredictable environments (such as unmanned aerial vehicles). In this thesis we
study various aspects of model-based execution monitoring, including the
following:
The relation between previous approaches to execution monitoring in Control
Theory, Artificial Intelligence and Computer Science is studied and a common
conceptual framework for design and analysis is proposed.
An existing execution monitoring paradigm, ontological control, is generalized
and extended. We also present a prototype implementation of ontological control
with a first set of experimental results where the prototype is applied to an
actual industrial process control system: The ABB STRESSOMETER cold mill
flatness control system.
A second execution monitoring paradigm, stability-based execution monitoring, is introduced, inspired by the vast amount of work on the "stability" notion in Control Theory and Computer Science.
Finally, the two paradigms are applied in two different frameworks: first, in the "hybrid automata" framework, a state-of-the-art formal modeling framework for hybrid (that is, discrete plus continuous) systems, and secondly, in the logical framework of GOLOG and the Situation Calculus.
No 689
EXTENDING TEMPORAL ACTION LOGIC
Joakim Gustafsson
An autonomous agent operating in a dynamical environment must be able to
perform several "intelligent" tasks, such as learning about the environment,
planning its actions and reasoning about the effects of the chosen actions. For
this purpose, it is vital that the agent has a coherent, expressive, and well
understood means of representing its knowledge about the world.
Traditionally, all knowledge about the dynamics of the modeled world has been
represented in complex and detailed action descriptions. The first contribution
of this thesis is the introduction of domain constraints in TAL, allowing a more
modular representation of certain kinds of knowledge.
The second contribution is a systematic method of modeling different types of
conflict handling that can arise in the context of concurrent actions. A new
type of fluent, called influence, is introduced as a carrier from cause to
actual effect. Domain constraints govern how influences interact with ordinary
fluents. Conflicts can be modeled in a number of different ways depending on the
nature of the interaction.
A fundamental property of many dynamical systems is that the effects of
actions can occur with some delay. We discuss how delayed effects can be modeled
in TAL using the mechanisms previously used for concurrent actions, and consider
a range of possible interactions between the delayed effects of an action and
later occurring actions.
In order to model larger and more complex domains, a sound modeling
methodology is essential. We demonstrate how many ideas from the object-oriented
paradigm can be used when reasoning about action and change. These ideas are
used both to construct a framework for high level control objects and to
illustrate how complex domains can be modeled in an elaboration tolerant manner.
No. 720
ORGANIZATIONAL INFORMATION PROVISION: MANAGING MANDATORY AND DISCRETIONARY UTILIZATION OF INFORMATION TECHNOLOGY
Carl-Johan Petri
This dissertation focuses on the organizational units that participate in the
operation of shared information systems – and especially how the codification
responsibilities (information creation, collection and recording) are described
in the governance models: who are supposed to perform these activities and how
are they promoted or hampered by the management control systems?
The IT governance model describes the patterns of authority for key IT
activities in an organization, which are allocated to different stakeholders to
assure that the IT resources are managed and utilized to support the
organization’s strategies and goals.
Altogether, primary data has been compiled in eight case studies and one brief
telephone survey. In addition, three previous case studies (produced by other
researchers) have been used as secondary data.
The findings indicate that technical responsibilities typically are addressed
and managed in the IT governance models, but that the using departments’
responsibilities in the operation are rarely described. Information collection and recording activities therefore risk being left unmanaged from an information systems perspective.
The thesis proposes that an information sourcing responsibility may be included in the IT governance models and that the management control systems can be redesigned to promote mandatory or discretionary information compilation and recording, such that the shared information systems produce the anticipated outcome.
No. 724
DESIGNING AGENTS FOR SYSTEMS WITH ADJUSTABLE AUTONOMY
Paul Scerri
Agents are an artificial intelligence technique of encapsulating a
piece of pro-active, autonomous, intelligent software in a module that senses
and acts in its environment. As the technology underlying sophisticated
multi-agent systems improves, such systems are being deployed in ever more
complex domains and are being given ever more responsibility for more critical
tasks. However, multi-agent technology brings with it not only the potential for
better, more efficient systems requiring less human involvement but also the
potential to cause harm to the system's human users. One way of mitigating the
potential harm an intelligent multi-agent system can do is via the use of
adjustable autonomy. Adjustable autonomy is the idea of dynamically changing the
autonomy of agents in a multi-agent system depending on the circumstances.
Decision making control is transferred from agents to users when the potential
for costly agent errors is large.
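The following sketch illustrates one simple way such a transfer-of-control decision could be expressed, with the agent keeping control only when the expected cost of a wrong decision stays below a threshold; the cost model, threshold and example decisions are assumptions, not the systems described in the thesis.

```python
# Minimal sketch of an adjustable-autonomy decision point: the agent keeps
# control when the expected cost of a wrong decision is low, and transfers the
# decision to a human otherwise. Threshold and cost model are hypothetical.

def expected_error_cost(confidence, cost_of_error):
    """Expected cost if the agent decides on its own and happens to be wrong."""
    return (1.0 - confidence) * cost_of_error

def decide(option, confidence, cost_of_error, threshold=5.0, ask_user=input):
    if expected_error_cost(confidence, cost_of_error) <= threshold:
        return option                                  # agent stays autonomous
    answer = ask_user(f"Approve '{option}'? [y/n] ")   # control is transferred
    return option if answer.strip().lower().startswith("y") else None

# Low-stakes decision: handled autonomously.
print(decide("reschedule meeting by 5 minutes", confidence=0.9, cost_of_error=2.0))
# High-stakes decision: referred to the user (here answered by a stub).
print(decide("cancel project review", confidence=0.6, cost_of_error=50.0,
             ask_user=lambda prompt: "n"))
```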
We believe that the design of the agents in a multi-agent system impacts the
difficulty with which the system's adjustable autonomy mechanisms are
implemented. Some features of an agent will make the implementation of
adjustable autonomy easier, while others will make it more difficult. The
central contribution of this thesis is a set of guidelines for the design of
agents which, if followed, lead to agents which make adjustable autonomy
straightforward to implement. In addition, the guidelines lead to agents from
which it is straightforward to extract useful information and whose autonomy may
be changed in a straightforward manner.
The usefulness of the guidelines is shown in the design of the agents for two systems with adjustable autonomy. The first system is EASE, which is used for creating intelligent actors for interactive simulation environments. The second system is the E-Elves, a multi-agent system streamlining the everyday coordination tasks of a human organisation. An evaluation of the two systems demonstrates that following the guidelines leads to agents that make effective adjustable autonomy mechanisms easier to implement.
No. 725
SEMANTIC INSPECTION OF SOFTWARE ARTIFACTS: FROM THEORY TO PRACTICE
Tim Heyer
Providing means for the development of correct software still remains a
central challenge of computer science. In this thesis we present a novel
approach to tool-based inspection focusing on the functional correctness of
software artifacts. The approach is based on conventional inspection in the
style of Fagan, but extended with elements of formal verification in the style
of Hoare. In Hoare’s approach a program is annotated with assertions. Assertions
express conditions on program variables and are used to specify the intended
behavior of the program. Hoare introduced a logic for formally proving the
correctness of a program with respect to the assertions.
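As a small, hedged example of the kind of assertion-annotated code this style of inspection works with, the sketch below annotates a routine with pre- and postconditions written as executable asserts (the prototype tools target subsets of Java and UML rather than Python, and the predicate "sorted" could equally well be left informal).

```python
# A small example of assertion-annotated code in the style described above,
# using Python asserts in place of the Java subset handled by the prototype.

def insert_sorted(xs, value):
    # Precondition: xs is sorted in non-decreasing order.
    assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), "pre: xs sorted"
    result = list(xs)
    position = 0
    while position < len(result) and result[position] < value:
        # Loop invariant: every element before 'position' is < value.
        position += 1
    result.insert(position, value)
    # Postcondition: result is sorted and contains all of xs plus value.
    assert all(result[i] <= result[i + 1] for i in range(len(result) - 1)), "post: sorted"
    assert len(result) == len(xs) + 1, "post: one element added"
    return result

print(insert_sorted([1, 3, 5, 8], 4))   # [1, 3, 4, 5, 8]
```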
Our main contribution concerns the predicates used to express assertions. In
contrast to Hoare, we allow an incomplete axiomatization of those predicates
beyond the point where a formal proof of the correctness of the program may no
longer be possible. In our approach predicates may be defined in a completely
informal manner (e.g. using natural language). Our hypothesis is that relaxing
the requirements on formal rigor makes it easier for the average developer to
express and reason about software artifacts while still allowing the automatic
generation of relevant, focused questions that help in finding defects. The
questions are addressed in the inspection, thus filling the somewhat loosely
defined steps of conventional inspection with a very concrete content. As a
side-effect our approach facilitates a novel systematic, asynchronous inspection
process based on collecting and assessing the answers to the questions.
We have adapted the method to the inspection of code as well as the inspection
of early designs. More precisely, we developed prototype tools for the
inspection of programs written in a subset of Java and early designs expressed
in a subset of UML. We claim that the method can be adapted to other notations
and (intermediate) steps of the software process. Technically, our approach is
working and has successfully been applied to small but non-trivial code (up to
1000 lines) and designs (up to five objects and ten messages). An in-depth
industrial evaluation requires an investment of substantial resources over many
years and has not been conducted. Despite this lack of extensive assessment, our
experience shows that our approach indeed makes it easier to express and reason
about assertions at a high level of abstraction.
No. 726
A USABILITY PERSPECTIVE ON REQUIREMENTS ENGINEERING - FROM METHODOLOGY TO
PRODUCT DEVELOPMENT
Pär Carlshamre
Usability is one of the most important aspects of software. A multitude of
methods and techniques intended to support the development of usable systems has
been provided, but the impact on industrial software development has been
limited. One of the reasons for this limited success is the gap between
traditional academic theory generation and commercial practice. Another reason
is the gap between usability engineering and established requirements
engineering practice. This thesis is based on empirical research and puts a
usability focus on three important aspects of requirements engineering:
elicitation, specification and release planning.
There are two main themes of investigation. The first is concerned with the
development and introduction of a usability-oriented method for elicitation and
specification of requirements, with an explicit focus on utilizing the skills of
technical communicators. This longitudinal, qualitative study, performed in an
industrial setting in the first half of the nineties, provides ample evidence in
favor of a closer collaboration between technical communicators and system
developers. It also provides support for the benefits of a task-oriented approach
to requirements elicitation. The results are also reflected upon in a
retrospective paper, and the experiences point in the direction of an increased
focus on the specification part, in order to bridge the gap between usability
engineering and established requirements management practice.
The second represents a usability-oriented approach to understanding and
supporting release planning in software product development. Release planning is
an increasingly important part of requirements engineering, and it is
complicated by intricate dependencies between requirements. A survey performed
at five different companies gave an understanding of the nature and frequency of
these interdependencies. The study indicated that the majority of requirements are dependent on others in a way that affects release planning, either by posing limits on the sequence of implementation, or by reciprocal effects on cost or value. This knowledge was then turned into the design and implementation of a
value. This knowledge was then turned into the design and implementation of a
support tool, with the purpose of provoking a deeper understanding of the
release planning task. This was done through a series of cooperative evaluation
sessions with release planning experts. The results indicate that, although the
tool was considered useful by the experts, the initial understanding of the task
was overly simplistic. Release planning was found to be a wicked problem, and a
number of implications for the design of a supportive tool are proposed.
No. 732
FROM INFORMATION MANAGEMENT TO TASK MANAGEMENT IN ELECTRONIC MAIL
Juha Takkinen
Electronic mail (e-mail) is an under-utilised resource of information and
knowledge. It could be an important part of the larger so-called organisational
memory (OM)—if it were not so disorganised and fragmented. The OM contains the
knowledge of the organisation’s employees, written records, and data. This
thesis is about organising and managing information in, and about, e-mail so as
to make it retrievable and usable for task management purposes.
The approach is user-centred and based on a conceptual model for task
management. The model is designed to handle tasks that occur in the
communications in an open distributed system, such as Internet e-mail. Both
structured and unstructured tasks can be supported. Furthermore, the model
includes management of desktop source information, which comprises the different
electronically available sources in a user’s computer environment. The
information from these is used in the model to sort information and thereby
handle tasks and related information. Tasks are managed as conversations, that
is, exchanges of messages.
We present a language called Formal Language for Conversations (FLC), based on
speech act theory, which is used to organise messages and relevant information
for tasks. FLC provides the container for task-related information, as well as
the context for managing tasks. The use of FLC is exemplified in two scenarios:
scheduling a meeting and making conference arrangements.
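In the spirit of FLC, though without its actual syntax, the sketch below tracks a meeting-scheduling task as a conversation whose state is driven by speech acts attached to messages; the act names and transitions are illustrative assumptions.

```python
# Minimal sketch of managing a task as a conversation of speech acts.
# Scenario: scheduling a meeting via e-mail messages tagged with speech acts.

TRANSITIONS = {
    ("open", "request"): "requested",       # "Can we meet on Tuesday?"
    ("requested", "counter"): "requested",  # "Wednesday works better."
    ("requested", "accept"): "agreed",      # "Wednesday is fine."
    ("requested", "decline"): "closed",     # "I cannot make it."
}

class Conversation:
    def __init__(self, task):
        self.task = task
        self.state = "open"
        self.thread = []                    # messages grouped into a task context

    def add(self, sender, act, text):
        self.thread.append((sender, act, text))
        self.state = TRANSITIONS.get((self.state, act), self.state)

c = Conversation("schedule project meeting")
c.add("anna", "request", "Can we meet on Tuesday at 10?")
c.add("bo", "counter", "Wednesday at 10 would suit me better.")
c.add("anna", "accept", "Wednesday at 10 it is.")
print(c.state, len(c.thread))   # agreed 3
```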
We describe a prototype based on the conceptual model. The prototype
explicitly refines and supports the notion of threads, which are employed so as
to give tasks a context. It integrates the use of FLC into the traditional
threading mechanism of e-mail, in addition to matching on text in the body. An
agent architecture is also described, which is used to harmonise the information
in the heterogeneous desktop sources. Finally, human-readable filtering rules
created by a machine learning algorithm are employed in the prototype. The
prototype is evaluated with regard to its thread-matching capability, as well as
the creation of usable and readable filtering rules. Both are deemed
satisfactory.
No. 745
LIVE HELP SYSTEMS: AN APPROACH TO INTELLIGENT HELP FOR WEB INFORMATION SYSTEMS
Johan Åberg
Since the creation of the World-Wide Web we have seen a great growth in the
complexity of Web sites. There has also been a large expansion in the number of Web sites and in the amount of usage. As a consequence, more and more Web site users are
having problems accomplishing their tasks, and it is increasingly important to
provide them with support.
Our research approach to online help for Web site users is the introduction
and study of what we call live help systems. A live help system is an
intelligent help system which integrates human experts in the process of advice
giving by allowing users to communicate with dedicated expert assistants through
the help system. Traditional fully automatic intelligent help systems have
several common problems. For example, there are problems with large system
complexity, knowledge engineering bottlenecks, and credibility. We hypothesise
that live help systems offer a solution to these problems.
Our aim with this thesis is to explore the design, technical feasibility, and
usability of live help systems, in order to provide a foundation on which future
research and practice can build. We have systematically explored the design
space of live help systems. We have implemented and successfully deployed a live
help system at an existing Web site, thereby demonstrating technical
feasibility. During the deployment period, data was collected from the users and
the human experts. Our analysis shows that live help systems are greatly
appreciated by Web users, and that they are indeed effective in helping users
accomplish their tasks. We also provide empirical results regarding the
effectiveness of employing automatic help functions as a filter for the human
experts. Further, the concept of user modelling as an aid for human experts has
been explored as part of the field study.
No. 746
MONITORING DISTRIBUTED TEAMWORK TRAINING
Rego Granlund
In team collaboration training, especially when the training is distributed over the net, there is a problem of identifying the students' collaboration and work
processes. An important design task when developing distributed interactive
simulation systems for team training is therefore to define a proper monitoring
functionality that will help training managers to evaluate the training. Thus a
goal of a computer-based monitoring system is to give training managers help in
understanding and assessing the performance of the trainees.
This thesis deals with the design and implementation of monitoring strategies
for distributed collaboration training. The aim has been to explore different
automatic monitoring strategies, and how they can help training managers in their task of understanding the students' collaboration during a training
session.
To explore possible monitoring strategies, a distributed, net-based micro-world
simulation and training system, C3Fire, has been developed and three series of experiments have been performed. C3Fire provides a Command, Control and
Communication training environment that can be used for team collaboration
training of emergency management tasks. The training domain, which is forest
fire fighting, acts as a micro-world, which creates a good dynamic environment
for the trainees.
In the three studies performed, a total of 192 persons participated as students; 132 of these were computer-literate undergraduate students and 60 were professional military officers. In these studies four monitoring goals were explored: the effectiveness of the teams, the information distribution in the organisation, the students' situation awareness, and the students' work and collaboration methods.
No. 747
DEVELOPMENT OF IT-SUPPORTED INTER-ORGANISATIONAL COLLABORATION - A CASE STUDY
IN THE SWEDISH PUBLIC SECTOR
Anneli Hagdahl
Collaboration across the organisational boundaries takes place for different
reasons. One of them is to solve complex problems that cannot be dealt with by a
single organisation. The area of vocational rehabilitation constitutes an
example of inter-organisational collaboration motivated by a need for joint
problem solving. Individuals are admitted to vocational rehabilitation with the
aim of entering or re-entering the labour market. These individuals constitute a
heterogeneous group with different kinds of problems, based on e.g. their social
situation, long-term diseases and/or substance abuse. As a result, they are
handled at more than one welfare state agency at a time, and the practitioners
working at these agencies need to collaborate to find individual solutions for
their clients. The expected positive effects of such collaboration are long-term
planning, increased quality of the case management, and reductions of invested
time and money.
In this thesis, an interpretive case study of inter-organisational teamwork
within the vocational rehabilitation is presented. The aim of the study was to
investigate how the collaboration could be supported by information technology.
During a time period of two years, practitioners from three welfare state
agencies took part in the research project. The activities included observations
of the teamwork, individual interviews with the practitioners and design of
information technology that should support the teamwork. An essential part of
the design activities was the user representatives' direct participation in the
design group, composed of practitioners and researchers. To stimulate participation, methods with their origin in the participatory design approach were used.
The design requirements that were defined included support for the team's
communication and joint documentation of cases, and also information sharing
about previous, present and future rehabilitation activities. The teamwork was
characterised by an open, positive atmosphere where the practitioners were
trying to find solutions for the clients within the frames of the current rules
and regulations, limited by the resources allocated for vocational
rehabilitation activities. However, the environment was also found to be dynamic
with changing, and in some cases conflicting, enterprise objectives.
Furthermore, the enterprise
objectives were not broken down into tangible objectives on the operational
level. The physical team meetings and the meetings with the clients constituted
essential parts of the work practices and it is concluded that these meetings
should not be substituted by technology. The case management could, however, be
supported by a flexible tool that meets the users' need for freedom of action.
No. 749
INFORMATION TECHNOLOGY FOR NON-PROFIT ORGANISATIONS - EXTENDED PARTICIPATORY
DESIGN OF AN INFORMATION SYSTEM FOR TRADE UNION SHOP STEWARDS
Sofie Pilemalm
The conditions for the third, non-profit sector, such as grassroots
organisations and trade unions, have changed dramatically in recent years, due
to prevailing social trends. Non-profit organisations have been seen as early
adopters of information technology, but the area is, at the same time, largely
unattended by scientific research. Meanwhile, the field of information systems
development is, to an increasing extent, recognising the importance of user
involvement in the design process. Nevertheless, participatory development
approaches, such as Participatory Design are not suited to the context of entire
organisations, and new, networked organisational structures, such as those of
non-profit organisations. This reasoning also applies to the theoretical
framework of Activity Theory, whose potential benefits for systems development
have been acclaimed but less often tried in practice.
This thesis aims, first, at extending Participatory Design to use in large,
particularly non-profit organisations. This aim is partly achieved by
integrating Participatory Design with an Argumentative Design approach and with
the application of Activity Theory modified for an organisational context. The
purpose is to support reasoning about, and foreseeing the consequences of,
different design solutions. Second, the thesis aims at exploring information
technology needs, solutions, and consequences in non-profit organisations, in
trade unions in particular. The case under study is the Swedish Trade Union
Confederation (LO) and the design of an information system for its 250 000 shop
stewards.
The thesis is based on six related studies complemented with data from work in
a local design group working according to the principles of Participatory
Design. The first study was aimed at investigating and comparing trade union
management’s view of the new technology and the actual needs of shop stewards.
The second study investigated the situation, tasks and problems of shop
stewards, as a pre-requisite for finding information technology needs. The third
study merged the previous findings into an argumentative design of an
information systems design proposal. The fourth study collected the voices from
secondary user groups in the organisation, and presented an Activity theoretical
analysis of the union organisation and a modified design proposal in the form of
a prototype. The fifth study presented an Activity theoretical framework,
modified for organisational application, and used it for producing hypotheses on
possible shop steward tasks and organisational consequences of the
implementation of the information system. The sixth paper was aimed at the
initial testing of the hypotheses, through the evaluation of information
technology facilities in one of the individual union affiliations. The
complementary data was used to propose further modifications of the integrated
Participatory, Argumentative, and Activity Theory design approach.
The major contributions of the study are, first, a modified Participatory
Design approach to be applied at three levels: in general, as a way of overcoming
experienced difficulties with the original approach, in the context of entire,
large organisations, and in the specific non-profit organisation context. The
second contribution is generated knowledge in the new research area of
information technology in the non-profit, trade union context, where for
instance the presented prototype can be seen as a source of inspiration. Future
research directions include further development and formalisation of the
integrated Participatory Design approach, as well as actual consequences of
implementing information technology in non-profit organisations and trade
unions.
No. 757
INDEXING STRATEGIES FOR TIME SERIES DATA
Henrik André-Jönsson
Traditionally, databases have stored textual data and have been used to store
administrative information. The computers used, and more specifically the
storage available, have been neither large enough nor fast enough to allow
databases to be used for more technical applications. In recent years these two
bottlenecks have started to disappear and there is an increasing interest in
using databases to store non-textual data like sensor measurements or other
types of process-related data. In a database a sequence of sensor measurements
can be represented as a time series. The database can then be queried to find,
for instance, subsequences, extrema points, or the points in time at which the
time series had a specific value. To make this search efficient, indexing
methods are required. Finding appropriate indexing methods is the focus of this
thesis.
There are two major problems with existing time series indexing strategies:
the size of the index structures and the lack of general indexing strategies
that are application independent. These problems have been thoroughly researched
and solved in the case of text indexing files. We have examined the extent to
which text indexing methods can be used for indexing time series.
A method for transforming time series into text sequences has been
investigated. An investigation was then made on how text indexing methods can be
applied on these text sequences. We have examined two well known text indexing
methods: the signature files and the B-tree. A study has been made on how these
methods can be modified so that they can be used to index time series. We have
also developed two new index structures, the signature tree and paged trie
structures. For each index structure we have constructed cost and size models,
resulting in comparisons between the different approaches.
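One way to realise the general idea of indexing a time series via a text representation is sketched below: the series is discretised into a symbol string and its n-grams are indexed in a sorted map standing in for a B-tree. The discretisation, alphabet and n-gram length are illustrative assumptions rather than the methods developed in the thesis.

```python
# Sketch: discretize a time series into a symbol string and index its n-grams
# in a sorted map (standing in for a B-tree). Parameters are illustrative.
from collections import defaultdict

def to_symbols(series, levels="abcd"):
    lo, hi = min(series), max(series)
    width = (hi - lo) / len(levels) or 1.0
    return "".join(levels[min(int((x - lo) / width), len(levels) - 1)] for x in series)

def build_index(text, n=3):
    """Map every n-gram of the symbol string to the positions where it occurs."""
    index = defaultdict(list)
    for i in range(len(text) - n + 1):
        index[text[i:i + n]].append(i)
    return dict(sorted(index.items()))      # sorted keys, as a B-tree would keep them

series = [0.1, 0.4, 0.9, 1.3, 1.1, 0.5, 0.2, 0.4, 0.9, 1.2]
text = to_symbols(series)
index = build_index(text)
print(text)                  # 'abcddbabcd'
print(index.get(text[:3]))   # positions where the query subsequence occurs: [0, 6]
```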
Our tests indicate that the indexing method we have developed, together with
the B-tree structure, produces good results. It is possible to search for and
find sub-sequences of very large time series efficiently.
The thesis also discusses what future issues will have to be investigated for
these techniques to be usable in a control system relying on time-series
indexing to identify control modes.
No. 758
LIBRARY COMMUNICATION AMONG PROGRAMMERS WORLDWIDE
Erik Berglund
Programmers worldwide share components and jointly develop components on a
global scale in contemporary software development. An important aspect of such
library-based programming is the need for technical communication with regard to libraries: library communication. As part of their work, programmers must discover, study, and learn as well as debate problems and future development. In this sense, the electronic, networked medium has fundamentally changed programming by providing new mechanisms for communication and global interaction through global networks such as the Internet. Today, the baseline for library communication is hypertext documentation. Improvements in the quality, efficiency, and cost of the programming activity, and reductions in frustration, can be expected from further developments in the electronic aspects of library communication.
This thesis addresses the use of the electronic networked medium in the
activity of library communication and aims to discover design knowledge for
communication tools and processes directed towards this particular area. A model
of library communication is provided that describes interaction among programmers
as webs of interrelated library communities. A discussion of electronic,
networked tools and processes that match such a model is also provided.
Furthermore, research results are provided from the design and industrial
evaluation of electronic reference documentation for the Java domain.
Surprisingly, the evaluation did not support individual adaptation
(personalization). Furthermore, global library communication processes have been
studied in relation to open-source documentation and user-related bug handling.
Open-source documentation projects are still relatively uncommon even in
open-source software projects. User-related bug handling does not address the
passive behavior users have towards bugs. Finally, the adaptive authoring
process in electronic reference documentation is addressed and found to provide
limited support for expressing the electronic, networked dimensions of authoring
requiring programming skill by technical writers.
Library communication is addressed here by providing engineering knowledge
with regards to the construction of practical electronic, networked tools and
processes in the area. Much of the work has been performed in relation to Java
library communication and therefore the thesis has particular relevance for the
object-oriented programming domain. A practical contribution of the work is the
DJavadoc tool that contributes to the development of reference documentation by
providing adaptive Java reference documentation.
No. 765
ADAPTING USERS: TOWARDS A THEORY OF QUALITY IN USE
Stefan Holmlid
The power of periods of learning and the knowledge of training professionals
are underestimated and unexplored. The challenges posed in this dissertation to
usability and HCI deal with the transformation from usability to use quality,
and learning as a means to promote use quality.
The design of interactive artefacts today is mostly based on the assumption
that the best design is achieved by formatively fitting properties of the
artifact in an iterative process to specified users, with specified tasks in a
specified context. As a contrast to that one current trend is to put a lot more
emphasis on designing the actual use of the artefact. The assumption is that the
best design is achieved through a design process where the artifact is given
form in accordance to how it is put to use.
We want to provide stakeholders of systems development with an increased
sensitivity to what use quality is and how they might participate in focusing on
use quality. Thus, we have asked ourselves what specific use qualities, and
models thereof that we find and formulate when studying a set of systems in use
at a bank, for the purpose of supporting learning environment designers.
This thesis reports on the development of a theory of use quality based on
theoretical investigations and empirical research of use qualities of
interactive artifacts. Empirical studies were performed in close collaboration
and intervention with learning environment developers in two development
projects, focusing on use qualities and qualities of learning to use the
artifact. The four studies comprised: 1) (learning to) use a word processor; 2) using experiences from that to formulate models of use quality as a design base for a learning environment for a teller system; 3) (learning to) use the teller system; and finally 4) assessment and modelling of the use of the teller system.
The specific results are a set of models of use quality, encompassing a number
of empirically derived use qualities. The most central of the latter are: surprise and confusion; the thin, but bendable, border between ready-to-hand and present-at-hand, an elasticity of breakdown; ante-use, that which precedes use; and dynamicity and activity, the time-based qualities without which the interactive material cannot be understood or designed. The general results are presented as a theory of use quality, represented through a set of models of use quality. These models are aimed at design for use, rather than focusing, in a monocultural fashion, on an artifact’s properties, its usability, its presence or the user experience.
No. 771
MULTIMEDIA REPRESENTATIONS OF DISTRIBUTED TACTICAL OPERATIONS
Magnus Morin
Our society frequently faces minor and major crises that require rapid
intervention by well-prepared forces from military organizations and
public-safety agencies. Feedback on the performance in operations is crucial to
maintain and improve the quality of these forces. This thesis presents methods
and tools for reconstruction and exploration of tactical operations.
Specifically, it investigates how multimedia representations of tactical
operations can be constructed and used to help participants, managers, and
analysts uncover the interaction between distributed teams and grasp the
ramifications of decisions and actions in a dynamically evolving situation. The
thesis is the result of several field studies together with practitioners from
the Swedish Armed Forces and from the public-safety sector in Sweden and the
United States. In those studies, models of realistic exercises were constructed
from data collected from multiple sources in the field and explored by
participants and analysts in subsequent after-action reviews and in-depth
analyses. The results of the studies fall into three categories. First, we
explain why multimedia representations are useful and demonstrate how they
support retrospective analysis of tactical operations. Second, we describe and
characterize a general methodology for constructing models of tactical
operations that can be adapted to the specific needs and conditions in different
domains. Third, we identify effective mechanisms and a set of reusable
representations for presenting multimedia models of operations. An additional
contribution is a domain-independent, customizable visualization framework for
exploring multimedia representations.
No. 772
A TYPE-BASED FRAMEWORK FOR LOCATING ERRORS IN CONSTRAINT LOGIC PROGRAMS
Pawel Pietrzak
This thesis presents a method for automatic location of type errors in
constraint logic programs (CLP) and a prototype debugging tool. The approach is
based on techniques of verification and static analysis originating from logic
programming, which are substantially extended in the thesis. The main idea is to
verify partial correctness of a program with respect to a given specification
which is intended to describe (an approximation of) the call-success semantics
of the program. This kind of specification, describing calls and successes for
every predicate of a program is known as descriptive directional type. For
specifying types for CLP programs the thesis extends the formalism of regular
discriminative types with constraint-domain-specific base types and with
parametric polymorphism.
Errors are located by identifying program points that violate verification
conditions for a given type specification. The specifications may be developed
interactively taking into account the results of static analysis.
The main contributions of the thesis are:
- a verification method for proving partial correctness of CLP programs with respect to polymorphic specifications of the call-success semantics,
- a specification language for defining parametric regular types,
- a verification-based method for locating errors in CLP programs,
- a static analysis method for CLP which is an adaptation and generalization of techniques previously devised for logic programming; its implementation is used in our diagnosis tool for synthesizing draft specifications,
- an implementation of the prototype diagnosis tool (called TELL).
No. 774
MODELLING OBJECT-ORIENTED DYNAMIC SYSTEMS USING A LOGIC-BASED FRAMEWORK
Choong-ho Yi
We observe that object-oriented (OO) formalisms and specification languages
are popular and obviously useful, and, in particular, that they are increasingly
used even for systems that change over time. At the same time, however, the
system specification is not precise enough in these approaches. This thesis
presents a formal approach to modelling OO dynamic systems using a logic-based
framework. The UML, which is an OO standard language, and the Statecharts formalism, which is a leading approach to modelling dynamic systems, have been formalized in the framework. In addition, formal reasoning from the
system-in-run perspective has been put forward, focusing on business goals.
Business goals, an emerging issue within systems engineering, are reasoned about in a systematic way, to check whether the goals are achieved or not in real business activities and to cope with situations where the goals are violated.
No. 779
A STUDY IN THE COMPUTATIONAL COMPLEXITY OF TEMPORAL REASONING
Mathias Broxvall
Reasoning about temporal and spatial information is a common task in computer
science, especially in the field of artificial intelligence. The topic of this
thesis is the study of such reasoning from a computational perspective. We study
a number of different qualitative point based formalisms for temporal reasoning
and provide a complete classification of computational tractability for
different time models. We also develop more general methods which can be used
for proving tractability and intractability of other relational algebras. Even
though most of the thesis pertains to qualitative reasoning the methods employed
here can also be used for quantitative reasoning. For instance, we introduce a
tractable and useful extension to the quantitative point-based formalism STP.
This extension gives the algebra an expressibility which subsumes the largest
tractable fragment of the augmented interval algebra and has a faster and
simpler algorithm for deciding consistency.
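For reference, consistency of a plain STP (a conjunction of difference constraints x - y <= c) can be decided by checking the constraint graph for negative cycles, for example with Bellman-Ford, as sketched below; the extension described in the thesis is not reproduced here.

```python
# Standard consistency check for a simple temporal problem (STP): constraints of
# the form (x - y <= c) are consistent iff the weighted constraint graph has no
# negative cycle, detectable with Bellman-Ford. Plain STP only.

def stp_consistent(variables, constraints):
    """constraints: list of (x, y, c) meaning x - y <= c."""
    dist = {v: 0.0 for v in variables}            # implicit zero-weight source
    edges = [(y, x, c) for (x, y, c) in constraints]
    for _ in range(len(variables) - 1):           # relax edges |V| - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return all(dist[u] + w >= dist[v] for u, v, w in edges)  # no negative cycle

# Hypothetical example: B starts 10-20 after A, C 5-10 after B, C at most 25 after A.
constraints = [("B", "A", 20), ("A", "B", -10),
               ("C", "B", 10), ("B", "C", -5),
               ("C", "A", 25)]
print(stp_consistent(["A", "B", "C"], constraints))   # True
```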
The use of disjunctions in temporal formalisms is of great interest not only
since disjunctions are a key element in different logics but also since the
expressibility can be greatly enhanced in this way. If we allow arbitrary
disjunctions, the problems under consideration typically become intractable and
methods to identify tractable fragments of disjunctive formalisms are therefore
useful. One such method is to use the independence property. We present an
automatic method for deciding this property for many relational algebras.
Furthermore, we show how this concept can not only be used for deciding
tractability of sets of relations but also to demonstrate intractability of
relations not having this property. Together with other methods for making total
classifications of tractability this goes a long way towards easing the task of
classifying and understanding relational algebras.
The tractable fragments of relational algebras are sometimes not expressive
enough to model real-world problems and a backtracking solver is needed. For
these cases we identify another property among relations which can be used to
aid general backtracking based solvers to find solutions faster.
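As a concrete illustration of the kind of quantitative point-based reasoning
discussed above, the plain STP (before the extension introduced in the thesis)
can be decided by checking its weighted constraint graph for negative cycles.
The sketch below shows only this standard idea, with invented names; it is not
the thesis's extended algebra or its consistency algorithm.

    # Illustrative sketch: consistency of a Simple Temporal Problem (STP).
    # A constraint (i, j, c) encodes  t_j - t_i <= c.  The STP is consistent
    # iff the corresponding weighted graph has no negative cycle, which a
    # Bellman-Ford pass from a virtual source can detect.

    def stp_consistent(num_points, constraints):
        # Virtual source `num_points` reaches every time point with weight 0.
        edges = [(num_points, v, 0) for v in range(num_points)]
        edges += [(i, j, c) for (i, j, c) in constraints]
        dist = [float("inf")] * (num_points + 1)
        dist[num_points] = 0
        for _ in range(num_points):          # relax |V| - 1 times
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        # Any further possible improvement implies a negative cycle.
        return all(dist[u] + w >= dist[v] for u, v, w in edges)

    # Example: t1 - t0 <= 10 and t0 - t1 <= -3 (i.e. t1 >= t0 + 3): consistent.
    print(stp_consistent(2, [(0, 1, 10), (1, 0, -3)]))   # True
    print(stp_consistent(2, [(0, 1, 2), (1, 0, -3)]))    # False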
No. 785
PUBLIKA INFORMATIONSTJÄNSTER - EN STUDIE AV DEN INTERNETBASERADE ENCYKLOPEDINS
BRUKSEGENSKAPER
Lars Hult
Society today makes increasing use of IT-based services for various kinds of
communication and information seeking, with the Internet as the carrier. The
public use of Internet services differs from that of traditional business
systems, which affects the design process to a decisive degree. The thesis
addresses the task of developing a use-oriented repertoire of qualities as
support for design work. A starting point for the work has been problems
experienced in practice, for example one-sidedly function-oriented requirements
specification and difficulties in directing design work when developing public
IT products. Methodologically, the research builds on a combination of theory
development, explorative systems development and empirical observations, where
the development and use of an encyclopaedic information service was chosen as
the area of application.
The empirical part of the thesis is based on a three-year case study aimed at
designing, analysing and describing the development of Nationalencyklopedin's
Internet service and its use value for its stakeholders. The study, which is
artefact- and design-oriented, is grounded in the genre concept, used as a
use-oriented perspective for describing the artefact's qualities in use. The
work has been carried out within a qualitative research approach that has
included prototype development, product evaluation and studies of use in home
environments.
The main knowledge contribution of the thesis consists partly of a genre
description of the Internet-based encyclopaedia, partly of the concrete
repertoire of qualities, linked to use values for Internet-based encyclopaedias,
that was identified in the study. In addition, the study puts forward a
theoretical frame of reference as an approach to genre description of public IT
artefacts. The empirical material primarily points to non-functional elements as
the basis for the stakeholders' perceived use values. Use values that differ
from functional requirements with respect to the experience of use are
formulated as, for example, currency, authority, integrity, closeness,
precision, searchability, availability, completeness and credibility. Applying
these qualities in current product development opens new possibilities as part
of requirements specification for design work and as support for communication
within the design group and with subcontractors.
No. 793
A GENERIC PRINCIPLE FOR ENABLING INTEROPERABILITY OF STRUCTURED AND
OBJECT-ORIENTED ANALYSIS AND DESIGN TOOLS
Asmus Pandikow
In the 1980s, the evolution of engineering methods and techniques yielded the
object-oriented approaches. Specifically, object orientation became established
in software engineering, gradually superseding structured approaches. In other
domains, e.g. systems engineering, object orientation is not well established.
As a result, different domains employ different methods and techniques. This
makes it difficult to exchange information between the domains, e.g. passing
systems engineering information for further refinement to software engineering.
This thesis presents a generic principle for bridging the gap between structured
and object-oriented specification techniques. The principle enables
interoperability of structured and object-oriented analysis and design tools
through mutual information exchanges. To this end, the concepts and elements of
representative structured and object-oriented specification techniques are
identified and analyzed. Then, a metamodel for each specification technique is
created. From these metamodels, a common metamodel is synthesized. Finally,
mappings between the metamodels and the common metamodel are created. Used in
conjunction, the metamodels, the common metamodel and the mappings enable tool
interoperability by transforming specification information under one
metamodel via the common metamodel into a representation under another
metamodel. Example transformations that illustrate the proposed principle using
fragments of an aircraft’s landing gear specification are provided. The work
presented in this thesis is based on the achievements of the SEDRES (ESPRIT
20496), SEDEX (NUTEK IPII-98-6292) and SEDRES-2 (IST 11953) projects. The
projects strove to integrate different systems engineering tools in the
forthcoming ISO-10303-233 (AP-233) standard for systems engineering design data.
This thesis is an extension to the SEDRES / SEDEX and AP-233 achievements. It
specifically focuses on integrating structured and modern UML-based
object-oriented specification techniques, which was only done schematically
in the SEDRES / SEDEX and AP-233 work.
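The principle of exchanging information via a common metamodel can be sketched
as a two-step mapping: tool model to common metamodel, then common metamodel to
another tool model. The element and attribute names below are invented for
illustration only and do not reflect the actual meta-models or the AP-233
schema.

    # Minimal sketch of exchange via a common metamodel (names invented).
    # Tool A (structured analysis) -> common metamodel -> Tool B (OO design).

    def sa_to_common(sa_process):
        """Map a structured-analysis 'process' onto a common 'Function'."""
        return {"kind": "Function",
                "name": sa_process["name"],
                "inputs": sa_process["in_flows"],
                "outputs": sa_process["out_flows"]}

    def common_to_oo(function):
        """Map a common 'Function' onto an OO class with one operation."""
        return {"kind": "Class",
                "name": function["name"].title().replace(" ", ""),
                "operations": [{"name": "execute",
                                "parameters": function["inputs"],
                                "returns": function["outputs"]}]}

    sa_element = {"name": "extend landing gear",
                  "in_flows": ["gear_lever_position"],
                  "out_flows": ["gear_down_and_locked"]}

    common = sa_to_common(sa_element)       # neutral representation
    oo_element = common_to_oo(common)       # tool-specific representation
    print(oo_element["name"])               # ExtendLandingGear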
No. 800
A FRAMEWORK FOR THE COORDINATION OF COMPLEX SYSTEMS’ DEVELOPMENT
Lars Taxén
This study is about the coordination of complex systems’ development. A
Framework has been designed and deployed by the author in the development
practice of Ericsson, a major supplier of telecommunication systems on the
global market. The main purpose of the study is to investigate the impacts on
coordination from the Framework. The development projects are very large and
subject to turbulent market conditions. Moreover, they have many participants
(often several thousand), have tight time constraints and are distributed to
many design centres all over the world. In these projects, coordination of the
development is of crucial importance. The Framework is grounded in a tentative
theory called the Activity Domain Theory, which in turn is based on the praxis
philosophy. In this theory the interaction between the individual and her
environment is mediated by signs. Coordination is conceived as a particular
activity domain which provides coordination to the development projects. The
coordination domain is gradually constructed by the actors in this domain by
iteratively refining a conceptual model, a process model, a transition model, a
stabilizing core and information system support. In this process individual
knowledge, shared meaning and organizational artefacts evolve in a dialectical
manner. The Framework has been introduced in the Ericsson development practice
over a period of more than ten years. Between 1999 and 2002 approximately 140
main projects and sub-projects at Ericsson have been impacted by the Framework.
These projects were distributed to more than 20 different development units
around the world and were carried out in a fiercely turbulent environment. The
findings indicate that the Framework has had a profound impact on the
coordination of the development of the most complex nodes in the 3rd generation
of mobile systems. The knowledge contributions include an account of the
history of the Framework at Ericsson and an identification of elements which
contribute to successful outcomes of development projects.
No. 808
TRE PERSPEKTIV PÅ FÖRVÄNTNINGAR OCH FÖRÄNDRINGAR I SAMBAND MED INFÖRANDE AV
INFORMATIONSSYSTEM
Klas Gäre
What is it that we fail to discover in implementation projects? Why? What are
the reasons for unexpected and unplanned consequences? There are no simple
relationships between investments in IT and, for example, productivity. To
better understand these relationships, concepts and theories are needed to
describe them. The introduction and use of information systems generate changes
in actions, routines and social processes. Both when carrying out and when
looking back on system implementations, the usual point of departure is the
planning perspective: planning, risk analysis and follow-up against the plan.
This perspective often overlooks, for example, differing expectations of change
that can overturn project plans, learning in organisations, and the significance
of actor groups in the project. The study therefore tries out three different
perspectives for description and analysis. The three perspectives view the
process from three points of departure in order to reach a better understanding
of the unexpected changes and deviations that occur in connection with the
introduction and use of large systems that affect many people. Together, the
three perspectives give a good picture of the process, its content and dynamics;
the study captures what goes on in people's minds and how this comes into play
in interaction with colleagues, co-workers, partners and others. The three
perspectives also bring clarity to the interplay between technology, in the form
of large computer systems, and the people who use it.
The planning-tradition perspective focuses on activities in relation to the
plan: follow-up of the plan, deviations from the plan, how it can be improved,
and success factors in successful projects. This was the dominant perspective
among the actors in the case study.
The structuration perspective sees the individual as part of a social context
of meaning-making, domination and legitimation. This perspective highlights how
actors' differing conceptions of the business, of change and of their own role
in the whole give rise to actions and consequences that often differ from those
found in requirements and project documents.
In the actor-network perspective, actions are at the centre: actions performed
by actors who relate to other actors in networks. Actors are not only people but
also human creations such as enterprise systems. Central here is how actors
pursue their interests and how they try to enrol others, and allow themselves to
be enrolled, into networks.
The thesis gives rich pictures of the perspectives on implementation and use,
with different explanations and understandings of the process of implementing
and using an enterprise system, and of the differences between expectations and
the changes that actually occurred. The use of enterprise systems affects the
business to a high degree: it moves individuals into new ways of working or
keeps old ones in place despite the need for continuous improvement. The thesis
provides a basis for new ways of communicating in implementation projects, shows
differences in interests and conceptions together with differing conditions for
control and controllability, and portrays the system as a co-actor rather than
merely a technical tool.
No. 821
CONCURRENT COMICS – PROGRAMMING OF SOCIAL AGENTS BY CHILDREN
Mikael Kindborg
This thesis presents a study of how the visual language of comics can be used
for programming of social agents. Social agents are interactive and animated
characters that can express emotions and behaviours in relation to other agents.
Such agents could be programmed by children to create learning games and
simulations. In order to make programming easier, it would be desirable to
support the mental transformation needed to link the static program source code
to the dynamic behaviour of the running program. Comic books use a
representation that captures the dynamics of a story in a visually direct way,
and may thus offer a convenient paradigm for programming of social agents using
a static representation. The thesis addresses the questions of how comic strips
and other signs used in comics can be applied to programming of social agents in
a way that makes the source code resemble the appearance of the running program,
and how such programs are understood by children. To study these questions, a
comic strip programming tool called “Concurrent Comics” has been developed. In
Concurrent Comics, social agent programs are represented as a collection of
events expressed as comic strips. The tool has been used by children at the age
of ten and eleven during several field studies in a school. In classroom
studies, the children were successful in creating language learning games with
the Concurrent Comics tool in a relatively short time (2 to 3 hours). However,
most games had a narrative character and a fairly linear structure. The results
from the field studies show that the children tend to interpret comic strip
programs as sequential stories. Still, the program examples presented show that
comic strip programs look similar to and have a direct visual mapping to the
runtime appearance. The conclusion is that the language conventions of comics
can be used to represent social agent programs in a visually direct way, but
that children have to learn the intended interpretation of comic strips as
potentially non-linear and concurrent events to program more simulation-oriented
and open-ended games.
No. 823
ON DEVELOPMENT OF INFORMATION SYSTEMS WITH GIS FUNCTIONALITY IN PUBLIC HEALTH
INFORMATICS: A REQUIREMENTS ENGINEERING APPROACH
Christina Ölvingson
Public health informatics has in recent years emerged as a field of its own
from medical informatics. Since public health informatics is newly established
and also new to public health professionals, previous research in the field is
relatively scarce. Even if the overlap with medical informatics is large, there
are differences between the two fields. Public health is, for example, more
theoretical and more multi-professional than most clinical fields and the focus
is on populations rather than individuals. These characteristics result in a
complex setting for development of information systems. To our knowledge there
exist few systems that support the collaborative process that constitutes the
foundation of public health programs. Moreover, most applications that do
support public health practitioners are small-scale, developed for a specific
purpose and have not gained any wider recognition.
The main objective of this thesis is to explore a novel approach to
identifying the requirements for information system support with geographical
information system (GIS) functionality in public health informatics. The work is
based on four case studies that are used to provide the foundation for the
development of an initial system design. In the first study, problems that
public health practitioners experience in their daily work were explored. The
outcome of the study was a set of descriptions of critical activities. In the
second study, the use case map notation was exploited for modeling the process
of public health programs. The study provides a contextual description of the
refinement of data to information that could constitute a basis for both
political and practical decisions in complex inter-organizational public health
programs. In the third study, ethical conflicts that arose when sharing
geographically referenced data in public health programs were analyzed to find
out how these affect the design of information systems. The results pointed out
issues that have to be considered when developing public health information
systems. In the fourth study, the use of information systems with GIS
functionality in WHO Safe Communities in Sweden and the need for improvements
were explored. The study resulted in identification of particular needs
concerning information system support among public health practitioners.
From these studies, general knowledge about the issues public health
practitioners experience in daily practice was gained and the requirements
identified were used as a starting-point for the design of information systems
for Motala WHO Safe Community.
The main contributions of the thesis involve two areas: public health
informatics and requirements engineering. First, a novel approach to system
development in public health informatics is presented. Second, the application
of use case maps as a tool for requirements engineering in complex settings such
as public health programs is presented. Third, the introduction of requirements
engineering in public health informatics has been exemplified. The contributions
of the thesis should enhance the possibility to perform more adequate
requirements engineering in the field of public health informatics. As a result,
it should be possible to develop information systems that better meet the needs
in the field of public health. Hence, it contributes to making the public health
programs more effective, which in the long run will improve public health.
No. 828
MEMORY EFFICIENT HARD REAL-TIME GARBAGE COLLECTION
Tobias Ritzau
As the development of hardware progresses, computers are expected to solve
increasingly complex problems. However, solving more complex problems requires
more complex software. To be able to develop these software systems, new
programming languages with new features and higher abstraction levels are
introduced. These features are designed to ease development, but sometimes they
also make the runtime behavior unpredictable. Such features cannot be used in
real-time systems.
A feature that traditionally has been unpredictable is garbage collection.
Moreover, even though a garbage collector frees unused memory, almost all such
methods require large amounts of additional memory. Garbage collection relieves
developers of the responsibility of reclaiming memory that is no longer used by
the application, a task that is tedious and error prone if done manually. Since
garbage collection increases productivity and decreases programming errors,
developers find it attractive, also in the real-time domain.
This thesis presents a predictable garbage collection method, real-time
reference counting, that increases memory efficiency by about 50 % compared
to the most memory efficient previously presented predictable garbage collector.
To increase performance, an optimization technique called object ownership
that eliminates redundant reference count updates is presented. Object
ownership is designed for reference counters, but can also be used to increase
the performance of other incremental garbage collectors.
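To make the discussion concrete, the sketch below shows plain, immediate
reference counting with a write barrier. It is an illustrative toy only: it is
not the thesis's real-time reference counting algorithm, and it does not
implement the object ownership optimization, which elides exactly the kind of
redundant count updates a naive barrier like this one performs.

    # Toy reference-counting sketch (illustration only; names are invented).

    class Obj:
        def __init__(self, name):
            self.name = name
            self.rc = 0          # number of references currently held
            self.fields = {}     # named reference fields to other objects

    def write_ref(holder, field, new_obj):
        """Write barrier: update counts when a reference field changes."""
        old = holder.fields.get(field)
        holder.fields[field] = new_obj
        if new_obj is not None:
            new_obj.rc += 1
        if old is not None:
            release(old)

    def release(obj):
        obj.rc -= 1
        if obj.rc == 0:          # no references left: reclaim immediately
            print("reclaiming", obj.name)
            for child in obj.fields.values():
                if child is not None:
                    release(child)

    root = Obj("root"); a = Obj("a"); b = Obj("b")
    root.rc = 1                  # pretend a local variable holds root
    write_ref(root, "x", a)      # root.x -> a   (a.rc == 1)
    write_ref(a, "y", b)         # a.y    -> b   (b.rc == 1)
    write_ref(root, "x", None)   # drops a, which transitively drops b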
Finally, a static garbage collector is presented. The static garbage
collector can allocate objects statically or on the runtime stack, and insert
explicit instructions to reclaim memory allocated on the heap. It makes it
possible to eliminate the need for runtime garbage collection for a large class
of Java applications. The static garbage collection method can also be used to
remove costly synchronization instructions. Competing static garbage collection
methods with reasonable analysis time are restricted to stack allocation, and
thus handle a smaller class of applications.
No. 833
ANALYSIS AND SYNTHESIS OF COMMUNICATION-INTENSIVE HETEROGENEOUS REAL-TIME
SYSTEMS
Paul Pop
Embedded computer systems are now everywhere: from alarm clocks to PDAs, from
mobile phones to cars, almost all the devices we use are controlled by embedded
computer systems. An important class of embedded computer systems is that of
real-time systems, which have to fulfill strict timing requirements. As
real-time systems become more complex, they are often implemented using
distributed heterogeneous architectures.
The main objective of the thesis is to develop analysis and synthesis methods
for communication-intensive heterogeneous hard real-time systems. The systems
are heterogeneous not only in terms of platforms and communication protocols,
but also in terms of scheduling policies. Regarding this last aspect, in this
thesis we consider time-driven systems, event-driven systems, and a combination
of both, called multi-cluster systems. The analysis takes into account the
heterogeneous interconnected nature of the architecture, and is based on an
application model that captures both the dataflow and the flow of control. The
proposed synthesis techniques derive optimized implementations of the system
that fulfill the design constraints. An important part of the system
implementation is the synthesis of the communication infrastructure, which has a
significant impact on the overall system performance and cost.
To reduce the time-to-market of products, the design of real-time systems
seldom starts from scratch. Typically, designers start from an already existing
system, running certain applications, and the design problem is to implement new
functionality on top of this system. Hence, in addition to the analysis and
synthesis methods proposed, we have also considered mapping and scheduling
within such an incremental design process.
The analysis and synthesis techniques proposed have been thoroughly evaluated
using a solid experimental platform. Besides the evaluations, performed using a
large number of generated example applications, we have also validated our
approaches using a realistic case study consisting of a vehicle cruise
controller.
No. 852
OBSERVING THE DYNAMIC BEHAVIOUR OF LARGE DISTRIBUTED SYSTEMS TO IMPROVE
DEVELOPMENT AND TESTING - AN EMPIRICAL STUDY IN SOFTWARE
Johan Moe
Knowledge about software systems' dynamics is a prerequisite for the
successful design, testing and maintenance of viable products. This work has
evolved a number of tools based on observation of software system dynamics in a
commercial environment, resulting in a method and a toolbox that can be used by
testers and maintainers to improve both the system and its test environment.
The toolbox uses interceptors to observe the object interaction on the CORBA
level during execution. With interceptors it is possible to intercept object
communication without needing the source code and with low impact on system
performance. Intercepting a series of messages between the various objects can
create an image of specific dynamic aspects of a running system. Observation can
also be combined with simulation via active probing. Here, active probing denotes
delays in communication and other simulated resource limitations that can be
injected into the system for capacity testing purposes.
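The interception idea can be illustrated with a generic wrapper that logs every
call on an object and can inject an artificial delay for capacity testing. This
is a language-neutral sketch of the principle only, not the CORBA interceptor
mechanism used in the toolbox; the class and method names are invented.

    import time

    # Generic sketch of interception and "active probing": every call on the
    # wrapped object is logged, and an artificial delay can be injected.

    class Interceptor:
        def __init__(self, target, probe_delay=0.0, log=print):
            self._target = target
            self._probe_delay = probe_delay   # simulated resource limitation
            self._log = log

        def __getattr__(self, name):
            attr = getattr(self._target, name)
            if not callable(attr):
                return attr
            def wrapped(*args, **kwargs):
                self._log(f"call {type(self._target).__name__}.{name}{args}")
                if self._probe_delay:
                    time.sleep(self._probe_delay)   # active probing
                result = attr(*args, **kwargs)
                self._log(f"return {name} -> {result!r}")
                return result
            return wrapped

    class Billing:
        def charge(self, account, amount):
            return f"charged {amount} to {account}"

    proxy = Interceptor(Billing(), probe_delay=0.05)
    proxy.charge("A-17", 120)     # logged, delayed, then delegated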
The method conceptually supports the plan-do-study-act cycle promoted by
Shewhart. It is designed to handle several development activities: system
tuning, testing, test evaluation, usage evaluation and increasing software
understanding in general. System tuning can be activities such as performance
enhancements or load balancing. The method also serves user profiling if it can
run at a customer site. With coverage measurements, for example of how each
internal function is exercised during testing, one gets a measure of test
quality and a way to specify goals for testing. With active probing, it becomes
possible to affect the execution of a program. This can be used for
system-robustness testing or as an oracle for how the system will react in
different real-life situations. The need for a general understanding is
documented with a series of interviews with software professionals. Yet another
interview series with professionals using the tool shows how understanding can
be enhanced.
The method has been developed and evaluated in several case studies at
different branches of ERICSSON AB in Linköping and Stockholm, Sweden. It is
planned to become an integrated part of ERICSSON's O&M platform from 2004.
No. 867
AN APPROACH TO SYSTEMS ENGINEERING TOOL DATA REPRESENTATION AND EXCHANGE
Erik Herzog
Over the last decades, computer-based tools have been introduced to facilitate
systems engineering processes. There are computer-based tools for assisting
engineers in virtually every aspect of the systems engineering process, from
requirements elicitation and analysis, through functional analysis, synthesis
and implementation, to verification. It is not uncommon for a tool to provide many
services covering more than one aspect of systems engineering. There exist
numerous situations where information exchanges across tool boundaries are
valuable, e.g., exchange of specifications between organisations using
heterogeneous tool sets, exchange of specifications from legacy to modern tools,
exchange of specifications to tools that provide more advanced modelling or
analysis capabilities than the originating tool or storage of specification data
in a neutral format such that multiple tools can operate on the data.
The focus in this thesis is on the analysis, design and implementation of a
method and tool neutral information model for enabling systems engineering tool
data exchange. The information model includes support for representation of
requirements, system functional architecture and physical architecture, and
verification and validation data. There is also support for definition of
multiple system viewpoints, representation of system architecture, traceability
information and version and configuration management. The applicability of the
information model for data exchange has been validated through implementation of
tool interfaces to COTS and proprietary systems engineering tools, and exchange
of real specifications in different scenarios. The results obtained from the
validation activities indicate that systems engineering tool data exchange may
decrease the time spent for exchanging specifications between partners
developing complex systems and that the information model approach described in
the thesis is a compelling alternative to tool specific interfaces.
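A drastically simplified sketch of a tool-neutral representation is given below:
requirements, functions and traceability links are held in neutral structures
that tool-specific interfaces could map their native data onto and serialize for
exchange. The structure and names are invented for illustration and are not the
information model of the thesis or of ISO 10303-233 (AP-233).

    from dataclasses import dataclass, field
    from typing import List
    import json

    # Toy "tool-neutral" model for requirements and traceability (invented).

    @dataclass
    class Requirement:
        ident: str
        text: str

    @dataclass
    class Function:
        ident: str
        name: str
        satisfies: List[str] = field(default_factory=list)  # traceability

    @dataclass
    class NeutralModel:
        requirements: List[Requirement]
        functions: List[Function]

        def to_exchange_file(self) -> str:
            return json.dumps({
                "requirements": [vars(r) for r in self.requirements],
                "functions": [vars(f) for f in self.functions]})

    model = NeutralModel(
        requirements=[Requirement("R1", "The gear shall retract in < 10 s")],
        functions=[Function("F1", "retract gear", satisfies=["R1"])])
    print(model.to_exchange_file())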
No. 869
TELECOMMUTING’S IMPLICATIONS ON TRAVEL AND TRAVEL PATTERNS
Jo Skåmedal
The subject field is within technology and social change, with a particular
focus on telecommuting and the possible changes that arise in travel patterns as
a result of the telecommuting situation. When a person starts working from home
once or twice a week instead of commuting back and forth to the main work place,
a number of changes in the telecommuter’s distribution of travel can and most
probably will arise. The commute trip is often excluded, which leads to the
so-called substitution effect. Non-work-related trips might be generated, and
the mix of different types of trips as well as the trips’ temporal and modal
choices is affected. In the aggregate, urban congestion may
be reduced and the work form may contribute to the urban sprawl, which may lead
to an increase in vehicle kilometres travelled. These and some other travel
pattern changes due to telecommuting are the topics studied in the thesis. The
comprehensive purpose is to: “Describe how telecommuting affects telecommuters’
travel and travel patterns by exploring the work form’s travel implications,
their mutual interaction and explaining the consequent travel outcome”.
The thesis has confirmed the work form’s net travel-reducing effect. Commute
trips obviously decrease when working from home, and while telecommuting is also
expected to lead to an increase in non-commute trips, which it does, the work
form also reduces a number of non-commute trips, with the probable total outcome
of a net travel reduction even for the non-commute trips. A discovery that makes
the travel reduction smaller than initially believed, however, is the
substantial number of telecommuters frequently practising half-day
telecommuting. Half-day telecommuting in turn stimulates changes of travel mode,
with increased car usage for commuting in preference to public transportation.
For non-commute trips, the travel mode tends to shift from cars to non-motorised
means of travel, such as bicycles and walking.
A conceptual model is constructed in order to increase the understanding of
the underlying causes of the interrelations between telecommuting and travel
and the resulting travel effects. Further, the relations and connections
between telecommuting and long-distance telecommuting are discussed contextually
with regard to how rural telecommuters’ travel patterns potentially
differ from those of urban telecommuters. The discussion resulted in 18
hypothetical differences between urban and rural telecommuters’ travel patterns,
which provide a foundation on which to develop future studies.
No. 870
THE ROLES OF IT - STUDIES OF ORGANISING WHEN IMPLEMENTING AND USING ENTERPRISE
SYSTEMS
Linda Askenäs
This study concerns implementation and use of enterprise systems (ERP systems)
in complex organisations. The purpose of this thesis is to problematise and
understand the social organising of information technology in organisations, by
studying the implementation and use of enterprise systems. This is done by using
a multi-theoretical perspective and studying cases of complex organisations with
a qualitative and interpretive research method.
The study manages to give a more profound understanding of the roles of the
technology. It is found that the enterprise systems act as Bureaucrat,
Manipulator, Administrative assistant or Consultant, or are dismissed, in the
sense that intended users choose to avoid using them. These roles of information
technology are formed in a rather complex organising process. A Structuration
Theory Analytical Model and Procedure (STAMP) is developed, that serves to
illuminate the dynamic relationships of individuals' or groups' interpretations,
power and norms and how that affects the implementation and use of enterprise
systems. The roles were also found to be different for individuals in similar
work conditions. This was due to how they learned their job, what understanding
of the job they developed, and what competences they developed. The different
kinds of competences found required different support from the technology and
also led the individuals to take different approaches to how to use the
technology. The study also explores why emotions appear and what they affect,
and identifies patterns of emotions and emotional transitions that appear during
implementation and use of an enterprise system.
The social aspect of using technology is in focus in this thesis. Thus, the
technology is not just a tool to make excellent use of; it becomes something
more - an actor with different roles. The main contribution is the development
of a language and an approach for understanding the use and implementation of
enterprise systems.
No. 872
AUGMENTING THE REMOTE CONTROL: STUDIES IN COMPLEX INFORMATION NAVIGATION FOR
DIGITAL TV
Aseel Berglund
The transition to digital TV is changing the television set into an
entertainment as well as an information-supplying device that provides two-way
communication with the viewer. However, the present remote control device is not
appropriate for navigation through the huge amount of services and information
provided by the future digital TV, presumably also a device for accessing the
Internet. One possibility for coping with the complex information navigation
required by TV viewers is an augmentation of the interaction tools currently
available for TV. Two approaches to such an augmentation are investigated in
this thesis: linking paper-based TV guides to the digital TV and enhancing the
remote control unit with speech interaction.
Augmentation of paper-based TV guides is a futuristic research approach based
on the integration of paper-based TV guides into computation technology. This
solution provides interactive paper-based TV guides that also function as a
remote control for the TV. A prototype system is developed and explorative
studies are conducted to investigate this approach. These studies indicate the
benefits of integrating paper-based TV guides into the TV set. They also
illuminate the potential to provide innovative solutions for home information
systems. Integrating familiar physical artifacts, such as paper and pen into TV
technology may provide easy access to information services usually provided by
PCs and the Internet. Thus, the same augmentation needed for TV as an
entertainment device also opens up new communication channels for providing
society information to citizens who do not feel comfortable with conventional
computers.
The thesis also reports on studies of speech interfaces for TV information
navigation. Traditional speech interfaces have several common problems, such as
user acceptance and misinterpretation of user input. These problems are
investigated in empirical and explorative studies with implementation of
mock-ups and running research systems. We have found that the pragmatic solution
of augmenting remote control devices by speech is a suitable solution that eases
information navigation and search.
No. 873
DEBUGGING TECHNIQUES FOR EQUATION-BASED LANGUAGES
Peter Bunus
Mathematical modeling and simulation of complex physical systems is emerging
as a key technology in engineering. Modern approaches to physical system
simulation allow users to specify simulation models with the help of
equation-based languages. Such languages have been designed to allow automatic
generation of efficient simulation code from declarative specifications. Complex
simulation models are created by combining available model components from
user-defined libraries. The resulting models are compiled in a simulation
environment for efficient execution.
The main objective of this thesis work is to develop significantly improved
declarative debugging techniques for equation-based languages. Both static and
dynamic debugging methods have been considered in this context.
A typical problem which often appears in physical system modeling and
simulation is that too many or too few equations are specified in a system of
equations. This leads to a situation where the simulation model is inconsistent
and therefore cannot be compiled and executed. The user should deal with such
over- and under-constrained situations by identifying the minimal set of
equations or variables that should be removed from or added to the equation
system in order to make the remaining set of equations solvable.
In this context, this thesis proposes new methods for debugging over- and
under-constrained systems of equations. We propose a methodology for detecting
and repairing over- and under-constrained situations based on graph-theoretical
methods. Components and equations that cause the irregularities are
automatically isolated, and meaningful error messages for the user are
presented. A major contribution of the thesis is our approach to reduce the
potentially large number of error fixing alternatives by applying filtering
rules extracted from the modeling language semantics.
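The graph-theoretic intuition can be illustrated with a maximum matching between
equations and the variables they mention: equations left unmatched indicate
over-constrained parts, and unmatched variables indicate under-constrained
parts. The sketch below illustrates only this general principle with a textbook
augmenting-path matching; it is not the diagnosis algorithm or the filtering
rules developed in the thesis, and the tiny equation system is invented.

    # Sketch of the over-/under-constrained diagnosis principle: bipartite
    # matching between equations and the variables they mention.

    def max_matching(eqs):
        """eqs: dict equation -> set of variable names (simple Kuhn matching)."""
        match_var = {}                       # variable -> equation

        def try_assign(eq, seen):
            for var in eqs[eq]:
                if var in seen:
                    continue
                seen.add(var)
                if var not in match_var or try_assign(match_var[var], seen):
                    match_var[var] = eq
                    return True
            return False

        matched_eqs = {eq for eq in eqs if try_assign(eq, set())}
        return matched_eqs, match_var

    # Three equations but only two variables: structurally over-constrained.
    system = {"e1": {"x"}, "e2": {"x", "y"}, "e3": {"y"}}
    matched, assignment = max_matching(system)
    print(sorted(set(system) - matched))          # ['e3']  unmatched equation
    print(sorted({"x", "y"} - set(assignment)))   # []      no free variables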
The thesis illustrates that it is possible to localize and repair a
significant number of errors during static analysis of an object-oriented
equation-based model without having to execute the simulation model. In this way
certain numerical failures can be avoided later during the execution process.
The thesis proves that the result of structural static analysis performed on the
underlying system of equations can effectively be used to statically debug real
models.
A semi-automated algorithmic debugging framework is proposed for dynamic fault
localization and behavior verification of simulation models. The run-time
debugger is automatically invoked when an assertion generated from a formal
specification of the simulation model behavior is violated. Analysis of the
execution trace, decorated with a data dependency graph in the form of the Block
Lower Triangular Dependency Graph (BLTDG) extracted from the language compiler,
is the basis of the debugging algorithm proposed in the thesis. We show how
program slicing and dicing performed at the intermediate code level combined
with assertion checking techniques to a large extent can automate the error
finding process and behavior verification for physical system simulation models.
Via an interactive session, the user is able to repair errors caused by
incorrectly specified equations and incorrect parameter values.
The run-time algorithmic debugger described in the thesis represents the first
major effort in adapting automated debugging techniques to equation-based
languages. To our knowledge none of the existing simulation environments
associated with such languages provides support for run-time declarative
automatic debugging.
This thesis makes novel contributions to the structure and design of
easy-to-use equation-based modeling and simulation environments and illustrates
the role of structural static analysis and algorithmic automated debugging in
this setting. The scope and feasibility of the approach is demonstrated by a
prototype environment for compiling and debugging a subset of the Modelica
language. We claim that the techniques developed and proposed in this thesis are
suitable for a wide range of equation-based languages and not only for the
Modelica language. These techniques can be easily adapted to the specifics of a
particular simulation environment.
No. 874
DESIGN AND USE OF ONTOLOGIES IN INFORMATION-PROVIDING DIALOGUE SYSTEMS
Annika Flycht-Eriksson
In this thesis, the design and use of ontologies as domain knowledge sources
in information-providing dialogue systems are investigated. The research is
divided into two parts: theoretical investigations that have resulted in a
requirements specification for the design of ontologies to be used in
information-providing dialogue systems, and empirical work on the development of
a framework for use of ontologies in information-providing dialogue systems.
The framework includes three models: A model for ontology-based semantic
analysis of questions. A model for ontology-based dialogue management,
specifically focus management and clarifications. A model for ontology-based
domain knowledge management, specifically transformation of user requests to
system-oriented concepts used for information retrieval.
In this thesis, it is shown that using ontologies to represent and reason on
domain knowledge in dialogue systems has several advantages. A deeper semantic
analysis is possible in several modules and a more natural and efficient
dialogue can be achieved. Another important aspect is that it facilitates
portability, that is, the ability to reuse and adapt the dialogue system to new
tasks and domains, since the domain-specific knowledge is separated from generic
features in the dialogue system architecture. Another advantage is that it reduces the
complexity of the linguistic resources produced for various domains.
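The third model, ontology-based domain knowledge management, can be illustrated
by mapping user-oriented request slots onto system-oriented concepts via a small
ontology. The toy ontology and vocabulary below are invented for the example and
are not taken from the thesis.

    # Toy illustration: a user request in user-oriented terms is mapped to
    # system-oriented concepts used for information retrieval.

    ontology = {
        "concepts": {
            "Connection": {"attributes": ["departure_stop", "arrival_stop",
                                          "departure_time"]},
        },
        # user vocabulary -> (concept, attribute)
        "lexicon": {
            "from":  ("Connection", "departure_stop"),
            "to":    ("Connection", "arrival_stop"),
            "leave": ("Connection", "departure_time"),
        },
    }

    def interpret(slots):
        """Map user-oriented slots onto a system-oriented query frame."""
        query = {}
        for word, value in slots.items():
            concept, attribute = ontology["lexicon"][word]
            query.setdefault(concept, {})[attribute] = value
        return query

    user_slots = {"from": "Resecentrum", "to": "Universitetet", "leave": "8:00"}
    print(interpret(user_slots))
    # {'Connection': {'departure_stop': 'Resecentrum',
    #                 'arrival_stop': 'Universitetet',
    #                 'departure_time': '8:00'}}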
No. 876
RESOURCE-PREDICTABLE AND EFFICIENT MONITORING OF EVENTS
Jonas Mellin
We present a formally specified event specification language (Solicitor).
Solicitor is suitable for realtime systems, since it results in
resource-predictable and efficient event monitors. In event monitoring, event
expressions defined in an event specification language control the monitoring by
matching incoming streams of event occurrences against the event expressions.
When an event expression has a complete set of matching event occurrences, the
event type that this expression defines has occurred. Each event expression is
specified by combining contributing event types with event operators such as
sequence, conjunction, disjunction; contributing event types may be primitive,
representing happenings of interest in a system, or composite, specified by
event expressions.
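A much simplified sketch of event composition is shown below for a sequence
operator: a composite occurrence is signalled when an occurrence of the second
contributing type follows an earlier, unconsumed occurrence of the first. This
illustrates the general idea only; Solicitor's operators and its event contexts
are defined formally in the thesis and are richer than this toy.

    from collections import deque

    # Simplified "sequence" operator A;B (not the Solicitor language itself).
    # The oldest pending A is consumed, roughly corresponding to a chronicle
    # event context; real event contexts govern this selection explicitly.

    class Sequence:
        def __init__(self, first_type, second_type):
            self.first_type = first_type
            self.second_type = second_type
            self.pending = deque()            # unconsumed occurrences of A

        def feed(self, event_type, timestamp):
            """Feed one occurrence; return a composite occurrence or None."""
            if event_type == self.first_type:
                self.pending.append(timestamp)
            elif event_type == self.second_type and self.pending:
                start = self.pending.popleft()
                return (self.first_type, start, self.second_type, timestamp)
            return None

    seq = Sequence("A", "B")
    for ev, t in [("A", 1), ("C", 2), ("A", 3), ("B", 4), ("B", 5)]:
        occurrence = seq.feed(ev, t)
        if occurrence:
            print("composite A;B occurred:", occurrence)
    # composite A;B occurred: ('A', 1, 'B', 4)
    # composite A;B occurred: ('A', 3, 'B', 5)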
The formal specification of Solicitor is based on a formal schema that
separates two important aspects of an event expression; these aspects are event
operators and event contexts. The event operators aspect addresses the relative
constraints between contributing event occurrences, whereas the event contexts
aspect addresses the selection of event occurrences from an event stream with
respect to event occurrences that are used or invalidated during event
monitoring. The formal schema also contains an abstract model of event
monitoring. Given this formal specification, we present realization issues, a
time complexity study, and a proof of limited resource requirements for event
monitoring.
We propose an architecture for resource-predictable and efficient event
monitoring. In particular, this architecture meets the requirements of real-time
systems by defining how event monitoring and tasks are associated. A declarative
way of specifying this association is proposed within our architecture.
Moreover, an efficient memory management scheme for event composition is
presented. This scheme meets the requirements of event monitoring in distributed
systems. This architecture has been validated by implementing an executable
component prototype that is part of the DeeDS prototype.
The results of the time complexity study are validated by experiments. Our
experiments corroborate the theory in terms of complexity classes of event
composition in different event contexts. However, the experimental platform is
not representative of operational real-time systems and, thus, the constants
derived from our experiments cannot be used for such systems.
No. 882
DISFLUENCY IN SWEDISH HUMAN–HUMAN AND HUMAN–MACHINE TRAVEL BOOKING DIALOGUES
Robert Eklund
This thesis studies disfluency in spontaneous Swedish speech, i.e., the
occurrence of hesitation phenomena like eh, öh, truncated words, repetitions and
repairs, mispronunciations and so on. The thesis is divided
into three parts:
PART I provides the background, both concerning scientific,
personal and industrial–academic aspects in the Tuning in quotes, and the
Preamble and Introduction (chapter 1).
PART II consists of one chapter only, chapter 2, which dives
into the etiology of disfluency. Consequently it describes previous research on
disfluencies, also including areas that are not the main focus of the present
tome, like stuttering, psychotherapy, philosophy, neurology, discourse
perspectives, speech production, application-driven perspectives, cognitive
aspects, and so on. A discussion on terminology and definitions is also
provided. The goal of this chapter is to provide as broad a picture as possible
of the phenomenon of disfluency, and how all those different and varying
perspectives are related to each other.
PART III describes the linguistic data studied and analyzed
in this thesis, with the following structure: Chapter 3 describes how the speech
data were collected, and for what reason. Sum totals of the data and the
post-processing method are also described. Chapter 4 describes how the data were
transcribed, annotated and analyzed. The labeling method is described in detail,
as is the method employed to do frequency counts. Chapter 5 presents the
analysis and results for all different categories of disfluencies. Besides
general frequency and distribution of the different types of disfluencies, both
inter- and intra-corpus results are presented, as are co-occurrences of
different types of disfluencies. Also, inter- and intra-speaker differences are
discussed. Chapter 6 discusses the results, mainly in light of previous
research. Reasons for the observed frequencies and distribution are proposed, as
are their relation to language typology, as well as syntactic, morphological and
phonetic reasons for the observed phenomena. Future work is also envisaged, both
work that is possible on the present data set and work that is possible on the
present data set given extended labeling: work that I think should be carried
out, but where the present data set fails, in one way or another, to meet the
requirements of such studies.
Appendices 1–4 list the sum total of all data analyzed in
this thesis (apart from Tok Pisin data). Appendix 5 provides an
example of a full human–computer dialogue.
No. 883
COMPUTING AT THE SPEED OF PAPER: UBIQUITOUS COMPUTING ENVIRONMENTS FOR
HEALTHCARE PROFESSIONALS
Magnus Bång
Despite the introduction of computers in most work environments, the
anticipated paperless workplace has not yet emerged. Research has documented
that material objects are essential in the organization of thought and that they
support everyday collaborative processes performed by staff members. However,
modern desktop computing systems with abstract graphical user interfaces fail to
support the tangible dimension. This work presents a novel approach to clinical
computing that goes beyond the traditional user-interface paradigm and relieves
clinicians of the burden of the mouse and keyboard.
The activities of people working in an emergency room were examined
empirically to ascertain how clinicians use real paper objects. The results
showed that the professionals arranged their workplaces and created material
structures that increased cognitive and collaborative performance. Essential
factors in these strategies were the availability of physical tools such as
paper-based patient records and forms that could be spatially positioned to
constitute reminders and direct the attention of the team, and to form shared
displays of the work situation.
NOSTOS is an experimental ubiquitous computing environment for co-located
healthcare teams. In this system, several interaction devices, including
paper-based interfaces, digital pens, walk-up displays, and a digital desk, form
a workspace that seamlessly blends virtual and physical objects. The objective
of the design was to enhance familiar workplace tools to function as user
interfaces to the computer in order to retain established cognitive and
collaborative routines.
A study was also conducted to compare the tangible interaction model for
clinical computing with a traditional computer-based patient record system with
a graphical user interface. The analysis suggests that, in ordinary clinical
environments, cognitive and collaborative strategies are better supported by the
tangible augmented paper approach and a digital desk than the traditional
desktop computing method with its graphical user interfaces. In conclusion, the
present findings indicate that tangible paper-based user interfaces and basic
augmented environments will prove to be successful in future clinical
workplaces.
No. 887
ENGLISH AND OTHER FOREIGN LINGUISTIC ELEMENTS IN SPOKEN SWEDISH: STUDIES OF
PRODUCTIVE PROCESSES AND THEIR MODELLING USING FINITE-STATE TOOLS
Anders Lindström
This thesis addresses the question of what native speakers of Swedish do when
items originating in English and several other foreign languages occur in their
native language. This issue is investigated at the phonological and
morphological levels of contemporary Swedish. The perspective is descriptive and
the approach employed is empirical, involving analyses of several corpora of
spoken and some of written Swedish. The focus is on naturally occurring but not
yet well-described phonological and morphological patterns which are critical
to, and can be applied in, speech technology applications. The phonetic and
phonological aspects are investigated in two studies. In a spoken language
production study, well-known foreign names and words were recorded by 491
subjects, yielding almost 24,000 segments of potential interest, which were
later transcribed and analyzed at the phonetic level. In a transcription study
of proper names, 61,500 of the most common names in Sweden were transcribed
under guidelines allowing extensions of the allophonic repertoire. The
transcription conventions were developed jointly during the course of the
exercise by four phonetically trained experts. An analysis of the transcriptions
shows that several such extensions were deemed necessary for speech generation
in a number of instances and as possible pronunciation variants, that should all
be allowed in speech recognition, in even more instances. A couple of
phonotactically previously impermissible sequences in Swedish are also
encountered and judged as necessary to introduce. Some additional speech sounds
were also considered possible but not encountered so far in the sample of names
covered. At the morphological level, it is shown how English word elements take
part in Swedish morphological processes such as inflection, derivation and
compounding. This is illustrated using examples from several corpora of both
spoken and written Swedish. Problems in acquiring enough spoken language data
for the application of data-driven methods are also discussed, and it is shown
that knowledge-based strategies may in fact be better suited to tackle the task
than data-driven alternatives, due to fundamental frequency properties of large
corpora.
The overall results suggest that any description of contemporary spoken
Swedish (regardless of whether it is formal, pedagogical or technical) needs to
be extended with both phonological and morphological material at least of
English origin. Sociolinguistic and other possible underlying factors governing
the variability observed in the data are examined and it is shown that education
and age play a significant role, in the sense that subjects with higher
education as well as those between the ages of 25 and 45 produced significantly more
segments that extend beyond the traditional Swedish allophone set. Results also
show that the individual variability is large and it is suggested that
interacting phonological constraints and their relaxation may be one way of
explaining this. Drawing on the results from the studies made, consequences for
Swedish speech technology applications are discussed and a set of requirements
is proposed. The conventions for lexical transcription that were developed and
subsequently implemented and evaluated in the case of proper names are also used
in the implementation of a lexical component, where one publicly available
Finite-State tool is first tried out in a pilot study, but shown to be
inadequate in terms of the linguistic description it may entail. Therefore, a
more flexible toolbox is used in a larger scale proof-of-concept experiment
using data from one of the previously analyzed corpora. The requirements arrived
at in this thesis have previously been used in the development of a
concatenative demi-syllable-based synthesizer for Swedish, and as one possible
strand of future research, it is suggested that the present results be combined
with recent advancements in speech alignment/recognition technology on the one
hand, and unit selection-based synthesis techniques on the other. In order to be
able to choose between different renderings of a particular name, e.g. echoing
the user's own pronunciation in a spoken dialogue system, recognition,
dictionary resources, speech alignment and synthesis procedures all need to be
controlled.
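The kind of pattern discussed above, an English loan stem combining with Swedish
inflectional suffixes, is a regular language: the concatenation of two finite
lexicons, which is exactly what finite-state descriptions capture. The sketch
below is a hand-rolled illustration of that observation with invented word
lists; it is not the lexical component or the finite-state toolbox used in the
thesis.

    # Toy acceptor for loan-stem + Swedish verb suffix (illustration only).

    LOAN_STEMS = {"mail", "chat", "print"}
    SWEDISH_VERB_SUFFIXES = {"a", "ar", "ade", "at"}   # infinitive, present,
                                                       # past, supine

    def accepts(word):
        """Accept word iff it splits into loan stem + Swedish verb suffix."""
        for i in range(1, len(word)):
            stem, suffix = word[:i], word[i:]
            if stem in LOAN_STEMS and suffix in SWEDISH_VERB_SUFFIXES:
                return True
        return False

    for w in ["mailade", "chattar", "printa", "mail"]:
        print(w, accepts(w))
    # mailade True, chattar False (doubled-t stem not listed), printa True,
    # mail False (bare stem, no Swedish suffix)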
No. 889
CAPACITY-CONSTRAINED PRODUCTION-INVENTORY SYSTEMS: MODELLING AND ANALYSIS IN
BOTH A TRADITIONAL AND AN E-BUSINESS CONTEXT
Zhiping Wang
This thesis addresses issues in production-inventory systems in both
a traditional and an e-business context, with an emphasis on capacity
considerations. The general aim of the thesis is to model capacity-constrained
production-inventory systems and thereby provide theoretical frameworks for
control as well as design of these systems.
The research has been conducted in two different contexts. In the traditional
context, an extensive survey of the literature on capacity-constrained
production-inventory systems is first presented. Production-inventory systems
with capacity limitations are then modelled using the Laplace transform and
input-output analysis for deterministic and stochastic demand situations,
respectively. In the formulation of the model for the deterministic demand
situations, the focus is on the way in which the fundamental balance equations
for inventory and backlogs need to be modified. In the formulation for
stochastic demand situations, the model extends previous theory in the direction
of capacity considerations combined with uncertainty in external demand. The
results of the modelling and analysis in the traditional context contribute to
the development of a theoretical background for production-inventory system
control applying the Laplace transform approach and input-output analysis.
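As a generic, textbook-style illustration of the kind of balance relation that
must be modified under a capacity limit (this is not the Laplace-transform or
input-output formulation developed in the thesis, and the symbols are chosen
only for the example), a discrete-time capacity-constrained production-inventory
system can be written as

    I_t = I_{t-1} + P_t - D_t, \qquad 0 \le P_t \le \bar{P},
    B_t = \max(0, -I_t), \qquad S_t = \max(0, I_t),

where D_t is external demand, P_t is production limited by the capacity
\bar{P}, I_t is net inventory, B_t the backlog and S_t the physical stock. The
bound on P_t is what prevents the unconstrained balance equations from being
carried over unchanged.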
In the e-business context, those aspects which are affected by e-business
based customer ordering systems, and hence influence production-inventory
systems, are studied. A mathematical model based on the assumption of a simple
e-business model of direct sales channels is presented. Since e-business
significantly facilitates customisation, models of two different production
systems which consider customisation are provided. The production-inventory
systems are then developed into an extended system where customers are included.
For this system, a multiple objective formulation is obtained, in view of the
importance of customers. The results of the modelling and analysis in the
e-business context contribute insights and perspectives on ways to design and
control e-business influenced production-inventory systems.
No. 893
EYES ON MULTIMODAL INTERACTION
Pernilla Qvarfordt
Advances in technology are making it possible for users to interact with
computers by various modalities, often through speech and gesture. Such
multimodal interaction is attractive because it mimics the patterns and skills
in natural human-human communication. To date, research in this area has
primarily focused on giving commands to computers. The focus of this thesis
shifts from commands to dialogue interaction. The work presented here is divided
into two parts. The first part looks at the impact of the characteristics of the
spoken feedback on users’ experience of a multimodal dialogue system. The second
part investigates whether and how eye-gaze can be utilized in multimodal
dialogue systems.
Although multimodal interaction has attracted many researchers, little
attention has been paid to how users experience such systems. The first part of
this thesis investigates what makes multimodal dialogue systems either
human-like or tool-like, and what qualities are most important to users. In a
simulated multimodal timetable information system users were exposed to
different levels of spoken feedback. The results showed that the users preferred
the system to be either clearly tool-like, with no spoken words, or clearly
human-like, with complete and natural utterances. Furthermore, the users’
preference for a human-like multimodal system tended to be much higher after
they had actual experience than beforehand based on imagination.
Eye-gaze plays a powerful role in human communication. In a computer-mediated
collaborative task involving a tourist and a tourist consultant, the second part
of this thesis starts with examining the users’ eye-gaze patterns and their
functions in deictic referencing, interest detection, topic switching, ambiguity
reduction, and establishing common ground in a dialogue. Based on the results of
this study, an interactive tourist advisor system that encapsulates some of the
identified patterns and regularities was developed. In a “stress test”
experiment based on eye-gaze patterns only, the developed system conversed with
users to help them plan their conference trips. Results demonstrated that
eye-gaze can play an assistive role in managing future multimodal human-computer
dialogues.
No. 900
SHADES OF USE: THE DYNAMICS OF INTERACTION DESIGN FOR SOCIABLE USE
Mattias Arvola
Computers are used in sociable situations, for example during customer
meetings. This is seldom recognized in design, which means that computers often
become a hindrance in the meeting. Based on empirical studies and socio-cultural
theory, this thesis provides perspectives on sociable use and identifies
appropriate units of analysis that serve as critical tools for understanding and
solving interaction design problems. Three sociable situations have been
studied: customer meetings, design studios and domestic environments. In total,
49 informants were met with during 41 observation and interview sessions and 17
workshops; in addition, three multimedia platforms were also designed. The
empirical results show that people need to perform individual actions while
participating in joint action, in a spontaneous fashion and in consideration of
each other. The consequence for design is that people must be able to use
computers in different manners to control who has what information. Based on the
empirical results, five design patterns were developed to guide interaction
design for sociable use. The thesis demonstrates that field studies can be used
to identify desirable use qualities that in turn can be used as design
objectives and forces in design patterns. Re-considering instrumental,
communicational, aesthetical, constructional and ethical aspects can furthermore
enrich the understanding of identified use qualities. With a foundation in the
field studies, it is argued that the deliberation of dynamic characters and use
qualities is an essential component of interaction design. Designers of
interaction are required to work on three levels: the user interface, the
mediating artefact and the activity of use. It is concluded that doing
interaction design is to provide users with perspectives, resources and
constraints on their space for actions; the complete design is not finalized
until the users engage in action. This is where the fine distinctions, what
I call ‘shades of use’, appear.
No. 910
IN THE BORDERLAND BETWEEN STRATEGY AND MANAGEMENT CONTROL – THEORETICAL
FRAMEWORK AND EMPIRICAL EVIDENCE
Magnus Kald
Strategy and management control are two fields of research that have become
increasingly inter-linked. Research in strategy has shown, for instance, that
strategies are of no effect unless they permeate the entire organization, and
that they become obsolete if not renewed as the business environment changes.
Similarly, research in management control has shown that management control
loses its relevance if it does not reflect strategy or is not useful in
operations. This dissertation considers a number of theoretical approaches to
corporate and business strategies and their connection to management control.
The reasoning is also examined in light of empirical data collected from major
Swedish firms in various industries. One finding is that some combinations of
corporate and business strategies and management control are more congruent than
other combinations. An additional question discussed in the dissertation is how
different types of business strategy could be changed and combined; these
possibilities are studied empirically on the basis of data taken from annual
reports of Nordic paper and pulp companies. The results show that the nature of
business strategy can be changed over time, but that different kinds of business
strategies can seldom be combined within the same business unit. Further, the
dissertation treats the relationship between different perspectives on business
strategies. Another central element of the dissertation is the design and use of
performance measurement. On the basis of extensive empirical material from large
Nordic firms in a variety of industries, performance measurement at Nordic firms
is described, noting differences between countries and between dissimilar
business strategies. According to the findings, the Nordic firms used a broad
spectrum of measures, which according to theory should be more closely related
to strategy than would financial measures alone.
No. 918
SHAPING ELECTRONIC NEWS: GENRE PERSPECTIVES ON INTERACTION DESIGN
Jonas Lundberg
This thesis describes and analyzes implications of going from hypertext news
to hypermedia news through a process of design, involving users and producers.
As in any product development, it is difficult to conceive design of a novel
news format that does not relate to earlier genres, and thus to antecedent
designs. The hypothesis is that this problem can be addressed by explicitly
taking a genre perspective to guide interaction design. This thesis draws on
genre theory, which has previously been used in rhetoric, literature, and
information systems. It is also informed by theories from human-computer
interaction. The methodological approach is a case study of the ELIN project, in
which new tools for online hypermedia newspaper production were developed and
integrated. The study follows the project from concept design to interaction
design and implementation of user interfaces, over three years. The thesis makes
three contributions. Firstly, a genre perspective on interaction design is
described, revealing broadly in what respects genre affects design. Secondly,
the online newspaper genre is described. Based on a content analysis of online
newspaper frontpages, and interviews with users and producers, genre specific
design recommendations regarding hypertext news front-page design are given. A
content analysis of Swedish online newspapers provides a basis for a design
rationale of the context stream element, which is an important part of the news
context on article pages. Regarding hypervideo news, design rationale is given
for the presentation of hypervideo links, in the context of a hypermedia news
site. The impact on news production in terms of dynamics of convergence is also
discussed. Thirdly, the design processes in cooperative scenario building
workshops are evaluated, regarding how the users and producers were able to
contribute. It provides implications and lessons learned for the workshop phase
model. A discourse analysis also reveals important facilitator skills and how
participants relied on genre in the design process.
No. 920
VERIFICATION AND SCHEDULING TECHNIQUES FOR REAL-TIME EMBEDDED SYSTEMS
Luis Alejandro Cortés
Embedded computer systems have become ubiquitous. They are used in a wide
spectrum of applications, ranging from household appliances and mobile devices
to vehicle controllers and medical equipment.
This dissertation deals with design and verification of embedded systems, with
a special emphasis on the real-time facet of such systems, where the time at
which the results of the computations are produced is as important as the
logical values of these results. Within the class of real-time systems two
categories, namely hard real-time systems and soft real-time systems, are
distinguished and studied in this thesis.
First, we propose modeling and verification techniques targeted towards hard
real-time systems, where correctness, both logical and temporal, is of prime
importance. A model of computation based on Petri nets is defined. The model can
capture explicit timing information, allows tokens to carry data, and supports
the concept of hierarchy. Also, an approach to the formal verification of
systems represented in our modeling formalism is introduced, in which model
checking is used to prove whether the system model satisfies its required
properties expressed as temporal logic formulas. Several strategies for
improving verification efficiency are presented and evaluated.
Second, we present scheduling approaches for mixed hard/soft real-time
systems. We study systems that have both hard and soft real-time tasks and for
which the quality of results (in the form of utilities) depends on the
completion time of soft tasks. Also, we study systems for which the quality of
results (in the form of rewards) depends on the amount of computation allotted
to tasks. We introduce quasi-static techniques, which are able to exploit at low
cost the dynamic slack caused by variations in actual execution times, for
maximizing utilities/rewards and for minimizing energy.
Numerous experiments, based on synthetic benchmarks and realistic case
studies, have been conducted in order to evaluate the proposed approaches. The
experimental results show the merits and worthiness of the techniques introduced
in this thesis and demonstrate that they are applicable on real-life examples.
No. 929
PERFORMANCE STUDIES OF FAULT-TOLERANT MIDDLEWARE
Diana Szentiványi
Today’s software engineering and application development trend is to take
advantage of reusable software. Much effort is directed towards easing the task
of developing complex, distributed, network based applications with reusable
components. To ease the task of the distributed systems’ developers, one can use
middleware, i.e. a software layer between the operating system and the
application, which handles distribution transparently.
A crucial feature of distributed server applications is high availability.
This implies that they must be able to continue operating even in the presence of
crashes. Embedding fault tolerance mechanisms in the middleware on top of which
the application is running offers the potential to reduce application code size,
thereby reducing developer effort. Also, outage times due to server crashes can
be reduced, as failover is taken care of automatically by middleware. However, a
trade-off is involved: during periods with no failures, as information has to be
collected for the automatic failover, client requests are serviced with higher
latency. To characterize the real benefits of middleware, this trade-off needs
to be studied. Unfortunately, to date, few trade-off studies involving
middleware that supports fault tolerance with application to realistic cases
have been conducted.
The contributions of the thesis are twofold: (1) insights based on empirical
studies and (2) a theoretical analysis of components in a middleware equipped
with fault tolerance mechanisms.
In connection with part (1), the thesis describes the detailed implementation of
two platforms based on CORBA (Common Object Request Broker Architecture) with
fault tolerance capabilities: one built by following the FT-CORBA standard,
where only application failures are taken care of, and a second obtained by
implementing an algorithm that ensures uniform treatment of infrastructure and
application failures. Based on empirical studies of the availability/performance
trade-off, several insights were gained, including the benefits and drawbacks of
the two infrastructures. The studies were performed using a realistic
(telecommunication) application set up to run on top of both extended middleware
platforms. Further, the thesis proposes a technique to improve performance in
the FT-CORBA based middleware by exploiting application knowledge; to enrich
application code with fault tolerance mechanisms we use aspect-oriented
programming. In connection with part (2), the thesis models elements of an
FT-CORBA-like architecture mathematically, in particular by using queuing
theory. The model is then used to study the relation between different
parameters. This provides the means to configure one middleware parameter,
namely the checkpointing interval, leading to maximal availability or minimal
response time.
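As an illustration of the kind of trade-off such a model captures, the classical
first-order approximation of the optimal checkpointing interval (often
attributed to Young) balances checkpointing overhead against expected
re-execution after a failure. The sketch below is not the queuing model
developed in the thesis; all parameter names and values are hypothetical.

    # Illustrative only: a first-order model of the checkpointing trade-off,
    # in the spirit of Young's classical approximation. It is NOT the queuing
    # model developed in the thesis; parameters are hypothetical.
    import math

    def expected_overhead(interval_s, checkpoint_cost_s, mtbf_s):
        """Fraction of time lost to checkpointing plus expected re-execution,
        assuming failures arrive at rate 1/mtbf_s."""
        checkpoint_overhead = checkpoint_cost_s / interval_s
        rework_overhead = (interval_s / 2.0) / mtbf_s  # ~half an interval lost per failure
        return checkpoint_overhead + rework_overhead

    def youngs_optimal_interval(checkpoint_cost_s, mtbf_s):
        """Interval minimising the overhead above: sqrt(2 * C * MTBF)."""
        return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

    C, MTBF = 0.5, 3600.0                 # 0.5 s per checkpoint, 1 h between failures
    t = youngs_optimal_interval(C, MTBF)  # 60 s for these values
    print(t, expected_overhead(t, C, MTBF))

A shorter interval lowers the re-execution cost after a crash but raises the
steady-state overhead, which is exactly the availability/performance tension
studied in the thesis.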
No. 933
MANAGEMENT ACCOUNTING AS CONSTRUCTING AND OPPOSING CUSTOMER FOCUS: THREE CASE
STUDIES ON MANAGEMENT ACCOUNTING AND CUSTOMER RELATIONS
Mikael Cäker
This thesis is on the relation between management accounting and customer
focus and relates to discussions about how internal managing processes in
organizations are interrelated with interorganizational relations, specifically
constructions of customers. Both a normative and a descriptive perspective on
the relation are taken within the different parts of the thesis, which consists
of a licentiate thesis, three articles and a synthesizing text. The purpose is
to understand the interaction between management accounting and customer focus
on operative levels in industrial organizations, where focus on customer has a
central place. The results are also discussed with respect to their possible
importance in the development of managing processes of organizations. The thesis
is based on three cases, which have been studied with mainly a qualitative
approach.
In the licentiate thesis, traditional accounting models, originally developed
to provide information for analyzing products, are revisited under the
assumption that customers and products are equally interesting to analyze. The
three articles explore the role of management accounting in interpreting
customers and the interface to customers. In the first article, strong customer
accountability is found to override the intentions from managers, as
communicated through the accounting system. Arguments of how management
accounting can be perceived as inaccurate then have a central place in
motivating how customers are prioritized in a way not favored by managers.
Furthermore, in the second article, customers are experienced both as catalysts
for and as frustrators of change processes, and changes in management accounting
are found to co-develop with the different customer relations and with how
different customers prefer to communicate. The third article explores how coordination
mechanisms operate in relation to each other in coordination of
customer-supplier relationships. A strong market mechanism creates space for
bureaucratic control and use of management accounting. However, the use of
bureaucratic control in turn relies on social coordination between actors in
order to function.
These four parts are revisited and related to each other in a synthesizing
part of the thesis. The relation between management accounting and customer
focus is approached in two ways. Management accounting may construct customer
focus, making it impossible to distinguish between the two. However, another
interpretation of the relation is management accounting and customer focus as
opposing logics, where management accounting represents hierarchical influence,
homogeneous control processes and cost efficient operations, and customer focus
represents customer influence; control processes adapted to the customer and
customized operations.
No. 937
TALPLANNER AND OTHER EXTENSIONS TO TEMPORAL ACTION LOGIC
Jonas Kvarnström
Though the exact definition of the boundary between intelligent and
non-intelligent artifacts has been a subject of much debate, one aspect of
intelligence that many would deem essential is deliberation: Rather than
reacting "instinctively" to its environment, an intelligent system should also
be capable of reasoning about it, reasoning about the effects of actions
performed by itself and others, and creating and executing plans, that is,
determining which actions to perform in order to achieve certain goals. True
deliberation is a complex topic, requiring support from several different
sub-fields of artificial intelligence. The work presented in this thesis spans
two of these partially overlapping fields, beginning with reasoning about action
and change and eventually moving over towards planning.
The qualification problem relates to the difficulties inherent in providing,
for each action available to an agent, an exhaustive list of all qualifications
to the action, that is, all the conditions that may prevent the action from
being executed in the intended manner. The first contribution of this thesis is
a framework for modeling qualifications in Temporal Action Logic (TAL).
As research on reasoning about action and change proceeds, increasingly
complex and interconnected domains are modeled in increasingly greater detail.
Unless the resulting models are structured consistently and coherently, they
will be prohibitively difficult to maintain. The second contribution is a
framework for structuring TAL domains using object-oriented concepts.
Finally, the second half of the thesis is dedicated to the task of
planning. TLPlan pioneered the idea of using domain-specific control
knowledge in a temporal logic to constrain the search space of a
forward-chaining planner. We develop a new planner called TALplanner, based on
the same idea but with fundamental differences in the way the planner verifies
that a plan satisfies control formulas. TALplanner generates concurrent plans
and can take resource constraints into account. The planner also applies several
new automated domain analysis techniques to control formulas, further increasing
performance by orders of magnitude for many problem domains.
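To give a flavour of such domain-specific control knowledge, a schematic
blocks-world rule in the style popularised by TLPlan (not a formula taken from
the thesis, and not written in TAL syntax) states that a block already in its
goal position should remain there, pruning every plan that undoes an achieved
goal:

    \forall x, y:\;
    \Box \bigl( \mathit{goal}(\mathit{on}(x,y)) \wedge \mathit{on}(x,y)
    \rightarrow \bigcirc\, \mathit{on}(x,y) \bigr)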
No. 938
FUZZY GAIN-SCHEDULED VISUAL SERVOING FOR AN UNMANNED HELICOPTER
Bourhane Kadmiry
The overall objective of the Wallenberg Laboratory for Information Technology
and Autonomous Systems (WITAS) at Linköping University is the development of an
intelligent command and control system, containing active-vision sensors, which
supports the operation of an unmanned air vehicle (UAV). One of the UAV
platforms of choice is the R50 unmanned helicopter, by Yamaha.
The present version of the UAV platform is augmented with a camera system.
This is enough for performing missions like site mapping and terrain
exploration, in which the helicopter motion can be rather slow. But in tracking
missions and
obstacle avoidance scenarios, involving high-speed helicopter motion, robust
performance for the visual-servoing scheme is desired. Robustness in this case
is twofold: 1) w.r.t. time delays introduced by the image processing; and 2)
w.r.t. disturbances, parameter uncertainties and unmodeled dynamics which reflect
on the feature position in the image, and the camera pose.
With this goal in mind, we propose to explore the possibilities for the design
of fuzzy controllers achieving stability and robust, minimal-cost performance
w.r.t. time delays and unstructured uncertainties for image feature tracking,
and to test a control solution both in simulation and on real camera platforms.
Common
to both are model-based design by the use of nonlinear control approaches. The
performance of these controllers is tested in simulation using the nonlinear
geometric model of a pin-hole camera. Then we implement and test the resulting
controller on the camera platform mounted on the UAV.
No. 945
HYBRID BUILT-IN SELF-TEST AND TEST GENERATION TECHNIQUES FOR DIGITAL SYSTEMS
Gert Jervan
Technological development is enabling the production of increasingly
complex electronic systems. All such systems must be verified and tested to
guarantee their correct behavior. As the complexity grows, testing has become
one of the most significant factors that contribute to the total development
cost. In recent years, we have also witnessed the inadequacy of the established
testing methods, most of which are based on low-level representations of the
hardware circuits. Therefore, more work has to be done at abstraction levels
higher than the classical gate and register-transfer levels. At the same time,
solutions based on automatic test equipment have failed to deliver the required
test quality. As a result, alternative testing methods have been studied, which
has led to the development of built-in self-test (BIST) techniques.
In this thesis, we present a novel hybrid BIST technique that addresses
several areas where classical BIST methods have shortcomings. The technique
makes use of both pseudorandom and deterministic testing methods, and is devised
in particular for testing modern systems-on-chip. One of the main contributions
of this thesis is a set of optimization methods to reduce the hybrid test cost
while not sacrificing test quality. We have developed several optimization
algorithms for different hybrid BIST architectures and design constraints. In
addition, we have developed hybrid BIST scheduling methods for an
abort-on-first-fail strategy, and proposed a method for energy reduction for
hybrid BIST.
Devising an efficient BIST approach requires different design modifications,
such as insertion of scan paths as well as test pattern generators and signature
analyzers. These modifications require careful testability analysis of the
original design. In the latter part of this thesis, we propose a novel
hierarchical test generation algorithm that can be used not only for
manufacturing tests but also for testability analysis. We have also investigated
the possibilities of generating test vectors at the early stages of the design
cycle, starting directly from the behavioral description and with limited
knowledge about the final implementation.
Experiments, based on benchmark examples and industrial designs, have been
carried out to demonstrate the usefulness and efficiency of the proposed
methodologies and techniques.
No. 946
INTELLIGENT SEMI-STRUCTURED INFORMATION EXTRACTION: A USER-DRIVEN APPROACH TO
INFORMATION EXTRACTION
Anders Arpteg
The number of domains and tasks where information extraction tools can be used
needs to be increased. One way to reach this goal is to design user-driven
information extraction systems where non-expert users are able to adapt them to
new domains and tasks. It is difficult to design general extraction systems that
do not require expert skills or a large amount of work from the user. Therefore,
it is difficult to increase the number of domains and tasks. A possible
alternative is to design user-driven systems, which solve that problem by
letting a large number of non-expert users adapt the systems themselves. To
accomplish this goal, the systems need to become more intelligent and able to
learn to extract with as little given information as possible.
The type of information extraction system that is in focus for this thesis is
semi-structured information extraction. The term semi-structured refers to
documents that not only contain natural language text but also additional
structural information. The typical application is information extraction from
World Wide Web hypertext documents. By making effective use of not only the link
structure but also the structural information within each such document,
user-driven extraction systems with high performance can be built.
There are two different approaches presented in this thesis to solve the
user-driven extraction problem. The first takes a machine learning approach and
tries to solve the problem using a modified Q(λ) reinforcement learning
algorithm. A problem with the first approach was that it was difficult to handle
extraction from the hidden Web. Since the hidden Web is about 500 times larger
than the visible Web, it would be very useful to be able to extract information
from that part of the Web as well. The second approach is called the hidden
observation approach and tries to also solve the problem of extracting from the
hidden Web. The goal is to have a user-driven information extraction system that
is also able to handle the hidden Web. The second approach uses a large part of
the system developed for the first approach, but the additional information that
is silently obtained from the user presents other problems and possibilities.
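For readers unfamiliar with the algorithm family mentioned above, the sketch
below shows the standard tabular Watkins Q(λ) update with eligibility traces;
the thesis uses a modified variant adapted to the extraction task, and the
environment interface here is hypothetical.

    # Minimal sketch of Watkins's Q(lambda) with eligibility traces.
    # Standard algorithm only; the thesis applies a modified variant.
    # The 'env' interface (reset/actions/step) is hypothetical.
    import random
    from collections import defaultdict

    def q_lambda(env, episodes, alpha=0.1, gamma=0.9, lam=0.8, epsilon=0.1):
        Q = defaultdict(float)                  # Q[(state, action)] -> value

        def eps_greedy(s):
            acts = env.actions(s)
            if random.random() < epsilon:
                return random.choice(acts)
            return max(acts, key=lambda a: Q[(s, a)])

        for _ in range(episodes):
            e = defaultdict(float)              # eligibility traces
            s = env.reset()
            a = eps_greedy(s)
            done = False
            while not done:
                s2, r, done = env.step(s, a)
                if done:
                    delta, a2, a_star = r - Q[(s, a)], None, None
                else:
                    a2 = eps_greedy(s2)
                    a_star = max(env.actions(s2), key=lambda b: Q[(s2, b)])
                    if Q[(s2, a2)] == Q[(s2, a_star)]:
                        a_star = a2             # ties broken towards the action taken
                    delta = r + gamma * Q[(s2, a_star)] - Q[(s, a)]
                e[(s, a)] += 1.0
                for key in list(e):
                    Q[key] += alpha * delta * e[key]
                    # Watkins's variant cuts the traces after exploratory actions
                    e[key] = gamma * lam * e[key] if a2 == a_star else 0.0
                s, a = s2, a2
        return Q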
An agent-oriented system was designed to evaluate the approaches presented in
this thesis. A set of experiments was conducted and the results indicate that a
user-driven information extraction system is possible and no longer just a
concept. However, additional work and research is necessary before a
fully-fledged user-driven system can be designed.
No. 947
CONSTRUCTING ALGORITHMS FOR CONSTRAINT SATISFACTION AND RELATED PROBLEMS
- METHODS AND APPLICATIONS
Ola Angelsmark
In this thesis, we will discuss the construction of algorithms for solving
Constraint Satisfaction Problems (CSPs), and describe two new ways of
approaching them. Both approaches are based on the idea that it is sometimes
faster to solve a large number of restricted problems than a single, large,
problem. One of the strong points of these methods is that the intuition behind
them is fairly simple, which is a definite advantage over many techniques
currently in use.
The first method, the covering method, can be described as follows: we want to
solve a CSP with n variables, each having a domain with d elements. We have
access to an algorithm which solves problems where the domain has size e < d,
and we want to cover the original problem using a number of restricted
instances, in such a way that the solution set is preserved. There are two ways
of doing this, depending on the amount of work we are willing to invest: either
we construct an explicit covering and end up with a deterministic algorithm for
the problem, or we use probabilistic reasoning and end up with a probabilistic
algorithm.
The second method, called the partitioning method, relaxes the demand
on the underlying algorithm. Instead of having a single algorithm for solving
problems with domain less than d, we allow an arbitrary number of them,
each solving the problem for a different domain size. Thus by splitting, or
partitioning, the domain of the large problem, we again solve a large
number of smaller problems before arriving at a solution.
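A naive sketch of the underlying idea (for intuition only; the thesis's
constructions choose the restricted instances far more carefully, and this
enumeration by itself gives no speedup): if every variable's domain is split
into two halves, then solving, with a solver for the smaller domains, the
restricted instance obtained for each choice of halves preserves the full
solution set.

    # Naive illustration of the partitioning idea for CSPs. Each restricted
    # instance keeps one half of every variable's domain; together the
    # instances cover every solution exactly once. 'small_solver' is a
    # hypothetical solver for CSPs whose domains are at most half the
    # original size.
    from itertools import product

    def solve_by_partitioning(variables, domains, constraints, small_solver):
        halves = {v: (domains[v][:max(1, len(domains[v]) // 2)],
                      domains[v][max(1, len(domains[v]) // 2):])
                  for v in variables}
        solutions = []
        for choice in product((0, 1), repeat=len(variables)):
            restricted = {v: halves[v][c] for v, c in zip(variables, choice)}
            if all(restricted.values()):        # skip instances with an empty half
                solutions.extend(small_solver(variables, restricted, constraints))
        return solutions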
Armed with these new techniques, we study a number of different problems: the
decision problems (d,l)-CSP and k-COLOURABILITY,
together with their counting counterparts, as well as the optimisation
problems MAX IND CSP, MAX VALUE CSP, MAX CSP, and MAX HAMMING CSP. Among the
results, we find a very fast, polynomial space algorithm for determining k-colourability
of graphs.
No. 963
UTILITY-BASED OPTIMISATION OF RESOURCE ALLOCATION FOR WIRELESS NETWORKS
Calin Curescu
Having started by providing only voice communication, wireless networks now aim
to provide a wide range of services in which soft real-time, high-priority
critical data, and best-effort connections seamlessly integrate. Some of these
applications and
services have firm resource requirements in order to function properly (e.g.
videoconferences), others are flexible enough to adapt to whatever is available
(e.g. FTP). Providing differentiation and resource assurance is often referred
to as providing quality of service (QoS). In this thesis we study how novel
resource allocation algorithms can improve the offered QoS of dynamic,
unpredictable, and resource constrained distributed systems, such as a wireless
network, during periods of overload.
We propose and evaluate several bandwidth allocation schemes in the context of
cellular, hybrid and pure ad hoc networks. Acceptable quality levels for a
connection are specified using resource-utility functions, and our allocation
aims to maximise accumulated system-wide utility. To keep allocation optimal in
this changing environment, we need to periodically reallocate resources. The
novelty of our approach is that we have augmented the utility function model by
identifying and classifying the way reallocations affect the utility of
different application classes. We modify the initial utility functions at
runtime, such that connections become comparable regardless of their flexibility
to reallocations or age-related importance.
Another contribution is a combined utility/price-based bandwidth allocation
and routing scheme for ad hoc networks. First we cast the problem of utility
maximisation in a linear programming form. Then we propose a novel distributed
allocation algorithm, where every flow bids for resources on the end-to-end path
depending on the resource "shadow price" and the flow's "utility efficiency".
Our periodic (re)allocation algorithms represent an iterative
process that both adapts to changes in the network, and recalculates and
improves the estimation of resource shadow prices.
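In schematic form (generic notation, assuming piecewise-linear utility
functions; this is not the exact formulation used in the thesis), the
centralised problem that the distributed bidding scheme approximates can be
written as

    \max_{x \ge 0} \; \sum_{f \in F} u_f(x_f)
    \quad \text{subject to} \quad
    \sum_{f :\, l \in \mathrm{path}(f)} x_f \;\le\; c_l \qquad \forall l \in L

where x_f is the bandwidth allocated to flow f and c_l is the capacity of link
l; the dual variables of the capacity constraints correspond to the link
"shadow prices" against which each flow bids.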
Finally, problems connected to allocation optimisation, such as modelling
non-critical resources as costs, or using feedback to adapt to uncertainties in
resource usage and availability, are addressed.
No. 972
JOINT CONTROL IN DYNAMIC SITUATIONS
Björn Johansson
This thesis focuses on the cooperative and communicative aspects of control
over dynamic situations such as emergency management and military operations.
Taking a stance in Cognitive Systems Engineering, Decision making and
Communication studies, the role of information systems as tools for
communication in dynamic situations is examined. Three research questions are
addressed: 1) How do new forms of information technology affect joint control
tasks in dynamic situations, and how, if at all, can microworld simulations be
used to investigate this? 2) What characterizes the actual use of information
systems for joint control in dynamic situations? 3) What are the prerequisites
for efficient communication in joint control tasks, especially in dynamic,
high-risk situations?
Four papers are included. A study performed with a microworld simulation
involving military officers as participants is presented, and the method of
using microworlds for investigating the effects of new technology is discussed.
Field observations from an emergency call centre are used to exemplify how
information systems actually are used in a cooperative task. An interview study
with military officers from a UN mission describes the social aspects of
human-human communication in a dynamic, high-risk environment.
Finally, an elaborated perspective on the role of information systems as tools
for communication, and especially the relation between the social,
organisational and technical layers of a joint control activity is presented.
No. 974
AN APPROACH TO DIAGNOSABILITY ANALYSIS FOR INTERACTING FINITE STATE SYSTEMS
Dan Lawesson
Fault isolation is the process of reasoning required to find the cause of a
system failure. In a model-based approach, the available information is a model
of the system and some observations. Using knowledge of how the system generally
behaves, as given in the system model, together with partial observations of the
events of the current situation, the task is to deduce the failure-causing
event(s). In our setting, the observable events manifest themselves in a message
log.
We study post mortem fault isolation for moderately concurrent discrete event
systems where the temporal order of logged messages contains little information.
To carry out fault isolation one has to study the correlation between observed
events and fault events of the system. In general, such study calls for
exploration of the state space of the system, which is exponential in the number
of system components.
Since we are studying a restricted class of all possible systems we may apply
aggressive specialized abstraction policies in order to allow fault isolation
without ever considering the often intractably large state space of the system.
In this thesis we describe a mathematical framework as well as a prototype
implementation and an experimental evaluation of such abstraction techniques.
The method is efficient enough to allow for not only post mortem fault isolation
but also design time diagnosability analysis of the system, which can be seen as
a non-trivial way of analyzing all possible observations of the system versus
the corresponding fault isolation outcome.
No. 979
SECURITY AND TRUST MECHANISMS FOR GROUPS IN DISTRIBUTED SERVICES
Claudiu Duma
Group communication is a fundamental paradigm in modern distributed services,
with applications in domains such as content distribution, distributed games,
and collaborative workspaces. Despite the increasing interest in group-based
services and the latest developments in efficient and reliable multicast, the
secure management of groups remains a major challenge for group communication.
In this thesis we propose security and trust mechanisms for supporting secure
management of groups within the contexts of controlled and of self-organizing
settings.
Controlled groups occur in services, such as multicast software delivery,
where an authority exists that enforces a group membership policy. In this
context we propose a secure group key management approach which assures that
only authorized users can access protected group resources. In order to scale to
large and dynamic groups, the key management scheme must also be efficient.
However, security and efficiency are competing requirements. We address this
issue by proposing two flexible group key management schemes which can be
configured to best meet the security and efficiency requirements of applications
and services. One of the schemes can also be dynamically tuned, at system
runtime, to adapt to possible requirement changes.
Self-organizing groups occur in services, such as those enabled by
peer-to-peer (P2P) and wireless technologies, which adopt a decentralized
architecture. In the context of self-organizing groups, with no authority to
dictate and control the group members' interactions, group members might behave
maliciously and attempt to subvert other members in the group. We address this
problem by proposing a reputation-based trust management approach that enables
group members to distinguish between well-behaving and malicious members.
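To make the idea concrete, the sketch below shows one common reputation-update
scheme, an exponentially weighted average of observed interaction outcomes; it
is illustrative only and is not the trust metric defined in the thesis.

    # Minimal sketch of a reputation value maintained as an exponentially
    # weighted average of interaction outcomes in [0, 1]. Illustrative only;
    # not the trust metric defined in the thesis.
    class Reputation:
        def __init__(self, initial=0.5, weight=0.2):
            self.value = initial      # current trust estimate in [0, 1]
            self.weight = weight      # how strongly new evidence counts

        def update(self, outcome):
            """outcome: 1.0 for a good interaction, 0.0 for a bad one."""
            self.value = (1 - self.weight) * self.value + self.weight * outcome
            return self.value

A member that suddenly turns malicious sees its value decay with every bad
interaction, at a rate set by the weight, which is one simple way of obtaining
resilience to behaviour changes of the kind evaluated below.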
We have evaluated our group key management and trust mechanisms analytically
and through simulation. The evaluation of the group key management schemes shows
cost advantages for rekeying and key storage. The evaluation of the
reputation-based trust management shows that our trust metric is resilient to
group members maliciously changing their behavior and flexible in that it
supports different types of trust dynamics. As a proof of concept, we have
incorporated our trust mechanism into a P2P-based intrusion detection system. The
test results show an increase in system resiliency to attacks.
No. 983
ANALYSIS AND OPTIMISATION OF REAL-TIME SYSTEMS WITH STOCHASTIC BEHAVIOUR
Sorin Manolache
Embedded systems have become indispensable in our lives: household appliances,
cars, airplanes, power plant control systems, medical equipment,
telecommunication systems, and space technology all contain digital computing
systems with dedicated functionality. Most of them, if not all, are real-time
systems, i.e. their responses to stimuli have timeliness constraints.
The timeliness requirement has to be met despite some unpredictable,
stochastic behaviour of the system. In this thesis, we address two causes of
such stochastic behaviour: the application and platform-dependent stochastic
task execution times, and the platform-dependent occurrence of transient faults
on network links in networks-on-chip.
We present three approaches to the analysis of the deadline miss ratio of
applications with stochastic task execution times. Each of the three approaches
fits best to a different context. The first approach is an exact one and is
efficiently applicable to monoprocessor systems. The second approach is an
approximate one, which allows for designer-controlled trade-off between analysis
accuracy and analysis speed. It is efficiently applicable to multiprocessor
systems. The third approach is less accurate but sufficiently fast in order to
be placed inside optimisation loops. Based on the last approach, we propose a
heuristic for task mapping and priority assignment for deadline miss ratio
minimisation.
We make several contributions in the area of buffer- and time-constrained
communication along unreliable on-chip links. First, we introduce the concept of
communication supports, an intelligent combination of spatially and
temporally redundant communication. We provide a method for constructing a
sufficiently varied pool of alternative communication supports for each message.
Second, we propose a heuristic for exploring the space of communication support
candidates such that the task response times are minimised. The resulting time
slack can be exploited by means of voltage and/or frequency scaling for
communication energy reduction. Third, we introduce an algorithm for the
worst-case analysis of the buffer space demand of applications implemented on
networks-on-chip. Last, we propose an algorithm for communication mapping and
packet timing for buffer space demand minimisation.
All our contributions are supported by sets of experimental results obtained
from both synthetic and real-world applications of industrial size.
No. 986
STANDARDS-BASED APPLICATION INTEGRATION FOR BUSINESS-TO-BUSINESS
COMMUNICATIONS
Yuxiao Zhao
Fierce competition and the pursuit of excellence require all applications,
internal or external, to be integrated for business-to-business communications.
Standards play an important role in fulfilling the requirement. This
dissertation focuses on Business-to-Business standards or XML-based frameworks
for Internet commerce.
First, we analyse what the standards are and how they interact. Fifteen
influential standards are selected: eCo Framework, ebXML, UBL, UDDI, SOAP, WSDL,
RosettaNet, Open Applications Group, VoiceXML, cXML, Wf-XML, OFX, ICE, RDF, and
OWL. Basically each standard has to provide transport mechanisms and message
structures for general and/or specific applications. In terms of message
structure definition, they can be classified into two types: syntax-based
standards, which define messages syntactically, and semantics-based standards,
such as RDF and OWL, which define messages semantically.
Second, for the transition from syntax to semantics, we provide a reuse-based
method to develop the ontology by reusing existing XML-based standards. This is
a kind of knowledge reuse, for instance, using previously developed structures,
naming conventions and relationships between concepts.
Third, we explore how to use these standards. We propose an approach that
combines RDF and OWL with SOAP for building semantic Web services. Three levels
of combinations are provided: Simple Reference, Direct Embedment, and Semantic
SOAP.
Fourth, we design an infrastructure aimed at creating a formal channel by
using Web services and semantic Web for establishing buyer awareness, i.e.,
buyers first become aware of products/promotions offered by sellers. We propose
the concept of personal ontology, emphasizing that ontology is not only a common
and shared conceptualization but also can be used to specify personal views of
the products by sellers and buyers. The agreements between buyers and sellers
can be described in XML schema or ontology in OWL. A semantic matchmaking
algorithm is designed and implemented considering synonym, polysemy and partial
matching.
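As a toy illustration of synonym-aware partial matching (not the algorithm
designed in the thesis; the synonym table and scoring below are hypothetical):

    # Toy sketch: score how well a seller's offer matches a buyer's request,
    # both given as sets of terms, allowing synonyms and partial matches.
    # Illustrative only; the thesis defines its own matchmaking algorithm.
    SYNONYMS = {
        "notebook": {"laptop", "notebook"},
        "laptop": {"laptop", "notebook"},
    }

    def expand(term):
        return SYNONYMS.get(term, {term})

    def match_score(request_terms, offer_terms):
        """Fraction of requested terms matched, allowing synonyms."""
        offer_expanded = set().union(*(expand(t) for t in offer_terms))
        matched = sum(1 for t in request_terms if expand(t) & offer_expanded)
        return matched / len(request_terms) if request_terms else 0.0

    # match_score({"laptop", "wireless"}, {"notebook", "wireless", "mouse"}) == 1.0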
No. 1004
ADMISSIBLE HEURISTICS FOR AUTOMATED PLANNING
Patrick Haslum
The problem of domain-independent automated planning has been a topic of
research in Artificial Intelligence since the very beginnings of the field. Due
to the desire not to rely on vast quantities of problem specific knowledge, the
most widely adopted approach to automated planning is search. The topic of this
thesis is the development of methods for achieving effective search control for
domain-independent optimal planning through the construction of admissible
heuristics. The particular planning problem considered is the so called
"classical" AI planning problem, which makes several restricting assumptions.
Optimality with respect to two measures of plan cost are considered: in planning
with additive cost, the cost of a plan is the sum of the costs of the actions
that make up the plan, which are assumed independent, while in planning with
time, the cost of a plan is the total execution time, the makespan, of the
plan. The makespan optimization objective cannot, in general, be formulated as a
sum
of independent action costs and therefore necessitates a problem model slightly
different from the classical one. A further small extension to the classical
model is made with the introduction of two forms of capacitated resources.
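Schematically, and using standard notation rather than the thesis's own, the two
plan-cost measures are

    \mathrm{cost}(\pi) \;=\; \sum_{a \in \pi} c(a)
    \qquad \text{versus} \qquad
    \mathrm{makespan}(\pi) \;=\; \max_{a \in \pi} \bigl( t_{\mathrm{start}}(a) + \mathrm{dur}(a) \bigr)

and only the first decomposes into independent per-action contributions.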
Heuristics are developed mainly for regression planning, but based on principles
general enough that heuristics for other planning search spaces can be derived
on the same basis. The thesis describes a collection of methods, including the
hm, additive hm and improved pattern
database heuristics, and the relaxed search and boosting techniques for
improving heuristics through limited search, and presents two extended
experimental analyses of the developed methods, one comparing heuristics for
planning with additive cost and the other concerning the relaxed search
technique in the context of planning with time, aimed at discovering the
characteristics of problem domains that determine the relative effectiveness of
the compared methods. Results indicate that some plausible such characteristics
have been found, but are not entirely conclusive.
No. 1005
DEVELOPING REUSABLE AND RECONFIGURABLE REAL-TIME SOFTWARE USING ASPECTS AND
COMPONENTS
Aleksandra Tešanovic
Our main focus in this thesis is on providing guidelines, methods, and tools
for design, configuration, and analysis of configurable and reusable real-time
software, developed using a combination of aspect-oriented and component-based
software development. Specifically, we define a reconfigurable real-time
component model (RTCOM) that describes how a real-time component, supporting
aspects and enforcing information hiding, could efficiently be designed and
implemented. In this context, we outline design guidelines for development of
real-time systems using components and aspects, thereby facilitating static
configuration of the system, which is preferred for hard real-time systems. For
soft real-time systems with high availability requirements we provide a method
for dynamic system reconfiguration that is especially suited for
resource-constrained real-time systems and it ensures that components and aspects
can be added, removed, or exchanged in a system at run-time. Satisfaction of
real-time constraints is essential in the real-time domain and, for real-time
systems built of aspects and components, analysis is ensured by: (i) a method
for aspect-level worst-case execution time analysis; (ii) a method for formal
verification of temporal properties of reconfigurable real-time components; and
(iii) a method for maintaining quality of service, i.e., the specified level of
performance, during normal system operation and after dynamic reconfiguration.
We have implemented a tool set with which the designer can efficiently
configure a real-time system to meet functional requirements and analyze it to
ensure that non-functional requirements in terms of temporal constraints and
available memory are satisfied.
In this thesis we present a proof-of-concept implementation of a configurable
embedded real-time database, called COMET. The implementation illustrates how
our methods and tools can be applied, and demonstrates that the proposed
solutions have a positive impact in facilitating efficient development of
families of real-time systems.
No. 1008
ROLE, IDENTITY AND WORK: EXTENDING THE DESIGN AND DEVELOPMENT AGENDA
David Dinka
In order to make technology easier to handle for its users, the field of HCI
(Human-Computer Interaction) has recently often turned to the environment and
the context of use. In this thesis the focus is on the relation between the user
and the technology. More specifically, this thesis explores how roles and
professional identity affect the use and views of the technology used. The
exploration includes two different domains, a clinical setting and a media
production setting, with the focus on the clinical setting. These domains have
strong professional identities in common: neurosurgeons and physicists in the
clinical setting, and journalists in the media setting. The settings also have a
strong technological profile; in the clinical setting the focus has been on a
specific neurosurgical tool called Leksell GammaKnife, and in the journalistic
setting the introduction of new media technology in general has been in focus.
The data collection includes interviews, observations and participatory design
oriented workshops. The data collected were analyzed with qualitative methods
inspired by grounded theory. The work with the Leksell GammaKnife showed that
there were two different approaches towards the work, the tool and development,
depending on the work identity. Depending on whether the user was a neurosurgeon
or a physicist, the definition of the work performed was in line with their
identity, even when the task performed was the same. When it comes to the media
production tool, the focus of the study was a participatory design oriented
development process. The outcome of the process turned out to be oriented
towards the objectives that were in line with the users' identity, more than
with the task that was to be performed. At some level, even the task was defined
from the user identity.
No. 1009
CONTRIBUTIONS TO THE MODELING AND SIMULATION OF MECHANICAL SYSTEMS WITH
DETAILED CONTACT ANALYSIS
Iakov Nakhimovski
The motivation for this thesis was the needs of multibody dynamics simulation
packages focused on detailed contact analysis. The three parts of the thesis
make contributions in three different directions:
Part I summarizes the equations, algorithms and design
decisions necessary for dynamics simulation of flexible bodies with moving
contacts. The assumed general shape function approach is presented. The approach
is expected to be computationally less expensive than FEM approaches and easier
to use than other reduction techniques. Additionally, the described technique
enables studies of the residual stress release during grinding of flexible
bodies. The proposed set of mode shapes was also successfully applied for
modeling of heat flow.
The overall software system design for a flexible multibody simulation system
SKF BEAST (Bearing Simulation Tool) is presented, and the specifics of the
flexible modeling are specifically addressed.
An industrial application example is described. It presents results from a
case where the developed system is used for simulation of flexible ring grinding
with material removal.
Part II is motivated by the need to reduce the computation time. The
availability of new cost-efficient multiprocessor computers triggered the
development of the presented hybrid parallelization framework. The framework
includes a multilevel scheduler implementing a work-stealing strategy and two
feedback-based loop schedulers. The framework is designed to be easily portable
and can be implemented without any system-level coding or compiler
modifications.
Part III is motivated by the need for inter-operation with
other simulation tools. A co-simulation framework based on the Transmission Line
Modeling (TLM) technology was developed. The main contribution here is the
framework design. This includes a communication protocol specially developed to
support coupling of variable time step differential equations solvers.
The framework enables integration of several different simulation components
into a single time-domain simulation with minimal effort from the simulation
component developers. The framework was successfully used for connecting
MSC.ADAMS and SKF BEAST simulation models. Some of the test runs are presented
in the text.
Throughout the thesis the approach was to present a practitioner’s roadmap.
The detailed description of the theoretical results relevant for a real software
implementation is put in focus. The software design decisions are discussed and
the results of real industrial simulations are presented.
No. 1013
EXACT ALGORITHMS FOR EXACT SATISFIABILITY PROBLEMS
Wilhelm Dahllöf
This thesis presents exact means to solve a family of NP-hard problems.
Starting with the well-studied Exact Satisfiability problem (XSAT) parents,
siblings and daughters are derived and studied, each with interesting practical
and theoretical properties. While developing exact algorithms to solve the
problems, we gain new insights into their structure and mutual similarities and
differences.
Given a Boolean formula in CNF, the XSAT problem asks for an assignment to the
variables such that each clause contains exactly one true literal. For this
problem we present an O(1.1730^n) time algorithm, where n is the number of
variables. XSAT is a special case of the General Exact Satisfiability problem,
which asks for an assignment such that in each clause exactly i literals are
true. For this problem we present an algorithm which runs in O(2^((1-ε)n)) time,
with 0 < ε < 1 for every fixed i; for i = 2, 3 and 4 we have running times in
O(1.4511^n), O(1.6214^n) and O(1.6848^n) respectively.
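As a concrete illustration of the XSAT definition above (a brute-force sketch
for intuition only; the algorithms in the thesis are far faster than this 2^n
scan):

    # Brute-force sketch of Exact Satisfiability (XSAT): find assignments under
    # which every clause contains exactly one true literal. Clauses use DIMACS
    # style literals: +i for variable i, -i for its negation.
    from itertools import product

    def xsat_models(n_vars, clauses):
        models = []
        for bits in product((False, True), repeat=n_vars):
            assignment = {i + 1: b for i, b in enumerate(bits)}
            if all(sum(assignment[abs(l)] == (l > 0) for l in clause) == 1
                   for clause in clauses):
                models.append(assignment)
        return models

    # Example: the formula with clauses {x1, x2} and {x2, not x3} has exactly
    # the XSAT models returned by xsat_models(3, [[1, 2], [2, -3]]).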
For the counting problems we present an O(1.2190^n) time algorithm which counts
the number of models for an XSAT instance. We also present algorithms for
#2SAT_w and #3SAT_w, two well studied Boolean problems. The algorithms have
running times in O(1.2561^n) and O(1.6737^n) respectively.
Finally we study optimisation problems: as a variant of the Maximum Exact
Satisfiability problem, consider the problem of finding an assignment exactly
satisfying a maximum number of clauses while the rest are left with no true
literal. This problem is reducible to #2SAT_w without the addition of new
variables and thus is solvable in time O(1.2561^n). Another interesting
optimisation problem is to find two XSAT models which differ in as many
variables as possible. This problem is shown to be solvable in O(1.8348^n) time.
No. 1016
PDEMODELICA - A HIGH-LEVEL LANGUAGE FOR MODELING WITH PARTIAL DIFFERENTIAL
EQUATIONS
Levon Saldamli
This thesis describes work on a new high-level mathematical modeling language
and framework called PDEModelica for modeling with partial differential
equations. It is an extension to the current Modelica modeling language for
object-oriented, equation-based modeling based on differential and algebraic
equations. The language extensions and the framework presented in this thesis
are consistent with the concepts of Modelica while adding support for partial
differential equations and space-distributed variables called fields.
The specification of a partial differential equation problem consists of three
parts:
1) the description of the definition domain, i.e., the geometric region where
the equations are defined,
2) the initial and boundary conditions, and
3) the actual equations. The known and unknown distributed variables in the
equation are represented by field variables in PDEModelica. Domains are defined
by a geometric description of their boundaries. Equations may use the Modelica
derivative operator extended with support for partial derivatives, or vector
differential operators such as divergence and gradient, which can be defined for
general curvilinear coordinates based on coordinate system definitions.
The PDEModelica system also allows the partial differential equation models to
be defined using a coefficient-based approach, where PDE models from a library
are instantiated with different parameter values. Such a library contains both
continuous and discrete representations of the PDE model. The user can
instantiate the continuous parts and define the parameters, and the discrete
parts containing the equations are automatically instantiated and used to solve
the PDE problem numerically.
Compared to most earlier work in the area of mathematical modeling languages
supporting PDEs, this work provides a modern object-oriented component-based
approach to modeling with PDEs, including general support for hierarchical
modeling, and for general, complex geometries. It is possible to separate the
geometry definition from the model definition, which allows geometries to be
defined separately, collected into libraries, and reused in new models. It is
also possible to separate the analytical continuous model description from the
chosen discretization and numerical solution methods. This allows the model
description to be reused, independent of different numerical solution
approaches.
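For intuition about what separating the continuous model from its discretization
buys, consider a deliberately tiny example, unrelated to PDEModelica's own
syntax and solvers: the same continuous model, here the 1-D heat equation
u_t = k u_xx, can be paired with any discretization, such as the explicit
finite-difference step sketched below.

    # Tiny illustration of discretising a continuous PDE model: one explicit
    # finite-difference (Euler) step for the 1-D heat equation u_t = k * u_xx,
    # with fixed boundary values. Unrelated to PDEModelica's implementation.
    def heat_step(u, k, dx, dt):
        r = k * dt / dx ** 2              # stability requires r <= 0.5
        return ([u[0]] +
                [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
                 for i in range(1, len(u) - 1)] +
                [u[-1]])

    u = [0.0, 0.0, 1.0, 0.0, 0.0]         # initial temperature profile
    for _ in range(20):
        u = heat_step(u, k=1.0, dx=1.0, dt=0.4)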
The PDEModelica field concept allows general declaration of spatially
distributed variables. Compared to most other approaches, the field concept
described in this work affords a clearer abstraction and defines a new type of
variable. Arrays of such field variables can be defined in the same way as
arrays of regular, scalar variables. The PDEModelica language supports a clear,
mathematical syntax that can be used both for equations referring to fields and
explicit domain specifications, used for example to specify boundary conditions.
Hierarchical modeling and decomposition is integrated with a general connection
concept, which allows connections between ODE/DAE and PDE based models.
The implementation of a Modelica library needed for PDEModelica and a
prototype implementation of field variables are also described in the thesis.
The PDEModelica library contains internal and external solver implementations,
and uses external software for mesh generation, requisite for numerical solution
of the PDEs. Finally, some examples modeled with PDEModelica and solved using
these implementations are presented.
No. 1017
VERIFICATION OF COMPONENT-BASED EMBEDDED SYSTEM DESIGNS
Daniel Karlsson
Embedded systems are becoming increasingly common in our everyday lives. As
technology progresses, these systems become more and more complex. Designers
handle this increasing complexity by reusing existing components. At the same
time, the systems must fulfill strict functional and non-functional
requirements.
This thesis presents novel and efficient techniques for the verification of
component-based embedded system designs. As a common basis, these techniques
have been developed using a Petri net based modelling approach, called PRES+.
Two complementary problems are addressed: component verification and
integration verification. With component verification the providers verify their
components so that they function correctly if given inputs conforming to the
assumptions imposed by the components on their environment. Two techniques for
component verification are proposed in the thesis.
The first technique enables formal verification of SystemC designs by
translating them into the PRES+ representation. The second technique involves a
simulation based approach into which formal methods are injected to boost
verification efficiency.
Provided that each individual component is verified and is guaranteed to
function correctly, the components are interconnected to form a complete system.
What remains to be verified is the interface logic, also called glue logic, and
the interaction between components.
The glue logic and interfaces cannot be verified in isolation; each must be put
into the context in which it is supposed to work. An appropriate environment
must thus be derived from the components to which the glue logic is connected.
This environment must capture the essential properties of the whole system with
respect to the properties being verified. In this way, both the glue logic and
the interaction of components through the glue logic are verified. The thesis
presents algorithms for automatically creating such environments as well as the
underlying theoretical framework and a step-by-step roadmap on how to apply
these algorithms.
Experimental results have proven the efficiency of the proposed techniques and
demonstrated that it is feasible to apply them on real-life examples.
No. 1018
COMMUNICATION AND NETWORKING TECHNIQUES FOR TRAFFIC SAFETY SYSTEMS
Ioan Chisalita
Accident statistics indicate that every year a significant number of casualties
and extensive property losses occur due to traffic accidents. Consequently,
efforts are directed towards developing passive and active safety systems that
help reduce the severity of crashes, or prevent vehicles from colliding with one
another. To develop these systems, technologies such as sensor systems, computer
vision and vehicular communication have been proposed. Safety vehicular
communication is defined as the exchange of data between vehicles with the goal
of providing in-vehicle safety systems with enough information to permit
detection of traffic dangers. Inter-vehicle communication is a key safety
technology, especially as a complement to other technologies such as radar, as
the information it provides cannot be gathered in any other way. However, due to
the specifics of the traffic environment, the design of efficient safety
communication systems poses a series of major technical challenges.
In this thesis we focus on the design and development of a safety communication
system that provides support for active safety systems such as collision warning
and collision avoidance. We begin by providing a method for designing the
support system for active safety systems. Within our study, we investigate
different safety aspects of traffic situations. For performing traffic
investigations, we have developed ECAM, a temporal reasoning system for modeling
and analyzing accident scenarios. Next, we focus on the communication system
design. We investigate approaches that can be applied to implement safety
vehicular communication, as well as design aspects of such systems, including
networking techniques and transmission procedures. We then propose a new
solution for vehicular communication in the form of a distributed communication
protocol that allows the vehicles to organize themselves in virtual clusters
according to their common interest in traffic safety. To disseminate the
information used for organizing the network and for assessing dangers in
traffic, we develop an anonymous context-based broadcast protocol. This protocol
requires the receivers to determine whether they are the intended destination
for sent messages based on knowledge about their current situation in traffic.
This communication system is then augmented with a reactive operation mode,
where warnings can be issued and forwarded by vehicles. A vehicular
communication platform that provides an implementation framework for the
communication system, and integrates it within a vehicle, is also proposed.
Experiments have been conducted, under various conditions, to test communication
performance and the system's ability to reduce accidents. The results indicate
that the proposed communication system can efficiently provide the exchange
of safety information between vehicles.
No. 1019
THE PUZZLE OF SOCIAL ACTIVITY: THE SIGNIFICANCE OF TOOLS IN COGNITION AND
COOPERATION
Tarja Susi
This dissertation addresses the role of tools in social interactions, or more
precisely the significance of tools in cognition and cooperation, from a
situated cognition perspective. While mainstream cognitive science focuses on
the internal symbolic representations and computational thought processes inside
the heads of individuals, situated cognition approaches instead emphasise the
central role of the interaction between agents and their material and social
environment. This thesis presents a framework regarding tools and (some) of
their roles in social interactions, drawing upon work in cognitive science,
cultural-historical theories, animal tool use, and different perspectives on the
subject-object relationship. The framework integrates interactions between
agents and their environment, or agent-agent-object interaction,
conceptualisations regarding the function of tools, and different ways in which
agents adapt their environments to scaffold individual and social processes. It
also invokes stigmergy (tool mediated indirect interactions) as a mechanism that
relates individual actions and social activity. The framework is illustrated by
two empirical studies that consider tool use from a social interaction
perspective, carried out in settings where tools assume a central role in the
ongoing collaborative work processes; a children’s admission unit in a hospital
and the control room of a grain silo. The empirical studies illustrate
theoretical issues discussed in the background chapters, but also reveal some
unforeseen aspects of tool use. Lastly, the theoretical implications for the
study of individual and social tool use in cognitive science are summarised and
the practical relevance for applications in human-computer interaction and
artificial intelligence is outlined.
No 1021
INTEGRATED OPTIMAL CODE GENERATION FOR DIGITAL SIGNAL PROCESSORS
Andrzej Bednarski
In this thesis we address the problem of optimal code generation for irregular
architectures such as Digital Signal Processors (DSPs).
Code generation consists mainly of three interrelated optimization tasks:
instruction selection (with resource allocation), instruction scheduling and
register allocation. These tasks are known to be NP-hard for most architectures
and most situations. A common approach to code generation consists
in solving each task separately, i.e. in a decoupled manner, which is easier
from a software engineering point of view. Phase-decoupled compilers produce
good code quality for regular architectures, but if applied to DSPs the
resulting code is of significantly lower performance due to strong
interdependences between the different tasks.
We developed a novel method for fully integrated code generation at the basic
block level, based on dynamic programming. It handles the most important tasks
of code generation in a single optimization step and produces an optimal code
sequence. Our dynamic programming algorithm is applicable to small, yet not
trivial problem instances with up to 50 instructions per basic block if data
locality is not an issue, and up to 20 instructions if we take data locality
with optimal scheduling of data transfers on irregular processor architectures
into account. For larger problem instances we have developed heuristic
relaxations.
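As a purely illustrative aside (not the algorithm of the thesis), the Python
sketch below conveys the flavour of such a subset-based dynamic program on a toy
dependency graph, reduced to the single objective of minimising peak register
pressure over all topological orderings of a basic block; the graph and the
names SUCCS and min_pressure are invented for the example.

    from functools import lru_cache

    # Toy dependency graph for one basic block: node -> set of consumers.
    # Both the graph and the single cost (peak register pressure) are invented
    # simplifications; the thesis integrates selection, scheduling and allocation.
    SUCCS = {
        "a": {"c"}, "b": {"c", "d"},
        "c": {"e"}, "d": {"e"}, "e": set(),
    }
    PREDS = {n: {m for m, s in SUCCS.items() if n in s} for n in SUCCS}

    def live_values(scheduled):
        """Number of already computed values still needed by an unscheduled node."""
        return sum(1 for n in scheduled if SUCCS[n] - scheduled)

    @lru_cache(maxsize=None)
    def min_pressure(scheduled):
        """Minimum, over all topological completions, of the peak register pressure."""
        done = set(scheduled)
        if len(done) == len(SUCCS):
            return 0
        ready = [n for n in SUCCS if n not in done and PREDS[n] <= done]
        best = float("inf")
        for n in ready:
            nxt = done | {n}
            best = min(best, max(live_values(nxt), min_pressure(frozenset(nxt))))
        return best

    print(min_pressure(frozenset()))   # 2 for this toy graph

Since the memoised states are subsets of the block's nodes, the work grows as
2^n, which is consistent with the problem sizes quoted above.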
In order to obtain a retargetable framework we developed a structured
architecture specification language, xADML, which is based on XML. We
implemented such a framework, called OPTIMIST, which is parameterized by an
xADML architecture specification.
The thesis further provides an Integer Linear Programming formulation of fully
integrated optimal code generation for VLIW architectures with a homogeneous
register file. Where it terminates
successfully, the ILP-based optimizer mostly works faster than the dynamic
programming approach; on the other hand, it fails for several larger examples
where dynamic programming still provides a solution. Hence, the two approaches
complement each other. In particular, we show how the dynamic programming
approach can be used to precondition the ILP formulation.
As far as we know from the literature, this is the first time that the main
tasks of code generation are solved optimally in a single, fully integrated
optimization step that additionally considers data placement in register sets
and optimal scheduling of data transfers between different register sets.
No 1022
AUTOMATIC PARALLELIZATION OF EQUATION-BASED SIMULATION PROGRAMS
Peter Aronsson
Modern equation-based object-oriented modeling languages which have emerged
during the past decades make it easier to build models of large and complex
systems. The increasing size and complexity of modeled systems requires high
performance execution of the simulation code derived from such models. More
efficient compilation and code optimization techniques can help to some extent.
However, a number of heavy-duty simulation applications require the use of high
performance parallel computers in order to obtain acceptable execution times.
Unfortunately, the possible additional performance offered by parallel computer
architectures requires the simulation program to be expressed in a way that
makes the potential parallelism accessible to the parallel computer. Manual
parallelization of computer programs is generally a tedious and error prone
process. Therefore, it would be very attractive to achieve automatic
parallelization of simulation programs.
This thesis presents solutions to the research problem of finding practically
usable methods for automatic parallelization of simulation codes produced from
models in typical equation-based object-oriented languages. The methods have
been implemented in a tool to automatically translate models in the Modelica
modeling language to parallel codes which can be efficiently executed on
parallel computers. The tool has been evaluated on several application models.
The research problem includes the problem of how to extract a sufficient amount
of parallelism from equations represented in the form of a data dependency graph
(task graph), requiring analysis of the code at a level as detailed as
individual expressions. Moreover, efficient clustering algorithms for building
clusters of tasks from the task graph are also required. One of the major
contributions of this thesis work is a new approach for merging fine-grained
tasks by using a graph rewrite system. Results from using this method show that
it is efficient in merging task graphs, thereby decreasing their size, while
still retaining a reasonable amount of parallelism. Moreover, the new
task-merging approach is generally applicable to programs which can be
represented as static (or almost static) task graphs, not only to code from
equation-based models.
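For illustration only, a graph rewrite rule of the kind referred to above can,
in a much simplified form, merge linear chains of tasks until no rule applies;
the Python sketch below uses an invented task graph and rule, not the rewrite
system of the thesis.

    # Task graph as adjacency sets: task -> set of successor tasks.
    def merge_linear_chains(succs):
        """Repeatedly merge a task with its single successor when that successor has
        no other predecessor, shrinking the graph while keeping all remaining
        precedence constraints intact."""
        succs = {t: set(s) for t, s in succs.items()}
        changed = True
        while changed:
            changed = False
            preds = {t: {u for u, s in succs.items() if t in s} for t in succs}
            for t in list(succs):
                if len(succs.get(t, ())) == 1:
                    (u,) = succs[t]
                    if preds[u] == {t}:              # u depends only on t
                        succs[t] = succs.pop(u)      # fold u into t
                        changed = True
                        break
        return succs

    graph = {"read": {"scale"}, "scale": {"solve"}, "solve": {"out1", "out2"},
             "out1": set(), "out2": set()}
    print(merge_linear_chains(graph))   # e.g. {'read': {'out1', 'out2'}, 'out1': set(), 'out2': set()}

A realistic rule set would additionally weigh communication costs and bound the
size of merged tasks so that enough parallelism is retained.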
An early prototype called DSBPart was developed to perform parallelization of
codes produced by the Dymola tool. The final research prototype is the ModPar
tool which is part of the OpenModelica framework. Results from using the DSBPart
and ModPar tools show that the amount of parallelism of complex models varies
substantially between different application models, and in some cases can
produce reasonable speedups. Also, different optimization techniques used on the
system of equations from a model affect the amount of parallelism of the model
and thus influence how much is gained by parallelization.
No 1030
A MUTATION-BASED FRAMEWORK FOR AUTOMATED TESTING OF TIMELINESS
Robert Nilsson
Timeliness is a property that is unique for real-time systems and
deserves special consideration during both design and testing. A problem when
testing timeliness of dynamic real-time systems is that response times depend on
the execution order of concurrent tasks. Existing testing methods ignore task
interleaving and timing and thus do not help determine what kind of test
cases are meaningful for testing timeliness. This thesis presents several
contributions for automated testing of timeliness for dynamic real-time systems.
In particular, a model-based testing method, founded in mutation testing theory,
is proposed and evaluated for revealing failures arising from timeliness faults.
One contribution in this context is that mutation-based testing and models
developed for generic schedulability analysis can be used to express testing
criteria for timeliness and for automatic generation of test cases. Seven basic
mutation operators are formally defined and validated to represent error types
that can lead to timeliness failures. These operators can subsequently be used
to set up testing criteria. Two approaches for automatically generating test
cases for timeliness are defined. One approach reliably generates test cases
that can distinguish a correct system from a system with a hypothesized
timeliness error. The second approach is designed to be extendible for many
target platforms and has the potential to generate test cases that target errors
in the interaction between real-time control systems and physical processes,
modelled in MATLAB/Simulink. In addition, this thesis outlines a scheme for
prefix-based test case execution and describes how such a method can exploit the
information from mutation-based test case generation to focus testing on the
most relevant scenarios. The contributions outlined above are put into context
by a framework for automated testing of timeliness. This framework specifies
activities, techniques and important issues for conducting mutation-based
timeliness testing -- from system modelling to automated test case execution.
The validation of the proposed testing approaches is done iteratively through
case studies and proof-of-concept implementations. In particular, the
mutation-based testing criteria are validated through a model-checking
experiment. The simulation-based test case generation method is evaluated in a
set of test case generation experiments with different target system
characteristics. These experiments indicate that the approach is applicable for
generating test cases for non-trivial dynamic real-time systems and real-time
control systems with mixed task loads. This was not possible using previously
existing methods due to problems with the size of the reachable state space and
limitations in tool support. Finally, the proposed framework for testing of
timeliness is demonstrated on a small robot control application running on
Linux/RTAI. This case study indicates that the mutation-based test cases, that
are generated using assumptions of the internal structure of the real-time
system, are more effective than both naively constructed stress tests and a
suite of randomly generated test cases that are approximately ten times larger.
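As an illustration of the idea only (not of the formally defined operators), the
Python sketch below applies a hypothetical mutation operator that inflates one
task's execution time and re-runs a textbook response-time analysis for
fixed-priority scheduling; the task set and all numbers are invented. A test
case that kills the mutant would have to provoke the corresponding deadline miss
at run-time.

    import math

    TASKS = [          # (name, worst-case execution time, period = deadline)
        ("sensor", 1, 5),
        ("control", 2, 10),
        ("logger", 3, 20),
    ]                  # listed in priority order (rate monotonic)

    def response_time(tasks, i):
        """Classic response-time iteration for fixed-priority preemptive scheduling."""
        c_i, t_i = tasks[i][1], tasks[i][2]
        r = c_i
        while True:
            interference = sum(math.ceil(r / t_j) * c_j for _, c_j, t_j in tasks[:i])
            nxt = c_i + interference
            if nxt == r:
                return r
            if nxt > t_i:
                return None        # deadline miss possible
            r = nxt

    def schedulable(tasks):
        return all(response_time(tasks, i) is not None for i in range(len(tasks)))

    def mutate_execution_time(tasks, name, factor):
        """Hypothetical mutation operator: inflate one task's execution time."""
        return [(n, c * factor if n == name else c, t) for n, c, t in tasks]

    print(schedulable(TASKS))                                       # True
    print(schedulable(mutate_execution_time(TASKS, "logger", 5)))   # False: mutant can be late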
No 1034
TECHNIQUES FOR AUTOMATIC GENERATION OF TESTS FROM PROGRAMS AND SPECIFICATIONS
Jon Edvardsson
Software testing is complex and time consuming. One way to reduce the effort
associated with testing is to generate test data automatically. This thesis is
divided into three parts. In the first part a mixed-integer constraint solver
developed by Gupta et al. is studied. The solver, referred to as the Unified
Numerical Approach (UNA), is an important part of their generator and it is
responsible for solving equation systems that correspond to the program path
currently under test.
In this thesis it is shown that, in contrast to traditional optimization
methods, the running time of the UNA is not bounded by the size of the equation
system being solved.
Instead, it depends on how the system is composed. That is, even for very simple
systems consisting of one variable we can easily get more than a thousand
iterations. It is also shown that the UNA is not complete, that is, it does not
always find a mixed-integer solution when there is one. It is found that a
better approach is to use a traditional optimization method, like the simplex
method in combination with branch-and-bound and/or a cutting-plane algorithm as
a constraint solver.
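To make the suggested alternative concrete, the sketch below implements a small
branch-and-bound loop on top of a linear-programming solver (here
scipy.optimize.linprog, used only as an example of a simplex-style solver),
searching for a mixed-integer solution to a toy path-constraint system; it is
not the UNA and not a component of the studied generator.

    import math
    from scipy.optimize import linprog

    def branch_and_bound(c, A_ub, b_ub, bounds, integer_vars):
        """Return a solution that is integral on integer_vars, or None if none exists."""
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success:
            return None
        x = res.x
        frac = next((i for i in integer_vars if abs(x[i] - round(x[i])) > 1e-6), None)
        if frac is None:
            return x                               # all integrality requirements met
        lo, hi = bounds[frac]
        down = bounds.copy(); down[frac] = (lo, math.floor(x[frac]))
        up = bounds.copy();   up[frac] = (math.ceil(x[frac]), hi)
        for b in (down, up):                       # depth first: first feasible branch wins
            sol = branch_and_bound(c, A_ub, b_ub, b, integer_vars)
            if sol is not None:
                return sol
        return None

    # Toy path condition: x0 + 2*x1 >= 7 and x0 - x1 <= 3, both variables integer.
    c = [1, 1]                                     # any objective; feasibility is what matters
    A_ub = [[-1, -2], [1, -1]]
    b_ub = [-7, 3]
    bounds = [(0, 10), (0, 10)]
    print(branch_and_bound(c, A_ub, b_ub, bounds, integer_vars=[0, 1]))   # e.g. [1. 3.]

The search stops at the first integral feasible point, which is sufficient for
test-data generation, where feasibility rather than optimality is the goal.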
The second part explores a specification-based approach for generating tests
developed by Meudec. Tests are generated by partitioning the specification input
domain into a set of subdomains using a rule-based automatic partitioning
strategy. An important step of Meudec’s method is to reduce the number of
generated subdomains and find a minimal partition. This thesis shows that
Meudec’s minimal partition algorithm is incorrect. Furthermore, two new
efficient alternative algorithms are developed. In addition, an algorithm for
finding the upper and lower bound on the number of subdomains in a partition is
also presented.
Finally, in the third part, two different designs of automatic testing tools are
studied. The first tool uses a specification as an oracle. The second tool, on
the other hand, uses a reference program. The fault-detection effectiveness of
the tools is evaluated using both randomly and systematically generated inputs.
No 1035
INTEGRATION OF BIOLOGICAL DATA
Vaida Jakoniene
Data integration is an important procedure underlying many research tasks in
the life sciences, as often multiple data sources have to be accessed to collect
the relevant data. The data sources vary in content, data format, and access
methods, which often vastly complicates the data retrieval process. As a result,
the task of retrieving data requires a great deal of effort and expertise on the
part of the user. To alleviate these difficulties, various information
integration systems have been proposed in the area. However, a number of issues
remain unsolved and new integration solutions are needed.
The work presented in this thesis considers data integration at three
different levels. 1) Integration of biological data sources deals with
integrating multiple data sources from an information integration system point
of view. We study properties of biological data sources and existing integration
systems. Based on the study, we formulate requirements for systems integrating
biological data sources. Then, we define a query language that supports queries
commonly used by biologists. Also, we propose a high-level architecture for an
information integration system that meets a selected set of requirements and
that supports the specified query language. 2) Integration of ontologies deals
with finding overlapping information between ontologies. We develop and evaluate
algorithms that use life science literature and take the structure of the
ontologies into account. 3) Grouping of biological data entries deals with
organizing data entries into groups based on the computation of similarity
values between the data entries. We propose a method that covers the main steps
and components involved in similarity-based grouping procedures. The
applicability of the method is illustrated by a number of test cases. Further,
we develop an environment that supports comparison and evaluation of different
grouping strategies.
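Purely as an illustration of the similarity-based grouping idea, the Python
sketch below groups invented data entries whenever a simple string similarity
between their descriptions exceeds a threshold; the entries, the measure and the
threshold are examples, not components of the proposed method.

    from difflib import SequenceMatcher

    ENTRIES = {                         # invented entry ids and descriptions
        "P1": "ATP-binding cassette transporter A1",
        "P2": "ATP-binding cassette transporter A2",
        "P3": "50S ribosomal protein L7",
        "P4": "50S ribosomal protein L12",
    }

    def similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def group_entries(entries, threshold=0.5):
        groups = []                     # list of sets of entry ids
        for eid, text in entries.items():
            linked = [g for g in groups
                      if any(similarity(text, entries[other]) >= threshold for other in g)]
            merged = {eid}.union(*linked) if linked else {eid}
            groups = [g for g in groups if g not in linked] + [merged]
        return groups

    print(group_entries(ENTRIES))       # e.g. [{'P1', 'P2'}, {'P3', 'P4'}]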
No 1045
GENERALIZED HEBBIAN ALGORITHM FOR DIMENSIONALITY REDUCTION IN NATURAL LANGUAGE
PROCESSING
Genevieve Gorrell
The current surge of interest in search and comparison tasks in natural language
processing has brought with it a focus on vector space approaches and vector
space dimensionality reduction techniques. Presenting data as points in
hyperspace provides opportunities to use a variety of well-developed tools
pertinent to this representation. Dimensionality reduction allows data to be
compressed and generalised. Eigen decomposition and related algorithms are one
category of approaches to dimensionality reduction, providing a principled way
to reduce data dimensionality that has time and again shown itself capable of
enabling access to powerful generalisations in the data.
Issues with the approach, however, include computational complexity and
limitations on the size of dataset that can reasonably be processed in this way.
Large datasets are a persistent feature of natural language processing tasks.
This thesis focuses on two main questions. Firstly, in what ways can eigen
decomposition and related techniques be extended to larger datasets?
Secondly, this having been achieved, of what value is the resulting approach to
information retrieval and to statistical language modelling at the n-gram level?
The applicability of eigen decomposition is shown to be extendable through the
use of an extant algorithm, the Generalized Hebbian Algorithm (GHA), and a novel
extension of this algorithm to paired data, the Asymmetric Generalized Hebbian
Algorithm (AGHA). Several original extensions to these algorithms are also
presented, improving their applicability in various domains. The
applicability of GHA to Latent Semantic Analysis-style tasks is investigated.
Finally, AGHA is used to investigate the value of singular value decomposition,
an eigen decomposition variant, to n-gram language
modelling. A sizeable perplexity reduction is demonstrated.
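For readers unfamiliar with GHA, the following Python/NumPy sketch shows the
core of Sanger's update rule on a toy two-dimensional data stream; the data,
learning rate and network size are invented for the example, and none of the
extensions developed in the thesis are included.

    import numpy as np

    def gha(data_stream, dim, k, eta=0.005):
        """Sanger's rule: rows of W converge towards the top-k eigenvectors of the
        data covariance, one small update per observed vector."""
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.1, size=(k, dim))
        for x in data_stream:
            y = W @ x                                   # current component activations
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
        return W

    # Toy demonstration: correlated 2-D data whose principal axis is (1, 1)/sqrt(2).
    rng = np.random.default_rng(1)
    t = rng.normal(size=5000)
    data = np.stack([t + 0.1 * rng.normal(size=5000),
                     t + 0.1 * rng.normal(size=5000)], axis=1)
    W = gha(data, dim=2, k=1)
    print(W[0] / np.linalg.norm(W[0]))   # roughly [0.707, 0.707], up to sign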
No 1051
HAVING A NEW PAIR OF GLASSES - APPLYING SYSTEMATIC ACCIDENT MODELS ON ROAD
SAFETY
Yu-Hsing Huang
The main purpose of the thesis is to discuss the accident models which underlie
accident prevention in general and road safety in particular, and the
consequences that relying on a particular model has for actual preventive work.
The discussion centres on two main topics. The first topic is whether the
underlying accident model, or paradigm, of traditional road safety should be
exchanged for a more complex accident model, and if so, which model(s) are
appropriate. From a discussion of current developments in modern road traffic,
it is concluded that the traditional accident model of road safety needs
replacing. An analysis of three general accident model types shows that the work
of traditional road safety is based on a sequential accident model. Since
research in industrial safety has shown that such models are unsuitable for
complex systems, it needs to be replaced by a systemic model, which better
handles the complex interactions and dependencies of modern road traffic.
The second topic of the thesis is whether the focus of road safety should shift
from accident investigation to accident prediction. Since the goal of accident
prevention is to prevent accidents in the future, its focus should theoretically
be on how accidents will happen rather than on how they did happen. Despite
this, road safety traditionally puts much more emphasis on accident
investigation than prediction, compared to areas such as nuclear power plant
safety and chemical industry safety. It is shown that this bias towards the past
is driven by the underlying sequential accident model. It is also shown that
switching to a systemic accident model would create a more balanced perspective
including both investigations of the past and predictions of the future, which
is seen as necessary to deal with the road safety problems of the future.
In the last chapter, more detailed effects of adopting a systemic perspective are
discussed for four important areas of road safety, i.e. road system modelling,
driver modelling, accident/incident investigations and road safety strategies.
These descriptions contain condensed versions of work which has been done in the
FICA and the AIDE projects, and which can be found in the attached papers.
No 1054
PERCEIVE THOSE THINGS WHICH CANNOT BE SEEN - A COGNITIVE SYSTEMS ENGINEERING
PERSPECTIVE ON REQUIREMENTS MANAGEMENT
Åsa Hedenskog
Non-functional requirements contribute to the overall quality of software,
and should therefore be a part of any development effort. However, in practice
they are often considered to be too difficult to handle.
The purpose of this thesis is to gain understanding of where the nature and
origin of these difficulties may lie. The focus is on a specific type of
non-functional requirements: usability requirements. The basis for the thesis is
two case studies, the results of which are presented herein:
The first case study describes the work environment of radio network
optimizers by presenting needs regarding task performance and the use of
knowledge, qualities of current tools, and the expected qualities of new
technology. The original purpose of this study was to investigate how a higher
level of automation in the software tools used for managing the radio network
would impact the optimizers’ work. As a result of the ethnographical method
used, the first study revealed that there was a body of user requirements that
were not addressed in the tool development.
This led to the second case study, specifically examining the
difficulties of managing usability requirements. The study took place over the
course of two years, at a company that is a large supplier of systems for radio
network control. The purpose was to seek knowledge about the requirements
engineering process at the studied company, in order to better understand the
environment, people and tasks involved in controlling this process. The
motivation for this was to find an answer to the question of why some
requirements are not addressed in the tool development, even though they are
important to the tool users. It was also the ambition to identify and describe
areas in the requirements engineering process that might be improved.
The requirements engineering process was analyzed from a cognitive systems
engineering perspective, which is suitable for analysis and design of complex,
high variety systems, such as the system that controls the requirements
management process.
The result from the second case study is a description of the difficulties of
handling requirements, specifically usability requirements. The impacts of the
process, the organization, and the culture are discussed, as is the overall task
of controlling the requirements engineering process.
The study concludes that:
- The engineering production culture impacts the way non-functional (especially usability) requirements are addressed in software development.
- Lack of knowledge of potential problems with usability requirements can be a state which is maintained by a self-reinforcing process.
- A discrepancy between where responsibility for managing requirements is, and where resources are, can cause problems where usability requirements are concerned.
It was also empirically verified that:
- A cognitive systems engineering approach can be successfully applied to this type of system, and easily incorporates cultural aspects in the analysis.
No 1061
AN EVALUATION PLATFORM FOR SEMANTIC WEB TECHNOLOGY
Cécile Åberg
The vision of the Semantic Web aims at enhancing today's Web in order to
provide a more efficient and reliable environment for both providers and
consumers of Web resources (i.e. information and services). To deploy the
Semantic Web, various technologies have been developed, such as machine
understandable description languages, language parsers, goal matchers, and
resource composition algorithms. Since the Semantic Web is just emerging, each
technology tends to make assumptions about different aspects of the Semantic
Web's architecture and use, such as the kind of applications that will be
deployed, the resource descriptions, the consumers' and providers' requirements,
and the existence and capabilities of other technologies. In order to ensure the
deployment of a robust and useful Semantic Web and the applications that will
rely on it, several aspects of the technologies must be investigated, such as
whether the assumptions made are reasonable, whether the existing technologies
allow construction of a usable Semantic Web, and the systematic identification
of which technology to use when designing new applications.
In this thesis we provide a means of investigating these aspects for service
discovery, which is a critical task in the context of the Semantic Web. We
propose a simulation and evaluation platform for evaluating current and future
Semantic Web technology with different resource sets and consumer and provider
requirements. For this purpose we provide a model to represent the Semantic Web,
a model of the evaluation platform, an implementation of the evaluation platform
as a multi-agent system, and an illustrative use of the platform to evaluate
some service discovery technology in a travel scenario. The implementation of
the platform shows the feasibility of our evaluation approach. We show how the
platform provides a controlled setting to support the systematic identification
of bottlenecks and other challenges for new Semantic Web applications. Finally,
the evaluation shows that the platform can be used to assess technology with
respect to both hardware issues such as the kind and number of computers
involved in a discovery scenario, and other issues such as the evaluation of the
quality of the service discovery result.
No 1073
HANDLING COMBINATORIAL EXPLOSION IN SOFTWARE TESTING
Mats Grindal
In this thesis, the overall conclusion is that combination strategies (i.e.,
test case selection methods that manage the combinatorial explosion of possible
things to test) can improve software testing in most organizations. The
research underlying this thesis emphasizes relevance by working in close
relationship with industry.
Input parameter models of test objects play a crucial role for combination
strategies. These models consist of parameters with corresponding parameter
values and represent the input space and possibly other properties, such as
state, of the test object. Test case selection is then defined as the selection
of combinations of parameter values from these models.
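As a minimal illustration of what a combination strategy computes, the Python
sketch below applies a simple 1-wise (each-choice) strategy to an invented input
parameter model and contrasts its size with exhaustive combination; the model
and the strategy choice are examples only.

    from itertools import product

    MODEL = {                            # invented input parameter model
        "browser":  ["firefox", "chrome"],
        "os":       ["linux", "windows", "macos"],
        "language": ["en", "sv"],
    }

    def each_choice(model):
        """1-wise coverage: every parameter value occurs in at least one test case."""
        longest = max(len(v) for v in model.values())
        return [{p: vals[i % len(vals)] for p, vals in model.items()}
                for i in range(longest)]

    def all_combinations(model):
        """The combinatorial explosion the strategies are meant to avoid."""
        keys = list(model)
        return [dict(zip(keys, combo)) for combo in product(*model.values())]

    print(len(each_choice(MODEL)), "test cases with each-choice")    # 3
    print(len(all_combinations(MODEL)), "test cases exhaustively")   # 12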
This research describes a complete test process, adapted to combination
strategies. Guidelines and step-by-step descriptions of the activities in the
process are included in the presentation. In particular, selection of suitable
combination strategies, input parameter modeling and handling of conflicts in
the input parameter models are addressed. It is also shown that several of the
steps in the test process can be automated.
The test process is validated through a set of experiments and case studies
involving industrial testers as well as actual test problems as they occur in
industry. In conjunction with the validation of the test process, aspects of
applicability of the combination strategy test process (e.g., usability,
scalability and performance) are studied. Identification and discussion of
barriers for the introduction of the combination strategy test process in
industrial projects are also included.
This research also presents a comprehensive survey of existing combination
strategies, complete with classifications and descriptions of their different
properties. Further, this thesis contains a survey of the testing maturity of
twelve software-producing organizations. The data indicate low test maturity in
most of the investigated organizations. Test managers are often aware of this
but have trouble improving. Combination strategies are suitable improvement
enablers, due to their low introduction costs.
No 1075
USABLE SECURITY POLICIES FOR RUNTIME ENVIRONMENTS
Almut Herzog
The runtime environments provided by application-level virtual machines such
as the Java Virtual Machine or the .NET Common Language Runtime are attractive
for Internet application providers because the applications can be deployed on
any platform that supports the target virtual machine. With Internet
applications, organisations as well as end users face the risk of viruses,
trojans, and denial of service attacks. Virtual machine providers are aware of
these Internet security risks and
provide, for example, runtime monitoring of untrusted code and access control
to sensitive resources.
Our work addresses two important security issues in runtime environments. The
first issue concerns resource or release control. While many virtual machines
provide runtime access control to resources, they do not provide any means of
limiting the use of a resource once access is granted; they do not provide
so-called resource control. We have addressed the issue of resource control in
the example of the Java Virtual Machine. In contrast to others' work, our
solution builds on and enhances the existing security architecture. We
demonstrate that resource control permissions for Java-mediated resources can be
integrated into the regular Java security architecture, thus leading to a clean
design and a single external security policy.
The second issue that we address is the usability and security of the set-up
of security policies for runtime environments. Access control decisions are
based on external configuration files, the security policy, which must be set up
by the end user. This set-up is security-critical but also complicated and
error-prone for a lay end user, and supportive, usable tools are so far missing.
After one of our usability studies signalled that offline editing of the
configuration file is inefficient and difficult for end users, we conducted a
usability study of personal firewalls to identify usable ways of setting up a
security policy at runtime. An analysis of general user help techniques together
with the results from the two previous studies resulted in a proposal of design
guidelines for applications that need to set up a security policy. Our
guidelines have been used for the design and implementation of the tool JPerM
that sets the Java security policy at runtime. JPerM was evaluated positively
in a usability study, which supports the validity of our design guidelines.
No 1079
ALGORITHMS, MEASURES, AND UPPER BOUNDS FOR SATISFIABILITY AND RELATED PROBLEMS
Magnus Wahlström
The topic of exact, exponential-time algorithms for NP-hard problems has
received a lot of attention, particularly with the focus of producing algorithms
with stronger theoretical guarantees, e.g. upper bounds on the running time on
the form O(c^n) for some c. Better methods of analysis may have an impact not
only on these bounds, but on the nature of the algorithms as well.
The most classic method of analysis of the running time of DPLL-style
("branching" or "backtracking") recursive algorithms consists of counting the
number of variables that the algorithm removes at every step. Notable
improvements include Kullmann's work on complexity measures, and Eppstein's work
on solving multivariate recurrences through quasiconvex analysis. Still,
one limitation that remains in Eppstein's framework is that it is difficult to
introduce (non-trivial) restrictions on the applicability of a possible
recursion.
We introduce two new kinds of complexity measures, representing two ways to add
such restrictions on applicability to the analysis. In the first measure, the
execution of the algorithm is viewed as moving between a finite set of states
(such as the presence or absence of certain structures or properties), where the
current state decides which branchings are applicable, and each branch of a
branching contains information about the resultant state. In the second measure,
it is instead the relative sizes of the modelled attributes (such as the average
degree or other concepts of density) that control the applicability of
branchings.
We adapt both measures to Eppstein's framework, and use these tools to provide
algorithms with stronger bounds for a number of problems. The problems we treat
are satisfiability for sparse formulae, exact 3-satisfiability, 3-hitting set,
and counting models for 2- and 3-satisfiability formulae, and in every case the
bound we prove is stronger than previously known bounds.
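For orientation, a bare-bones DPLL-style branching algorithm of the kind whose
running time such measures are meant to bound can be sketched in a few lines of
Python; the formula and the naive branching heuristic below are illustrative and
are not the algorithms analysed in the thesis.

    # Clauses are sets of integer literals; a negative literal is a negated variable.
    def dpll(clauses):
        clauses = [set(c) for c in clauses]
        if any(len(c) == 0 for c in clauses):
            return False                       # an empty clause cannot be satisfied
        if not clauses:
            return True                        # no clauses left: satisfiable
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        lit = unit if unit is not None else next(iter(clauses[0]))
        for choice in ([lit] if unit is not None else [lit, -lit]):
            reduced = [c - {-choice} for c in clauses if choice not in c]
            if dpll(reduced):
                return True
        return False

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(dpll([{1, 2}, {-1, 3}, {-2, -3}]))   # True, e.g. x1=True, x3=True, x2=False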
No 1083
DYNAMIC SOFTWARE ARCHITECTURES
Jesper Andersson
Software architecture is a software engineering discipline that provides
notations and processes for high-level partitioning of systems' responsibilities
early in the software design process. This thesis is concerned with a specific
subclass of systems, systems with a dynamic software architecture. They have
practical applications in various domains such as high-availability systems and
ubiquitous computing.
In a dynamic software architecture, the set of architectural elements and the
configuration of these elements may change at run-time. These modifications are
motivated by changed system requirements or by changed execution environments.
The implications of change events may be the addition of new functionality or
re-configuration to meet new Quality of Service requirements.
This thesis investigates new modeling and implementation techniques for dynamic
software architectures. The field of Dynamic Architecture is surveyed and a
common ground defined. We introduce new concepts and techniques that simplify
understanding, modeling, and implementation of systems with a dynamic
architecture, with this common ground as our starting point. In addition, we
investigate practical use and reuse of quality implementations, where a dynamic
software architecture is a fundamental design principle.
The main contributions are a taxonomy, a classification, and a set of
architectural patterns for dynamic software architecture. The taxonomy and
classification support analysis, while the patterns affect design and
implementation work directly. The investigation of practical applications of
dynamic architectures identifies several issues concerned with use and reuse,
and discusses alternatives and solutions where possible.
The results are based on surveys, case studies, and exploratory development of
dynamic software architectures in different application domains using several
approaches. The taxonomy, classification and architecture patterns are evaluated
through several experimental prototypes, among others, a high-performance
scientific computing platform.
No 1086
OBTAINING ACCURATE AND COMPREHENSIBLE DATA MINING MODELS - AN EVOLUTIONARY
APPROACH
Ulf Johansson
When performing predictive data mining, the use of ensembles is claimed
to virtually guarantee increased accuracy compared to the use of single models.
Unfortunately, the problem of how to maximize ensemble accuracy is far from
solved. In particular, the relationship between ensemble diversity and accuracy
is not completely understood, making it hard to efficiently utilize diversity
for ensemble creation. Furthermore, most high-accuracy predictive models are
opaque, i.e. it is not possible for a human to follow and understand the logic
behind a prediction. For some domains, this is unacceptable, since models need
to be comprehensible. To obtain comprehensibility, accuracy is often sacrificed
by using simpler but transparent models; a trade-off termed the accuracy vs.
comprehensibility trade-off. With this trade-off in mind, several researchers
have suggested rule extraction algorithms, where opaque models are transformed
into comprehensible models, keeping an acceptable accuracy.
In this thesis, two novel algorithms based on Genetic Programming are suggested.
The first algorithm (GEMS) is used for ensemble creation, and the second (G-REX)
is used for rule extraction from opaque models. The main property of GEMS is the
ability to combine smaller ensembles and individual models in an almost
arbitrary way. Moreover, GEMS can use base models of any kind and the
optimization function is very flexible, easily permitting inclusion of, for
instance, diversity measures. In the experimentation, GEMS obtained accuracies
higher than both straightforward design choices and published results for Random
Forests and AdaBoost. The key quality of G-REX is the inherent ability to
explicitly control the accuracy vs. comprehensibility trade-off. Compared to the
standard tree inducers C5.0 and CART, and some well-known rule extraction
algorithms, rules extracted by G-REX are significantly more accurate and
compact. Most importantly, G-REX is thoroughly evaluated and found to meet all
relevant evaluation criteria for rule extraction algorithms, thus establishing
G-REX as the algorithm to benchmark against.
No 1089
ANALYSIS AND OPTIMISATION OF DISTRIBUTED EMBEDDED SYSTEMS WITH HETEROGENEOUS
SCHEDULING POLICIES
Traian Pop
The growing amount and diversity of functions to be implemented by the current
and future embedded applications (like for example, in automotive electronics)
have shown that, in many cases, time-triggered and event-triggered functions
have to coexist on the computing nodes and to interact over the communication
infrastructure. When time-triggered and event-triggered activities have to
share the same processing node, a natural way to provide execution support is
through a hierarchical scheduler. Similarly, when such
heterogeneous applications are mapped over a distributed architecture, the
communication infrastructure should allow for message exchange in both
time-triggered and event-triggered manner in order to ensure a straightforward
interconnection of heterogeneous functional components.
This thesis studies aspects related to the analysis and design optimisation
for safety-critical hard real-time applications running on hierarchically
scheduled distributed embedded systems. It first provides the basis for
the timing analysis of the activities in such a system, by carefully taking
into consideration all the interferences that appear at run-time between the
processes executed according to different scheduling policies. Moreover,
due to the distributed nature of the architecture, message delays are also
taken into consideration during the timing analysis. Once the schedulability
analysis has been provided, the entire system can be optimised by adjusting
its configuration parameters. In our work, the entire optimisation process is
directed by the results from the timing analysis, with the goal that in the
end the timing constraints of the application are satisfied.
The analysis and design methodology proposed in the first part of the thesis
is applied next on the particular category of distributed systems that use
FlexRay as a communication protocol. We start by providing a
schedulability analysis for messages transmitted over a FlexRay bus, and
then by proposing a bus access optimisation algorithm that aims at improving
the timing properties of the entire system. Experiments have been carried
out in order to measure the efficiency of the proposed techniques.
No 1091
COMPLEXITY DICHOTOMIES FOR CSP-RELATED PROBLEMS
Gustav Nordh
Ladner's theorem states that if P is not equal to NP, then there are problems
in NP that are neither in P nor NP-complete. CSP(S) is a class of problems
containing many well-studied combinatorial problems in NP. CSP(S) problems are
of the form: given a set of variables constrained by a set of constraints from
the set of allowed constraints S, is there an assignment to the variables
satisfying all constraints? A famous, and in the light of Ladner's theorem,
surprising conjecture states that there is a complexity dichotomy for CSP(S);
that is, for any fixed finite S, the CSP(S) problem is either in P or
NP-complete.
In this thesis we focus on problems expressible in the CSP(S) framework with
different computational goals, such as: counting the number of solutions,
deciding whether two sets of constraints have the same set of solutions,
deciding whether all minimal solutions of a set of constraints satisfy an
additional constraint, etc. By doing so, we capture a host of problems ranging
from fundamental problems in nonmonotonic logics, such as abduction and
circumscription, to problems regarding the equivalence of systems of linear
equations. For several of these classes of problem, we are able to give complete
complexity classifications and rule out the possibility of problems of
intermediate complexity. For example, we prove that the inference problem in
propositional variable circumscription, parameterized by the set of allowed
constraints S, is either in P, coNP-complete, or complete for the second level
of the polynomial hierarchy. As a by-product of these classifications, new
tractable cases and hardness results for well-studied problems are discovered.
The techniques we use to obtain these complexity classifications are to a
large extent based on connections between algebraic clone theory and the
complexity of CSP(S). We are able to extend these powerful algebraic techniques
to several of the problems studied in this thesis. Hence, this thesis also
contributes to the understanding of when these algebraic techniques are
applicable and when they are not.
No 1106
DISCRETE AND CONTINUOUS SHAPE WRITING FOR TEXT ENTRY AND CONTROL
Per Ola Kristensson
Mobile devices gain increasing computational power and storage
capabilities, and there are already mobile phones that can show movies, act as
digital music players and offer full-scale web browsing. Information flow is,
however, limited by the inefficient communication channel between the user and
the small device. The small mobile phone form factor has
proven to be surprisingly difficult to overcome and limited text entry
capabilities are in effect crippling mobile devices’ use experience. The desktop
keyboard is too large for mobile phones, and the keypad too limited. In recent
years, advanced mobile phones have come equipped with touch-screens that enable
new text entry solutions. This dissertation explores how software keyboards on
touch-screens can be improved to provide an efficient and practical text and
command entry experience on mobile devices. The central hypothesis is that it is
possible to combine three elements: software keyboard, language redundancy and
pattern recognition, and create new effective interfaces for text entry and
control. These are collectively called “shape writing” interfaces. Words form
shapes on the software keyboard layout. Users write words by articulating the
shapes for words on the software keyboard. Two classes of shape writing
interfaces are developed and analyzed: discrete and continuous shape writing.
The former recognizes users’ pen or finger tapping motion as discrete patterns
on the touch-screen. The latter recognizes users’ continuous motion patterns.
Experimental results show that novice users can write text with an average entry
rate of 25 wpm and an error rate of 1% after 35 minutes of practice. An
accelerated novice learning experiment shows that users can exactly copy a
single well-practiced phrase with an average entry rate of 46.5 wpm, with
individual phrase entry rate measurements up to 99 wpm. When used as a control
interface, users can send commands to applications 1.6 times faster than using
de-facto standard linear pull-down menus. Visual command preview leads to
significantly fewer errors and shorter gestures for unpracticed commands. Taken
together, the quantitative results show that shape writing is among the fastest
mobile interfaces for text entry and control, both initially and after practice,
that are currently known.
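As an illustration of the underlying pattern-recognition idea (not of the
dissertation's recognizer), the Python sketch below matches a continuous gesture
against word templates formed by the polylines through the letters' key centres,
choosing the word with the smallest average point-to-point distance after
arc-length resampling; the keyboard layout, lexicon and trace are invented.

    import math

    # Toy key layout: letter -> (column, row); real layouts are offset, but any fixed
    # geometry works for the illustration.
    KEYS = {c: (i % 10, i // 10) for i, c in enumerate("qwertyuiopasdfghjklzxcvbnm")}

    def resample(points, n=32):
        """Resample a polyline to n points, evenly spaced by arc length."""
        if len(points) == 1:
            return [points[0]] * n
        cum = [0.0]
        for a, b in zip(points, points[1:]):
            cum.append(cum[-1] + math.dist(a, b))
        total, out, seg = cum[-1] or 1.0, [], 0
        for k in range(n):
            target = total * k / (n - 1)
            while seg < len(points) - 2 and cum[seg + 1] < target:
                seg += 1
            t = (target - cum[seg]) / ((cum[seg + 1] - cum[seg]) or 1.0)
            (ax, ay), (bx, by) = points[seg], points[seg + 1]
            out.append((ax + t * (bx - ax), ay + t * (by - ay)))
        return out

    def recognize(gesture, lexicon):
        g = resample(gesture)
        def cost(word):
            template = resample([KEYS[c] for c in word])
            return sum(math.dist(p, q) for p, q in zip(g, template)) / len(g)
        return min(lexicon, key=cost)

    trace = [(4.1, 0.2), (4.9, 1.1), (2.2, 0.1)]      # sloppy pen trace over t-h-e
    print(recognize(trace, ["the", "and", "hello"]))  # 'the'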
No 1110
ALIGNING BIOMEDICAL ONTOLOGIES
He Tan
The amount of biomedical information that is disseminated over the Web increases
every day. This rich resource is used to find solutions to challenges across the
life sciences. The Semantic Web for life sciences shows promise for effectively
and efficiently locating, integrating, querying and inferring related
information that is needed in daily biomedical research. One of the key
technologies in the Semantic Web is ontologies, which furnish the semantics of
the Semantic Web. A large number of biomedical ontologies have been developed.
Many of these ontologies contain overlapping information, but it is unlikely
that eventually there will be one single set of standard ontologies to which
everyone will conform. Therefore, applications often need to deal with multiple
overlapping ontologies, but the heterogeneity of ontologies hampers
interoperability between different ontologies. Aligning ontologies, i.e.
identifying relationships between different ontologies, aims to overcome this
problem.
A number of ontology alignment systems have been developed. In these systems
various techniques and ideas have been proposed to facilitate identification of
alignments between ontologies. However, there still is a range of issues to be
addressed when we have alignment problems at hand. The work in this thesis
contributes to three different aspects of identification of high quality
alignments: 1) Ontology alignment strategies and systems. We surveyed the
existing ontology alignment systems, and proposed a general ontology alignment
framework. Most existing systems can be seen as instantiations of the framework.
Also, we developed a system for aligning biomedical ontologies (SAMBO) according
to this framework. We implemented various alignment strategies in the system.
2) Evaluation of ontology alignment strategies. We developed and implemented the
KitAMO framework for comparative evaluation of different alignment strategies,
and we evaluated different alignment strategies using the implementation. 3)
Recommending optimal alignment strategies for different applications. We
proposed a method for making recommendations.
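To give a concrete, if much simplified, picture of one alignment strategy, the
Python sketch below suggests mappings between two invented mini-ontologies
whenever a string similarity between concept labels or synonyms exceeds a
threshold; the SAMBO system implements a range of such strategies rather than
this single toy measure.

    from difflib import SequenceMatcher

    # Two invented mini-ontologies: concept label -> list of synonyms.
    ONTOLOGY_A = {"myocardium": ["heart muscle"], "atrium": [], "cardiac valve": ["heart valve"]}
    ONTOLOGY_B = {"heart muscle tissue": [], "heart valve": [], "aorta": []}

    def label_similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def align(onto_a, onto_b, threshold=0.75):
        suggestions = []
        for concept_a, syn_a in onto_a.items():
            for concept_b, syn_b in onto_b.items():
                score = max(label_similarity(x, y)
                            for x in [concept_a] + syn_a for y in [concept_b] + syn_b)
                if score >= threshold:
                    suggestions.append((concept_a, concept_b, round(score, 2)))
        return sorted(suggestions, key=lambda s: -s[2])

    for mapping in align(ONTOLOGY_A, ONTOLOGY_B):
        print(mapping)   # e.g. ('cardiac valve', 'heart valve', 1.0), ('myocardium', ...)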
No 1112
MINDING THE BODY - INTERACTING SOCIALLY THROUGH EMBODIED ACTION
Jessica Lindblom
This dissertation clarifies the role and relevance of the body in social
interaction and cognition from an embodied cognitive science
perspective. Theories of embodied cognition have during the past two decades
offered a radical shift in explanations of the human mind, from traditional
computationalism which considers cognition in terms of internal symbolic
representations and computational processes, to emphasizing the way cognition is
shaped by the body and its sensorimotor interaction with the surrounding social
and material world. This thesis presents a framework for the embodied nature of
social interaction and cognition, which is based on an interdisciplinary
approach that ranges historically in time and across different disciplines. It
includes work in cognitive science, artificial intelligence, phenomenology,
ethology, developmental psychology, neuroscience, social psychology,
communication, gesture studies, and linguistics. The theoretical framework
presents a thorough and integrated understanding that supports and explains the
embodied nature of social interaction and cognition. It claims that embodiment
is part and parcel of social interaction and cognition in the most general
and specific ways, in which dynamically embodied actions themselves have meaning
and agency. The framework is illustrated by empirical work that provides some
detailed observational fieldwork on embodied actions captured in three different
episodes of spontaneous social interaction in situ. Besides illustrating the
theoretical issues discussed in the thesis, the empirical work also reveals some
novel characteristics of embodied action in social interaction and cognition.
Furthermore, the ontogeny of social interaction and cognition is considered from
an embodied perspective, in which social scaffolding and embodied experience
play crucial roles during child development. In addition, the issue of what it
would take for an artificial system to be (socially) embodied is discussed from
the perspectives of cognitive modeling and engineering. Finally, the theoretical
contributions and implications of the study of embodied actions in social
interaction and cognition for cognitive science and related disciplines are
summed up. The practical relevance for applications to artificial intelligence
and human-computer interaction is also outlined as well as some aspects for
future work.
No 1113
DIALOGUE BEHAVIOR MANAGEMENT IN CONVERSATIONAL RECOMMENDER SYSTEMS
Pontus Wärnestål
This thesis examines recommendation dialogue, in the context of dialogue
strategy design for conversational recommender systems. The purpose of a
recommender system is to produce personalized recommendations of potentially
useful items from a large space of possible options. In a conversational
recommender system, this task is approached by utilizing natural language
recommendation dialogue for detecting user preferences, as well as for providing
recommendations. The fundamental idea of a conversational recommender system is
that it relies on dialogue sessions to detect, continuously update, and utilize
the user’s preferences in order to predict potential interest in domain items
modeled in a system. Designing the dialogue strategy management is thus one of
the most important tasks for such systems. Based on empirical studies as well as
design and implementation of conversational recommender systems, a
behavior-based dialogue model called bcorn is presented. bcorn is based on three
constructs, which are presented in the thesis. It utilizes a user preference
modeling framework (preflets) that supports and utilizes natural
language dialogue, and allows for descriptive, comparative, and superlative
preference statements, in various situations. Another component of bcorn is its
message-passing formalism, pcql, which is a notation used when describing
preferential and factual statements and requests. bcorn is designed to be a
generic recommendation dialogue strategy with conventional,
information-providing, and recommendation capabilities, each of which describes
a natural chunk of a recommender agent’s dialogue strategy. These capabilities
are modeled in dialogue behavior diagrams that are run in parallel to give rise
to coherent, flexible, and effective dialogue in conversational recommender
systems.
Three empirical studies have been carried out in order to explore the problem
space of recommendation dialogue, and to verify the solutions put forward in
this work. Study I is a corpus study in the domain of movie recommendations. The
result of the study is a characterization of recommendation dialogue, and forms
a base for a first prototype implementation of a human-computer recommendation
dialogue control strategy. Study II is an end-user evaluation of the acorn
system that implements the dialogue control strategy and results in a
verification of the effectiveness and usability of the dialogue strategy. The
study also yields implications that influence the refinement of the bcorn
dialogue strategy model. Study III is an overhearer evaluation of a
functional conversational recommender system called CoreSong, which implements
the bcorn model. The result of the study is indicative of the soundness of the
behavior-based approach to conversational recommender system design, as well as
the informativeness, naturalness, and coherence of the individual bcorn dialogue
behaviors.
No 1120
MANAGEMENT OF REAL-TIME DATA CONSISTENCY AND TRANSIENT OVERLOADS IN EMBEDDED
SYSTEMS
Thomas Gustafsson
This thesis addresses the issues of data management in embedded systems'
software. The complexity of developing and maintaining software has increased
over the years due to increased availability of resources, e.g., more powerful
CPUs and larger memories, as more functionality can be accommodated using these
resources.
In this thesis, it is proposed that part of the increasing complexity can be
addressed by using a real-time database since data management is one constituent
of software in embedded systems. This thesis investigates which functionality a
real-time database should have in order to be suitable for embedded software
that controls an external environment. We use engine control software as a
case study of an embedded system.
The findings are that a real-time database should have support for keeping data
items up-to-date, providing snapshots of values, i.e., the values are derived
from the same system state, and overload handling. Algorithms are developed for
each one of these functionalities and implemented in a real-time database for
embedded systems. Performance evaluations are conducted using the database
implementation. The evaluations show that the real-time performance is improved
by utilizing the added functionality.
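As a toy illustration of two of these functionalities, the Python sketch below
keeps data items fresh with respect to an absolute validity interval and serves
reads against a common reference time; the item names, intervals and update
functions are invented, and concurrency control is ignored entirely.

    import time

    class Item:
        def __init__(self, name, avi, recompute):
            self.name, self.avi, self.recompute = name, avi, recompute
            self.value, self.timestamp = None, float("-inf")

    class RTDatabase:
        def __init__(self, items):
            self.items = {i.name: i for i in items}

        def read(self, name, now=None):
            now = time.monotonic() if now is None else now
            item = self.items[name]
            if now - item.timestamp > item.avi:        # stale: update on demand
                item.value, item.timestamp = item.recompute(), now
            return item.value

        def snapshot(self, names):
            """Read a set of items against one common reference time, so each value is
            either freshly recomputed now or still within its validity interval."""
            now = time.monotonic()
            return {n: self.read(n, now) for n in names}

    db = RTDatabase([Item("engine_speed", avi=0.005, recompute=lambda: 2500),
                     Item("fuel_rate", avi=0.050, recompute=lambda: 12.5)])
    print(db.snapshot(["engine_speed", "fuel_rate"]))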
Moreover, two algorithms for examining whether the system may become overloaded
are also outlined; one algorithm for off-line use and the second algorithm
for on-line use. Evaluations show the algorithms are accurate and fast and can
be used for embedded systems.
No 1127
ENERGY EFFICIENT AND PREDICTABLE DESIGN OF REAL-TIME EMBEDDED SYSTEMS
Alexandru Andrei
This thesis addresses several issues related to the design and optimization of
embedded systems. In particular, in the context of time-constrained embedded
systems, the thesis investigates two problems: the minimization of the energy
consumption and the implementation of predictable applications on multiprocessor
system-on-chip platforms.
Power consumption is one of the most limiting factors in electronic systems
today. Two techniques that have been shown to reduce the power consumption
effectively are dynamic voltage selection and adaptive body biasing. The
reduction is achieved by dynamically adjusting the voltage and performance
settings according to the application needs. Energy minimization is addressed
using both offline and online optimization approaches. Offline, we solve
optimally the combined supply voltage and body bias selection problem for
multiprocessor systems with imposed time constraints, explicitly taking into
account the transition overheads implied by changing voltage levels. The voltage
selection technique is applied not only to processors, but also to buses. While
the methods mentioned above minimize the active energy, we propose an approach
that combines voltage selection and processor shutdown in order to optimize the
total energy. In order to take full advantage of slack that arises from
variations in the execution time, it is important to recalculate the voltage and
performance settings during run-time, i.e., online. This, however, is
computationally expensive. To overcome the online complexity, we propose a
quasi-static voltage selection scheme, with a constant online time.
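The following Python sketch illustrates the basic shape of the offline voltage
selection problem on invented numbers: each task is assigned one of a few
discrete voltage/frequency levels so that a deadline is met while the dynamic
energy, roughly proportional to the square of the supply voltage per cycle, plus
transition overheads is minimised. The thesis addresses the combined supply
voltage and body bias problem, which this brute-force toy does not capture.

    from itertools import product

    LEVELS = [(1.8, 600), (1.2, 400), (0.9, 250)]   # (Vdd [V], frequency [MHz])
    TASKS = [200_000, 150_000, 300_000]             # worst-case cycles per task
    CEFF = 1e-9                                     # effective switched capacitance [F]
    TRANSITION = (10e-6, 5e-6)                      # (time [s], energy [J]) per level change
    DEADLINE = 2.2e-3                               # seconds

    def cost(assignment):
        exec_time = sum(cyc / (f * 1e6) for cyc, (_, f) in zip(TASKS, assignment))
        energy = sum(CEFF * v * v * cyc for cyc, (v, _) in zip(TASKS, assignment))
        switches = sum(1 for a, b in zip(assignment, assignment[1:]) if a != b)
        return exec_time + switches * TRANSITION[0], energy + switches * TRANSITION[1]

    best = min((a for a in product(LEVELS, repeat=len(TASKS)) if cost(a)[0] <= DEADLINE),
               key=lambda a: cost(a)[1])
    print([f"{v}V@{f}MHz" for v, f in best], f"energy = {cost(best)[1] * 1e3:.3f} mJ")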
Worst-case execution time (WCET) analysis and, in general, the predictability of
real-time applications implemented on multiprocessor systems has been addressed
only in very restrictive and particular contexts. One important aspect that
makes the analysis difficult is the estimation of the system's communication
behavior. The traffic on the bus does not solely originate from data transfers
due to data dependencies between tasks, but is also affected by memory transfers
as result of cache misses. As opposed to the analysis performed for a single
processor system, where the cache miss penalty is constant, in a multiprocessor
system each cache miss has a variable penalty, depending on the bus contention.
In this context, we propose an approach to worst-case execution time analysis
and system scheduling for real-time applications implemented on multiprocessor
SoC architectures.
No 1139
ELICITING KNOWLEDGE FROM EXPERTS IN MODELING OF COMPLEX SYSTEMS: MANAGING
VARIATION AND INTERACTIONS
Per Wikberg
The thematic core of the thesis is about how to manage modeling
procedures in real settings. The view taken in this thesis is that modeling is a
heuristic tool to outline a problem, often conducted in a context of a larger
development process. Examples of applications, in which modeling are used,
include development of software and business solutions, design of experiments
etc. As modeling often is used in the initial phase of such processes, then
there is every possibility of failure, if initial models are false or
inaccurate. Modeling often calls for eliciting knowledge from experts. Access to
relevant expertise is limited, and consequently, efficient use of time and
sampling of experts is crucial. The process is highly interactive, and data are
often of qualitative nature rather than quantitative. Data from different
experts often vary, even if the task is to describe the same phenomenon. As with
quantitative data, this variation between data sources can be treated as a
source of error as well as a source of information. Irrespective of specific
modeling technique, variation and interaction during the model development
process should be possible to characterize in order to estimate the elicited
knowledge in terms of correctness and comprehensiveness. The aim of the thesis
is to explore a methodological approach on how to manage such variations and
interactions. Analytical methods tailored for this purpose have the potential to
impact on the quality of modeling in the fields of application. Three studies
have been conducted, in which principles for eliciting, controlling, and judging
the modeling procedures were explored. The first one addressed the problem of
how to characterize and handle qualitative variations between different experts,
describing the same modeling object. The judgment approach, based on a
subjective comparison between different expert descriptions, was contrasted with
a criterion-based approach, using a predefined structure to explicitly estimate
the degree of agreement. The results showed that much of the basis for the
amalgamation of models used in the judgment-approach was concealed, even if a
structured method was used to elicit the criteria for the independent experts’
judgment. In contrast, by using the criterion-based approach the nature of the
variation was possible to characterize explicitly. In the second study, the same
approach was used to characterize variation between, as well as within,
different modeling objects, analogous to a one-way statistical analysis of
variance. The results of the criterion-based approach indicated a substantial
difference between the two modeling subjects. Variances within each of the
modeling tasks were about the same and lower than the variance between modeling
tasks. The result supports the findings from the first study and indicates that
the approach can be generalized as a way of comparing modeling tasks. The third
study addressed the problem of how to manage the interaction between experts in
team modeling. The aim was to explore the usability of an analytical method with
on-line monitoring of the team communication. Could the basic factors of task,
participants, knowledge domains, communication form, and time be used to
characterize and manipulate team modeling? Two contrasting case studies of team
modeling were conducted. The results indicated that the taxonomy of the
suggested analytical method was sensitive enough to capture the distinctive
communication patterns for the given task conditions. The results also indicate
that an analytical approach can be based on the relatively straightforward task
of counting occurrences, instead of the relatively more complex task of
establishing sequences of occurrence.
No 1143
QOS CONTROL OF REAL-TIME DATA SERVICES UNDER UNCERTAIN WORKLOAD
Mehdi Amirijoo
Real-time systems comprise computers that must generate correct
results in a timely manner. This involves a wide spectrum of computing systems
found in our everyday life ranging from computers in rockets to our mobile
phones. The criticality of producing timely results defines the different types
of real-time systems. On one hand, we have the so-called hard real-time systems,
where failing to meet deadlines may result in a catastrophe. In this thesis we
are, however, concerned with firm and soft real-time systems, where missing
deadlines is acceptable at the expense of degraded system performance. The usage
of firm and soft real-time systems has increased rapidly during the last years,
mainly due to the advent of applications in multimedia, telecommunication, and
e-commerce. These systems are typically data-intensive, with the data normally
spanning from low-level control data, typically acquired from sensors, to
high-level management and business data. In contrast to hard real-time systems,
the environments in which firm and soft real-time systems operate are
typically open and highly unpredictable. For example, the workload applied on a
web server or base station in telecommunication systems varies according to the
needs of the users, which is hard to foresee. In this thesis we are concerned
with quality of service (QoS) management of data services for firm and soft
real-time systems. The approaches and solutions presented aim at providing a
general understanding of how the QoS can be guaranteed according to a given
specification, even if the workload varies unpredictably. The QoS specification
determines the desired QoS during normal system operation, and the worst-case
system performance and convergence rate toward the desired setting in the face
of transient overloads. Feedback control theory is used to control QoS since
little is known about the workload applied on the system. Using feedback
control, the difference between the measured QoS and the desired QoS is formed and fed
into a controller, which computes a change to the operation of the real-time
system. Experimental evaluation shows that using feedback control is highly
effective in managing QoS such that a given QoS specification is satisfied. This
is a key step toward automatic management of intricate systems providing
real-time data services.
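The feedback loop described above can be illustrated with a minimal sketch (in Python, with illustrative names, gains, and a toy simulated workload; this is not the controller design used in the thesis): the error between the desired and the measured QoS, here a deadline miss ratio, drives a proportional-integral controller that adjusts how much load the data service admits.

    import random

    class PIController:
        """Proportional-integral controller acting on the QoS error signal."""
        def __init__(self, kp, ki):
            self.kp, self.ki = kp, ki
            self.integral = 0.0

        def control(self, desired, measured):
            error = desired - measured      # negative when too many deadlines are missed
            self.integral += error
            return self.kp * error + self.ki * self.integral

    def measured_miss_ratio(admitted_load):
        """Toy plant: more admitted load leads to more missed deadlines (plus noise)."""
        return max(0.0, 0.4 * admitted_load - 0.15 + random.uniform(-0.02, 0.02))

    controller = PIController(kp=0.5, ki=0.1)
    admitted_load = 1.0                     # fraction of arriving transactions admitted
    for period in range(50):
        miss_ratio = measured_miss_ratio(admitted_load)
        delta = controller.control(desired=0.05, measured=miss_ratio)
        admitted_load = max(0.0, min(1.0, admitted_load + delta))

    print(f"steady-state admitted load: {admitted_load:.2f}")

In this toy loop the admitted load settles near the value where the simulated miss ratio meets the desired 5 per cent, which is the kind of convergence toward a QoS specification that the abstract describes.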
No 1150
OPTIMISTIC REPLICATION WITH FORWARD CONFLICT RESOLUTION IN DISTRIBUTED
REAL-TIME DATABASES
Sanny Syberfeldt
In this thesis a replication protocol - PRiDe - is presented, which supports
optimistic replication in distributed real-time databases with deterministic
detection and forward resolution of transaction conflicts. The protocol is
designed to emphasize node autonomy, allowing individual applications to proceed
without being affected by distributed operation. For conflict management, PRiDe
groups distributed operations into generations of logically concurrent and
potentially conflicting operations. Conflicts between operations in a generation
can be resolved with no need for coordination among nodes, and it is shown that
nodes eventually converge to mutually consistent states. A generic framework for
conflict resolution is presented that allows semantics-based conflict resolution
policies and application-specific compensation procedures to be plugged in by
the database designer and application developer.
It is explained how transaction semantics are supported by the protocol, and how
applications can tolerate exposure to temporary database inconsistencies.
Transactions can detect inconsistent reads and compensate for inconsistencies
through callbacks to application-specific compensation procedures. A tool -
VADer - has been constructed, which allows database designers and application
programmers to quickly construct prototype applications, conflict resolution
policies and compensation procedures. VADer can be used to simulate application
and database behavior, and supports run-time visualization of relationships
between concurrent transactions. Thus, VADer assists the application programmer
in conquering the complexity inherent in optimistic replication and forward
conflict resolution.
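A minimal sketch of the generation-based idea, assuming a simple value-write data model (the operation structure, ordering rule, and policy interface are illustrative and not PRiDe's actual API): every replica resolves a generation of conflicting operations in the same deterministic order with a pluggable, semantics-based policy, so all nodes converge without coordination.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Op:
        node_id: int       # node that issued the operation
        obj: str           # replicated data object
        value: float       # proposed new value

    def resolve_generation(ops, policy):
        """Deterministically resolve one generation of conflicting operations.
        Sorting by node id makes every replica apply the policy in the same
        order, so all nodes reach the same value without coordination."""
        ordered = sorted(ops, key=lambda op: op.node_id)
        result = ordered[0].value
        for op in ordered[1:]:
            result = policy(result, op.value)
        return result

    # Example: a semantics-based policy for a sensor reading where the
    # highest reported value wins.
    generation = [Op(2, "temp", 21.5), Op(1, "temp", 22.0), Op(3, "temp", 20.8)]
    print(resolve_generation(generation, policy=max))   # -> 22.0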
No 1155
ENVISIONING A FUTURE DECISION SUPPORT SYSTEM FOR REQUIREMENTS ENGINEERING: A
HOLISTIC AND HUMAN-CENTRED PERSPECTIVE
Beatrice Alenljung
Complex decision-making is a prominent aspect of requirements engineering (RE)
and the need for improved decision support for RE decision-makers has been
identified by a number of authors in the research literature. The fundamental
viewpoint that permeates this thesis is that RE decision-making can be
substantially improved by RE decision support systems (REDSS) based on the
actual needs of RE decision-makers as well as the actual generic human
decision-making activities that take place in the RE decision processes. Thus, a
first step toward better decision support in requirements engineering is to
understand complex decision situations of decision-makers. In order to gain a
holistic view of the decision situation from a decision-maker’s perspective, a
decision situation framework has been created. The framework evolved through an
analysis of decision support systems literature and decision-making theories.
The decision situations of RE decision-makers have been studied at a systems
engineering company and are depicted in this thesis. These situations are
described in terms of, for example, RE decision matters, RE decision-making
activities, and RE decision processes. Factors that affect RE decision-makers
are also identified. Each factor consists of problems and difficulties. Based on
the empirical findings, a number of desirable characteristics of a visionary
REDSS are suggested. Examples of characteristics are to reduce the cognitive
load, to support creativity and idea generation, and to support decision
communication. One or more guiding principles are proposed for each
characteristic and available techniques are described. The purpose of the
principles and techniques is to direct further efforts concerning how to find a
solution that can fulfil the characteristic. Our contributions are intended to
serve as a road map that can direct the efforts of researchers addressing RE
decision-making and RE decision support problems. Our intention is to widen the
scope and provide new lines of thought about how decision-making in RE can be
supported and improved.
No 1156
TYPES FOR XML WITH APPLICATION TO XCERPT
Artur Wilk
XML data is often accompanied by type information, usually expressed by some
schema language. Sometimes XML data can be related to ontologies defining
classes of objects; such classes can also be interpreted as types. Type systems
have proved to be extremely useful in programming languages, for instance to
automatically discover certain kinds of errors. This thesis deals with the XML
query language Xcerpt, which originally has neither an underlying type system
nor any provision for taking advantage of existing type information. We provide
a type system for Xcerpt; it makes type inference and type-correctness checking
possible.
The system is descriptive: the types associated with Xcerpt constructs are
sets of data terms and approximate the semantics of the constructs. A formalism
of Type Definitions is adapted to specify such sets. The formalism may be seen
as a simplification and abstraction of XML schema languages. The type inference
method, which is the core of this work, may be seen as abstract interpretation.
A non-standard way of assuring termination of fixed-point computations is
proposed, as standard approaches are too inefficient. The method is proved
correct wrt. the formal semantics of Xcerpt.
We also present a method for type checking of programs. A success of type
checking implies that the program is correct wrt. its type specification. This
means that the program produces results of the specified type whenever it is
applied to data of the given type. On the other hand, a failure of type checking
suggests that the program may be incorrect. Under certain conditions (on the
program and on the type specification), the program is actually incorrect
whenever the proof attempt fails.
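A toy illustration of the descriptive view of types as sets of data terms, assuming finite sets and a hypothetical one-line transformation (real Xcerpt types are specified by Type Definitions, not enumerated): a program is type correct with respect to an input and a result type if it maps every term of the input type to a term of the result type.

    def type_correct(program, t_in, t_out):
        """A 'type' here is just a finite set of data terms."""
        return all(program(term) in t_out for term in t_in)

    T_IN = {"<a/>", "<b/>"}                      # input type: a set of (toy) data terms
    T_OUT = {"<r><a/></r>", "<r><b/></r>"}       # specified result type

    def wrap(term):
        return "<r>" + term + "</r>"             # a toy query/transformation

    print(type_correct(wrap, T_IN, T_OUT))       # -> True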
A prototype implementation of the type system has been developed and
usefulness of the approach is illustrated on example programs.
In addition, the thesis outlines the possibility of employing semantic types
(ontologies) in Xcerpt. Introducing ontology classes into Type Definitions makes
it possible to discover errors related to the semantics of data queried by
Xcerpt. We also extend Xcerpt with a mechanism of combining XML queries with
ontology queries. The approach employs an existing Xcerpt engine and an ontology
reasoner; no modifications are required.
No 1183
INTEGRATED MODEL-DRIVEN DEVELOPMENT ENVIRONMENTS FOR EQUATION-BASED
OBJECT-ORIENTED LANGUAGES
Adrian Pop
Integrated development environments are essential for efficient realization of
complex industrial products, typically consisting of both software and hardware
components. Powerful equation-based object-oriented (EOO) languages such as
Modelica are successfully used for modeling and virtual prototyping increasingly
complex physical systems and components, whereas software modeling approaches
like UML, especially in the form of domain specific language subsets, are
increasingly used for software systems modeling.
A research hypothesis investigated to some extent in this thesis is whether EOO
languages can be successfully generalized to also support software modeling,
thus addressing whole product modeling, and whether integrated environments for such
a generalized EOO language tool support can be created and effectively used on
real-sized applications.
However, creating advanced development environments is still a
resource-consuming, error-prone process that is largely manual. One rather
successful approach is to have a general framework kernel, and use meta-modeling
and meta-programming techniques to provide tool support for specific languages.
Thus, the main goal of this research is the development of a meta-modeling
approach and its associated meta-programming methods for the synthesis of
model-driven product development environments that include support for modeling
and simulation. Such environments include components like model editors,
compilers, debuggers and simulators. This thesis presents several contributions
towards this vision in the context of EOO languages, primarily the Modelica
language.
Existing state-of-the-art tools supporting EOO languages typically do not
satisfy all user requirements with regard to analysis, management, querying,
transformation, and configuration of models. Moreover, tools such as
model-compilers tend to become large and monolithic. If instead it would be
possible to model desired tool extensions with meta-modeling and
meta-programming, within the application models themselves, the kernel tool
could be made smaller, and better extensibility, modularity and flexibility
could be achieved.
We argue that such user requirements could be satisfied if the equation-based
object-oriented languages are extended with meta-modeling and meta-programming.
This thesis presents a new language that unifies EOO languages with term pattern
matching and transformation typically found in functional and logic programming
languages. The development, implementation, and performance of the unified
language are also presented.
The increased ease of use, the high abstraction, and the expressivity of the
unified language are very attractive properties. However, these properties come
with the drawback that programming and modeling errors are often hard to find.
To overcome these issues, several methods and integrated frameworks for run-time
debugging of the unified language have been designed, analyzed, implemented, and
evaluated on non-trivial industrial applications.
To fully support development using the unified language, an integrated
model-driven development environment based on the Eclipse platform is proposed,
designed, implemented, and used extensively. The development environment
integrates advanced textual modeling, code browsing, debugging, etc. Graphical
modeling is also supported by the development environment based on a proposed
ModelicaML Modelica/UML/SysML profile. Finally, serialization, composition, and
transformation operations on models are investigated.
No 1185
GIFTING TECHNOLOGIES - ETHNOGRAPHIC STUDIES OF END-USERS AND SOCIAL MEDIA
SHARING
Jörgen Skågeby
This thesis explores which dimensions can be used to describe and compare
the sociotechnical practice of content contribution in online sharing networks.
Data was collected through online ethnographical methods, focusing on end-users
in three large media sharing networks. The method includes forum message
elicitation, online interviews, and application use and observation. Gift-giving
was used as an applied theoretical framework and the data was analyzed by
theory-informed thematic analysis. The results of the analysis recount four
interrelated themes: what kind of content is given; to whom is it given; how is
it given; and why is it given? The five papers in this thesis cover the four
themes accordingly: Paper I presents the research area and proposes some initial
gifting dimensions that are developed over the following papers. Paper II
proposes a model for identifying conflicts of interest that arise for end-users
when considering different types of potential receivers. Paper III presents five
analytical dimensions for representing how online content is given. The
dimensions are: direction (private-public); identification
(anonymous-identified); initiative (active-passive); incentive
(voluntary-enforced); and limitation (open-restricted). Paper IV investigates
photo-sharing practices and reveals how social metadata, attached to media
objects, are included in sharing practices. The final paper further explores how
end-users draw on social metadata to communicate bonding intentions when gifting
media content. A general methodological contribution is the utilization of
sociotechnical conflicts as units of analysis. These conflicts prove helpful in
predicting, postulating and researching end-user innovation and conflict
coordination. It is suggested that the conflicts also provide potent ways for
interaction design and systems development to take end-user concerns and
intentions on board.
No 1187
ANALYTICAL TOOLS AND INFORMATION-SHARING METHODS SUPPORTING ROAD SAFETY
ORGANIZATIONS
Imad-Eldin Ali Abugessais
A prerequisite for improving road safety is reliable and consistent sources
of information about traffic and accidents, which will help assess the
prevailing situation and give a good indication of accident severity. In many
countries there is under-reporting of road accidents, deaths and injuries, no
collection of data at all, or low quality of information. Potential knowledge is
hidden, due to the large accumulation of traffic and accident data. This limits
the investigative tasks of road safety experts and thus decreases the
utilization of databases. All these factors can have serious effects on the
analysis of the road safety situation, as well as on the results of the
analyses.
This dissertation presents a three-tiered conceptual model to support the
sharing of road safety–related information and a set of applications and
analysis tools. The overall aim of the research is to build and maintain an
information-sharing platform, and to construct mechanisms that can support road
safety professionals and researchers in their efforts to prevent road accidents.
GLOBESAFE is a platform for information sharing among road safety organizations
in different countries developed during this research.
Several approaches were used. First, requirements elicitation methods were used
to identify the exact requirements of the platform. This helped in developing a
conceptual model, a common vocabulary, a set of applications, and various access
modes to the system. The implementation of the requirements was based on
iterative prototyping. Usability methods were introduced to evaluate the users’
interaction satisfaction with the system and the various tools. Second, a
system-thinking approach and a technology acceptance model were used in the
study of the Swedish traffic data acquisition system. Finally, visual data
mining methods were introduced as a novel approach to discovering hidden
knowledge and relationships in road traffic and accident databases. The results
from these studies have been reported in several scientific articles.
No 1204
A REPRESENTATION SCHEME FOR DESCRIPTION AND RECONSTRUCTION OF OBJECT
CONFIGURATIONS BASED ON QUALITATIVE RELATIONS
H. Joe Steinhauer
One reason Qualitative Spatial Reasoning (QSR) is becoming increasingly
important to Artificial Intelligence (AI) is the need for a smooth ‘human-like’
communication between autonomous agents and people. The selected, yet general,
task motivating the work presented here is the scenario of an object
configuration that has to be described by an observer on the ground using only
relational object positions. The description provided should enable a second
agent to create a map-like picture of the described configuration in order to
recognize the configuration on a representation from the survey perspective, for
instance on a geographic map or in the landscape itself while observing it from
an aerial vehicle. Either agent might be an autonomous system or a person.
Therefore, the particular focus of this work lies on the necessity to develop
description and reconstruction methods that are cognitively easy to apply for a
person.
This thesis presents the representation scheme QuaDRO (Qualitative Description
and Reconstruction of Object configurations). Its main contributions are a
specification and qualitative classification of information available from
different local viewpoints into nine qualitative equivalence classes. This
classification allows the preservation of information needed for reconstruction
into a global frame of reference. The reconstruction takes place in an underlying
qualitative grid with adjustable granularity. A novel approach for representing
objects of eight different orientations by two different frames of reference is
used. A substantial contribution to alleviate the reconstruction process is that
new objects can be inserted anywhere within the reconstruction without the need
for backtracking or re-reconstruction. In addition, an approach to reconstruct
configurations from underspecified descriptions using conceptual
neighbourhood-based reasoning and coarse object relations is presented.
No 1222
TEST OPTIMIZATION FOR CORE-BASED SYSTEM-ON-CHIP
Anders Larsson
Semiconductor technology has enabled the fabrication of integrated
circuits (ICs), which may include billions of transistors and can contain all
necessary electronic circuitry for a complete system, a so-called System-on-Chip
(SOC). In order to handle design complexity and to meet short time-to-market
requirements, it is increasingly common to make use of a modular design approach
where an SOC is composed of pre-designed and pre-verified blocks of logic,
called cores. Due to imperfections in the fabrication process, each IC must be
individually tested. A major problem is that the cost of test is increasing and
is becoming a dominating part of the overall manufacturing cost. The cost of
test is strongly related to the increasing test-data volumes, which lead to
longer test application times and larger tester memory requirements. For ICs
designed in a modular fashion, the high test cost can be addressed by adequate
test planning, which includes test-architecture design, test scheduling,
test-data compression, and test sharing techniques. In this thesis, we analyze
and explore several design and optimization problems related to core-based SOC
test planning. We perform optimization of test sharing and test-data
compression. We explore the impact of test compression techniques on test
application time and compression ratio. We make use of analysis to explore the
optimization of test sharing and test-data compression in conjunction with
test-architecture design and test scheduling. Extensive experiments, based on
benchmarks and industrial designs, have been performed to demonstrate the
significance of our techniques.
No 1238
PROCESSES AND MODELS FOR CAPACITY REQUIREMENTS IN TELECOMMUNICATION SYSTEMS
Andreas Borg
Capacity is an essential quality factor in telecommunication systems. The
ability to develop systems with the lowest cost per subscriber and transaction,
that also meet the highest availability requirements and at the same time allow
for scalability, is a true challenge for a telecommunication systems provider.
This thesis describes a research collaboration between Linköping University and
Ericsson AB aimed at improving the management, representation, and
implementation of capacity requirements in large-scale software engineering.
An industrial case study on non-functional requirements in general was
conducted to provide the explorative research background, and a richer
understanding of identified difficulties was gained by dedicating subsequent
investigations to capacity. A best practice inventory within Ericsson regarding
the management of capacity requirements and their refinement into design and
implementation was carried out. It revealed that capacity requirements crosscut
most of the development process and the system lifecycle, thus widening the
research context considerably. The interview series resulted in the
specification of 19 capacity sub-processes; these were represented as a method
plug-in to the OpenUP software development process in order to construct a
coherent package of knowledge as well as to communicate the results. They also
provide the basis of an empirically grounded anatomy which has been validated in
a focus group. The anatomy enables the assessment and stepwise improvement of an
organization’s ability to develop for capacity, thus keeping the initial cost
low. Moreover, the notion of capacity is discussed and a pragmatic approach for
how to support model-based, function-oriented development with capacity
information by its annotation in UML models is presented. The results combine
into a method for how to improve the treatment of capacity requirements in
large-scale software systems.
No 1240
DYKNOW: A STREAM-BASED KNOWLEDGE PROCESSING MIDDLEWARE FRAMEWORK
Fredrik Heintz
As robotic systems become more and more advanced the need to integrate existing
deliberative functionalities such as chronicle recognition, motion planning,
task planning, and execution monitoring increases. To integrate such
functionalities into a coherent system it is necessary to reconcile the
different formalisms used by the functionalities to represent information and
knowledge about the world. To construct and integrate these representations and
maintain a correlation between them and the environment it is necessary to
extract information and knowledge from data collected by sensors. However,
deliberative functionalities tend to assume symbolic and crisp knowledge about
the current state of the world while the information extracted from sensors
often is noisy and incomplete quantitative data on a much lower level of
abstraction. There is a wide gap between the information about the world
normally acquired through sensing and the information that is assumed to be
available for reasoning about the world.
As physical autonomous systems grow in scope and complexity, bridging the gap in an ad-hoc manner becomes impractical and inefficient. Instead, a principled and systematic approach to closing the sense-reasoning gap is needed. At the same time, a systematic solution has to be sufficiently flexible to accommodate a wide range of components with highly varying demands. We therefore introduce the concept of knowledge processing middleware for a principled and systematic software framework for bridging the gap between sensing and reasoning in a physical agent. A set of requirements that all such middleware should satisfy is also described.
A stream-based knowledge processing middleware framework called DyKnow is then presented. Due to the need for incremental refinement of information at different levels of abstraction, computations and processes within the stream-based knowledge processing framework are modeled as active and sustained knowledge processes working on and producing streams. DyKnow supports the generation of partial and context dependent stream-based representations of past, current, and potential future states at many levels of abstraction in a timely manner.
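As a rough sketch of the stream-based view (hypothetical process names and data; this is not DyKnow's actual API), a knowledge process can be seen as something that consumes one stream and produces another, more refined, stream at a higher level of abstraction.

    import statistics

    def sensor_stream(samples):
        """Source knowledge process: yields timestamped raw samples."""
        for t, value in enumerate(samples):
            yield t, value

    def smoothing_process(stream, window=3):
        """Refinement knowledge process: consumes one stream and produces
        another at a higher level of abstraction (a sliding-window mean)."""
        buffer = []
        for t, value in stream:
            buffer.append(value)
            if len(buffer) > window:
                buffer.pop(0)
            yield t, statistics.mean(buffer)

    raw = sensor_stream([10.0, 10.4, 30.0, 10.2, 10.1])   # 30.0 is a noisy outlier
    for t, smoothed in smoothing_process(raw):
        print(t, round(smoothed, 2))

Chaining such processes, where each one works on and produces streams, mirrors the incremental refinement of sensor data into symbolic information that the framework is described as supporting.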
To show the versatility and utility of DyKnow, two symbolic reasoning engines are integrated into DyKnow. The first reasoning engine is a metric temporal logical progression engine. Its integration is made possible by extending DyKnow with a state generation mechanism to generate state sequences over which temporal logical formulas can be progressed. The second reasoning engine is a chronicle recognition engine for recognizing complex events such as traffic situations. The integration is facilitated by extending DyKnow with support for anchoring symbolic object identifiers to sensor data in order to collect information about physical objects using the available sensors. By integrating these reasoning engines into DyKnow, they can be used by any knowledge processing application. Each integration therefore extends the capability of DyKnow and increases its applicability.
To show that DyKnow also has a potential for multi-agent knowledge processing, an extension is presented which allows agents to federate parts of their local DyKnow instances to share information and knowledge.
Finally, it is shown how DyKnow provides support for the functionalities on the different levels in the JDL Data Fusion Model, which is the de facto standard functional model for fusion applications. The focus is not on individual fusion techniques, but rather on an infrastructure that permits the use of many different fusion techniques in a unified framework.
The main conclusion of this thesis is that the DyKnow knowledge processing middleware framework provides appropriate support for bridging the sense-reasoning gap in a physical agent. This conclusion is drawn from the fact that DyKnow has successfully been used to integrate different reasoning engines into complex unmanned aerial vehicle (UAV) applications and that it satisfies all the stated requirements for knowledge processing middleware to a significant degree.
No 1241
TESTABILITY OF DYNAMIC REAL-TIME SYSTEMS
Birgitta Lindström
This dissertation concerns testability of event-triggered real-time systems.
Real-time systems are known to be hard to test because they are required to
function correctly both with respect to what the system does and when it does it.
An event-triggered real-time system is directly controlled by the events that
occur in the environment, as opposed to a time-triggered system, whose behavior
with respect to when the system does something is constrained and therefore
more predictable. The focus in this dissertation is on the behavior in the time
domain and it is shown how testability is affected by some factors when the
system is tested for timeliness.
This dissertation presents a survey of research that focuses on software
testability and testability of real-time systems. The survey motivates both the
view of testability taken in this dissertation and the metric that is chosen to
measure testability in an experiment. We define a method to generate sets of
traces from a model by using a meta algorithm on top of a model checker.
Defining such a method is a necessary step to perform the experiment. However,
the trace sets generated by this method can also be used by test strategies that
are based on orderings, for example execution orders.
An experimental study is presented in detail. The experiment investigates how
testability of an event-triggered real-time system is affected by some
constraining properties of the execution environment. The experiment
investigates the effect on testability from three different constraints
regarding preemptions, observations and process instances. All of these
constraints were claimed in previous work to be significant factors for the
level of testability. Our results support the claim for the first two of the
constraints while the third constraint shows no impact on the level of
testability.
Finally, this dissertation discusses the effect on the event-triggered semantics
when the constraints are applied on the execution environment. The result from
this discussion is that the first two constraints do not change the semantics
while the third one does. This result indicates that a constraint on the number
of process instances might be less useful for some event-triggered real-time
systems.
No 1244
SEMI-AUTOMATIC ONTOLOGY CONSTRUCTION BASED ON PATTERNS
Eva Blomqvist
This thesis aims to improve the ontology engineering process by providing
better semi-automatic support for constructing ontologies and by introducing
knowledge reuse through ontology patterns. The thesis introduces a typology of
patterns and a general framework for pattern-based semi-automatic ontology
construction called OntoCase, and provides a set of methods to solve some
specific tasks within this framework. Experimental results indicate some
benefits and drawbacks of ontology patterns in general, and of semi-automatic
ontology engineering using patterns, the OntoCase framework, in particular.
The general setting of this thesis is the field of information logistics, which focuses on how to provide the right information at the right moment in time to the right person or organisation, sent through the right medium. The thesis focuses on constructing enterprise ontologies to be used for structuring and retrieving information related to a certain enterprise. This means that the ontologies are quite 'light weight' in terms of logical complexity and expressiveness.
Applying ontology content design patterns within semi-automatic ontology construction, i.e. ontology learning, is a novel approach. The main contributions of this thesis are a typology of patterns together with a pattern catalogue, an overall framework for semi-automatic pattern-based ontology construction, specific methods for solving partial problems within this framework, and evaluation results showing the characteristics of ontologies constructed semi-automatically based on patterns. Results show that it is possible to improve the results of typical existing ontology learning methods by selecting and reusing patterns. OntoCase is able to introduce a general top-structure to the ontologies, and by exploiting background knowledge, the ontology is given a richer structure than when patterns are not applied.
No 1249
FUNCTIONAL MODELING OF CONSTRAINT MANAGEMENT IN AVIATION SAFETY AND COMMAND
AND CONTROL
Rogier Woltjer
This thesis has shown that the
concept of constraint management is instrumental in understanding the domains of
command and control and aviation safety. Particularly, functional modeling as a
means to address constraint management provides a basis for analyzing the
performance of socio-technical systems. In addition to the theoretical
underpinnings, six studies are presented.
First, a functional analysis of an exercise conducted by a team of electricity network emergency managers is used to show that a team function taxonomy can be used to analyze the mapping between team tasks and information and communication technology to assess training needs for performance improvement. Second, an analysis of a fire-fighting emergency management simulation is used to show that functional modeling and visualization of constraints can describe behavior vis-à-vis constraints and inform decision support design. Third, analysis of a simulated adversarial command and control task reveals that functional modeling may be used to describe and facilitate constraint management (constraining the adversary and avoiding being constrained by the adversary).
Studies four and five address the domain of civil aviation safety. The analysis of functional resonance is applied to an incident in study four and an accident in study five, based on investigation reports. These studies extend the functional resonance analysis method and accident model. The sixth study documents the utility of this functional modeling approach for risk assessment by evaluating proposed automation for air traffic control, based on observations, interviews, and experimental data.
In sum, this thesis adds conceptual tools and modeling methods to the cognitive systems engineering discipline that can be used to tackle problems of training environment design, decision support, incident and accident analysis, and risk assessment.
No 1260
VISION-BASED LOCALIZATION AND GUIDANCE FOR UNMANNED AERIAL VEHICLES
Gianpaolo Conte
The thesis has been developed as part of the requirements for a PhD degree at
the Artificial Intelligence and Integrated Computer System division (AIICS) in
the Department of Computer and Information Sciences at Linköping University. The
work focuses on issues related to Unmanned Aerial Vehicle (UAV) navigation, in
particular in the areas of guidance and vision-based autonomous flight in
situations of short and long term GPS outage. The thesis is divided into two
parts. The first part presents a helicopter simulator and a path following
control mode developed and implemented on an experimental helicopter platform.
The second part presents an approach to the problem of vision-based state
estimation for autonomous aerial platforms which makes use of geo-referenced
images for localization purposes. The problem of vision-based landing is also
addressed with emphasis on fusion between inertial sensors and video camera
using an artificial landing pad as reference pattern. In the last chapter, a
solution to a vision-based ground object geo-location problem using a fixed-wing
micro aerial vehicle platform is presented. The helicopter guidance and
vision-based navigation methods developed in the thesis have been implemented
and tested in real flight-tests using a Yamaha Rmax helicopter. Extensive
experimental flight-test results are presented.
No 1262
ENABLING TOOL SUPPORT FOR FORMAL ANALYSIS OF ECA RULES
Ann Marie Ericsson
Rule-based
systems implemented as event-condition-action (ECA) rules utilize a powerful and
flexible paradigm when it comes to specifying systems that need to react to
complex situations in their environment. Rules can be specified to react to
combinations of events occurring at any time and in any order. However, the
behavior of a rule-based system is notoriously hard to analyze due to the rules'
ability to interact with each other.
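A minimal sketch of such a rule, assuming a composite event of the form "A followed by B within a time window" (the rule class, event structure, and time model are illustrative only, not the notation used in the thesis):

    from dataclasses import dataclass

    @dataclass
    class Event:
        name: str
        time: float

    class SequenceRule:
        """ECA rule triggered by the composite event 'first followed by second
        within max_delay'; the condition and action are supplied as callables."""
        def __init__(self, first, second, max_delay, condition, action):
            self.first, self.second, self.max_delay = first, second, max_delay
            self.condition, self.action = condition, action
            self._pending = None            # time of the last unmatched 'first' event

        def on_event(self, event):
            if event.name == self.first:
                self._pending = event.time
            elif (event.name == self.second and self._pending is not None
                  and event.time - self._pending <= self.max_delay):
                if self.condition(event):
                    self.action(event)
                self._pending = None

    rule = SequenceRule("overheat", "pressure_drop", max_delay=5.0,
                        condition=lambda e: True,
                        action=lambda e: print(f"shutdown triggered at t={e.time}"))
    for ev in [Event("overheat", 1.0), Event("pressure_drop", 4.0)]:
        rule.on_event(ev)

Even this single rule carries timing behavior that is awkward to reason about by inspection, which is the kind of analysis problem the thesis addresses with model checking.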
Formal methods are not utilized to their full potential for enhancing software quality in practice. We argue that seamless support in a high-level, paradigm-specific tool is a viable way to provide industrial system designers with powerful verification techniques. This thesis targets the issue of formally verifying that a set of specified rules behaves as intended.
The prototype tool REX (Rule and Event eXplorer) is developed as a proof of concept of the results of this thesis. Rules, events, and the requirements of the application design are specified in REX, which acts as a rule-based front-end to the existing timed automata CASE tool UPPAAL. To support formal verification, REX automatically transforms the specified rules to timed automata, queries the requirement properties in the model-checker provided by UPPAAL, and returns the results to the user of REX in terms of rules and events.
The results of this thesis consist of guidelines for modeling and verifying rules in a timed automata model-checker and experiences from using and building a tool implementing the proposed guidelines. Moreover, the result of an industrial case study is presented, validating the ability to model and verify a system of industrial complexity using the proposed approach.
No 1266
EXPLORING TACTICAL COMMAND AND CONTROL: A ROLE-PLAYING SIMULATION APPROACH
Jiri Trnka
This thesis concerns command and control (C2) work at the tactical level in
emergency and crisis response operations. The presented research addresses two
main research questions. The first question is whether it is feasible to
simulate and study C2 work in the initial stages of response operations by
means of role-playing simulations. If so, the second question is how to
develop and execute role-playing simulations in order to explore this type of
C2 work in a methodologically sound way. The presented research is based on
simulations as methodological means for qualitative research. The utilized
simulation approach is scenario-based real-time role-playing simulations
grounded in models of C2 work and response operations. Three simulations have
been conducted based on this methodology and are reported in this thesis.
Simulation I focused on the work practice of cooperating commanders whose
activities may be enhanced by the use of artifacts. Simulation II concerned
the issues of operationalizing advanced technological artifacts in rapid
response expert teams. Simulation III gave attention to the role of improvisation
in C2 teams designated for international operations. The results from the
simulations and from the work conducted and presented in this thesis
contribute with knowledge and experience from using role-playing simulations
to study C2 work. This includes the methodological aspects of designing and
conducting role-playing simulations such as scenarios, realism, evaluation and
simulation format and control. It also includes the identification of the main
application and problem areas for which the methodology is suitable, that is
explorative qualitative inquiries and evaluation studies. The thesis provides
new insights in C2 work with respect to adaptive behavior and improvisation.
The thesis also identifies areas that need to be considered in order to
further develop the role-playing simulation approach and its applicability.
No 1268
SUPPORTING COLLABORATIVE WORK THROUGH ICT - HOW END-USERS THINK OF AND ADOPT
INTEGRATED HEALTH INFORMATION SYSTEMS
Bahlol Rahimi
Health Information Systems (HISs) are implemented to support individuals,
organizations, and society, making work processes integrated and contributing
to increased service quality and patient safety. However, the outcomes of many
HIS implementations in both primary care and hospital settings have either not
yet met all the expectations decision-makers identified or have failed in their
implementation. There is, therefore, a growing interest in increasing knowledge
about prerequisites to be fulfilled in order to make the implementation and
adoption of HIS more effective and to improve collaboration between healthcare
providers.
The general purpose of the work presented in this thesis is to explore issues related to the implementation, use, and adoption of HISs and their contribution to improving inter- and intra-organizational collaboration in a healthcare context. The studies included have, however, different research objectives and consequently used different research methods such as case study, literature review, meta-analysis, and surveys. The selection of the research methodology has thus depended on the aim of the studies and their expected results.
In the first study performed we showed that there is no standard framework to evaluate the effects and outputs of the implementation and use of ICT-based applications in the healthcare setting, which makes comparison of international results impossible as yet.
Critical issues, such as the techniques employed to teach the staff when using an integrated system, the involvement of the users in the implementation process, and the efficiency of the human-computer interface, were particularly reported in the second study included in this thesis. The results of this study also indicated that the development of evidence-based implementation processes should be considered in order to diminish unexpected outputs that affect users, patients and stakeholders.
We learned in the third study that merely implementing a HIS will not automatically increase organizational efficiency. Strategic, tactical, and operational actions have to be taken into consideration, including management involvement, integration in the healthcare workflow, establishing compatibility between software and hardware, user involvement, and education and training.
When using an Integrated Electronic Prescribing System (IEPS), pharmacy staff stated that the system expedited the processing of prescriptions, increased patient safety, and reduced the risk of prescription errors, as well as the handing over of erroneous medications to patients. However, they also stated that the system does not prevent all mistakes, and medication errors still occur. In the fifth article we documented, in general, positive opinions about the IEPS system. The results in this article indicated that the safety of the system has increased compared to a paper-based one. The results also showed an impact on customer relations with the pharmacy and on the prevention of errors. However, besides finding an adoption of the IEPS, we identified a series of undesired and unplanned outputs that affect the efficiency and efficacy of use of the system.
Finally, in the sixth study we captured indications of non-optimality in the computer provider entry system, because the system was not adapted to the specific professional practice of three-quarters of the physicians and one-half of the nurses. Respondents also pointed out human-computer interaction constraints when using the system, and indicated that the system could lead to adverse drug events in some circumstances.
The work presented in this thesis contributes to increasing knowledge in the area of health informatics on how ICT supports inter- and intra-organizational collaborative work in a healthcare context, and to identifying factors and prerequisites that need to be taken into consideration when implementing new generations of HIS.
No 1274
Algorithms and Hardness Results for
Some Valued CSPs
Fredrik Kuivinen
In the Constraint Satisfaction
Problem (CSP) one is supposed to find an assignment to a set of variables so
that a set of given constraints are satisfied. Many problems, both practical
and theoretical, can be modelled as CSPs. As these problems are
computationally hard, it is interesting to investigate what kinds of
restrictions of the problems imply computational tractability. In this
thesis the computational complexity of restrictions of two optimisation
problems which are related to the CSP is studied. In optimisation problems one
can also relax the requirements and ask for an approximatively good solution,
instead of requiring the optimal one.
The first problem we investigate is Maximum Solution (Max Sol), where one is looking for a solution which satisfies all constraints and also maximises a linear objective function. The Maximum Solution problem is a generalisation of the well-known integer linear programming problem. In the case when the constraints are equations over an abelian group we obtain tight inapproximability results. We also study Max Sol for so-called maximal constraint languages and a partial classification theorem is obtained in this case. Finally, Max Sol over the Boolean domain is studied in a setting where each variable only occurs a bounded number of times.
The second problem is the Maximum Constraint Satisfaction Problem (Max CSP). In this problem one is looking for an assignment which maximises the number of satisfied constraints. We first show that if the constraints are known to give rise to an NP-hard CSP, then one cannot get arbitrarily good approximate solutions in polynomial time, unless P = NP. We use this result to show a similar hardness result for the case when only one constraint relation is used. We also study the submodular function minimisation problem (SFM) on certain finite lattices. There is a strong link between Max CSP and SFM; new tractability results for SFM imply new tractability results for Max CSP. It is conjectured that SFM is the only reason for Max CSP to be tractable, but no one has yet managed to prove this. We obtain new tractability results for SFM on diamonds and evidence which supports the hypothesis that all modular lattices are tractable.
No 1281
Virtual Full Replication for Scalable
Distributed Real-Time Databases
Gunnar Mathiason
A fully replicated distributed
real-time database provides high availability and predictable access times,
independent of user location, since all the data is available at each node.
However, full replication requires that all updates are replicated to every
node, resulting in exponential growth of bandwidth and processing demands with
the number of nodes and objects added. To eliminate this scalability problem,
while retaining the advantages of full replication, this thesis explores
Virtual Full Replication (ViFuR), a technique that gives database users a
perception of using a fully replicated database while only replicating a
subset of the data.
We use ViFuR in a distributed main memory real-time database where timely transaction execution is required. ViFuR enables scalability by replicating only data used at the local nodes. Also, ViFuR enables flexibility by adaptively replicating the currently used data, effectively providing logical availability of all data objects. Hence, ViFuR substantially reduces the problem of non-scalable resource usage of full replication, while allowing timely execution and access to arbitrary data objects.
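A toy sketch of the idea, with illustrative names (this is not the ViFuR protocol itself): every object is logically available at every node, but a physical replica is set up only when an object is first used locally, so a node's replica set follows its actual access pattern.

    class Node:
        """Toy node in a virtually fully replicated database."""
        def __init__(self, name, global_store):
            self.name = name
            self.global_store = global_store     # stands in for the other nodes
            self.replicas = {}                   # locally replicated subset

        def read(self, obj):
            if obj not in self.replicas:         # adaptively set up a local replica
                self.replicas[obj] = self.global_store[obj]
            return self.replicas[obj]

    store = {"x": 1, "y": 2, "z": 3}
    n1 = Node("n1", store)
    n1.read("x")
    print(sorted(n1.replicas))                   # -> ['x']  (only used data is replicated)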
In the thesis we pursue ViFuR by exploring the use of database segmentation. We give a scheme (ViFuR-S) for static segmentation of the database prior to execution, where access patterns are known a priori. We also give an adaptive scheme (ViFuR-A) that changes segmentation during execution to meet the evolving needs of database users. Further, we apply an extended approach of adaptive segmentation (ViFuR-ASN) in a wireless sensor network - a typical dynamic large-scale and resource-constrained environment. We use up to several hundreds of nodes and thousands of objects per node, and apply a typical periodic transaction workload with operation modes where the used data set changes dynamically. We show that when replacing full replication with ViFuR, resource usage scales linearly with the required number of concurrent replicas, rather than exponentially with the system size.
No 1290
SCHEDULING AND OPTIMIZATION OF FAULT-TOLERANT DISTRIBUTED EMBEDDED SYSTEMS
Viacheslav Izosimov
Safety-critical applications
have to function correctly even in the presence of faults. This thesis deals with
techniques for tolerating effects of transient and intermittent faults.
Reexecution, software replication, and rollback recovery with checkpointing
are used to provide the required level of fault tolerance. These techniques
are considered in the context of distributed real-time systems with
non-preemptive static cyclic scheduling.
Safety-critical applications have strict time and cost constraints, which means that not only do faults have to be tolerated but the constraints should also be satisfied. Hence, efficient system design approaches with consideration of fault tolerance are required.
The thesis proposes several design optimization strategies and scheduling techniques that take fault tolerance into account. The design optimization tasks addressed include, among others, process mapping, fault tolerance policy assignment, and checkpoint distribution.
Dedicated scheduling techniques and mapping optimization strategies are also proposed to handle customized transparency requirements associated with processes and messages. By providing fault containment, transparency can, potentially, improve testability and debuggability of fault-tolerant applications.
The efficiency of the proposed scheduling techniques and design optimization strategies is evaluated with extensive experiments conducted on a number of synthetic applications and a real-life example. The experimental results show that considering fault tolerance during system-level design optimization is essential when designing cost-effective fault-tolerant embedded systems.
No 1294
ASPECTS OF A CONSTRAINT OPTIMISATION PROBLEM
Johan Thapper
In this thesis we study a
constraint optimisation problem called the maximum solution problem,
henceforth referred to as Max Sol. It is defined as the problem of optimising
a linear objective function over a constraint satisfaction problem (Csp)
instance on a finite domain. Each variable in the instance is given a
non-negative rational weight, and each domain element is also assigned a
numerical value, for example taken from the natural numbers. From this point
of view, the problem is seen to be a natural extension of integer linear
programming over a bounded domain. We study both the time complexity of
approximating Max Sol, and the time complexity of obtaining an optimal
solution. In the latter case, we also construct some exponential-time
algorithms.
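Under the definitions above, a Max Sol instance can be sketched in standard notation (a paraphrase for illustration, not a formula quoted from the thesis):

    maximise    \sum_{i=1}^{n} w_i \, \nu(x_i)
    subject to  (x_{i_1}, \dots, x_{i_k}) \in R   for every constraint R in the instance,
                x_i \in D                          for every variable x_i,

where the w_i are the non-negative rational variable weights and \nu : D -> \mathbb{N} assigns a numerical value to each domain element.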
The algebraic method is a powerful tool for studying Csp-related problems. It was introduced for the decision version of Csp, and has been extended to a number of other problems, including Max Sol. With this technique we establish approximability classifications for certain families of constraint languages, based on algebraic characterisations. We also show how the concept of a core for relational structures can be extended in order to determine when constant unary relations can be added to a constraint language, without changing the computational complexity of finding an optimal solution to Max Sol. Using this result we show that, in a specific sense, when studying the computational complexity of Max Sol, we only need to consider constraint languages with all constant unary relations included.
Some optimisation problems are known to be approximable within some constant ratio, but are not believed to be approximable within an arbitrarily small constant ratio. For such problems, it is of interest to find the best ratio within which the problem can be approximated, or at least give some bounds on this constant. We study this aspect of the (weighted) Max Csp problem for graphs. In this optimisation problem the number of satisfied constraints is supposed to be maximised. We introduce a method for studying approximation ratios which is based on a new parameter on the space of all graphs. Informally, we think of this parameter as an approximation distance; knowing the distance between two graphs, we can bound the approximation ratio of one of them, given a bound for the other. We further show how the basic idea can be implemented also for the Max Sol problem.
No 1306
Augmentation in the Wild: User Centered Development and Evaluation of
Augmented Reality Applications
Susanna Nilsson
Augmented Reality (AR) technology has, despite many applications in the research domain, not made it to a widespread end-user market. The one exception is AR applications for mobile phones. One of the main reasons for this development is the technological constraints of non-mobile-phone-based systems: the devices used are still not mobile or lightweight enough, or simply not usable enough. This thesis addresses the latter issue by taking a holistic approach to the development and evaluation of AR applications for both single user and multiple user tasks. The main hypothesis is that in order to achieve substantial widespread use of AR technology, applications must be developed with the aim of solving real-world problems with the end user and goal in focus.
Augmented Reality systems are information systems that merge real and virtual information with the purpose of aiding users in different tasks. An AR system is a general system much like a computer is general; it has potential as a tool for many different purposes in many different situations. The studies in this thesis describe user studies of two different types of AR applications targeting different user groups and different application areas. The first application, described and evaluated, is aimed at giving users instructions for use and assembly of different medical devices. The second application is a study where AR technology has been used as a tool for supporting collaboration between the rescue services, the police and military personnel in a crisis management scenario.
Both applications were iteratively developed with end user representatives involved throughout the process and the results illustrate that users both in the context of medical care, and the emergency management domain, are positive towards AR systems as a technology and as a tool in their work related tasks. The main contributions of the thesis are not only the end results of the user studies, but also the methodology used in the studies of this relatively new type of technology. The studies have shown that involving real end users both in the design of the application and in the user task is important for the user experience of the system. Allowing for an iterative design process is also a key point. Although AR technology development is often driven by technological advances rather than user demands, there is room for a more user centered approach, for single user applications as well as for more dynamic and complex multiple user applications.
No 1313
On the Quality of Feature Models
Christer Thörn
Variability has become an important aspect of modern software-intensive products and systems. In order to reach new markets and utilize existing resources through reuse, it is necessary to have effective management of variants, configurations, and reusable functionality. The topic of this thesis is the construction of feature models that document and describe variability and commonality. The work aims to contribute to methods for creating feature models that are of high quality, suitable for their intended purpose, correct, and usable.
The thesis suggests an approach, complementing existing feature modeling methodologies, that contributes to arriving at a satisfactory modeling result. The approach is based on existing practices to raise quality from other research areas, and targets shortcomings in existing feature modeling methods. The requirements for such an approach were derived from an industrial survey and a case study in the automotive domain. The approach was refined and tested in a second case study in the mobile applications domain.
The main contributions of the thesis are a quality model for feature models, procedures for prioritizing and evaluating quality in feature models, and an initial set of empirically grounded development principles for reaching certain qualities in feature models.
The principal findings of the thesis are that feature models exhibit different qualities, depending on certain characteristics and properties of the model. Such properties can be identified, formalized, and influenced in order to guide development of feature models, and thereby promote certain quality factors of feature models.
No 1321
Temperature Aware and Defect-Probability Driven Test Scheduling for
System-on-Chip
Zhiyuan He
The high complexity of modern electronic systems has resulted in a substantial increase in the time-to-market as well as in the cost of design, production, and testing. Recently, in order to reduce the design cost, many electronic systems have employed a core-based system-on-chip (SoC) implementation technique, which integrates pre-defined and pre-verified intellectual property cores into a single silicon die. Accordingly, the testing of manufactured SoCs adopts a modular approach in which test patterns are generated for individual cores and are applied to the corresponding cores separately. Among many techniques that reduce the cost of modular SoC testing, test scheduling is widely adopted to reduce the test application time. This thesis addresses the problem of minimizing the test application time for modular SoC tests with considerations on three critical issues: high testing temperature, temperature-dependent failures, and defect probabilities.
High temperatures occur when testing modern SoCs and may cause damage to the cores under test. We address the temperature-aware test scheduling problem aiming to minimize the test application time and to avoid the temperature of the cores under test exceeding a certain limit. We have developed a test set partitioning and interleaving technique and a set of test scheduling algorithms to solve the addressed problem.
Complicated temperature dependences and defect-induced parametric failures are more and more visible in SoCs manufactured with nanometer technology. In order to detect the temperature-dependent defects, a chip should be tested at different temperature levels. We address the SoC multi-temperature testing issue where tests are applied to a core only when the temperature of that core is within a given temperature interval. We have developed test scheduling algorithms for multi-temperature testing of SoCs.
Volume production tests often employ an abort-on-first-fail (AOFF) approach which terminates the chip test as soon as the first fault is detected. Defect probabilities of individual cores in SoCs can be used to compute the expected test application time of modular SoC tests using the AOFF approach. We address the defect-probability driven SoC test scheduling problem aiming to minimize the expected test application time with a power constraint. We have proposed techniques which utilize the defect probability to generate efficient test schedules.
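To make the role of defect probabilities concrete, the following minimal Python sketch computes the expected test application time of one fixed sequential schedule under an abort-on-first-fail regime. It is only an illustration of the cost model, not the thesis' scheduling technique; the durations and pass probabilities are invented, and it assumes a started test runs to completion before a failing chip is aborted.

    # Sketch: expected test time of a sequential core-test schedule under
    # abort-on-first-fail (AOFF). Each test i has a duration and a probability
    # that the core passes; the chip test stops after the first failing core.
    # All numbers are illustrative assumptions.

    def expected_aoff_time(tests):
        """tests: list of (duration, pass_probability), applied in this order."""
        expected = 0.0
        prob_reached = 1.0                       # probability that test i is started
        for duration, p_pass in tests:
            expected += prob_reached * duration  # test i costs time only if reached
            prob_reached *= p_pass               # reached only if earlier tests passed
        return expected

    schedule = [(1000, 0.99), (4000, 0.95), (2500, 0.90)]  # (cycles, pass probability)
    print(expected_aoff_time(schedule))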
Extensive experiments based on benchmark designs have been performed to demonstrate the efficiency and applicability of the developed techniques.
No 1333
Meta-Languages and Semantics for Equation-Based Modeling and Simulation
David Broman
Performing computational experiments on mathematical models instead of building and testing physical prototypes can drastically reduce the development cost for complex systems such as automobiles, aircraft, and power plants. In the past three decades, a new category of equation-based modeling languages has appeared that is based on acausal and object-oriented modeling principles, enabling good reuse of models. However, the modeling languages within this category have grown large and complex, and the specifications of their semantics are informally defined, typically described in natural language. The lack of a formal semantics makes these languages hard to interpret unambiguously and to reason about. This thesis concerns the problem of designing the semantics of such equation-based modeling languages in a way that allows formal reasoning and increased correctness. The work is presented in two parts.
In the first part we study the state-of-the-art modeling language Modelica. We analyze the concepts of types in Modelica and conclude that there are two kinds of type concepts: class types and object types. Moreover, a concept called structural constraint delta is proposed, which is used for isolating the faults of an over- or under-determined model.
In the second part, we introduce a new research language called the Modeling Kernel Language (MKL). By introducing the concept of higher-order acausal models (HOAMs), we show that it is possible to create expressive modeling libraries in a manner analogous to Modelica, but using a small and simple language concept. In contrast to the current state-of-the-art modeling languages, the semantics of how to use the models, including meta operations on models, are also specified in MKL libraries. This enables extensible formal executable specifications where important language features are expressed through libraries rather than by adding completely new language constructs. MKL is a statically typed language based on a typed lambda calculus. We define the core of the language formally using operational semantics and prove type safety. An MKL interpreter is implemented and verified in comparison with a Modelica environment.
No 1337
Contributions to Modelling and Visualisation of Multibody Systems Simulations with Detailed Contact Analysis
Alexander Siemers
The steadily increasing performance of modern computer systems is having a large influence on simulation technologies. It enables increasingly detailed simulations of larger and more comprehensive simulation models. Increasingly large amounts of numerical data are produced by these simulations.
This thesis presents several contributions in the field of mechanical system simulation and visualisation. The work described in the thesis is of practical relevance, and the results have been tested and implemented in tools that are used daily in industry, i.e., the BEAST (BEAring Simulation Tool) toolbox. BEAST is a multibody system (MBS) simulation software with a special focus on detailed contact calculations. Our work primarily focuses on these types of systems.
Research in the field of simulation modelling typically focuses on one or several specific topics around the modelling and simulation work process. The work presented here is novel in the sense that it provides a complete analysis and tool chain for the whole work process for simulation modelling and analysis of multibody systems with detailed contact models. The focus is on detecting and dealing with possible problems and bottlenecks in the work process, with respect to multibody systems with detailed contact models.
The following primary research questions have been formulated:
- How to utilise object-oriented techniques for modelling of multibody systems with special reference to contact modelling?
- How to integrate visualisation with the modelling and simulation process of multibody systems with detailed contacts?
- How to reuse and combine existing simulation models to simulate large mechanical systems consisting of several sub-systems by means of co-simulation modelling?
Unique in this work is the focus on detailed contact models. Most modelling approaches for multibody systems focus on modelling of bodies and boundary conditions of such bodies, e.g., springs, dampers, and possibly simple contacts. Here an object-oriented modelling approach for multibody simulation and modelling is presented that, in comparison to common approaches, puts emphasis on integrated contact modelling and visualisation. The visualisation techniques are commonly used to verify the system model visually and to analyse simulation results. Data visualisation covers a broad spectrum within research and development. The focus is often on detailed solutions covering a fraction of the whole visualisation process. The novel visualisation aspect of the work presented here is that it presents techniques covering the entire visualisation process integrated with modelling and simulation. This includes a novel data structure for efficient storage and visualisation of multidimensional transient surface-related data from detailed contact calculations.
Different mechanical system simulation models typically focus on different parts (sub-systems) of a system. To fully understand a complete mechanical system it is often necessary to investigate several or all parts simultaneously. One solution for a more complete system analysis is to couple different simulation models into one coherent simulation. Part of this work is concerned with such co-simulation modelling. Co-simulation modelling typically focuses on data handling, connection modelling, and numerical stability. This work puts all emphasis on ease of use, i.e., making mechanical system co-simulation modelling applicable for a larger group of people. A novel meta-model based approach for mechanical system co-simulation modelling is presented. The meta-modelling process has been defined, and tools and techniques have been created to fully support the complete process. A component integrator and modelling environment are presented that support automated interface detection, interface alignment with automated three-dimensional coordinate translations, and three-dimensional visual co-simulation modelling. The integrated simulator is based on a general framework for mechanical system co-simulations that guarantees numerical stability.
No 1354
Disconnected Discoveries: Availability Studies in Partitioned Networks
Mikael Asplund
This thesis is concerned with exploring methods for making computing systems more resilient to problems in the network communication, both in the setting of existing infrastructure and in the case where no infrastructure is available. Specifically, we target a situation called network partitions, which means that a computer or device network is split in two or more parts that cannot communicate with each other.
The first of the two tracks in the thesis is concerned with upholding system availability during a network partition even when there are integrity constraints on data. This means that the system will optimistically accept requests since it is impossible to coordinate nodes that have no means of communicating during finite intervals; thus requiring a reconciliation process to take place once the network is healed.
We provide several different algorithms for reconciling divergent states of the nodes, one of which is able to allow the system to continue accepting operations during the reconciliation phase as opposed to having to stop all invocations. The algorithms are evaluated analytically, proving correctness and the conditions for termination. The performance of the algorithms has been analysed using simulations and as a middleware plugin in an emulated setting.
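As a toy illustration of optimistic acceptance followed by reconciliation, the sketch below lets two partitioned replicas tentatively accept operations on a shared account and then replays them in timestamp order against an integrity constraint once the partition heals. The constraint, the operation format, and the merge order are assumptions made for the example and do not correspond to the algorithms developed in the thesis.

    # Toy sketch: accept operations optimistically during a partition, then
    # reconcile by replaying them against an integrity constraint (balance >= 0).
    # The constraint, operation format, and timestamp-ordered merge are
    # illustrative assumptions only.

    def reconcile(initial_balance, ops_a, ops_b):
        """ops_a, ops_b: lists of (timestamp, amount) accepted per replica."""
        balance = initial_balance
        accepted, rejected = [], []
        for ts, amount in sorted(ops_a + ops_b):   # replay in timestamp order
            if balance + amount >= 0:              # re-check the integrity constraint
                balance += amount
                accepted.append((ts, amount))
            else:                                  # would violate the constraint
                rejected.append((ts, amount))
        return balance, accepted, rejected

    print(reconcile(100, [(1, -80)], [(2, -50), (3, 30)]))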
The second track considers more extreme conditions where the network is partitioned by its nature. The nodes move around in an area and opportunistically exchange messages with nodes that they meet. This is a model of the situation in a disaster area where the telecommunication networks are disabled. This scenario poses a number of challenges where protocols need to be both partition-tolerant and energy-efficient to handle node mobility, while still providing good delivery and latency properties.
We analyse worst-case latency for message dissemination in such intermittently connected networks. Since the analysis is highly dependent on the mobility of the nodes, we provide a model for characterising connectivity of dynamic networks. This model captures in an abstract way how fast a protocol can spread a message in such a setting. We show how this model can be derived analytically as well as from actual trace files.
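The trace-based side of such an analysis can be pictured with a small sketch that floods a message over a list of timestamped pairwise contacts and records when each node first receives it. The contact format and the example trace are assumptions for illustration; the thesis' connectivity model is more abstract than this.

    # Sketch: epidemic dissemination over a timestamped contact trace.
    # contacts: (time, node_u, node_v) meetings. Returns the earliest time each
    # node can have received the message under unlimited flooding.

    def flood_delivery_times(contacts, source, start_time=0):
        delivered = {source: start_time}
        for t, u, v in sorted(contacts):            # process meetings in time order
            if t < start_time:
                continue
            if u in delivered and v not in delivered:
                delivered[v] = t
            elif v in delivered and u not in delivered:
                delivered[u] = t
        return delivered

    trace = [(1, 'A', 'B'), (3, 'B', 'C'), (4, 'C', 'D'), (6, 'A', 'D')]
    print(flood_delivery_times(trace, source='A'))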
Finally, we introduce a manycast protocol suited for disaster area networks. This protocol has been evaluated using simulations, which show that it provides very good performance under the circumstances, and it has been implemented as a proof-of-concept on real hardware.
No 1359
Mind Games Extended: Understanding Gameplay as Situated Activity
Jana Rambusch
This thesis addresses computer gameplay activities in terms of the physical handling of a game, players’ meaning-making activities, and how these two processes are closely interrelated. It examines in greater detail what role the body plays in gameplay, but also how gameplay is shaped by sociocultural factors outside the game, including different kinds of tools and players’ participation in community practices. An important step towards an understanding of these key factors and their interaction is the consideration of gameplay as situated activity where players who actively engage with games are situated in both the physical world and the virtual in-game world. To analyse exactly how players interact with both worlds, two case studies on two different games have been carried out, and three different levels of situatedness are identified and discussed in detail in this thesis, on the basis of existing theories within situated cognition research.
No 1373
Head Movement Correlates to Focus Assignment in Swedish
Sonia Sangari
Speech communication normally involves not only speech but also face and head movements. In the present investigation, the correlation between head movement and focus assignment is studied, both in the laboratory and in spontaneous speech, with the aim of finding out what these head movements look like in detail. Specifically addressed questions are whether the head movements are an obligatory signal of focus assignment, and in that case how often a head movement will accompany the prosodic information. Also studied are where in the focused word the head movement has its extreme value, the relationship of that value to the extreme value of the fundamental frequency, and whether it is possible to simulate the head movements that accompany focal accent with a second-order linear system.
In this study, the head movements are recorded by the Qualisys MacReflex motion tracking system simultaneously with the speech signal. The results show that, for the subjects studied, the head movements that coincide with the signalling of focal accent in the speech signal, in most cases, have their extreme values at the primary stressed syllable of the word carrying focal accent, independent of the word accent type in Swedish. It should be noted that focal accent in Swedish has the fundamental frequency manifestation in words carrying the word accent II on the secondary stressed vowel.
The time required for the head movement to reach the extreme value is longer than the corresponding time for the fundamental frequency rise, probably due to the mass of the head in comparison to the structures involved in fundamental frequency manipulation. The head movements are simulated with high accuracy by a second-order linear system.
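As a rough illustration of what a second-order linear system looks like in this context, the sketch below integrates the step response of a standard second-order system with forward Euler steps and reports the time and size of its peak. The natural frequency and damping values are arbitrary assumptions, not parameters estimated in the thesis.

    # Sketch: step response of a second-order linear system
    #   y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u(t),  u(t) = 1 for t >= 0,
    # integrated with simple forward Euler steps. Parameter values are
    # illustrative assumptions.

    def step_response(wn=10.0, zeta=0.7, dt=0.001, t_end=1.0):
        y, v = 0.0, 0.0                                  # position and velocity
        trajectory = []
        for k in range(int(t_end / dt)):
            a = wn * wn * (1.0 - y) - 2.0 * zeta * wn * v
            v += a * dt
            y += v * dt
            trajectory.append((k * dt, y))
        return trajectory

    peak_time, peak_value = max(step_response(), key=lambda point: point[1])
    print(peak_time, peak_value)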
No 1374
Using False Alarms when Developing Automotive Active Safety Systems
Jan-Erik Källhammer
This thesis develops and tests an empirical method for quantifying drivers’ level of acceptance of alerts issued by automotive active safety systems. The method uses drivers’ subjective level of acceptance of alerts that are literally false alarms as a measure to guide the development of alerting criteria that can be used by active safety systems. Design for driver acceptance aims at developing systems that overcome drivers’ dislike of false alarms by issuing alerts only when drivers find them reasonable and are therefore likely to accept them. The method attempts to bridge the gap between field experiments with a high level of ecological validity and lab-based experiments with a high level of experimental control. By presenting subjects with video recordings of field data (e.g., traffic incidents and other situations of interest), the method retains high levels of both experimental control and ecological validity.
This thesis first develops the theoretical arguments for the view that false alarms are not only unavoidable, but that some false alarms are actually useful and, hence, desirable, as they provide information that can be used (by the proposed method) to assess driver acceptance of active safety systems. The second part of this thesis consists of a series of empirical studies that demonstrate the application of the assessment method. The three empirical studies showed that drivers’ subjective level of acceptance of alerts that are literally false alarms is a useful measure that can guide system designers in defining activation criteria for active safety systems. The method used to collect the drivers’ subjective acceptance levels has also been shown to produce reliable and reproducible data that align with the views of the drivers who experienced the situations in the field. By eliciting responses from a large number of observers, we leverage the high cost of field data and generate sample sizes that are amenable to statistical tests of significance.
No 1375
Integrated Code Generation
Mattias Eriksson
Code generation in a compiler is commonly divided into several phases: instruction selection, scheduling, register allocation, spill code generation, and, in the case of clustered architectures, cluster assignment. These phases are interdependent; for instance, a decision in the instruction selection phase affects how an operation can be scheduled. We examine the effect of this separation of phases on the quality of the generated code. To study this we have formulated optimal methods for code generation with integer linear programming; first for acyclic code and then we extend this method to modulo scheduling of loops. In our experiments we compare optimal modulo scheduling, where all phases are integrated, to modulo scheduling where instruction selection and cluster assignment are done in a separate phase. The results show that, for an architecture with two clusters, the integrated method finds a better solution than the non-integrated method for 39% of the instances.
Our algorithm for modulo scheduling iteratively considers schedules with an increasing number of schedule slots. A problem with such an iterative method is that if the initiation interval is not equal to the lower bound, there is no way to determine whether the found solution is optimal or not. We have proven that for a class of architectures that we call transfer free, we can set an upper bound on the schedule length, i.e., we can prove when a found modulo schedule with an initiation interval larger than the lower bound is optimal.
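The lower bound referred to above is commonly taken as the maximum of a resource-constrained and a recurrence-constrained bound on the initiation interval. The sketch below computes only the resource-constrained part for an assumed operation mix; it is a textbook illustration, not the thesis' integer linear programming formulation.

    # Sketch: resource-constrained lower bound on the initiation interval (II)
    # for modulo scheduling: for each resource type, ceil(uses per iteration /
    # available units); the II can be no smaller than the maximum of these.
    # Operation counts and resource capacities are illustrative assumptions.

    from math import ceil

    def res_mii(op_counts, resources):
        return max(ceil(op_counts[r] / resources[r]) for r in op_counts)

    op_counts = {'alu': 7, 'mul': 2, 'load_store': 3}   # operations per loop iteration
    resources = {'alu': 2, 'mul': 1, 'load_store': 1}   # functional units available
    print(res_mii(op_counts, resources))                # ceil(7/2) = 4 dominates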
Another code generation problem that we study is how to optimize the usage of the address generation unit in simple processors that have very limited addressing modes. In this problem the subtasks are: scheduling, address register assignment and stack layout. Also for this problem we compare the results of integrated methods to the results of non-integrated methods, and we find that integration is beneficial when there are only a few (1 or 2) address registers available.
No 1381
Affordances and Constraints of Intelligent Decision Support for Military Command and Control – Three Case Studies of Support Systems
Ola Leifler
Researchers in military command and control (C2) have for several decades sought to help commanders by introducing automated, intelligent decision support systems. These systems are still not widely used, however, and some researchers argue that this may be due to problems inherent in the relationship between the affordances of the technology and the requirements of the specific contexts of work in military C2. In this thesis, we study some specific properties of three support techniques for analyzing and automating aspects of C2 scenarios that are relevant for the contexts of work in which they can be used.
The research questions we address concern (1) which affordances and constraints of these technologies are of most relevance to C2, and (2) how these affordances and limitations can be managed to improve the utility of intelligent decision support systems in C2. The thesis comprises three case studies of C2 scenarios where intelligent support systems have been devised for each scenario.
The first study considered two military planning scenarios: planning for medical evacuations and similar tactical operations. In the study, we argue that the plan production capabilities of automated planners may be of less use than their constraint management facilities. ComPlan, which was the main technical system studied in the first case study, consisted of a highly configurable, collaborative, constraint-management framework for planning in which constraints could be used either to enforce relationships or notify users of their validity during planning. As a partial result of the first study, we proposed three tentative design criteria for intelligent decision support: transparency, graceful regulation and event-based feedback.
The second study was of information management during planning at the operational level, where we used a C2 training scenario from the Swedish Armed Forces and the documents produced during the scenario as a basis for studying properties of Semantic Desktops as intelligent decision support. In the study, we argue that (1) due to the simultaneous use of both documents and specialized systems, it is imperative that commanders can manage information from heterogeneous sources consistently, and (2) in the context of a structurally rich domain such as C2, documents can contain enough information about domain-specific concepts that occur in several applications to allow them to be automatically extracted from documents and managed in a unified manner. As a result of our second study, we present a model for extending a general semantic desktop ontology with domain-specific concepts and mechanisms for extracting and managing semantic objects from plan documents. Our model adheres to the design criteria from the first case study.
The third study investigated machine learning techniques in general and text clustering in particular, to support researchers who study team behavior and performance in C2. In this study, we used material from several C2 scenarios which had been studied previously. We interviewed the participating researchers about their work profiles, evaluated machine learning approaches for the purpose of supporting their work, and devised a support system based on the results of our evaluations. In the study, we report on empirical results regarding the precision possible to achieve when automatically classifying messages in C2 workflows and present some ramifications of these results for the design of support tools for communication analysis. Finally, we report how the prototype support system for clustering messages in C2 communications was perceived by the users, the utility of the design criteria from case study 1 when applied to communication analysis, and the possibilities for using text clustering as a concrete support tool in communication analysis.
In conclusion, we discuss how the affordances and constraints of intelligent decision support systems for C2 relate to our design criteria, and how the characteristics of each work situation demand new adaptations of the way in which intelligent support systems are used.
No 1386
Quality-Driven Synthesis and Optimization of Embedded Control Systems
Soheil Samii
This thesis addresses several synthesis and optimization issues for embedded control systems. Examples of such systems are automotive and avionics systems in which physical processes are controlled by embedded computers through sensor and actuator interfaces. The execution of multiple control applications, spanning several computation and communication components, leads to a complex temporal behavior that affects control quality. The relationship between system timing and control quality is a key issue to consider across the control design and computer implementation phases in an integrated manner. We present such an integrated framework for scheduling, controller synthesis, and quality optimization for distributed embedded control systems.
At runtime, an embedded control system may need to adapt to environmental changes that affect its workload and computational capacity. Examples of such changes, which inherently increase the design complexity, are mode changes, component failures, and resource usages of the running control applications. For these three cases, we present trade-offs among control quality, resource usage, and the time complexity of design and runtime algorithms for embedded control systems.
The solutions proposed in this thesis have been validated by extensive experiments. The experimental results demonstrate the efficiency and importance of the presented techniques.
No 1419
Geographic Routing in Intermittently-connected Mobile Ad Hoc Networks: Algorithms and Performance Models
Erik Kuiper
Communication is a key enabler for cooperation. Thus, to support efficient communication, humanity has continuously strived to improve the communication infrastructure. This infrastructure has evolved from heralds and ridden couriers to digital telecommunication infrastructures based on electrical wires, optical fibers, and radio links. While the telecommunication infrastructure efficiently transports information all over the world, there are situations when it is not available or operational. In many military operations, and in disaster areas, one cannot rely on the telecommunication infrastructure to support communication since it is either broken or does not exist. To provide communication capability in its absence, ad hoc networking technology can be used to provide a dynamic peer-based communication mechanism. In this thesis we study geographic routing in intermittently connected mobile ad hoc networks (IC-MANETs).
For routing in IC-MANETs we have developed a beacon-less delay-tolerant geographic routing protocol named LAROD (location aware routing for delay-tolerant networks) and the delay-tolerant location service LoDiS (location dissemination service). To be able to evaluate these protocols in a realistic environment, we have used a military reconnaissance mission where unmanned aerial vehicles employ distributed coordination of their monitoring using pheromones. To be able to predict routing performance more efficiently than by the use of simulation, we have developed a mathematical framework that can efficiently predict the routing performance of LAROD-LoDiS. This framework, the forward-wait framework, provides a relationship between delivery probability, distance, and delivery time. Provided with scenario-specific data, the forward-wait framework can predict the expected scenario packet delivery ratio.
LAROD-LoDiS has been evaluated in the network simulator ns-2 against Spray and Wait, a leading delay-tolerant routing protocol, and shown to have a competitive edge, both in terms of delivery ratio and overhead. Our evaluations also confirm that the routing performance is heavily influenced by the mobility pattern. This fact stresses the need for representative mobility models when routing protocols are evaluated.
No 1451
Text Harmonization Strategies for Phrase-Based Statistical Machine Translation
Sara Stymne
In this thesis I aim to improve phrase-based statistical machine translation (PBSMT) in a number of ways by the use of text harmonization strategies. PBSMT systems are built by training statistical models on large corpora of human translations. This architecture generally performs well for languages with similar structure. If the languages differ, for example with respect to word order or morphological complexity, the standard methods tend not to work well. I address this problem through text harmonization, by making texts more similar before training and applying a PBSMT system.
I investigate how text harmonization can be used to improve PBSMT with a focus on four areas: compounding, definiteness, word order, and unknown words. For the first three areas, the focus is on linguistic differences between languages, which I address by applying transformation rules, using either rule-based or machine learning-based techniques, to the source or target data. For the last area, unknown words, I harmonize the translation input to the training data by replacing unknown words with known alternatives.
I show that translation into languages with closed compounds can be improved by splitting and merging compounds. I develop new merging algorithms that outperform previously suggested algorithms and show how part-of-speech tags can be used to improve the order of compound parts. Scandinavian definite noun phrases are identified as a problem for PBSMT in translation into Scandinavian languages and I propose a preprocessing approach that addresses this problem and gives large improvements over a baseline. Several previous proposals for how to handle differences in reordering exist; I propose two types of extensions, iterating reordering and word alignment and using automatically induced word classes, which allow these methods to be used for less-resourced languages. Finally I identify several ways of replacing unknown words in the translation input, most notably a spell checking-inspired algorithm, which can be trained using character-based PBSMT techniques.
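A toy version of the split-and-merge idea is sketched below: compound parts are marked with a trailing symbol when compounds are split before training, and marked parts are concatenated again in the translation output. The marker convention and the example tokens are assumptions; the merging algorithms developed in the thesis additionally use information such as part-of-speech tags.

    # Toy sketch of compound handling around a PBSMT system: compounds are split
    # into parts marked with a trailing "+" before training/translation, and the
    # marked parts are concatenated again in the output. The marker convention
    # and the example tokens are illustrative assumptions.

    def merge_compound_parts(tokens, marker='+'):
        out = []
        for tok in tokens:
            if out and out[-1].endswith(marker):          # previous token is a compound part
                out[-1] = out[-1][:-len(marker)] + tok    # join it with the current token
            else:
                out.append(tok)
        return out

    # ['ett', 'data+', 'bas+', 'system'] -> ['ett', 'databassystem']
    print(merge_compound_parts(['ett', 'data+', 'bas+', 'system']))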
Overall I present several approaches for extending PBSMT by the use of pre- and postprocessing techniques for text harmonization, and show experimentally that these methods work. Text harmonization methods are an efficient way to improve statistical machine translation within the phrase-based approach, without resorting to more complex models.
No 1455
Modeling the Role of Energy Management in Embodied Cognition
Alberto Montebelli
The quest for adaptive and autonomous robots, flexible enough to smoothly comply with unstructured environments and operate in close interaction with humans, seems to require a deep rethinking of classical engineering methods. The adaptivity of natural organisms, whose cognitive capacities are rooted in their biological organization, is an obvious source of inspiration. While approaches that highlight the role of embodiment in both cognitive science and cognitive robotics are gathering momentum, the crucial role of internal bodily processes as foundational components of the biological mind is still largely neglected.
This thesis advocates a perspective on embodiment that emphasizes the role of non-neural bodily dynamics in the constitution of cognitive processes in both natural and artificial systems. In the first part, it critically examines the theoretical positions that have influenced current theories and the author's own position. The second part presents the author's experimental work, based on the computer simulation of simple robotic agents engaged in energy-related tasks. Proto-metabolic dynamics, modeled on the basis of actual microbial fuel cells for energy generation, constitute the foundations of a powerful motivational engine. Following a history of adaptation, proto-metabolic states bias the robot towards specific subsets of behaviors, viably attuned to the current context, and facilitate a swift re-adaptation to novel tasks. Proto-metabolic dynamics put the situated nature of the agent-environment sensorimotor interaction within a perspective that is functional to the maintenance of the robot's overall 'survival'. Adaptive processes tend to convert metabolic constraints into opportunities, branching into a rich and energetically viable behavioral diversity.
No 1465
Biologically-Based Interactive Neural Network Models for Visual Attention and Object Recognition
Mohammad Saifullah
The main focus of this thesis is to develop biologically-based computational models for object recognition. A series of models for attention and object recognition were developed in order of increasing functionality and complexity. These models are based on information processing in the primate brain, and are especially inspired by the theory of visual information processing along the two parallel processing pathways of the primate visual cortex. To capture the true essence of incremental, constraint satisfaction style processing in the visual system, interactive neural networks were used for implementing our models. Results from eye-tracking studies on the relevant visual tasks, as well as our hypothesis regarding the information processing in the primate visual system, were implemented in the models and tested with simulations.
As a first step, a model based on the ventral pathway was developed to recognize single objects. Through systematic testing, structural and algorithmic parameters of these models were fine-tuned for performing their task optimally. In the second step, the model was extended by considering the dorsal pathway, which enables simulation of visual attention as an emergent phenomenon. The extended model was then investigated for visual search tasks. In the last step, we focussed on occluded and overlapped object recognition. A couple of eye-tracking studies were conducted in this regard, and on the basis of the results we made some hypotheses regarding information processing in the primate visual system. The models were further advanced along the lines of the presented hypotheses, and simulated on the tasks of occluded and overlapped object recognition.
On the basis of the results and analysis of our simulations we have further found that the generalization performance of interactive hierarchical networks improves with the addition of a small amount of Hebbian learning to an otherwise purely error-driven learning. We also concluded that the size of the receptive fields in our networks is an important parameter for the generalization task and depends on the object of interest in the image. Our results show that networks using hard-coded feature extraction perform better than networks that use Hebbian learning for developing feature detectors. We have successfully demonstrated the emergence of visual attention within an interactive network and also the role of context in the search task. Simulation results with occluded and overlapped objects support our extended interactive processing approach to the segmentation-recognition issue, which is a combination of the interactive and top-down approaches. Furthermore, the simulation behavior of our models is in line with known human behavior for similar tasks.
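The mixture of error-driven and Hebbian learning mentioned above can be pictured with a single-weight update of the following form. The delta-rule error term, the simple Hebbian term, and the mixing factor are generic textbook choices assumed for illustration, not the exact learning rules of the interactive networks used in the thesis.

    # Sketch: mixing a small proportion of Hebbian learning into an otherwise
    # error-driven (delta rule) weight update. The mixing factor k_hebb and the
    # learning rate are illustrative assumptions.

    def update_weight(w, pre, post, target, lrate=0.1, k_hebb=0.05):
        error_term = (target - post) * pre    # error-driven (delta rule) component
        hebb_term = post * pre                # simple Hebbian component
        return w + lrate * ((1 - k_hebb) * error_term + k_hebb * hebb_term)

    print(update_weight(w=0.2, pre=1.0, post=0.6, target=1.0))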
In general, the work in this thesis will improve the understanding and performance of biologically-based interactive networks for object recognition and provide a biologically-plausible solution to recognition of occluded and overlapped objects. Moreover, our models provide some suggestions for the underlying neural mechanism and strategies behind biological object recognition.
No 1490
Testing and Logic Optimization Techniques for Systems on Chip
Tomas Bengtsson
Today it is possible to integrate more than one billion transistors onto a single chip. This has enabled implementation of complex functionality in hand held gadgets, but handling such complexity is far from trivial. The challenges of handling this complexity are mostly related to the design and testing of the digital components of these chips.
A number of well-researched disciplines must be employed in the efficient design of large and complex chips. These include utilization of several abstraction levels, design of appropriate architectures, several different classes of optimization methods, and development of testing techniques. This thesis contributes mainly to the areas of design optimization and testing methods.
In the area of testing this thesis contributes methods for testing of on-chip links connecting different clock domains. This includes testing for defects that introduce unacceptable delay, lead to excessive crosstalk and cause glitches, which can produce errors. We show how pure digital components can be used to detect such defects and how the tests can be scheduled efficiently.
To manage increasing test complexity, another contribution proposes to raise the abstraction level of fault models from logic level to system level. A set of system level fault models for a NoC-switch is proposed and evaluated to demonstrate their potential.
In the area of design optimization, this thesis focuses primarily on logic optimization. Two contributions for Boolean decomposition are presented. The first one is a fast heuristic algorithm that finds non-disjoint decompositions for Boolean functions. This algorithm operates on a Binary Decision Diagram. The other contribution is a fast algorithm for detecting whether a function is likely to benefit from optimization for architectures with a gate depth of three with an XOR-gate as the third gate.
No 1481
Improving Software Security by Preventing Known Vulnerabilities
David Byers
From originally being of little concern, security has become a crucial quality factor in modern software. The risk associated with software insecurity has increased dramatically with increased reliance on software and a growing number of threat agents. Nevertheless, developers still struggle with security. It is often an afterthought, bolted on late in development or even during deployment. Consequently the same kinds of vulnerabilities appear over and over again.
Building security in to software from its inception and constantly adapting processes and technology to changing threats and understanding of security can significantly contribute to establishing and sustaining a high level of security.
This thesis presents the sustainable software security process, the S3P, an approach to software process improvement for software security that focuses on preventing known vulnerabilities by addressing their underlying causes, and sustaining a high level of security by adapting the process to new vulnerabilities as they become known. The S3P is designed to overcome many of the known obstacles to software process improvement. In particular, it ensures that existing knowledge can be used to its full potential and that the process can be adapted to nearly any environment and used in conjunction with other software security processes and security assurance models.
The S3P is a three-step process based on semi-formal modeling of vulnerabilities, ideally supported by collaborative tools. Such proof-of-concept tools were developed for all parts of the process as part of the SHIELDS project.
The first two steps of the S3P consist in determining the potential causes of known vulnerabilities at all stages of software development, then identifying measures that would prevent each individual cause. These steps are performed using visual modeling languages with well-defined semantics and a modeling workflow. With tool support, modeling effort can be progressively reduced through collaboration and use of pre-existing models.
Next, the costs of all potential measures are estimated using any suitable method. This thesis uses pairwise comparisons in order to support qualitative judgements. The models and costs yield a Boolean optimization problem that is solved using a search-based heuristic to identify the best set of measures to prevent selected vulnerabilities.
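The shape of this selection step resembles a weighted covering problem: choose measures whose combined coverage includes every targeted vulnerability cause at low total cost. The greedy heuristic below is only meant to convey that problem shape; the causes, measures, and costs are invented, and the thesis solves its Boolean optimization problem with its own search-based heuristic rather than this greedy rule.

    # Sketch: greedy choice of security measures covering all targeted
    # vulnerability causes at low total cost. Causes, measures, and costs are
    # illustrative assumptions.

    def choose_measures(causes, measures):
        """measures: {name: (cost, set_of_causes_prevented)}."""
        uncovered, chosen = set(causes), []
        while uncovered:
            # pick the measure with the lowest cost per newly covered cause
            name, (cost, covers) = min(
                ((n, m) for n, m in measures.items() if m[1] & uncovered),
                key=lambda item: item[1][0] / len(item[1][1] & uncovered))
            chosen.append(name)
            uncovered -= covers
        return chosen

    measures = {
        'input_validation_training': (3.0, {'unchecked_input', 'format_string'}),
        'static_analysis_tool':      (5.0, {'unchecked_input', 'buffer_overflow'}),
        'code_review_checklist':     (2.0, {'format_string'}),
    }
    print(choose_measures({'unchecked_input', 'format_string', 'buffer_overflow'}, measures))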
Empirical evaluation of the various steps of the process has verified a number of key aspects: the modeling process is easy to learn and apply, and the method is perceived by developers as providing value and improving security. Early evaluation results were also used to refine certain aspects of the S3P.
The modeling languages that were introduced in the S3P have since been enhanced to support other applications. This thesis presents security goal models (SGMs), a language that subsumes several security-related modeling languages to unify modeling of threats, attacks, vulnerabilities, activities, and security goals. SGMs have formal semantics and are sufficiently expressive to support applications as diverse as automatic run-time testing, static analysis, and code inspection. Proof-of-concept implementations of these applications were developed as part of the SHIELDS project.
Finally, the thesis discusses how individual components of the S3P can be used in situations where the full process is inappropriate.
No 1496
Exploiting Structure in CSP-related Problems
Tommy Färnqvist
In this thesis we investigate the computational complexity and approximability of computational problems from the constraint satisfaction framework. An instance of a constraint satisfaction problem (CSP) has three components: a set V of variables, a set D of domain values, and a set of constraints C. The constraints specify a set of variables and associated local conditions on the domain values allowed for each variable, and the objective of a CSP is to assign domain values to the variables, subject to these constraints.
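For readers unfamiliar with the framework, the following minimal backtracking solver makes the definition concrete: it assigns domain values to variables and checks a set of constraints. The instance (not-equal constraints over a two-value domain) and the brute-force search are purely illustrative and have nothing to do with the structural restrictions studied in the thesis.

    # Minimal CSP sketch: backtracking search assigning domain values to
    # variables so that all constraints are satisfied. The instance below is an
    # illustrative assumption (a tiny "not equal" / 2-colouring style problem).

    def solve(variables, domain, constraints, assignment=None):
        assignment = dict(assignment or {})
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domain:
            assignment[var] = value
            if all(check(assignment) for check in constraints):
                result = solve(variables, domain, constraints, assignment)
                if result is not None:
                    return result
            del assignment[var]
        return None

    def not_equal(x, y):
        # a constraint is only checked once both of its variables are assigned
        return lambda a: x not in a or y not in a or a[x] != a[y]

    constraints = [not_equal('v1', 'v2'), not_equal('v2', 'v3')]
    print(solve(['v1', 'v2', 'v3'], [0, 1], constraints))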
The first main part of the thesis is concerned with studying restrictions on the structure induced by the constraints on the variables for different computational problems related to the CSP. In particular, we examine how to exploit various graph, and hypergraph, acyclicity measures from the literature to find classes of relational structures for which our computational problems become efficiently solvable. Among the problems studied are those where, in addition to the constraints of a CSP, lists of allowed domain values for each variable are specified (LHom). We also study variants of the CSP where the objective is changed to: counting the number of possible assignments of domain values to the variables given the constraints of a CSP (#CSP), minimising or maximising the cost of an assignment satisfying all constraints given various different ways of assigning costs to assignments (MinHom, Max Sol, and VCSP), or maximising the number of satisfied constraints (Max CSP). In several cases, our investigations uncover the largest known (or possible) classes of relational structures for which our problems are efficiently solvable. Moreover, we take a different view on our optimisation problems MinHom and VCSP; instead of considering fixed arbitrary values for some (hyper)graph acyclicity measure associated with the underlying CSP, we consider the problems parameterised by such measures in combination with other basic parameters such as domain size and maximum arity of constraints. In this way, we identify numerous combinations of the considered parameters which make these optimisation problems admit fixed-parameter algorithms.
In the second part of the thesis, we explore the approximability properties of the (weighted) Max CSP problem for graphs. This is a problem which is known to be approximable within some constant ratio, but not believed to be approximable within an arbitrarily small constant ratio. Thus it is of interest to determine the best ratio within which the problem can be approximated, or at least give some bound on this constant. We introduce a novel method for studying approximation ratios which, in the context of Max CSP for graphs, takes the form of a new binary parameter on the space of all graphs. This parameter may, informally, be thought of as a sort of distance between two graphs; knowing the distance between two graphs, we can bound the approximation ratio of one of them, given a bound for the other.
No 1503
Contributions to Specification, Implementation, and Execution of Secure Software
John Wilander
This thesis contributes to three research areas in software security, namely security requirements and intrusion prevention via static analysis and runtime detection.
We have investigated current practice in security requirements by doing a field study of eleven requirement specifications on IT systems. The conclusion is that security requirements are poorly specified due to three things: inconsistency in the selection of requirements, inconsistency in level of detail, and almost no requirements on standard security solutions. A follow-up interview study addressed the reasons for the inconsistencies and the impact of poor security requirements. It shows that the projects had relied heavily on in-house security competence and that mature producers of software compensate for poor requirements in general but not in the case of security and privacy requirements specific to the customer domain.
Further, we have investigated the effectiveness of five publicly available static analysis tools for security. The test results show high rates of false positives for the tools building on lexical analysis and low rates of true positives for the tools building on syntactical and semantical analysis. As a first step toward a more effective and generic solution we propose decorated dependence graphs as a way of modeling and pattern matching security properties of code. The models can be used to characterize both good and bad programming practice as well as visually explain code properties to programmers. We have implemented a prototype tool that demonstrates how such models can be used to detect integer input validation flaws.
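The flavour of such graph-based matching can be conveyed by a small reachability check over a data dependence graph: does a value from an input source reach a sensitive operation without passing a validation node? The graph encoding, node names, and the notion of a validator below are assumptions for illustration and are much simpler than the decorated dependence graphs proposed in the thesis.

    # Sketch: check whether tainted input can reach a sensitive sink in a data
    # dependence graph without passing through a validation node. The graph,
    # node names, and validator set are illustrative assumptions.

    def unvalidated_path_exists(graph, source, sink, validators):
        """graph: {node: [successor, ...]} data dependence edges."""
        stack, seen = [source], set()
        while stack:
            node = stack.pop()
            if node == sink:
                return True              # sink reached without passing validation
            if node in seen or node in validators:
                continue                 # validated values are considered safe
            seen.add(node)
            stack.extend(graph.get(node, []))
        return False

    graph = {'read_int': ['idx'],
             'idx': ['check_bounds', 'array_write'],
             'check_bounds': ['array_write']}
    print(unvalidated_path_exists(graph, 'read_int', 'array_write', {'check_bounds'}))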
Finally, we investigated the effectiveness of publicly available tools for runtime prevention of buffer overflow attacks. Our initial comparison showed that the best tool as of 2003 was effective against only 50 % of the attacks and there were six attack forms which none of the tools could handle. A follow-up study includes the release of a buffer overflow testbed which covers 850 attack forms. Our evaluation results show that the most popular, publicly available countermeasures cannot prevent all of these buffer overflow attack forms.
No 1506
Creating & Enabling the Useful Service Discovery Experience: The Perfect Recommendation Does Not Exist
Magnus Ingmarsson
We are rapidly entering a world with an immense amount of services and devices available to humans and machines. This is a promising future; however, there are at least two major challenges for using these services and devices: (1) they have to be found and (2) after being found, they have to be selected amongst. A significant difficulty lies not only in finding most of the available services, but in presenting the most useful ones. In most cases, there may be too many found services and devices to select from.
Service discovery needs to become more aimed towards humans and less towards machines. The service discovery challenge is especially prevalent in ubiquitous computing. In particular, service and device flux, human overloading, and service relevance are crucial. This thesis addresses the quality of use of services and devices by introducing a sophisticated discovery model through the use of new layers in service discovery. This model allows use of services and devices when current automated service discovery and selection would be impractical, by providing service suggestions based on user activities, domain knowledge, and world knowledge. To explore what happens when such a system is in place, a Wizard-of-Oz study was conducted in a command and control setting.
To address service discovery in ubiquitous computing new layers and a test platform were developed together with a method for developing and evaluating service discovery systems. The first layer, which we call the Enhanced Traditional Layer (ETL), was studied by developing the ODEN system and including the ETL within it. ODEN extends the traditional, technical service discovery layer by introducing ontology-based semantics and reasoning engines. The second layer, the Relevant Service Discovery Layer, was explored by incorporating it into the MAGUBI system. MAGUBI addresses the human aspects in the challenge of relevant service discovery by employing common-sense models of user activities, domain knowledge, and world knowledge in combination with rule engines.
The RESPONSORIA system provides a web-based evaluation platform with a desktop look and feel. This system explores service discovery in a service-oriented architecture setting. RESPONSORIA addresses a command and control scenario for rescue services where multiple actors and organizations work together at a municipal level. RESPONSORIA was the basis for the Wizard-of-Oz evaluation employing rescue services professionals. The results highlighted the importance of service naming and presentation to the user. Furthermore, there is disagreement among users regarding the optimal service recommendation, but the results indicated that good recommendations are valuable and that the system can be seen as a partner.
No 1547
Model-Based Verification of Dynamic System Behavior against Requirements: Method, Language, and Tool
Wladimir Schamai
Modeling and simulation of complex systems is at the heart of any modern engineering activity. Engineers strive to predict the behavior of the system under development in order to get answers to particular questions long before physical prototypes or the actual system are built and can be tested in real life.
An important question is whether a particular system design fulfills or violates requirements that are imposed on the system under development. When developing complex systems, such as spacecraft, aircraft, cars, power plants, or any subsystem of such a system, this question becomes hard to answer simply because the systems are too complex for engineers to be able to create mental models of them. Nowadays it is common to use computer-supported modeling languages to describe complex physical and cyber-physical systems. The situation is different when it comes to describing requirements. Requirements are typically written in natural language. Unfortunately, natural languages fail at being unambiguous, in terms of both syntax and semantics. Automated processing of natural-language requirements is a challenging task which still is too difficult to accomplish via computer for this approach to be of significant use in requirements engineering or verification.
This dissertation proposes a new approach to design verification using simulation models that include formalized requirements. The main contributions are a new method that is supported by a new language and tool, along with case studies. The method enables verification of system dynamic behavior designs against requirements using simulation models. In particular, it shows how natural-language requirements and scenarios are formalized. Moreover, it presents a framework for automating the composition of simulation models that are used for design verification, evaluation of verification results, and sharing of new knowledge inferred in verification sessions.
A new language called ModelicaML was developed to support the new method. It enables requirement formalization and integrates UML and Modelica. The language and the developed algorithms for automation are implemented in a prototype that is based on Eclipse Papyrus UML, Acceleo, and Xtext for modeling, and OpenModelica tools for simulation. The prototype is used to illustrate the applicability of the new method to examples from industry. The case studies presented start with sets of natural-language requirements and show how they are translated into models. Then, designs and verification scenarios are modeled, and simulation models are composed and simulated automatically. The simulation results produced are then used to draw conclusions on requirement violations; this knowledge is shared using semantic web technology.
This approach supports the development and dynamic verification of cyber-physical systems, including both hardware and software components. ModelicaML facilitates a holistic view of the system by enabling engineers to model and verify multi-domain system behavior using mathematical models and state-of-the-art simulation capabilities. Using this approach, requirement inconsistencies, incorrectness, or infeasibilities, as well as design errors, can be detected and avoided early on in system development. The artifacts created can be reused for product verification in later development stages.
No 1551
Simulations
Henrik Svensson
This thesis is concerned with explanations of embodied cognition as internal simulation. The hypothesis is that several cognitive processes can be explained in terms of predictive chains of simulated perceptions and actions.
In other words, perceptions and actions are reactivated internally by the nervous system to be used in cognitive phenomena such as mental imagery.
This thesis contributes by advancing the theoretical foundations of simulations and the empirical grounds on which they are based, including a review of the empirical evidence for the existence of simulated perceptions and actions in cognition, a clarification of the representational function of simulations in cognition, as well as identifying implicit, bodily and environmental anticipation as key mechanisms underlying such simulations. The thesis also develops the "inception of simulation" hypothesis, which suggests that dreaming has a function in the development of simulations by forming associations between experienced, non-experienced but realistic, and even unrealistic perceptions during early childhood. The thesis further investigates some aspects of simulations and the "inception of simulation" hypothesis by using simulated robot models based on echo state networks. These experiments suggest that it is possible for a simple robot to develop internal simulations by associating simulated perceptions and actions, and that dream-like experiences can be beneficial for the development of such simulations.
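A minimal echo state network of the general kind used for such robot models can be sketched as follows: a fixed random recurrent reservoir is driven by the input, and only a linear readout is trained, here by ridge regression on a next-value prediction task. The sizes, scaling factors, and the task are generic textbook choices assumed for illustration, not the setup of the thesis experiments.

    # Minimal echo state network sketch: fixed random reservoir, trained linear
    # readout (ridge regression). Sizes, scaling, and the prediction task are
    # illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res = 1, 50
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # spectral radius < 1

    def run_reservoir(inputs):
        x, states = np.zeros(n_res), []
        for u in inputs:
            x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
            states.append(x)
        return np.array(states)

    u = np.sin(np.linspace(0, 20, 400))              # train readout to predict next value
    X, y = run_reservoir(u[:-1]), u[1:]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    print(float(np.mean((X @ W_out - y) ** 2)))      # readout training error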
No 1559
Stability of Adaptive Distributed Real-Time Systems with Dynamic Resource Management
Sergiu Rafiliu
Today's embedded distributed real-time systems are exposed to large variations in resource usage due to complex software applications, sophisticated hardware platforms, and the impact of their run-time environment. As efficiency becomes more important, the applications running on these systems are extended with on-line resource managers whose job is to adapt the system in the face of such variations. Distributed systems are often heterogeneous, meaning that the hardware platform consists of computing nodes with different performance, operating systems, and scheduling policies, linked through one or more networks using different protocols.
In this thesis we explore whether resource managers used in such distributed embedded systems are stable, meaning that the system's resource usage is controlled under all possible run-time scenarios. Stability implies a bounded worst-case behavior of the system and can be linked with classic real-time systems' properties such as bounded response times for the software applications. In the case of distributed systems, the stability problem is particularly hard because software applications distributed over the different resources generate complex, cyclic dependencies between the resources that need to be taken into account. In this thesis we develop a detailed mathematical model of an adaptive, distributed real-time system and we derive conditions that, if satisfied, guarantee its stability.
No 1581
Performance-aware Component Composition for GPU-based systems
Usman Dastgeer
This thesis addresses issues associated with efficiently programming modern heterogeneous GPU-based systems, containing multicore CPUs and one or more programmable Graphics Processing Units (GPUs). We use ideas from component-based programming to address programming, performance and portability issues of these heterogeneous systems. Specifically, we present three approaches that all use the idea of having multiple implementations for each computation; performance is achieved/retained either a) by selecting a suitable implementation for each computation on a given platform or b) by dividing the computation work across different implementations running on CPU and GPU devices in parallel.
In the first approach, we work on a skeleton programming library (SkePU) that provides high-level abstraction while making intelligent implementation selection decisions underneath either before or during the actual program execution. In the second approach, we develop a composition tool that parses extra information (metadata) from XML files, makes certain decisions online, and, in the end, generates code for making the final decisions at runtime. The third approach is a framework that uses source-code annotations and program analysis to generate code for the runtime library to make the selection decision at runtime. With a generic performance modeling API alongside program analysis capabilities, it supports online tuning as well as complex program transformations.
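The shared idea of keeping multiple implementations and selecting among them can be pictured with the sketch below, which times each available variant on a small sample and dispatches to the fastest one. The variant functions and the purely measurement-based selection policy are assumptions for illustration; they are not SkePU's or the composition tool's actual selection mechanisms.

    # Sketch: select among several implementations of the same computation by
    # timing each on a sample input, then dispatch to the fastest. The variants
    # below are stand-ins for CPU/GPU implementations.

    import time

    def variant_sequential(data):
        return [x * x for x in data]

    def variant_chunked(data, chunk=1024):           # stand-in for a parallel variant
        out = []
        for i in range(0, len(data), chunk):
            out.extend(x * x for x in data[i:i + chunk])
        return out

    def select_implementation(variants, sample):
        timings = {}
        for name, fn in variants.items():
            start = time.perf_counter()
            fn(sample)
            timings[name] = time.perf_counter() - start
        return min(timings, key=timings.get)

    variants = {'sequential': variant_sequential, 'chunked': variant_chunked}
    print('dispatching to:', select_implementation(variants, list(range(10000))))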
These approaches differ in terms of genericity, intrusiveness, capabilities and knowledge about the program source-code; however, they all demonstrate the usefulness of component programming techniques for programming GPU-based systems. With experimental evaluation, we demonstrate how all three approaches, although different in their own way, provide good performance on different GPU-based systems for a variety of applications.
No 1602
Reinforcement Learning of Locomotion based on Central Pattern Generators
Cai Li
Locomotion learning for robotics is an interesting and challenging area in which the movement capabilities of animals have been deeply investigated and the acquired knowledge has been transferred into modelling locomotion on robots. What modellers are required to understand is what structure can represent locomotor systems in different animals and how such animals develop varied and dexterous locomotion capabilities. Notwithstanding the depth of research in the area, modelling locomotion requires a deep rethinking.
In this thesis, under the umbrella of embodied cognition, a neural-body-environment interaction is emphasised and regarded as the solution to locomotion learning/development. Central pattern generators (CPGs) are introduced in the first part (Chapter 2) to generally interpret the mechanism of locomotor systems in animals. With a deep investigation of the structure of CPGs and inspiration from human infant development, a layered CPG architecture with baseline motion generation and dynamics adaptation interfaces is proposed. In the second part, reinforcement learning (RL) is elucidated as a good method for dealing with locomotion learning from the perspectives of psychology, neuroscience and robotics (Chapter 4). Several continuous-space RL techniques (e.g. episodic natural actor critic, policy learning by weighting explorations with returns, and continuous action space learning automaton) are introduced for practical use (Chapter 3). With the knowledge of CPGs and RL, the architecture and concept of CPG-Actor-Critic is constructed. Finally, experimental work based on published papers is highlighted along the path of my PhD research (Chapter 5). This includes the implementation of CPGs and the learning on the NAO robot for crawling and walking. The implementation is also extended to test the generalizability to different morphologies (the ghostdog robot). The contribution of this thesis is discussed from two angles: the investigation of the CPG architecture and the implementation (Chapter 6).
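A very small example of the rhythmic output a CPG layer can produce is a pair of coupled phase oscillators kept in anti-phase, as sketched below; it could stand in for alternating left/right joint set-points. The coupling form, gains, and frequencies are generic choices assumed for illustration and are not the layered CPG architecture or the CPG-Actor-Critic developed in the thesis.

    # Sketch: two coupled phase oscillators driven towards anti-phase, a minimal
    # stand-in for a CPG producing alternating rhythmic set-points. Frequency,
    # coupling gain, and time step are illustrative assumptions.

    import math

    def simulate_cpg(freq=1.0, coupling=2.0, dt=0.01, steps=500):
        phase = [0.0, 0.5]                           # initial phases (radians)
        outputs = []
        for _ in range(steps):
            d0 = 2 * math.pi * freq + coupling * math.sin(phase[1] - phase[0] - math.pi)
            d1 = 2 * math.pi * freq + coupling * math.sin(phase[0] - phase[1] - math.pi)
            phase[0] += d0 * dt
            phase[1] += d1 * dt
            outputs.append((math.sin(phase[0]), math.sin(phase[1])))   # joint set-points
        return outputs

    left, right = simulate_cpg()[-1]
    print(left, right)   # after settling, the two outputs are roughly in anti-phase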
No 1663
On Some Combinatorial Optimization Problems: Algorithms and Complexity
Hannes Uppman
This thesis is about the computational complexity of several classes of combinatorial optimization problems, all related to constraint satisfaction problems.
A constraint language consists of a domain and a set of relations on the domain. For each such language there is a constraint satisfaction problem (CSP). In this problem we are given a set of variables and a collection of constraints, each of which constrains some of the variables with a relation from the language. The goal is to determine whether domain values can be assigned to the variables in a way that satisfies all constraints. An important question is for which constraint languages the corresponding CSP can be solved in polynomial time. We study this kind of question for optimization problems related to CSPs.
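As a concrete illustration of this CSP definition, the following small, self-contained sketch decides a toy instance over a three-element domain by brute force; the instance and relation are invented, and the complexity results in the thesis of course concern far more refined algorithms.

    from itertools import product

    domain = [0, 1, 2]
    variables = ["x", "y", "z"]
    # Each constraint applies a relation (a set of allowed tuples) to a scope of variables.
    neq = {(a, b) for a in domain for b in domain if a != b}   # binary disequality relation
    constraints = [(("x", "y"), neq), (("y", "z"), neq), (("x", "z"), neq)]

    def satisfiable(variables, domain, constraints):
        """Return a satisfying assignment if one exists, otherwise None."""
        for values in product(domain, repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if all(tuple(assignment[v] for v in scope) in rel for scope, rel in constraints):
                return assignment
        return None

    print(satisfiable(variables, domain, constraints))   # e.g. {'x': 0, 'y': 1, 'z': 2}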
The main focus is on extended minimum cost homomorphism problems. These are optimization versions of CSPs where instances come with an objective function given by a weighted sum of unary cost functions, and where the goal is not only to determine if a solution exists, but to find one of minimum cost. We prove a complete classification of the complexity for these problems on three-element domains. We also obtain a classification for the so-called conservative case.
Another class of combinatorial optimization problems are the surjective maximum CSPs. These problems are variants of CSPs where a non-negative weight is attached to each constraint, and the objective is to find a surjective mapping of the variables to values that maximizes the weighted sum of satisfied constraints. The surjectivity requirement causes these problems to behave quite differently from, for example, the minimum cost homomorphism problems, and many powerful techniques are not applicable. We prove a dichotomy for the complexity of the problems in this class on two-element domains. An essential ingredient in the proof is an algorithm that solves a generalized version of the minimum cut problem. This algorithm might be of independent interest.
In a final part we study properties of NP-hard optimization problems. This is done with the aid of restricted forms of polynomial-time reductions that, for example, preserve solvability in sub-exponential time. Two classes of optimization problems similar to those discussed above are considered, and for both we obtain what may be called an easiest NP-hard problem. We also establish some connections to the exponential time hypothesis.
No 1664
Tools and Methods for Analysis, Debugging, and Performance Improvement of Equation-Based Models
Martin Sjölund
Equation-based object-oriented (EOO) modeling languages such as Modelica provide a convenient, declarative method for describing models of cyber-physical systems. Because of the ease of use of EOO languages, large and complex models can be built with limited effort.
However, current state-of-the-art tools do not provide the user with enough information when errors appear or simulation results are wrong. It is of paramount importance that such tools should give the user enough information to correct errors or understand where the problems that lead to wrong simulation results are located. However, understanding the model translation process of an EOO compiler is a daunting task that not only requires knowledge of the numerical algorithms that the tool executes during simulation, but also the complex symbolic transformations being performed.
As part of this work, methods have been developed and explored where the EOO tool, an enhanced Modelica compiler, records the transformations during the translation process in order to provide better diagnostics, explanations, and analysis. This information is used to generate better error-messages during translation. It is also used to provide better debugging for a simulation that produces unexpected results or where numerical methods fail.
Meeting deadlines is particularly important for real-time applications. It is usually essential to identify possible bottlenecks and either simplify the model or give hints to the compiler that enable it to generate faster code. The recorded information can also be used, when profiling and measuring execution times of parts of the model, to find out why a particular system model executes slowly.
Combined with debugging information, it is possible to find out why a given system of equations is slow to solve, which helps in understanding what can be done to simplify the model. A tool with a graphical user interface has been developed to make debugging and performance profiling easier. Both debugging and profiling have been combined into a single view so that performance metrics are mapped to equations, which are in turn mapped to debugging information.
The algorithmic part of Modelica was extended with meta-modeling constructs (MetaModelica) for language modeling. In this context a quite general approach to debugging and compilation from (extended) Modelica to C code was developed. That makes it possible to use the same executable format for simulation executables as for compiler bootstrapping when the compiler written in MetaModelica compiles itself.
Finally, a method and tool prototype suitable for speeding up simulations has been developed. It works by partitioning the model at appropriate places and compiling a simulation executable for a suitable parallel platform.
No 1666
Contributions to Simulation of Modelica Models on Data-Parallel Multi-Core Architectures
Kristian Stavåker
Modelica is an object-oriented, equation-based modeling and simulation language being developed through an international effort by the Modelica Association. With Modelica it is possible to build computationally demanding models; however, simulating such models might take a considerable amount of time. Therefore, techniques for utilizing parallel multi-core architectures for faster simulations are desirable. In this thesis the topic of simulation of Modelica on parallel architectures in general, and on graphics processing units (GPUs) in particular, is explored. GPUs support code that can be executed in a data-parallel fashion. It is also possible to connect and run several GPUs together, which opens opportunities for even more parallelism. In this thesis several approaches regarding simulation of Modelica models on GPUs and multi-core architectures are explored.
In this thesis the topic of expressing and solving partial differential equations (PDEs) in the context of Modelica is also explored, since such models usually give rise to equation systems with a regular structure, which can be suitable for efficient solution on GPUs. Constructs for PDE-based modeling are currently not part of the standard Modelica language specification. Several approaches to modeling and simulation with PDEs in the context of Modelica have been developed over the years. In this thesis we present selected earlier work, ongoing work and planned work on PDEs in the context of Modelica. Some approaches detailed in this thesis are: extending the language specification with PDE handling; using software with support for PDEs and automatic discretization of PDEs; and connecting an external C++ PDE library via the Functional Mock-up Interface (FMI).
Finally the topic of parallel skeletons in the context of Modelica is explored. A skeleton is a predefined, generic component that implements a common specific pattern of computation and data dependence. Skeletons provide a high degree of abstraction and portability and a skeleton can be customized with user code. Using skeletons with Modelica opens up the possibility of executing heavy Modelica-based matrix and vector computations on multi-core architectures. A working Modelica-SkePU library with some minor necessary compiler extensions is presented.
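For readers unfamiliar with skeletons, the following library-agnostic sketch shows the idea in miniature: a generic pattern (map followed by reduce) that is customised only with user code. The real Modelica-SkePU library targets multi-core and GPU back ends rather than this sequential stand-in.

    from functools import reduce

    def map_reduce_skeleton(map_fn, reduce_fn, data):
        """Generic computation pattern; only map_fn and reduce_fn are user-supplied."""
        return reduce(reduce_fn, (map_fn(x) for x in data))

    # User customisation: a dot product of two vectors expressed through the skeleton.
    a = [1.0, 2.0, 3.0]
    b = [4.0, 5.0, 6.0]
    dot = map_reduce_skeleton(lambda pair: pair[0] * pair[1],
                              lambda acc, x: acc + x,
                              zip(a, b))
    print(dot)   # 32.0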
No 1680
Hardware/Software Codesign of Embedded Systems with Reconfigurable and Heterogeneous Platforms
Adrian Lifa
Modern applications running on today's embedded systems have very high requirements. Most often, these requirements have many dimensions: the applications need high performance as well as flexibility, energy-efficiency as well as real-time properties, fault tolerance as well as low cost. In order to meet these demands, the industry is adopting architectures that are more and more heterogeneous and that have reconfiguration capabilities. Unfortunately, this adds to the complexity of designing streamlined applications that can leverage the advantages of such architectures. In this context, it is very important to have appropriate tools and design methodologies for the optimization of such systems. This thesis addresses the topic of hardware/software codesign and optimization of adaptive real-time systems implemented on reconfigurable and heterogeneous platforms. We focus on performance enhancement for dynamically reconfigurable FPGA-based systems, energy minimization in multi-mode real-time systems implemented on heterogeneous platforms, and codesign techniques for fault-tolerant systems. The solutions proposed in this thesis have been validated by extensive experiments, ranging from computer simulations to proof of concept implementations on real-life platforms. The results have confirmed the importance of the addressed aspects and the applicability of our techniques for design optimization of modern embedded systems.
No 1685
Timing Analysis of Distributed Embedded Systems with Stochastic Workload and Reliability Constraints
Bogdan Tanasa
Today's distributed embedded systems are exposed to large variations in workload due to complex software applications and sophisticated hardware platforms. Examples of such systems are automotive and avionics applications.
The tasks running on computational units have variable execution times. Thus, the workload that the computational units must accommodate is likely to be stochastic. Some of the tasks trigger messages that will be transmitted over communication buses. There is a direct connection between the variable execution times of the tasks and the moments of triggering of these messages. Thus, the workload imposed on the communication buses will also be stochastic. The likelihood for transient faults to occur is another dimension for stochastic workload as today's embedded systems are designed to work in extreme environmental conditions. Given the above, the need for tools that can analyze systems that experience stochastic workload is continuously increasing.
The present thesis addresses this need. The solutions proposed in this thesis have been validated by extensive experiments that demonstrate the efficiency of the presented techniques.
No 1702
Thermal Issues in Testing of Advanced Systems on Chip
Nima Aghaee
Many cutting-edge computer and electronic products are powered by advanced Systems-on-Chip (SoC). Advanced SoCs combine superb performance with a large number of functions. This is achieved by efficient integration of a huge number of transistors. Such very large scale integration is enabled by a core-based design paradigm as well as deep-submicron and 3D-stacked-IC technologies. These technologies are susceptible to reliability and testing complications caused by thermal issues. Three crucial thermal issues related to temperature variations, temperature gradients, and temperature cycling are addressed in this thesis.
Existing test scheduling techniques rely on temperature simulations to generate schedules that meet thermal constraints such as overheating prevention. The difference between the simulated temperatures and the actual temperatures is called temperature error. This error, for past technologies, is negligible. However, advanced SoCs experience large errors due to large process variations. Such large errors have costly consequences, such as overheating, and must be taken care of. This thesis presents an adaptive approach to generate test schedules that handle such temperature errors.
Advanced SoCs manufactured as 3D stacked ICs experience large temperature gradients. Temperature gradients accelerate certain early-life defect mechanisms. These mechanisms can be artificially accelerated using gradient-based, burn-in like, operations so that the defects are detected before shipping. Moreover, temperature gradients exacerbate some delay-related defects. In order to detect such defects, testing must be performed when appropriate temperature-gradients are enforced. A schedule-based technique that enforces the temperature-gradients for burn-in like operations is proposed in this thesis. This technique is further developed to support testing for delay-related defects while appropriate gradients are enforced.
The last thermal issue addressed by this thesis is related to temperature cycling. Temperature cycling test procedures are usually applied to safety-critical applications to detect cycling-related early-life failures. Such failures affect advanced SoCs, particularly through-silicon-via structures in 3D-stacked-ICs. An efficient schedule-based cycling-test technique that combines cycling acceleration with testing is proposed in this thesis. The proposed technique fits into existing 3D testing procedures and does not require temperature chambers. Therefore, the overall cycling acceleration and testing cost can be drastically reduced.
All the proposed techniques have been implemented and evaluated with extensive experiments based on ITC’02 benchmarks as well as a number of 3D stacked ICs. Experiments show that the proposed techniques work effectively and reduce the costs, in particular the costs related to addressing thermal issues and early-life failures. We have also developed a fast temperature simulation technique based on a closed-form solution for the temperature equations. Experiments demonstrate that the proposed simulation technique reduces the schedule generation time by more than half.
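As a hedged, single-node illustration of what a closed-form temperature solution looks like (not the thesis's actual multi-core thermal model or equations), a first-order thermal RC node has the closed form T(t) = T_amb + P*R_th + (T_0 - T_amb - P*R_th) * exp(-t / (R_th * C_th)), which can be evaluated directly instead of integrating step by step; all parameter values below are invented.

    import math

    def temperature_closed_form(t, power, r_th, c_th, t_ambient, t_initial):
        """Closed-form temperature of a single thermal RC node at time t (seconds).
        The steady state is t_ambient + power * r_th; the transient decays with
        time constant r_th * c_th.  All parameter values are illustrative only."""
        t_steady = t_ambient + power * r_th
        return t_steady + (t_initial - t_steady) * math.exp(-t / (r_th * c_th))

    for t in (0.0, 0.01, 0.1, 1.0):
        print(t, round(temperature_closed_form(t, power=2.0, r_th=5.0, c_th=0.02,
                                               t_ambient=45.0, t_initial=45.0), 2))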
No 1715
Security in Embedded Systems: A Model-Based Approach with Risk Metrics
Maria Vasilevskaya
The increasing prevalence of embedded devices and a boost in sophisticated attacks against them make embedded system security an intricate and pressing issue. New approaches to support the development of security-enhanced systems need to be explored. We realise that efficient transfer of knowledge from security experts to embedded system engineers is vitally important, but hardly achievable in current practice. This thesis proposes a Security-Enhanced Embedded system Design (SEED) approach, which is a set of concepts, methods, and processes that together aim at addressing this challenge of bridging the gap between the two areas of expertise.
We introduce the concept of a Domain-Specific Security Model (DSSM) as a suitable abstraction to capture the knowledge of security experts in a way that this knowledge can later be reused by embedded system engineers. Each DSSM characterises common security issues of a specific application domain in the form of security properties linked to a range of solutions. Next, we complement a DSSM with the concept of a Performance Evaluation Record (PER) to account for the resource-constrained nature of embedded systems. Each PER characterises the resource overhead created by a security solution, the provided level of security, and other relevant information.
We define a process that assists an embedded system engineer in selecting a suitable set of security solutions. The process couples together (i) the use of the security knowledge accumulated in DSSMs and PERs, (ii) the identification of security issues in a system design, (iii) the analysis of resource constraints of a system and available security solutions, and (iv) model-based quantification of security risks to data assets associated with a design model. The approach is supported by a set of tools that automate certain steps.
We use scenarios from the smart metering domain to demonstrate how the SEED approach can be applied. We show that our artefacts are rich enough to support security experts in describing knowledge about security solutions, and to support embedded system engineers in integrating an appropriate set of security solutions based on that knowledge. We demonstrate the effectiveness of the proposed method for quantification of security risks by applying it to a metering device. This shows its usefulness for visualising and reasoning about security risks inherent in a system design.
No 1729
Security-Driven Design of Real-Time Embedded Systems
Ke Jiang
Real-time embedded systems (RTESs) are widely used in modern society, and it is very common to find them in safety- and security-critical applications, such as transportation and medical equipment. Usually, several constraints are imposed on an RTES, for example timing, resource, energy, and performance constraints, which must be satisfied simultaneously. This makes the design of such systems a difficult problem.
More recently, the security of RTESs has emerged as a major design concern, as more and more attacks have been reported. However, RTES security, as a parameter to be considered during the design process, has been overlooked in the past. This thesis approaches the design of secure RTESs focusing on aspects that are particularly important in the context of RTESs, such as communication confidentiality and side-channel attack resistance.
Several techniques are presented in this thesis for designing secure RTESs, including hardware/software co-design techniques for communication confidentiality on distributed platforms, a global framework for secure multi-mode real-time systems, and a scheduling policy for thwarting differential power analysis attacks.
All the proposed solutions have been extensively evaluated in a large number of experiments, including two real-life case studies, which demonstrate the efficiency of the presented techniques.
No 1733
Strong Partial Clones and the Complexity of Constraint Satisfaction Problems: Limitations and Applications
Victor Lagerkvist
In this thesis we study the worst-case time complexity of the constraint satisfaction problem parameterized by a constraint language (CSP(S)), which is the problem of determining whether a conjunctive formula over S has a model. To study the complexity of CSP(S) we borrow methods from universal algebra. In particular, we consider algebras of partial functions, called strong partial clones. This algebraic approach allows us to obtain a more nuanced view of the complexity of CSP(S) than is possible with algebras of total functions, i.e. clones.
The results of this thesis are split into two main parts. In the first part we investigate properties of strong partial clones, beginning with a classification of weak bases for all Boolean relational clones. Weak bases are constraint languages where the corresponding strong partial clones in a certain sense are extraordinarily large, and they provide a rich amount of information regarding the complexity of the corresponding CSP problems. We then proceed by classifying the Boolean relational clones according to whether it is possible to represent every relation by a conjunctive, logical formula over the weak base without needing more than a polynomial number of existentially quantified variables. A relational clone satisfying this condition is called polynomially closed and we show that this property has a close relationship with the concept of few subpowers. Using this classification we prove that a strong partial clone is of infinite order if (1) the total functions in the strong partial clone are essentially unary and (2) the corresponding constraint language is finite. Despite this, we prove that these strong partial clones can be succinctly represented with finite sets of partial functions, bounded bases, by considering stronger notions of closure than functional composition.
In the second part of this thesis we apply the theory developed in the first part. We begin by studying the complexity of CSP(S) where S is a Boolean constraint language, i.e. the generalised satisfiability problem (SAT(S)). Using weak bases we prove that there exists a relation R such that SAT({R}) is the easiest NP-complete SAT(S) problem. We rule out the possibility that SAT({R}) is solvable in subexponential time unless a well-known complexity-theoretic conjecture, the exponential-time hypothesis (ETH), is false. We then proceed to study the computational complexity of two optimisation variants of the SAT(S) problem: the maximum ones problem over a Boolean constraint language S (MAX-ONES(S)) and the valued constraint satisfaction problem over a set of Boolean cost functions Δ (VCSP(Δ)). For MAX-ONES(S) we use partial clone theory and prove that MAX-ONES({R}) is the easiest NP-complete MAX-ONES(S) problem. These algebraic techniques do not work for VCSP(Δ), however, where we instead use multimorphisms to prove that MAX-CUT is the easiest NP-complete Boolean VCSP(Δ) problem. Similarly to the case of SAT(S), we then rule out the possibility of subexponential algorithms for these problems, unless the ETH is false.
No 1734
An Informed System Development Approach to Tropical Cyclone Track and Intensity Forecasting
Chandan Roy
Introduction: Tropical Cyclones (TCs) inflict considerable damage to life and property every year. A major problem is that residents often hesitate to follow evacuation orders when the early warning messages are perceived as inaccurate or uninformative. The root problem is that providing accurate early forecasts can be difficult, especially in countries with less economic and technical means.
Aim: The aim of the thesis is to investigate how cyclone early warning systems can be technically improved. This means, first, identifying problems associated with the current cyclone early warning systems, and second, investigating if biologically based Artificial Neural Networks (ANNs) are feasible to solve some of the identified problems.
Method: First, for evaluating the efficiency of cyclone early warning systems, Bangladesh was selected as study area, where a questionnaire survey and an in-depth interview were administered. Second, a review of currently operational TC track forecasting techniques was conducted to gain a better understanding of various techniques’ prediction performance, data requirements, and computational resource requirements. Third, a technique using biologically based ANNs was developed to produce TC track and intensity forecasts. Systematic testing was used to find optimal values for simulation parameters, such as feature-detector receptive field size, the mixture of unsupervised and supervised learning, and learning rate schedule. Five types of 2D data were used for training. The networks were tested on two types of novel data, to assess their generalization performance.
Results: A major problem that is identified in the thesis is that the meteorologists at the Bangladesh Meteorological Department are currently not capable of providing accurate TC forecasts. This is an important contributing factor to residents’ reluctance to evacuate. To address this issue, an ANN-based TC track and intensity forecasting technique was developed that can produce early and accurate forecasts, uses freely available satellite images, and does not require extensive computational resources to run. Bidirectional connections, combined supervised and unsupervised learning, and a deep hierarchical structure assist the parallel extraction of useful features from five types of 2D data. The trained networks were tested on two types of novel data: First, tests were performed with novel data covering the end of the lifecycle of trained cyclones; for these test data, the forecasts produced by the networks were correct in 91-100% of the cases. Second, the networks were tested with data of a novel TC; in this case, the networks performed with between 30% and 45% accuracy (for intensity forecasts).
Conclusions: The ANN technique developed in this thesis could, with further extensions and up-scaling, using additional types of input images of a greater number of TCs, improve the efficiency of cyclone early warning systems in countries with less economic and technical means. The thesis work also creates opportunities for further research, where biologically based ANNs can be employed for general-purpose weather forecasting, as well as for forecasting other severe weather phenomena, such as thunderstorms.
No 1746
Analysis, Design, and Optimization of Embedded Control Systems
Amir Aminifar
Today, many embedded or cyber-physical systems, e.g., in the automotive domain, comprise several control applications sharing the same platform. It is well known that such resource sharing leads to complex temporal behaviors that degrade the quality of control and, more importantly, may even jeopardize stability in the worst case, if not properly taken into account.
In this thesis, we consider embedded control or cyber-physical systems, where several control applications share the same processing unit. The focus is on the control-scheduling co-design problem, where the controller and scheduling parameters are jointly optimized. The fundamental difference between control applications and traditional embedded applications motivates the need for novel methodologies for the design and optimization of embedded control systems. This thesis is one more step towards correct design and optimization of embedded control systems.
Offline and online methodologies for embedded control systems are covered in this thesis. The importance of considering both the expected control performance and stability is discussed and a control-scheduling co-design methodology is proposed to optimize control performance while guaranteeing stability. Orthogonal to this, bandwidth-efficient stabilizing control servers are proposed, which support compositionality, isolation, and resource-efficiency in design and co-design. Finally, we extend the scope of the proposed approach to non-periodic control schemes and address the challenges in sharing the platform with self-triggered controllers. In addition to offline methodologies, a novel online scheduling policy to stabilize control applications is proposed.
No 1747
Energy Modelling and Fairness for Efficient Mobile Communication
Ekhiotz Vergara
Energy consumption and its management have been clearly identified as a challenge in computing and communication system design, where energy economy is obviously of paramount importance for battery-powered devices. This thesis addresses the energy efficiency of mobile communication at the user end in the context of cellular networks.
We argue that energy efficiency starts with energy awareness, and propose EnergyBox, a parametrised tool that enables accurate and repeatable energy quantification at the user end using real data traffic traces as input. EnergyBox offers an abstraction of the underlying operating states of the wireless interfaces and allows estimation of the energy consumption for different operator settings and device characteristics. The tool is used throughout the thesis to quantify and reveal inefficient data communication patterns of widely used mobile applications.
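The principle of such state-machine-based energy estimation can be sketched as follows; the state machine, power values and timeout below are invented simplifications and are not EnergyBox's real model, parameters or API.

    # Simplified radio energy model in the spirit of state-machine-based estimation:
    # the wireless interface enters a high-power state when a packet is sent or
    # received and drops back to idle after an inactivity timeout.  All parameter
    # values are invented for illustration; they are not real operator settings.
    HIGH_POWER_W = 0.8
    IDLE_POWER_W = 0.02
    INACTIVITY_TIMEOUT_S = 3.0

    def estimate_energy(packet_times, duration):
        """Return the estimated energy in joules over 'duration' seconds of a trace."""
        intervals = []                           # merged high-power intervals
        for t in sorted(packet_times):
            end = t + INACTIVITY_TIMEOUT_S
            if intervals and t <= intervals[-1][1]:
                intervals[-1][1] = end           # extend the current high-power interval
            else:
                intervals.append([t, end])
        high_time = sum(max(0.0, min(e, duration) - s) for s, e in intervals if s < duration)
        return high_time * HIGH_POWER_W + (duration - high_time) * IDLE_POWER_W

    print(round(estimate_energy([0.5, 1.0, 10.0], duration=20.0), 2))   # ~5.47 J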
We consider two different perspectives in the search of energy-efficient solutions. From the application perspective, we show that systematically quantifying the energy consumption of design choices (e.g., communication patterns, protocols, and data formats) contributes to a significantly smaller energy footprint. From the system perspective, we devise a cross-layer solution that schedules packet transmissions based on the knowledge of the network parameters that impact the energy consumption of the handset. These attempts show that application level decisions require a better understanding of possible energy apportionment policies at system level.
Finally, we study the generic problem of determining the contribution of an entity (e.g., an application) to the total energy consumption of a given system (e.g., a mobile device). We compare the state-of-the-art policies in terms of fairness, leveraging cooperative game theory, and analyse their required information and computational complexity. We show that providing incentives to reduce the total energy consumption of the system (as part of fairness) is tightly coupled to the policy selection. Our study provides guidelines to select an appropriate policy depending on the characteristics of the system.
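One standard cooperative-game notion that such apportionment policies can be compared against is the Shapley value; the sketch below computes it exactly for a toy energy-cost function (a shared radio tail plus a per-application cost), which is an invented example rather than the thesis's measurement setup.

    import math
    from itertools import permutations

    def shapley_values(players, cost):
        """Exact Shapley value of each player for a coalition cost function.
        Exponential in the number of players, so only intended for small examples."""
        n_fact = math.factorial(len(players))
        shares = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = set()
            for p in order:
                before = cost(frozenset(coalition))
                coalition.add(p)
                shares[p] += (cost(frozenset(coalition)) - before) / n_fact
        return shares

    # Toy energy model: a shared radio "tail" cost of 2 J plus 1 J per application.
    def total_energy(coalition):
        return 0.0 if not coalition else 2.0 + 1.0 * len(coalition)

    print(shapley_values(["app_a", "app_b", "app_c"], total_energy))
    # each app is apportioned 5/3 J: an equal share of the tail plus its own 1 J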
No 1748
Chain Graphs — Interpretations, Expressiveness and Learning Algorithms
Dag Sonntag
Probabilistic graphical models are currently one of the most commonly used architectures for modelling and reasoning with uncertainty. The most widely used subclass of these models is directed acyclic graphs, also known as Bayesian networks, which are used in a wide range of applications both in research and industry. Directed acyclic graphs do, however, have a major limitation, which is that only asymmetric relationships, namely cause and effect relationships, can be modelled between their variables. A class of probabilistic graphical models that tries to address this shortcoming is chain graphs, which include two types of edges in the models representing both symmetric and asymmetric relationships between the variables. This allows for a wider range of independence models to be modelled and depending on how the second edge is interpreted, we also have different so-called chain graph interpretations.
Although chain graphs were first introduced in the late eighties, most research on probabilistic graphical models naturally started in the least complex subclasses, such as directed acyclic graphs and undirected graphs. The field of chain graphs has therefore been relatively dormant. However, due to the maturity of the research field of probabilistic graphical models and the rise of more data-driven approaches to system modelling, chain graphs have recently received renewed interest in research. In this thesis we provide an introduction to chain graphs where we incorporate the progress made in the field. More specifically, we study the three chain graph interpretations that exist in research in terms of their separation criteria, their possible parametrizations and the intuition behind their edges. In addition to this we also compare the expressivity of the interpretations in terms of representable independence models as well as propose new structure learning algorithms to learn chain graph models from data.
No 1768
Web Authentication using Third-Parties in Untrusted Environments
Anna Vapen
With the increasing personalization of the Web, many websites allow users to create their own personal accounts. This has resulted in Web users often having many accounts on different websites, to which they need to authenticate in order to gain access. Unfortunately, there are several security problems connected to the use and re-use of passwords, the most prevalent authentication method currently in use, including eavesdropping and replay attacks.
Several alternative methods have been proposed to address these shortcomings, including the use of hardware authentication devices. However, these more secure authentication methods are often not adapted for mobile Web users who use different devices in different places and in untrusted environments, such as public Wi-Fi networks, to access their accounts.
We have designed a method for comparing, evaluating and designing authentication solutions suitable for mobile users and untrusted environments. Our method leverages the fact that mobile users often bring their own cell phones, and also takes into account different levels of security adapted for different services on the Web.
Another important trend in the authentication landscape is that an increasing number of websites use third-party authentication. This is a solution where users have an account on a single system, the identity provider, and this one account can then be used with multiple other websites. In addition to requiring fewer passwords, these services can also in some cases implement authentication with higher security than passwords can provide.
How websites select their third-party identity providers has privacy and security implications for end users. To better understand the security and privacy risks with these services, we present a data collection methodology that we have used to identify and capture third-party authentication usage on the Web. We have also characterized the third-party authentication landscape based on our collected data, outlining which types of third-parties are used by which types of sites, and how usage differs across the world. Using a combination of large-scale crawling, longitudinal manual testing, and in-depth login tests, our characterization and analysis has also allowed us to discover interesting structural properties of the landscape, differences in the cross-site relationships, and how the use of third-party authentication is changing over time.
Finally, we have also outlined what information is shared between websites in third-party authentication, defined risk classes based on shared data, and profiled privacy leakage risks associated with websites and their identity providers sharing data with each other. Our findings show how websites can strengthen the privacy of their users based on how these websites select and combine their third-parties and the data they allow to be shared.
No 1778
On a Need to Know Basis: A Conceptual and Methodological Framework for Modelling and Analysis of Information Demand in an Enterprise Context
Magnus Jandinger
While the amount of information readily available to workers in information- and knowledge-intensive business and industrial contexts only seems to increase with every day, those workers continue to have difficulties in finding and managing relevant and needed information, despite the numerous technical, organisational, and practical approaches promising a remedy to the situation. In this dissertation it is claimed that the main reason for the shortcomings of such approaches is a lack of understanding of the underlying information demand that people and organisations have in relation to performing work tasks. Furthermore, it is also argued that while this issue, even with a better understanding of the underlying mechanisms, would remain a complex problem, it would at least be manageable.
To facilitate the development of demand-driven information solutions and organisational change with respect to information demand, the dissertation sets out to first provide the empirical and theoretical foundation for a method for modelling and analysing information demand in enterprise contexts, and then presents an actual method. As a part of this effort, a conceptual framework for reasoning about information demand is presented together with experiences from a number of empirical cases focusing on both method generation and method validation. A methodological framework is then defined based on principles and ideas grounded in the empirical background, and finally a number of method components are introduced in terms of notations, conceptual focus, and procedural approaches for capturing and representing various aspects of information demand.
The dissertation ends with a discussion concerning the validity of the presented method and results in terms of utility, relevance, and applicability with respect to industrial context and needs, as well as possible and planned future improvements and developments of the method.
No 1798
Collaborative Network Security: Targeting Wide-area Routing and Edge-networks Attacks
Rahul Hiran
To ensure that services can be delivered reliably and continuously over the Internet, it is important that both Internet routes and edge networks are secured. However, the sophistication and distributed nature of many attacks that target wide-area routing and edge networks make it difficult for an individual network, user, or router to detect these attacks. Therefore collaboration is important. Although the benefits of collaboration between different network entities have been demonstrated, many open questions still remain, including how to best design distributed scalable mechanisms to mitigate attacks on the network infrastructure. This thesis makes several contributions that aim to secure the network infrastructure against attacks targeting wide-area routing and edge networks.
First, we present a characterization of a controversial large-scale routing anomaly, in which a large Telecom operator hijacked a very large number of Internet routes belonging to other networks. We use publicly available data from the time of the incident to understand what can be learned about large-scale routing anomalies and what type of data should be collected in the future to diagnose and detect such anomalies.
Second, we present multiple distributed mechanisms that enable collaboration and information sharing between different network entities that are affected by such attacks. The proposed mechanisms are applied in the contexts of collaborating Autonomous Systems (ASes), users, and servers, and are shown to help raise alerts for various attacks. Using a combination of data-driven analysis and simulations, based on publicly available real network data (including traceroutes, BGP announcements, and network relationship data), we show that our solutions are scalable, incur low communication and processing overhead, and provide attractive tradeoffs between attack detection and false alert rates.
Finally, for a set of previously proposed routing security mechanisms, we consider the impact of regional deployment restrictions, the scale of the collaboration, and the size of the participants deploying the solutions. Although regional deployment can be seen as a restriction and the participation of large networks is often desirable, we find interesting cases where regional deployment can yield better results compared to random global deployment, and where smaller networks can play an important role in achieving better security gains. This study offers new insights towards incremental deployment of different classes of routing security mechanisms.
No 1813
Algorithms and Framework for Energy Efficient Parallel Stream Computing on Many-Core Architectures
Nicolas Melot
The rise of many-core processor architectures in the market answers a constantly growing need for processing power to solve more and more challenging problems such as those in computing for big data. Fast computation is increasingly limited by the very high power required and the management of the considerable heat produced. Many programming models compete to take advantage of many-core architectures to improve both execution speed and energy consumption, each with their advantages and drawbacks. The work described in this thesis is based on the dataflow computing approach and investigates the benefits of a carefully pipelined execution of streaming applications, focusing in particular on off- and on-chip memory accesses. As a case study, we implement classic and on-chip pipelined versions of mergesort for Intel SCC and Xeon. We see how the benefits of the on-chip pipelining technique are bounded by the underlying architecture, and we explore the problem of fine-tuning streaming applications for many-core architectures to optimize for energy given a throughput budget. We propose a novel methodology to compute schedules optimized for energy efficiency given a fixed throughput target. We introduce Drake, derived from Schedeval, a tool that generates pipelined applications for many-core architectures and allows their static schedules to be tested for time or energy performance. We show that streaming applications based on Drake compete with specialized implementations, and we use Schedeval to demonstrate performance differences between schedules that are otherwise considered equivalent by a simple model.
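The flavour of optimizing a schedule for energy under a throughput budget can be sketched as follows; the per-stage frequency levels, times and powers are invented, and the exhaustive search stands in for the more sophisticated methodology and tools (Drake, Schedeval) described above.

    from itertools import product

    # Per-stage (time_per_item_s, power_w) for three hypothetical frequency levels.
    stages = [
        [(0.010, 1.0), (0.006, 2.2), (0.004, 4.0)],   # pipeline stage 0
        [(0.008, 1.2), (0.005, 2.5), (0.003, 4.5)],   # pipeline stage 1
    ]

    def best_schedule(stages, min_throughput_items_per_s):
        """Exhaustively pick one level per stage minimizing energy per item,
        subject to the pipeline (bound by its slowest stage) meeting the budget."""
        best = None
        for choice in product(*[range(len(s)) for s in stages]):
            times = [stages[i][c][0] for i, c in enumerate(choice)]
            powers = [stages[i][c][1] for i, c in enumerate(choice)]
            throughput = 1.0 / max(times)
            energy_per_item = sum(t * p for t, p in zip(times, powers))
            if throughput >= min_throughput_items_per_s:
                if best is None or energy_per_item < best[0]:
                    best = (energy_per_item, choice)
        return best

    print(best_schedule(stages, min_throughput_items_per_s=150.0))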
No 1823
Making Sense of Adaptations: Resilience in High-Risk Work
Amy Rankin
To cope with variations, disturbances, and unexpected
events in complex socio-technical systems, people are required to continuously
adapt to the changing environment, sometimes in novel and innovative ways. This
thesis investigates adaptive performance in complex work settings across
domains, with a focus on examining what enables and disables successful
adaptations, and how contextual factors shape performance. Examples of adaptive
performance studies include a crisis command team dealing with the loss of key
personnel, a crew coping with unreliable system feedback in the cockpit, and a
nursing team managing an overload of patients. The two main contributions of
this thesis are the analysis of cases of people coping with variations and
disturbances, and the development of conceptual models to report findings,
structure cases, and make sense of sharp-end adaptations in complex work
settings. The findings emphasise that adaptive performance outside procedures
and textbook scenarios at the sharp end is a critical ability to cope with
variation and unexpected events. However, the results also show that adaptations
may come at the cost of new vulnerabilities and system brittleness. Analysing
adaptive performance in everyday events informs safety management by making
visible limitations and possibilities of system design, organisational
structures, procedures, and training.
Public sector organizations are in need of new
approaches to development and innovation. There is a need to develop a
capability to better understand priorities, needs and wishes of public sector
service users and become more proactive, in order to meet the demands of keeping
costs down and quality high. Design is increasingly put forward as a potential
answer to this need and there are many initiatives taken across the world to
encourage the use of a design approach to development and innovation within
public sector. In relation to this trend there is a need to improve the
understanding of how public sector organizations develop the ability to exploit
design; how they develop design capability. This is the focus of this thesis,
which through an exploratory study has observed two initiatives aiming to
introduce design and develop design capability within healthcare and social
service organizations. One main contribution of this work is an understanding of the design
capability concept based on a structured review of the use of the design
capability concept in the literature. The concept has previously been used in
relation to different aspects of designs in organizations. Another important contribution is the development of an understanding of how
design capability is developed based on interpretations founded in the
organizational learning perspective of absorptive capacity. The study has
identified how different antecedents to development of design capability have
influenced this development in the two cases. The findings have identified
aspects that both support and impede the development of design capability,
which are important to acknowledge and address when aiming to develop design
capability within a public sector organization. In both cases, the setup of the
knowledge-transfer efforts focuses mainly
on developing awareness of design. Similar patterns are seen in other prior and
parallel initiatives. The findings however suggest that it is also important to
ensure that the organization has access to design competence and that
structures like routines, processes and culture support and enable the use of
design practice, in order to make design a natural part of the continuous development work.
Bayesian networks have grown to become a dominant
type of model within the domain of probabilistic graphical models. Not only do
they empower users with a graphical means for describing the relationships among
random variables, but they also allow for (potentially) fewer parameters to
estimate, and enable more efficient inference. The random variables and the
relationships among them decide the structure of the directed acyclic graph that
represents the Bayesian network. It is the stasis over time of these two
components that we question in this thesis. By introducing a new type of
probabilistic graphical model, which we call gated Bayesian networks, we allow
for the variables that we include in our model, and the relationships among
them, to change overtime. We introduce algorithms that can learn gated Bayesian
networks that use different variables at different times, required due to the
process which we are modelling going through distinct phases. We evaluate the
efficacy of these algorithms within the domain of algorithmic trading, showing
how the learnt gated Bayesian networks can improve upon a passive approach to
trading. We also introduce algorithms that detect changes in the relationships
among the random variables, allowing us to create a model that consists of
several Bayesian networks, thereby revealing changes and the structure by which
these changes occur. The resulting models can be used to detect the currently
most appropriate Bayesian network, and we show their use in real-world examples
from the domains of both sports analytics and finance.
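One simple way to "detect the currently most appropriate Bayesian network" is to score a few candidate models against a recent data window by log-likelihood and switch to the best one, as in the sketch below; the two-variable models and data are invented for illustration and do not reproduce the gated Bayesian network algorithms themselves.

    import math

    # Two candidate models over binary variables (X, Y), given as full joint tables.
    model_independent = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
    model_coupled = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
    candidates = {"independent": model_independent, "coupled": model_coupled}

    def log_likelihood(model, window):
        """Log-likelihood of a window of observations under a joint-table model."""
        return sum(math.log(model[obs]) for obs in window)

    def most_appropriate(candidates, window):
        return max(candidates, key=lambda name: log_likelihood(candidates[name], window))

    recent_window = [(0, 0), (1, 1), (1, 1), (0, 0), (1, 1)]
    print(most_appropriate(candidates, recent_window))   # "coupled"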
Automated planning is known to be computationally
hard in the general case. Propositional planning is PSPACE-complete and
first-order planning is undecidable. One method for analyzing the computational
complexity of planning is to study restricted subsets of planning instances,
with the aim of differentiating instances with varying complexity. We use this
methodology for studying the computational complexity of planning. Finding new
tractable (i.e. polynomial-time solvable) problems has been a particularly
important goal for researchers in the area. The reason behind this is not only
to differentiate between easy and hard planning instances, but also to use
polynomial-time solvable instances in order to construct better heuristic
functions and improve planners. We identify a new class of tractable
cost-optimal planning instances by restricting the causal graph. We study the
computational complexity of oversubscription planning (such as the net-benefit
problem) under various restrictions and reveal strong connections with classical
planning. Inspired by this, we present a method for compiling oversubscription
planning problems into the ordinary plan existence problem. We further study the
parameterized complexity of cost-optimal and net-benefit planning under the same
restrictions and show that the choice of numeric domain for the action costs has
a great impact on the parameterized complexity. We finally consider the
parameterized complexity of certain problems related to partial-order planning.
In some applications, less restricted plans than total-order plans are needed.
Therefore, partial-order plans are used instead. When dealing with
partial-order plans, one important question is how to obtain optimal partial-order
plans, i.e. plans with the highest degree of freedom according to some notion
of flexibility. We study several optimization problems for partial-order plans,
such as finding a minimum deordering or reordering, and finding the minimum
parallel execution length.
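For intuition about the last of these optimization problems: with unit-time actions, the minimum parallel execution length of a partial-order plan equals the length of the longest chain in its ordering relation, which the small, invented example below computes by memoized search over the precedence DAG.

    from functools import lru_cache

    # Partial-order plan as a DAG: edge (a, b) means action a must precede action b.
    actions = ["pick", "move", "place", "log"]
    precedes = {("pick", "move"), ("move", "place")}   # "log" is unordered w.r.t. the rest

    successors = {a: [b for (x, b) in precedes if x == a] for a in actions}

    @lru_cache(maxsize=None)
    def longest_chain_from(action):
        """Length of the longest precedence chain starting at the given action."""
        nexts = successors[action]
        return 1 + (max(longest_chain_from(b) for b in nexts) if nexts else 0)

    # Minimum number of parallel steps needed to execute the plan (unit-time actions).
    print(max(longest_chain_from(a) for a in actions))   # 3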
In this thesis we study automated planning, a branch of artificial intelligence,
which deals with the construction of plans. A plan is typically an action sequence
that achieves some specific goal. In particular, we study unsolvable planning
instances, i.e. instances for which there is no plan. Historically, this topic has been neglected by
the planning community, and until recently the International Planning
Competition has only evaluated planners on solvable planning instances. For many
applications we can know, e.g. by design, that there is a solution, but this
cannot be a general assumption. One example is penetration testing in computer
security, where a system is considered safe if there is no plan for intrusion.
Other examples are resource bound planning instances that have insufficient
resources to achieve the goal. The main theme of this thesis is to use variable
projection to prove unsolvability of planning instances. We implement and
evaluate two planners: the first checks variable projections with the goal of
finding an unsolvable projection, and the second builds a pattern collection to
provide dead-end detection. In addition to comparing the planners to existing
planners, we also utilise a large computer cluster to statistically assess
whether they can be optimised further. On the benchmarks of planning instances
that we used, it turns out that further improvement is likely to come from
supplementary techniques rather than optimisation. We pursued this and enhanced
variable projections with mutexes, which yielded a very competitive planner. We
also inspect whether unsolvable variable projections tend to be composed of
variables that play different roles, i.e. they are not 'similar'. We devise a
variable similarity measure to rate how similar two variables are on a scale,
and statistically analyse it. The measure can differentiate between unsolvable
and solvable planning instances quite well, and is integrated into our planners.
We also define a binary version of the measure, namely, that two variables are
isomorphic if they behave exactly the same in some optimal solution (extremely
similar). With the help of isomorphic variables we identified a computationally
tractable class of planning instances that meet certain restrictions. There are
several special cases of this class that are of practical interest, and this
result encompasses them.
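The projection idea can be sketched as follows (this is an illustrative toy, not the thesis's planners): project a planning instance onto a subset of its state variables by restricting preconditions, effects, initial state and goal to those variables; if search finds no plan in the projection, the original instance is unsolvable.

    from collections import deque

    # A tiny, invented planning instance: state variables with finite domains, and
    # actions given by precondition and effect partial assignments.
    initial = {"door": "locked", "robot": "hall", "lamp": "off"}
    goal = {"door": "open", "robot": "room"}
    actions = [
        {"name": "enter",  "pre": {"door": "open", "robot": "hall"}, "eff": {"robot": "room"}},
        {"name": "toggle", "pre": {},                                "eff": {"lamp": "on"}},
    ]   # note: no action ever opens the door

    def project(assignment, variables):
        return {v: val for v, val in assignment.items() if v in variables}

    def projection_solvable(variables):
        """Breadth-first search in the projected state space.  If this returns False,
        the original instance is unsolvable, since every real plan induces a plan in
        the projection (an over-approximating abstraction)."""
        start = tuple(sorted(project(initial, variables).items()))
        goal_p = project(goal, variables)
        seen, queue = {start}, deque([start])
        while queue:
            state = dict(queue.popleft())
            if all(state.get(v) == val for v, val in goal_p.items()):
                return True
            for a in actions:
                if all(state.get(v) == val for v, val in project(a["pre"], variables).items()):
                    succ = dict(state)
                    succ.update(project(a["eff"], variables))
                    key = tuple(sorted(succ.items()))
                    if key not in seen:
                        seen.add(key)
                        queue.append(key)
        return False

    # Projecting onto the single variable "door" already proves unsolvability.
    print(projection_solvable({"door"}))   # False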
Ontologies are formal knowledge models that describe concepts and relationships
and enable data integration, information search, and reasoning. Ontology Design
Patterns (ODPs) are reusable solutions intended to simplify ontology development
and support the use of semantic technologies by ontology engineers. ODPs
document and package good modelling practices for reuse, ideally enabling
inexperienced ontologists to construct high-quality ontologies. Although ODPs
are already used for development, there are still remaining challenges that have
not been addressed in the literature. These research gaps include a lack of
knowledge about (1) which ODP features are important for ontology engineering,
(2) less experienced developers' preferences and barriers for employing ODP
tooling, and (3) the suitability of the eXtreme Design (XD) ODP usage
methodology in non-academic contexts. This dissertation aims to close these gaps
by combining quantitative and qualitative methods, primarily based on five
ontology engineering projects involving inexperienced ontologists. A series of
ontology engineering workshops and surveys provided data about developer
preferences regarding ODP features, ODP usage methodology, and ODP tooling
needs. Other data sources are ontologies and ODPs published on the web, which
have been studied in detail. To evaluate tooling improvements, experimental
approaches provide data from comparison of new tools and techniques against
established alternatives. The analysis of the gathered data resulted in a set of
measurable quality indicators that cover aspects of ODP documentation, formal
representation or axiomatisation, and usage by ontologists. These indicators
highlight quality trade-offs: for instance, between ODP Learnability and
Reusability, or between Functional Suitability and Performance Efficiency.
Furthermore, the results demonstrate a need for ODP tools that support three
novel property specialisation strategies, and highlight the preference of
inexperienced developers for template-based ODP instantiation, neither of which
is supported in prior tooling. The studies also resulted in improvements to ODP
search engines based on ODP-specific attributes. Finally, the analysis shows
that XD should include guidance for the developer roles and responsibilities in
ontology engineering projects, suggestions on how to reuse existing ontology
resources, and approaches for adapting XD to project-specific contexts.
One major problem for the designer of electronic systems is the presence of
uncertainty, which is due to phenomena such as process and workload variation.
Very often, uncertainty is inherent and inevitable. If ignored, it can lead to
degradation of the quality of service in the best case and to severe faults or
burnt silicon in the worst case. Thus, it is crucial to analyze uncertainty and
to mitigate its damaging consequences by designing electronic systems in such a
way that uncertainty is effectively and efficiently taken into account. We begin
by considering techniques for deterministic system-level analysis and design of
certain aspects of electronic systems. These techniques do not take uncertainty
into account, but they serve as a solid foundation for those that do. Our
attention revolves primarily around power and temperature, as they are of
central importance for attaining robustness and energy efficiency. We develop a
novel approach to dynamic steady-state temperature analysis of electronic
systems and apply it in the context of reliability optimization. We then proceed
to develop techniques that address uncertainty. The first technique is designed
to quantify the variability in process parameters, which is induced by process
variation, across silicon wafers based on indirect and potentially incomplete
and noisy measurements. The second technique is designed to study diverse
system-level characteristics with respect to the variability originating from
process variation. In particular, it allows for analyzing transient temperature
profiles as well as dynamic steady-state temperature profiles of electronic
systems. This is illustrated by considering a problem of design-space
exploration with probabilistic constraints related to reliability. The third
technique that we develop is designed to efficiently tackle the case of sources
of uncertainty that are less regular than process variation, such as workload
variation. This technique is exemplified by analyzing the effect that workload
units with uncertain processing times have on the timing-, power-, and
temperature-related characteristics of the system under consideration. We also
address the issue of runtime management of electronic systems that are subject
to uncertainty. In this context, we perform an early investigation into the
utility of advanced prediction techniques for the purpose of fine-grained
long-range forecasting of resource usage in large computer systems. All the
proposed techniques are assessed by extensive experimental evaluations, which
demonstrate the superior performance of our approaches to analysis and design of
electronic systems compared to existing techniques.
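A much simpler, generic illustration of propagating parameter uncertainty through a system-level model is Monte Carlo sampling over an uncertain parameter and checking a probabilistic constraint, as sketched below with invented numbers; the thesis's techniques are considerably more efficient than plain sampling.

    import random
    import statistics

    random.seed(0)

    def steady_state_temperature(power_w, r_th, t_ambient=45.0):
        """Steady-state temperature of a single thermal node (illustrative model)."""
        return t_ambient + power_w * r_th

    # Process variation modelled as a normally distributed thermal resistance.
    samples = [steady_state_temperature(power_w=2.0, r_th=random.gauss(5.0, 0.6))
               for _ in range(100000)]

    limit = 57.0   # hypothetical temperature constraint in degrees Celsius
    violation_probability = sum(t > limit for t in samples) / len(samples)
    print(round(statistics.mean(samples), 2), round(violation_probability, 3))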
The abundance of data at our disposal empowers data-driven applications and
decision making. The knowledge captured in the data, however, has not been
utilized to full potential, as it is only accessible to human interpretation and
data are distributed in heterogeneous repositories. Ontologies are a key
technology unlocking the knowledge in the data by providing means to model the
world around us and infer knowledge implicitly captured in the data. As data are
hosted by independent organizations we often need to use several ontologies and
discover the relationships between them in order to support data and knowledge
transfer. Broadly speaking, while ontologies provide formal representations and
thus the basis, ontology alignment supplies integration techniques and thus the
means to turn the data kept in distributed, heterogeneous repositories into
valuable knowledge. While many automatic approaches for creating alignments have
already been developed, user input is still required for obtaining the
highest-quality alignments. This thesis focuses on supporting users during the
cognitively intensive alignment process and makes several contributions. We have
identified front- and back-end system features that foster user involvement
during the alignment process and have investigated their support in existing
systems by user interface evaluations and literature studies. We have further
narrowed down our investigation to features in connection to the, arguably, most
cognitively demanding task from the users’ perspective—manual validation—and
have also considered the level of user expertise by assessing the impact of user
errors on alignments’ quality. As developing and aligning ontologies is an
error-prone task, we have focused on the benefits of the integration of ontology
alignment and debugging. We have enabled interactive comparative exploration and
evaluation of multiple alignments at different levels of detail by developing a
dedicated visual environment—Alignment Cubes—which allows for alignments’
evaluation even in the absence of reference alignments. Inspired by the latest
technological advances we have investigated and identified three promising
directions for the application of large, high-resolution displays in the field:
improving the navigation in the ontologies and their alignments, supporting
reasoning, and supporting collaboration between users.
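For context, alignment quality is conventionally scored against a reference alignment using precision, recall, and F-measure; this is the setting that Alignment Cubes relaxes by also supporting evaluation without a reference. A minimal Python sketch of that standard computation, with made-up correspondences, is:

    def evaluate_alignment(candidate, reference):
        # Standard set-based evaluation of an ontology alignment; each
        # correspondence is a (source_entity, target_entity, relation) tuple.
        candidate, reference = set(candidate), set(reference)
        correct = candidate & reference
        precision = len(correct) / len(candidate) if candidate else 0.0
        recall = len(correct) / len(reference) if reference else 0.0
        f_measure = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
        return precision, recall, f_measure

    # Made-up correspondences between two anatomy ontologies.
    candidate = {("mouse:heart", "human:heart", "="),
                 ("mouse:aorta", "human:aorta", "="),
                 ("mouse:paw", "human:hand", "=")}
    reference = {("mouse:heart", "human:heart", "="),
                 ("mouse:aorta", "human:aorta", "=")}
    print(evaluate_alignment(candidate, reference))  # precision 2/3, recall 1.0, F 0.8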
Online video streaming has gained tremendous popularity over recent years and
currently constitutes the majority of Internet traffic. As large-scale on-demand
streaming continues to gain popularity, several important questions and
challenges remain unanswered. This thesis addresses open questions in the areas
of efficient content delivery for HTTP-based Adaptive Streaming (HAS) from
different perspectives (client, network and content provider) and in the design,
implementation, and evaluation of interactive streaming applications over HAS.
As streaming usage scales and new streaming services emerge, continuous
improvements are required to both the infrastructure and the techniques used to
deliver high-quality streams. In the context of Content Delivery Network (CDN)
nodes or proxies, this thesis investigates the interaction between HAS clients
and proxy caches. In particular, we propose and evaluate classes of
content-aware and collaborative policies that take advantage of information that
is already available, or share information among elements in the delivery chain,
where all involved parties can benefit. Aside from the users' playback
experience, it is also important for content providers to minimize users’
startup times. We have designed and evaluated different classes of client-side
policies that can prefetch data from the videos that the users are most likely
to watch next, without negatively affecting the currently watched video. To help
network providers monitor and ensure that their customers enjoy good playback
experiences, we have proposed and evaluated techniques that can be used to
estimate clients’ current buffer conditions. Since several services today stream
over HTTPS, our solution is adapted to predict client buffer conditions by only
observing encrypted network-level traffic. Our solution allows the operator to
identify clients with low-buffer conditions and implement policies that help
avoid playback stalls. The emergence of HAS as the de facto standard for
delivering streaming content also opens the door to use it to deliver the next
generation of streaming services, such as various forms of interactive services.
This class of services is gaining popularity and is expected to be the next big
thing in entertainment. For the area of interactive streaming, this thesis
proposes, models, designs, and evaluates novel streaming applications such as
interactive branched videos and multi-video stream bundles. For these
applications, we design and evaluate careful prefetching policies that provide
seamless playback (without stalls or switching delay) even when interactive
branched video viewers defer their choices to the last possible moment and when
users switch between alternative streams within multi-video stream bundles.
Using optimization frameworks, we design and implement effective buffer
management techniques for seamless playback experiences and evaluate several
tradeoffs using our policies.
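To make the flavour of such client-side prefetching concrete, the following minimal Python sketch shows a hypothetical policy (not the one designed in the thesis): it first protects the playback buffer of the currently watched video and only then spends spare download capacity on the first segments of the possible upcoming branches, so that playback can continue whichever branch the viewer eventually picks. The thresholds and branch identifiers are assumptions.

    def choose_next_download(buffer_s, branch_buffers_s,
                             safe_buffer_s=20.0, branch_target_s=6.0):
        # buffer_s          -- seconds buffered for the currently watched video
        # branch_buffers_s  -- dict: branch id -> seconds buffered for that branch
        # safe_buffer_s     -- assumed threshold protecting active playback
        # branch_target_s   -- assumed per-branch prefetch target near a branch point
        # 1. Always protect the currently watched video first.
        if buffer_s < safe_buffer_s:
            return "active"
        # 2. With spare capacity, prefetch the least-prepared upcoming branch.
        needy = {b: s for b, s in branch_buffers_s.items() if s < branch_target_s}
        if needy:
            return min(needy, key=needy.get)
        # 3. Otherwise keep extending the active buffer.
        return "active"

    # Example: 25 s buffered, two possible branches after the next branch point.
    print(choose_next_download(25.0, {"branch-A": 2.0, "branch-B": 4.0}))  # branch-A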
CPU/GPU heterogeneous systems have shown remarkable advantages in performance and energy consumption compared to homogeneous ones such as standard multi-core systems. Such heterogeneity represents one of the most promising trends for the near-future evolution of high-performance computing hardware. However, as a double-edged sword, the heterogeneity also brings significant programming complexities that prevent the easy and efficient usage of different heterogeneous systems. In this thesis, we are interested in four kinds of fundamental complexities that are associated with these heterogeneous systems: measurement complexity (the effort required to measure a metric, e.g., measuring energy), CPU-GPU selection complexity, platform complexity, and data management complexity. We explore new low-cost programming abstractions to hide these complexities, and propose new optimization techniques that could be performed under the hood.
For the measurement complexity, although measuring time is trivial with native library support, measuring energy consumption, especially for systems with GPUs, is complex because of the low-level details involved, such as choosing the right measurement method, handling the trade-off between sampling rate and accuracy, and switching between different measurement metrics. We propose a clean interface, with an implementation, that not only hides the complexity of energy measurement but also unifies different kinds of measurements. The unification bridges the gap between time measurement and energy measurement, and if no metric-specific assumptions related to time optimization techniques are made, energy optimization can be performed by blindly reusing time optimization techniques.
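As a rough Python sketch of what such a unifying measurement interface might look like (the names Meter, TimeMeter, and EnergyMeter are hypothetical; the thesis's actual interface is not reproduced here), the code below exposes one measure entry point per metric, so that a metric-agnostic selection routine can be reused unchanged for time or energy:

    import time
    from abc import ABC, abstractmethod

    class Meter(ABC):
        # Unified interface: every metric is measured the same way.
        @abstractmethod
        def measure(self, fn, *args):
            """Run fn(*args) and return the metric's cost as a float."""

    class TimeMeter(Meter):
        def measure(self, fn, *args):
            start = time.perf_counter()
            fn(*args)
            return time.perf_counter() - start  # seconds

    class EnergyMeter(Meter):
        # Placeholder: a real implementation would hide sensor selection,
        # sampling-rate trade-offs, etc. behind the same interface.
        def measure(self, fn, *args):
            raise NotImplementedError("platform-specific energy sensors go here")

    def pick_best(variants, meter: Meter, *args):
        # Metric-agnostic "optimizer": reusable for time or energy alike.
        return min(variants, key=lambda fn: meter.measure(fn, *args))

    # Usage: the same selection code works with any Meter implementation.
    best = pick_best([sorted, lambda xs: sorted(xs, reverse=True)],
                     TimeMeter(), list(range(10_000)))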
For the CPU-GPU selection complexity, which relates to efficient utilization of heterogeneous hardware, we propose a new adaptive-sampling-based mechanism for constructing selection predictors; the mechanism adapts to different hardware platforms automatically and shows non-trivial advantages over random sampling.
For the platform complexity, we propose a new modular platform modeling language, and its implementation, to formally and systematically describe a computer system, enabling zero-overhead platform information queries for high-level software tool chains and for programmers as a basis for making software adaptive.
For the data management complexity, we propose a new mechanism to enable a unified memory view on heterogeneous systems that have separate memory spaces. This mechanism enables programmers to write significantly less code, which runs as fast as expert-written code and outperforms the current commercially available solution, Nvidia's Unified Memory. We further propose two data movement optimization techniques: lazy allocation and transfer fusion optimization. The two techniques are based on adaptively merging messages to reduce data transfer latency. We show that these techniques can be potentially beneficial and we prove that our greedy fusion algorithm is optimal.
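As a rough illustration of the message-merging idea (the actual cost model and the thesis's provably optimal greedy fusion algorithm are not given in this abstract, so the constants and the adjacent-only merging rule below are assumptions), the sketch fuses two neighbouring transfers whenever the saved fixed per-transfer latency outweighs the cost of copying the gap between them:

    ALPHA = 10e-6   # assumed fixed cost per transfer, seconds
    BETA = 0.1e-9   # assumed cost per transferred byte, seconds

    def cost(transfers):
        # transfers: list of (offset, size) byte ranges, sorted by offset.
        return sum(ALPHA + BETA * size for _, size in transfers)

    def fuse_greedily(transfers):
        # Merge neighbouring ranges while the merge reduces total cost.
        fused = [transfers[0]]
        for off, size in transfers[1:]:
            prev_off, prev_size = fused[-1]
            gap = off - (prev_off + prev_size)
            # Fusing drops one fixed cost ALPHA but also copies the gap bytes.
            if BETA * gap < ALPHA:
                fused[-1] = (prev_off, prev_size + gap + size)
            else:
                fused.append((off, size))
        return fused

    pending = [(0, 4096), (4352, 4096), (1_000_000, 4096)]
    print(cost(pending), cost(fuse_greedily(pending)))  # fusion lowers the total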
Finally, we show that our approaches to handle different complexities can be combined so that programmers could use them simultaneously.
This research was partly funded by two EU FP7 projects (PEPPHER and EXCESS) and SeRC.
Simulations are frequently used techniques for training, performance assessment, and prediction of future outcomes. In this thesis, the term “human-centered simulation” is used to refer to any simulation in which humans and human cognition are integral to the simulation’s function and purpose (e.g., simulation-based training). A general problem for human-centered simulations is to capture the cognitive processes and activities of the target situation (i.e., the real world task) and recreate them accurately in the simulation. The prevalent view within the simulation research community is that cognition consists of internal, decontextualized computational processes of individuals. However, contemporary theories of cognition emphasize the importance of the external environment, use of tools, as well as social and cultural factors in cognitive practice. Consequently, there is a need for research on how such contemporary perspectives can be used to describe human-centered simulations, re-interpret theoretical constructs of such simulations, and direct how simulations should be modeled, designed, and evaluated.
This thesis adopts distributed cognition as a framework for studying human-centered simulations. Training and assessment of emergency medical management in a Swedish context using the Emergo Train System (ETS) simulator was adopted as a case study. ETS simulations were studied and analyzed using the distributed cognition for teamwork (DiCoT) methodology with the goal of understanding, evaluating, and testing the validity of the ETS simulator. Moreover, to explore distributed cognition as a basis for simulator design, a digital re-design of ETS (DIGEMERGO) was developed based on the DiCoT analysis. The aim of the DIGEMERGO system was to retain core distributed cognitive features of ETS, to increase validity, outcome reliability, and to provide a digital platform for emergency medical studies. DIGEMERGO was evaluated in three separate studies; first, a usefulness, usability, and face-validation study that involved subject-matter experts; second, a comparative validation study using an expert-novice group comparison; and finally, a transfer of training study based on self-efficacy and management performance. Overall, the results showed that DIGEMERGO was perceived as a useful, immersive, and promising simulator – with mixed evidence for validity – that demonstrated increased general self-efficacy and management performance following simulation exercises.
This thesis demonstrates that distributed cognition, using DiCoT, is a useful framework for understanding, designing and evaluating simulated environments. In addition, the thesis conceptualizes and re-interprets central constructs of human-centered simulation in terms of distributed cognition. In doing so, the thesis shows how distributed cognitive processes relate to validity, fidelity, functionality, and usefulness of human-centered simulations. This thesis thus provides a new understanding of human-centered simulations that is grounded in distributed cognition theory.
This thesis investigates the possibilities of automating parts of the bug handling process in large-scale software development organizations. The bug handling process is a large part of the mostly manual, and very costly, maintenance of software systems. Automating parts of this time-consuming and very laborious process could save large amounts of time and effort wasted on dealing with bug reports. In this thesis we focus on two aspects of the bug handling process, bug assignment and fault localization. Bug assignment is the process of assigning a newly registered bug report to a design team or developer. Fault localization is the process of finding where in a software architecture the fault causing the bug report should be solved. The main reason these tasks are not automated is that they are considered hard to automate, requiring human expertise and creativity. This thesis examines the possibility of using machine learning techniques for automating at least parts of these processes. We call these automated techniques Automated Bug Assignment (ABA) and Automatic Fault Localization (AFL), respectively. We treat both of these problems as classification problems. In ABA, the classes are the design teams in the development organization. In AFL, the classes consist of the software components in the software architecture. We focus on a high-level fault localization that is suitable for integration into the initial support flow of large software development organizations.
The thesis consists of six papers that investigate different aspects of the AFL and ABA problems. The first two papers are empirical and exploratory in nature, examining the ABA problem using existing machine learning techniques but introducing ensembles into the ABA context. In the first paper we show that, as in many other contexts, ensembles such as the stacked generalizer (or stacking) improve classification accuracy compared to individual classifiers when evaluated using cross-fold validation. The second paper thoroughly explores many aspects of the ABA problem in the context of stacking, such as training set size, age of bug reports, and different types of evaluation. The second paper also expands upon the first in the number of industry bug reports used: roughly 50,000, drawn from two large-scale industrial software development contexts. It is still, as far as we are aware, the largest study on real industry data on this topic to date. The third and sixth papers are theoretical, improving inference in a now classic machine learning technique for topic modeling called Latent Dirichlet Allocation (LDA). We show that, unlike the currently dominating approximate approaches, we can do parallel inference in the LDA model with a mathematically correct algorithm, without sacrificing efficiency or speed. The approaches are evaluated on standard research datasets, measuring various aspects such as sampling efficiency and execution time. Paper four, also theoretical, then builds upon the LDA model and introduces a novel supervised Bayesian classification model that we call DOLDA. The DOLDA model deals with both textual content and structured numeric and nominal inputs in the same model. The approach is evaluated on a new dataset extracted from IMDb, which contains both nominal and textual data. The model is evaluated using two approaches: first, by accuracy, using cross-fold validation; second, by comparing the simplicity of the final model with that of other approaches. In paper five we empirically study the performance, in terms of prediction accuracy, of the DOLDA model applied to the AFL problem. The DOLDA model was designed with the AFL problem in mind, since that problem has exactly the structure of a mix of nominal and numeric inputs in combination with unstructured text. We show that our DOLDA model exhibits many desirable properties, among them interpretability, which the research community has identified as missing in current models for AFL.
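For readers unfamiliar with stacking, the following scikit-learn sketch shows the general shape of a stacked-generalizer classifier for bug assignment over TF-IDF features of bug-report text. The toy bug reports, team labels, and choice of base classifiers are invented and far simpler than the setups evaluated in the papers.

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.ensemble import StackingClassifier

    reports = ["crash in radio link handover",          # toy bug-report texts
               "GUI freezes when opening settings",
               "handover fails under high load",
               "settings dialog renders blank page"]
    teams = ["baseband", "ui", "baseband", "ui"]         # toy team labels

    model = make_pipeline(
        TfidfVectorizer(),
        StackingClassifier(
            estimators=[("nb", MultinomialNB()),
                        ("lr", LogisticRegression(max_iter=1000))],
            final_estimator=LogisticRegression(max_iter=1000),
            cv=2))

    model.fit(reports, teams)
    print(model.predict(["blank page in settings dialog"]))  # most likely 'ui' here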
Modern embedded systems deploy several hardware accelerators, in a heterogeneous manner, to deliver high-performance computing. Among such devices, graphics processing units (GPUs) have earned a prominent position by virtue of their immense computing power. However, a system design that relies on the sheer throughput of GPUs is often incapable of satisfying the strict power- and time-related constraints faced by embedded systems.
This thesis presents several system-level software techniques to optimize the design of GPU-based embedded systems under various graphics and non-graphics applications. As compared to the conventional application-level optimizations, the system-wide view of our proposed techniques brings about several advantages: First, it allows for fully incorporating the limitations and requirements of the various system parts in the design process. Second, it can unveil optimization opportunities through exposing the information flow between the processing components. Third, the techniques are generally applicable to a wide range of applications with similar characteristics. In addition, multiple system-level techniques can be combined together or with application-level techniques to further improve the performance.
We begin by studying some of the unique attributes of GPU-based embedded systems and discussing several factors that distinguish the design of these systems from that of the conventional high-end GPU-based systems. We then proceed to develop two techniques that address an important challenge in the design of GPU-based embedded systems from different perspectives. The challenge arises from the fact that GPUs require a large amount of workload to be present at runtime in order to deliver a high throughput. However, for some embedded applications, collecting large batches of input data requires an unacceptable waiting time, prompting a trade-off between throughput and latency. We also develop an optimization technique for GPU-based applications to address the memory bottleneck issue by utilizing the GPU L2 cache to shorten data access time. Moreover, in the area of graphics applications, and in particular with a focus on mobile games, we propose a power management scheme to reduce the GPU power consumption by dynamically adjusting the display resolution, while considering the user's visual perception at various resolutions. We also discuss the collective impact of the proposed techniques in tackling the design challenges of emerging complex systems.
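The throughput-versus-latency trade-off described above can be made concrete with a simple, hypothetical batching rule in Python (not one of the techniques developed in the thesis): inputs are accumulated for the GPU until either an assumed batch-size target is reached, which favours throughput, or the oldest queued item has waited as long as an assumed latency budget allows, which forces early dispatch.

    import time

    class Batcher:
        # Dispatch a GPU batch when it is full or when the oldest queued
        # item would otherwise exceed the latency budget.
        def __init__(self, batch_size=64, latency_budget_s=0.010):
            self.batch_size = batch_size              # assumed throughput target
            self.latency_budget_s = latency_budget_s  # assumed latency bound
            self.items, self.oldest_arrival = [], None

        def add(self, item):
            if not self.items:
                self.oldest_arrival = time.monotonic()
            self.items.append(item)
            return self._maybe_dispatch()

        def _maybe_dispatch(self):
            waited = time.monotonic() - self.oldest_arrival
            if len(self.items) >= self.batch_size or waited >= self.latency_budget_s:
                batch, self.items = self.items, []
                return batch   # hand this batch to the GPU kernel launch
            return None

A production version would also need a timer so that a partially filled batch is flushed even when no new inputs arrive.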
The proposed techniques are assessed by real-life experiments on GPU-based hardware platforms, which demonstrate the superior performance of our approaches compared to state-of-the-art techniques.
The move from single-core processor systems to multi-core and many-processor systems comes with the requirement of implementing computations in a way that can utilize these multiple computational units efficiently. This task of writing efficient parallel algorithms will not be possible without improving programming languages and compilers to provide the supporting mechanisms. Computer-aided mathematical modelling and simulation is one of the most computationally intensive areas of computer science. Even simplified models of physical systems can impose a considerable computational load on the processors at hand. Being able to take advantage of the potential computational power provided by multi-core systems is vital in this area of application. This thesis addresses how to take advantage of the potential computational power provided by these modern processors in order to improve the performance of simulations, especially for models in the Modelica modelling language compiled and simulated using the OpenModelica compiler and run-time environment.
Two approaches to utilizing the computational power provided by modern multi-core architectures for simulation of mathematical models are presented in this thesis: automatic and explicit parallelization. The automatic approach covers the process of extracting and utilizing potential parallelism from equation systems in an automatic way, without any need for extra effort from the modellers/programmers. This thesis explains new and improved methods, together with improvements made to the OpenModelica compiler and a new accompanying task systems library, for efficient representation, clustering, scheduling, profiling, and execution of complex equation/task systems with heavy dependencies. The explicit parallelization approach allows utilizing parallelism with the help of the modeller or programmer. New programming constructs have been introduced to the Modelica language in order to enable modellers to express parallelized algorithms to take advantage of the computational capabilities provided by modern multi-core CPUs and GPUs. The OpenModelica compiler has been improved accordingly to recognize and utilize the information from these new algorithmic constructs and to generate parallel code for enhanced computational performance, portable to a range of parallel architectures through the OpenCL standard.
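As a simplified illustration of the kind of task-system scheduling that the automatic approach relies on (the clustering and scheduling methods in the thesis and its task systems library are considerably more elaborate), the Python sketch below list-schedules a small made-up equation/task dependency graph onto a fixed number of workers, always giving the next ready task to the worker that becomes free first:

    import heapq

    def list_schedule(costs, deps, workers=2):
        # costs: task -> execution cost; deps: task -> set of prerequisite tasks.
        # Returns (makespan, start-time map) of a simple list schedule.
        remaining = {t: set(deps.get(t, ())) for t in costs}
        ready = [t for t in costs if not remaining[t]]
        free_at = [(0.0, w) for w in range(workers)]  # (time worker is free, id)
        heapq.heapify(free_at)
        start, finish = {}, {}
        while ready:
            task = ready.pop(0)
            earliest = max((finish[d] for d in deps.get(task, ())), default=0.0)
            t_free, w = heapq.heappop(free_at)
            start[task] = max(t_free, earliest)
            finish[task] = start[task] + costs[task]
            heapq.heappush(free_at, (finish[task], w))
            for succ, prereqs in remaining.items():
                prereqs.discard(task)
                if not prereqs and succ not in start and succ not in ready:
                    ready.append(succ)
        return max(finish.values()), start

    # Tiny made-up task system: d depends on b and c, which both depend on a.
    costs = {"a": 1.0, "b": 2.0, "c": 2.0, "d": 1.0}
    deps = {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
    print(list_schedule(costs, deps))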
Development of new functionality and smart systems for different types of vehicles is accelerating with the advent of new emerging technologies such as connected and autonomous vehicles. To ensure that these new systems and functions work as intended, flexible and credible evaluation tools are necessary. One example of this type of tool is a driving simulator, which can be used for testing new and existing vehicle concepts and driver support systems. When a driver in a driving simulator operates it in the same way as they would a vehicle in actual traffic, the simulator provides a realistic evaluation of whatever is being investigated. Two advantages of a driving simulator are (1) that the same situation can be repeated several times over a short period of time, and (2) that driver reactions can be studied during dangerous situations that could result in serious injuries if they occurred in the real world. An important component of a driving simulator is the vehicle model, i.e., the model that describes how the vehicle reacts to its surroundings and driver inputs. To increase the simulator realism or the computational performance, it is possible to divide the vehicle model into subsystems that run on different computers that are connected in a network. A subsystem can also be replaced with hardware using so-called hardware-in-the-loop simulation, and can then be connected to the rest of the vehicle model using a specified interface. The technique of dividing a model into smaller subsystems running on separate nodes that communicate through a network is called distributed simulation.
This thesis investigates if and how a distributed simulator design might facilitate the maintenance and new development required for a driving simulator to be able to keep up with the increasing pace of vehicle development. For this purpose, three different distributed simulator solutions have been designed, built, and analyzed with the aim of constructing distributed simulators, including external hardware, where the simulation achieves the same degree of realism as with a traditional driving simulator. One of these simulator solutions has been used to create a parameterized powertrain model that can be configured to represent any of a number of different vehicles. Furthermore, the driver's driving task is combined with the powertrain model to monitor deviations. After the powertrain model was created, subsystems from a simulator solution and the powertrain model have been transferred to a Modelica environment. The goal is to create a framework for requirement testing that guarantees sufficient realism, also for a distributed driving simulation.
The results show that the distributed simulators we have developed work well overall with satisfactory performance. It is important to manage the vehicle model and how it is connected to a distributed system. In the distributed driveline simulator setup, the network delays were so small that they could be ignored, i.e., they did not affect the driving experience. However, if one gradually increases the delays, a driver in the distributed simulator will change his/her behavior. The impact of communication latency on a distributed simulator also depends on the simulator application, where different usages of the simulator, i.e., different simulator studies, will have different demands. We believe that many simulator studies could be performed using a distributed setup. One issue is how modifications to the system affect the vehicle model and the desired behavior. This leads to the need for a methodology for managing model requirements. In order to detect model deviations in the simulator environment, a monitoring aid has been implemented to help notify test managers when a model behaves strangely or is driven outside of its validated region. Since the availability of distributed laboratory equipment can be limited, the possibility of using Modelica (which is an equation-based and object-oriented programming language) for simulating subsystems is also examined. The Modelica implementation of the model has also been extended with requirements management, and in this work a framework is proposed for automatically evaluating the model in a tool.
In recent years, binary code analysis, i.e., applying program analysis directly at the machine code level, has become an increasingly important topic of study. This is driven to a large extent by the information security community, where security auditing of closed-source software and analysis of malware are important applications. Since most of the high-level semantics of the original source code are lost upon compilation to executable code, static analysis is intractable for, e.g., fine-grained information flow analysis of binary code. Dynamic analysis, however, does not suffer in the same way from reduced accuracy in the absence of high-level semantics, and is therefore also more readily applicable to binary code. Since fine-grained dynamic analysis often requires recording detailed information about every instruction execution, scalability can become a significant challenge. In this thesis, we address the scalability challenges of two powerful dynamic analysis methods whose widespread use has, so far, been impeded by their lack of scalability: dynamic slicing and instruction trace alignment. Dynamic slicing provides fine-grained information about dependencies between individual instructions, and can be used both as a powerful debugging aid and as a foundation for other dynamic analysis techniques. Instruction trace alignment provides a means for comparing executions of two similar programs and has important applications in, e.g., malware analysis, security auditing, and plagiarism detection. We also apply our work on scalable dynamic analysis in two novel approaches to improve fuzzing — a popular random testing technique that is widely used in industry to discover security vulnerabilities.
To use dynamic slicing, detailed information about a program execution must first be recorded. Since the amount of information is often too large to fit in main memory, existing dynamic slicing methods apply various time-versus-space trade-offs to reduce memory requirements. However, these trade-offs result in very high time overheads, limiting the usefulness of dynamic slicing in practice. In this thesis, we show that the speed of dynamic slicing can be greatly improved by carefully designing data structures and algorithms to exploit temporal locality of programs. This allows avoidance of the expensive trade-offs used in earlier methods by accessing recorded runtime information directly from secondary storage without significant random-access overhead. In addition to being a standalone contribution, scalable dynamic slicing also forms integral parts of our contributions to fuzzing. Our first contribution uses dynamic slicing and binary code mutation to automatically turn an existing executable into a test generator. In our experiments, this new approach to fuzzing achieved about an order of magnitude better code coverage than traditional mutational fuzzing and found several bugs in popular Linux software. The second work on fuzzing presented in this thesis uses dynamic slicing to accelerate the state-of-the-art fuzzer AFL by focusing the fuzzing effort on previously unexplored parts of the input space.
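To illustrate what a dynamic slice over recorded runtime information looks like in principle (leaving out the data-structure and storage engineering that the thesis is actually about), the Python sketch below walks a recorded trace backwards from a chosen instruction and collects every earlier instruction it transitively depends on through def-use chains; the trace format and register names are made up.

    def backward_slice(trace, criterion_index):
        # trace: list of (address, defined_locations, used_locations) records,
        # in execution order. Returns indices of trace entries in the slice.
        slice_indices = {criterion_index}
        _, _, wanted = trace[criterion_index]
        wanted = set(wanted)                  # locations whose definitions we need
        for i in range(criterion_index - 1, -1, -1):
            addr, defs, uses = trace[i]
            if wanted & set(defs):            # defines something we still need
                slice_indices.add(i)
                wanted -= set(defs)           # its definitions are now explained...
                wanted |= set(uses)           # ...but its own inputs are needed instead
        return sorted(slice_indices)

    # Made-up trace: (address, defs, uses)
    trace = [("0x01", ["rax"], []),           # rax = const
             ("0x02", ["rbx"], []),           # rbx = const
             ("0x03", ["rcx"], ["rax"]),      # rcx = f(rax)
             ("0x04", ["rbx"], ["rbx"]),      # rbx = g(rbx)   (not in the slice)
             ("0x05", ["rdx"], ["rcx", "rax"])]  # slicing criterion
    print(backward_slice(trace, 4))           # -> [0, 2, 4]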
For the second dynamic analysis technique whose scalability we sought to improve — instruction trace alignment — we employed techniques used in speech recognition and information retrieval to design what is, to the best of our knowledge, the first general approach to aligning realistically long program traces. We show in our experiments that this method is capable of producing meaningful alignments even in the presence of significant syntactic differences stemming from, for example, the use of different compilers or optimization levels.
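The abstract does not name the specific speech-recognition and information-retrieval techniques used, so the sketch below instead shows the classic dynamic-programming trace alignment that scalable methods are meant to replace: it computes a minimum-cost alignment of two short, made-up instruction traces, and its quadratic cost is exactly why it does not scale to realistically long traces.

    def align(trace_a, trace_b, gap=1, mismatch=1):
        # Classic dynamic-programming alignment of two instruction traces
        # (quadratic in trace length, hence not scalable to realistic traces).
        n, m = len(trace_a), len(trace_b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * gap
        for j in range(1, m + 1):
            dp[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                same = 0 if trace_a[i - 1] == trace_b[j - 1] else mismatch
                dp[i][j] = min(dp[i - 1][j - 1] + same,  # match / substitution
                               dp[i - 1][j] + gap,       # instruction only in A
                               dp[i][j - 1] + gap)       # instruction only in B
        return dp[n][m]

    a = ["push", "mov", "add", "call", "ret"]
    b = ["push", "mov", "sub", "add", "call", "ret"]
    print(align(a, b))   # cost 1: one extra instruction in trace b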
As a field in practice and academia, service design is moving out of its formative phase. In service design research, the realisation of service transformation from idea to service practice and the ways that design(ers) can contribute to this process are topics that are not well understood yet. The work presented in this thesis contributes to improving this understanding.
A programmatic design research approach was used to explore service transformation. This resulted in the formulation of two ways of framing and addressing the topic: type 1 service transformation, which frames the realisation of service transformation in terms of assembling a service delivery system, and type 2 service transformation, which views the realisation of service transformation as enabling value co-creating relationships between service actors.
Type 1 service transformation builds on the assimilation perspective on service innovation, where service transformation is realised by implementing service concepts. Service design contributes to this by facilitating the development of desirable service experiences. Trained designers can contribute to the realisation of type 1 service transformation by applying implementation strategies and supporting the handover of service design projects. Design for manufacture and assembly (DFMA) is a generative construct for addressing type 1 service transformation. DFMA is central to the program implementation during design, which was used to explore type 1 service transformation.
Type 2 service transformation builds on the synthesis perspective on service innovation, which adopts a service-dominant logic. Service transformation is the shaping of value co-creating relationships between service actors and is realised by enabling service actors to enact roles that make the envisioned value co-creating relationships possible. Designing contributes by helping service developers to improve their understanding of value co-creating relationships and the way that realising service transformation is expected to affect those relations. Trained designers contribute by supporting this inquiry. The concept of roles, including Role Theory vocabulary, is a generative construct for addressing type 2 service transformation and is central to the program enabling enactment, which is suggested for the study of type 2 service transformation.
The main contribution of this thesis is the articulation of these two perspectives on service transformation. The articulation of these two framings helps service developers and researchers in their efforts to study and work on the realisation of service transformation.
Vast amounts of data are continually being generated by a wide variety of data producers. This data ranges from quantitative sensor observations produced by robot systems to complex unstructured human-generated texts on social media. With data being so abundant, the ability to make sense of these streams of data through reasoning is of great importance. Reasoning over streams is particularly relevant for autonomous robotic systems that operate in physical environments. They commonly observe this environment through incremental observations, gradually refining information about their surroundings. This makes robust management of streaming data and their refinement an important problem.
Many contemporary approaches to stream reasoning focus on the issue of querying data streams in order to generate higher-level information by relying on well-known database approaches. Other approaches apply logic-based reasoning techniques, which rarely consider the provenance of their symbolic interpretations. In this work, we integrate techniques for logic-based stream reasoning with the adaptive generation of the state streams over which the reasoning is performed. This combination deals with both the challenge of reasoning over uncertain streaming data and the problem of robustly managing streaming data and their refinement.
The main contributions of this work are (1) a logic-based temporal reasoning technique based on path checking under uncertainty that combines temporal reasoning with qualitative spatial reasoning; (2) an adaptive reconfiguration procedure for generating and maintaining the data streams over which spatio-temporal stream reasoning is performed; and (3) integration of these two techniques into a stream reasoning framework. The proposed spatio-temporal stream reasoning technique is able to reason with intertemporal spatial relations by leveraging landmarks. Adaptive state stream generation allows the framework to adapt to situations in which the set of available streaming resources changes. Management of streaming resources is formalised in the DyKnow model, which introduces a configuration life-cycle to adaptively generate state streams. The DyKnow-ROS stream reasoning framework is a concrete realisation of this model that extends the Robot Operating System (ROS). DyKnow-ROS has been deployed on the SoftBank Robotics NAO platform to demonstrate the system's capabilities in a case study on run-time adaptive reconfiguration. The results show that the proposed system - by combining reasoning over and reasoning about streams - can robustly perform stream reasoning, even when the availability of streaming resources changes.
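As a bare-bones illustration of incremental path checking over a state stream (leaving out the uncertainty handling and the qualitative spatial relations that are central to the thesis), the Python sketch below checks a bounded "eventually within k states" property as states arrive; the predicate and the state stream are hypothetical.

    def eventually_within(k, predicate, stream):
        # Check the bounded property F[0,k] predicate over an incoming stream:
        # True as soon as the predicate holds in one of the first k+1 states,
        # False once k+1 states have passed without it, None if the stream
        # ends before a verdict can be reached.
        for i, state in enumerate(stream):
            if predicate(state):
                return True
            if i >= k:
                return False
        return None

    # Hypothetical state stream: does the robot reach the charging area
    # within the next few incoming states?
    stream = [{"area": "corridor"}, {"area": "lab"}, {"area": "charging"}]
    print(eventually_within(3, lambda s: s["area"] == "charging", stream))  # True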
No 1831
Building Design Capability in the Public Sector: Expanding the Horizons of
Development
Lisa Malmberg
No 1851
Gated Bayesian Networks
Marcus Bendtsen
No 1854
Computational Complexity of some Optimization Problems in Planning
Meysam Aghighi
No 1863
Methods for Detecting Unsolvable Planning Instances using Variable Projection,
2017
Simon Ståhlberg
No 1879
Content Ontology Design Patterns: Qualities, Methods, and Tools, 2017
Karl Hammar
No 1887
System-Level Analysis and Design under Uncertainty, 2017
Ivan Ukhov
No 1891
Fostering User Involvement in Ontology Alignment and Alignment Evaluation, 2017
Valentina Ivanova
No 1902
Efficient HTTP-based Adaptive Streaming of Linear and Interactive Videos, 2018
Vengatanathan Krishnamoorthi
No 1903
Programming Abstractions and Optimization Techniques for GPU-based Heterogeneous
Systems, 2018
Lu Li
No 1913
Studying Simulations with Distributed Cognition, 2018
Jonas Rybing
No 1936
Machine Learning-Based Bug Handling in Large-Scale Software
Development, 2018
Leif Jonsson
No 1964
System-Level Design of GPU-Based Embedded Systems, 2018
Arian Maghazeh
No 1967
Automatic and Explicit Parallelization Approaches for
Equation Based Mathematical Modeling and Simulation, 2019
Mahder Gebremedhin
No 1984
Distributed Moving Base Driving
Simulators; Technology, Performance, and Requirements, 2019
Anders Andersson
No 1993
Scalable Dynamic Analysis of Binary Code, 2019
Ulf Kargén
No 2001
How Service Ideas Are
Implemented: Ways of Framing and Addressing Service Transformation, 2019
Tim Overkamp
No 2006
Robust Stream Reasoning Under
Uncertainty, 2019
Daniel de Leng