Licentiate of Engineering Theses
INTERLEAVED PROCESSING OF NON-NUMERICAL DATA STORED ON A CYCLIC MEMORY
Abstract: The memory access delay caused by the 'emulation' of random access capabilities on a sequential access memory type is stated as a problem in Non-Numerical Data Processing, and the motivation for solving it is provided. An adequate storage data representation and a proper organization of the processing algorithms are introduced in order to take advantage of the natural sequential access of a Cyclic Memory. In support of this approach, two published works were produced.
The first paper, entitled "The Utilization of Controllable Cyclic Memory Properties in Non-Numerical Data Processing", defines the conditions for the sequential evaluation of a given query. The second work, "Sequential Evaluation of Boolean Functions", was originally derived as a supporting theory for the concepts presented in the first paper; namely, to enable the sequential (per Boolean function argument) evaluation of a query's verification expression, given as a Boolean expression of predicates over attributes of a given data type. The latter method, however, has a much broader application area, e.g., time-efficient real-time decision making.
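The core idea of sequential (per-argument) evaluation can be illustrated with a small sketch. The Python code below is an illustrative reconstruction, not the thesis's method: a query's verification expression is partially evaluated as predicate values become known in storage order, so a record can be accepted or rejected as soon as the outcome is determined, often before all attributes have passed the read head.

```python
def simplify(expr, known):
    """Partially evaluate a Boolean expression tree. `expr` is either a
    predicate name (str) or a tuple ('and'|'or'|'not', *subexpressions);
    `known` maps predicate names to truth values obtained so far."""
    if isinstance(expr, str):
        return known.get(expr, expr)
    op, *args = expr
    args = [simplify(a, known) for a in args]
    if op == 'not':
        return (not args[0]) if isinstance(args[0], bool) else ('not', args[0])
    absorbing = (op == 'or')   # True absorbs 'or'; False absorbs 'and'
    if any(a is absorbing for a in args):
        return absorbing
    rest = [a for a in args if not isinstance(a, bool)]
    if not rest:
        return not absorbing   # every argument was the neutral element
    return rest[0] if len(rest) == 1 else (op, *rest)
```

With the hypothetical query `('or', ('and', 'p1', 'p2'), 'p3')`, learning `p1 = False` reduces the expression to `'p3'`, and learning `p3 = True` decides it; the predicate `p2` never has to be read.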
AN INTERACTIVE FLOWCHARTING TECHNIQUE FOR COMMUNICATING AND REALIZING ALGORITHMS
Arne Jönsson, Mikael Patel
Abstract: This thesis describes the design, specification, implementation and experience from the use of an interactive flowcharting technique for communicating and realizing algorithms. Our goals are: 1) to help novices to understand computers, by giving them a framework for organizing algorithms, and 2) to support development of software produced by groups of people over an extended period of time. Based on the notions of Dimensional Flowcharts, a system called the DIMsystem has been developed for handling structured flowcharts. The DIMsystem consists of different modules for creating, manipulating and communicating dimensional flowcharts. The current research implementation is in Pascal and runs on a VAX/VMS computer system.
RETARGETING OF AN INCREMENTAL CODE GENERATOR
Abstract: Incremental programming environments are becoming increasingly popular. Earlier systems are mostly based on interpretation. DICE is an incremental programming environment based on incremental compilation only. In this thesis the retargeting of the code generator from PDP-11 to DEC-20 is described.
DICE supports separate host and target environments. The code and data administration in the target is discussed and analyzed. A new runtime environment in the target is created by providing an interface to an existing one.
In DICE, the traditional compile, link and load passes are integrated and interleaved. A classification of linking programs is made from this point of view.
ON THE USE OF TYPICAL CASES FOR KNOWLEDGE-BASED CONSULTATION AND TEACHING
Abstract: Knowledge-based approaches to software development promise to result in important breakthroughs, both regarding our ability to solve complex problems and in improved software productivity in general. A key technique here is to separate domain knowledge from the control information needed for the procedural execution of a program. However, general-purpose inference mechanisms entail certain disadvantages with respect to e.g. efficiency, focusing in problem-solving, and transparency in knowledge representation. In this licentiate thesis we propose an approach where domain-dependent control is introduced in the form of prototypes, based on typical cases from the application domain. It is shown how this scheme results in more effective problem-solving behaviour as compared with a traditional approach relying entirely on rules for domain as well as control information. Further, we demonstrate how the knowledge base can easily be reused for independent purposes, such as consultative problem solving and teaching respectively. Our claims are supported by implementations, both in a conventional knowledge system environment with the help of the EMYCIN system, and in a system supporting reasoned control of reasoning. The application domain is economic advice giving in a bank environment, in particular advice on procedures for transfers of real estate.
STEPS TOWARDS THE FORMALIZATION OF DESIGNING VLSI SYSTEMS
Abstract: This thesis describes an attempt to formalize the design process of VLSI systems as a sequence of semantics-preserving mappings which transform a program-like behavioral description into a structural description. The produced structural description may then be partitioned into several potentially asynchronous modules with well-defined interfaces. The proposed strategy is based on a formal computational model derived from timed Petri nets and consisting of separate, but related, models of the control and data parts. Partitioning of systems into submodules is provided both on the data part and on the control part, which produces a set of pairs of corresponding data subparts and control subparts and allows potential asynchronous operation of the designed systems as well as physical distribution of the modules. The use of such a formal specification also leads to the effective use of CAD and automatic tools in the synthesis process, as well as providing for the possibility of verifying some aspects of a design before it is completed. CAMAD, an integrated design aid system, has been partially developed based on these formalizations. The present thesis also attempts to formulate the control/data path allocation and module partitioning problem as an optimization problem. This differs from previous approaches, where ad hoc algorithms and predefined implementation structures are explicitly or implicitly used, and where a centralized control strategy is assumed.
SIMULATION AND EVALUATION OF AN ARCHITECTURE BASED ON ASYNCHRONOUS PROCESSES
Abstract: Much research today is devoted to finding new and improved computer architectures in which parallel computations can be performed, the goal being an exploitation of the natural parallelism inherent in many problem descriptions. This thesis describes a register level simulator for a family of architectures based on asynchronous processes. An important aspect of this class of architectures is its modularity. Within the architecture, we hope to avoid the problem of dynamically binding every operation as in dataflow machines. A silicon compiler can use the modularity of descriptions to perform various optimizations on instances of the architecture. The simulator is written in a language called Occam, in which parallel execution can be expressed at the statement level. A short description of the language is given, and some of the issues of designing, testing and maintaining concurrent programs are discussed. In particular, the added complexity of parallelism makes the debugging phase very difficult.
ICONSTRAINT, A DEPENDENCY DIRECTED CONSTRAINT MAINTENANCE SYSTEM
Abstract: Problem solving involves search. In AI we try to find ways of avoiding or minimizing search. An effective approach is to exploit knowledge of the problem domain. Such knowledge often takes the form of a set of constraints. In general, a constraint represents a required relationship among some variables. For this reason we usually assume the existence of some machinery that can enforce a given set of constraints and resolve the conflicts that arise when these are violated.
Programming systems based on constraints have been successfully applied to circuit analysis, design and simulation, scene analysis, plan generation, hardware diagnosis and qualitative reasoning.
This thesis presents ICONStraint, a programming language based on the constraints paradigm of computation and gives a characterization of consistency conditions and the operations for ensuring consistency in constraint networks designed in the language.
In ICONStraint we represent constraint systems in dependency structures and use reason maintenance, local propagation and dependency directed backtracking for computation and consistency maintenance.
ICONStraint has been implemented in Interlisp-D.
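As an illustration of the local-propagation component only (reason maintenance and dependency-directed backtracking are not shown), here is a minimal hypothetical sketch in Python rather than Interlisp-D. An adder constraint deduces the third value whenever two of its three cells become known, and conflicts surface as violations:

```python
class Cell:
    """A variable in the constraint network."""
    def __init__(self, name):
        self.name, self.value, self.users = name, None, []

    def set(self, value):
        if self.value is None:
            self.value = value
            for constraint in self.users:   # wake up attached constraints
                constraint.propagate()
        elif self.value != value:
            raise ValueError(f"constraint violated at {self.name}")

class Adder:
    """Constraint a + b = c, enforced by local propagation: whenever two
    of the three cells are known, the third is deduced."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for cell in (a, b, c):
            cell.users.append(self)

    def propagate(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None:
            self.c.set(a + b)
        elif a is not None and c is not None:
            self.b.set(c - a)
        elif b is not None and c is not None:
            self.a.set(c - b)
```

Chaining two adders (a + b = c, c + d = e) and setting a, b and d lets the values ripple through the network automatically.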
ON THE SPECIFICATION AND VERIFICATION OF VLSI SYSTEMS
Abstract: System designers now have the opportunity to place on the order of 10^5-10^6 transistors on a single chip, allowing larger and more complicated systems to be produced at reduced production costs. This opportunity increases the demand for appropriate design automation, including tools for synthesis, analysis, and verification. However, human designers still are, and will be, our main source of innovation. In order to express and communicate new design ideas to a computer aided engineering environment, some form of specification language is needed. Different tools and engineering situations put different requirements on a specification language. It is important that all users of a language have a firm knowledge of how to interpret the language. This thesis proposes a specification language (ASL), a semantic model, and transformation rules related to the language. The thesis focuses upon the specification of a system's actional behaviour, and the semantics provides a framework for the verification of a system's structural implementation versus its actional specification. A set of calculus, port and event reduction rules, and rules for port binding and partial evaluation are proposed as tools for verification.
A STRUCTURE EDITOR FOR DOCUMENTS AND PROGRAMS
Abstract: This thesis presents a generalized approach to data editing in interactive systems. The design and implementation of the ED3 editor, which is a powerful tool for text editing combining the ability to handle hierarchical structures with screen-oriented text editing facilities, is described as well as a number of ED3 applications.
A technique for efficient program editing for large programs is also described. An editor for Pascal and Ada programs has been created by integrating parsers, pretty-printers and a Pascal to Ada syntax translator into ED3.
NEW RESULTS ABOUT THE APPROXIMATION BEHAVIOR OF THE GREEDY TRIANGULATION
Abstract: In this paper it is shown that there is some constant c such that, for any polygon with or without holes, with w concave vertices, the length of any greedy triangulation of the polygon is not longer than c(w + 1) times the length of a minimum weight triangulation of the polygon (under the assumption that no three vertices lie on the same line). A low approximation constant is proved for interesting classes of polygons. On the other hand, it is shown that for every integer n greater than 3, there exists some set S of n points in the plane such that the greedy triangulation of S is Ω(n^(1/2)) times longer than the minimum weight triangulation (this improves the previously known Ω(n^(1/3)) lower bound). Finally, a simple linear-time algorithm is presented and analyzed for computing greedy triangulations of polygons with the so-called semi-circle property.
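The greedy triangulation itself is easy to state: repeatedly add the shortest remaining edge that does not properly cross an edge already chosen. A naive quadratic-space Python sketch for point sets in general position (purely illustrative; it makes no attempt at the linear-time algorithm mentioned above):

```python
from itertools import combinations

def orient(p, q, r):
    """Signed area test: >0 if r lies left of the directed line p->q."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def properly_cross(a, b, c, d):
    """True if segments ab and cd intersect at a point interior to both."""
    if len({a, b, c, d}) < 4:
        return False  # segments sharing an endpoint do not properly cross
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def greedy_triangulation(points):
    """Accept candidate edges in order of increasing length, keeping each
    edge that crosses no previously accepted edge."""
    d2 = lambda e: (e[0][0] - e[1][0]) ** 2 + (e[0][1] - e[1][1]) ** 2
    edges = []
    for a, b in sorted(combinations(points, 2), key=d2):
        if not any(properly_cross(a, b, c, d) for c, d in edges):
            edges.append((a, b))
    return edges
```

For n points in general position with h on the convex hull, any full triangulation, and hence the greedy one, has exactly 3n - 3 - h edges.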
STATISTICAL EXPERT SYSTEMS - A SPECIAL APPLICATION AREA FOR KNOWLEDGE-BASED COMPUTER METHODOLOGY
Shamsul I. Chowdhury
Abstract: The study investigates the purposes, functions and requirements of statistical expert systems and focuses attention on some unique characteristics of this subcategory of knowledge-based systems. Statistical expert systems have been considered in this thesis as one approach to improve statistical software and extend their safe usability to a broad category of users in different phases of a statistical investigation. Some prototype applications in which the author has been involved are presented and discussed. A special chapter is devoted to the question whether this methodology might be a rare example of an advanced technology that is suitable for application in non-advanced environments, such as in developing countries.
INCREMENTAL SCANNING AND TOKEN-BASED EDITING
Abstract: A primary goal of this thesis work has been to investigate the consequences of a token-based program representation. Among the results presented here are an incremental scanning algorithm together with a token-based, syntax-sensitive editing approach for program editing.
The design and implementation of an incremental scanner and a practically useful syntax-sensitive editor are described in some detail. The language-independent incremental scanner converts textual edit operations to corresponding operations on the token sequence. For example, user input is converted to tokens as it is typed in. This editor design makes it possible to edit programs with almost the same flexibility as with a conventional text editor, and also provides some features offered by a syntax-directed editor, such as template instantiation, automatic indentation and prettyprinting, and lexical and syntactic error handling.
We have found that a program represented as a token sequence can on the average be represented in less than half the storage space required for a program in text form. Also, interactive syntax checking is speeded up since rescanning is not needed.
The current implementation, called TOSSED (Token-based Syntax Sensitive Editor), supports editing and development of programs written in Pascal. The user is guaranteed a lexically and syntactically correct program on exit from the editor, which avoids many unnecessary compilations. The scanner, parser, prettyprinter, and syntactic error recovery are table-driven and language independent, and template specification is supported. Thus, editors supporting other languages can be generated.
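The central idea of incremental scanning, reusing the tokens unaffected by an edit and rescanning only until the scanner resynchronizes with the old token stream, can be sketched as follows. This is a simplified Python illustration with an invented toy token syntax, not the TOSSED implementation:

```python
import re

# toy lexical syntax: numbers, identifiers, ':=', single-char operators
TOKEN = re.compile(r"\d+|[A-Za-z_]\w*|:=|[-+*/();=<>]")

def scan(text, start=0):
    """Full scan: list of (offset, lexeme) tokens."""
    return [(m.start(), m.group()) for m in TOKEN.finditer(text, start)]

def rescan(old_tokens, new_text, edit_pos, shift):
    """Rescan only the region around an edit at `edit_pos` (`shift` is the
    change in text length). Tokens ending before the edit are reused, and
    scanning stops as soon as it resynchronizes with a shifted old token."""
    keep = [t for t in old_tokens if t[0] + len(t[1]) < edit_pos]
    resume = keep[-1][0] + len(keep[-1][1]) if keep else 0
    shifted = {off + shift: lex for off, lex in old_tokens if off >= edit_pos}
    result, rescanned = list(keep), 0
    for m in TOKEN.finditer(new_text, resume):
        if shifted.get(m.start()) == m.group():
            # resynchronized: reuse the remaining old tokens, shifted
            result += sorted((o, l) for o, l in shifted.items()
                             if o >= m.start())
            return result, rescanned
        result.append((m.start(), m.group()))
        rescanned += 1
    return result, rescanned
```

Replacing the `1` in `x := 1 + 2` with `10` rescans a single token before resynchronizing at `+`; the result agrees with a full rescan.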
SPORT-SORT - SORTING ALGORITHMS AND SPORT TOURNAMENTS
Abstract: Arrange a really short, thrilling and fair tournament! Execute parallel sorting in a machine of a new architecture! The author shows how these problems are connected. He designs several new tournament schemes, and analyses them both in theory and in extensive simulations. He uses only elementary mathematical and statistical methods. The results are much better than previous ones, and close to the theoretical limit. Now personal computers can be used to arrange tournaments which give the complete ranking list of several thousand participants within only 20-30 rounds.
NETWORK AND LATTICE BASED APPROACHES TO THE REPRESENTATION OF KNOWLEDGE
Abstract: This report is a study of the formal means for specifying properties of network structures as provided by the theory of information management systems. Along with axioms for some simple network structures, we show examples of the manner in which intuitive observations on the structures are formulated and proved.
AFFECT-CHAINING IN PROGRAM FLOW ANALYSIS APPLIED TO QUERIES OF PROGRAMS
Mariam Kamkar, Nahid Shahmehri
Abstract: This thesis presents how program flow analysis methods can be used to help the programmer understand data flow and data dependencies in programs. The design and implementation of an interactive query tool based on static analysis methods is presented. These methods include basic analysis and cross-reference analysis, intraprocedural data flow analysis, interprocedural data flow analysis and affect-chaining analysis.
The novel concept of affect-chaining is introduced, which is the process of analysing the flow of data between variables in a program. We present forward and backward affect-chaining, together with algorithms to compute these quantities. A theorem about affect-chaining is also proved.
We have found that data flow problems appropriate for query applications often need to keep track of the paths associated with data flows. By contrast, flow analysis in conventional compiler optimization typically does not require such path information.
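The essence of forward affect-chaining can be approximated by reachability in a variable-dependency graph. The following Python sketch is purely illustrative (the thesis's algorithms also handle paths and procedure boundaries, which are omitted here, and all names are invented):

```python
def affects(assignments):
    """Direct affect relation: for an assignment `x = f(y, z)`, each of
    y and z directly affects x. `assignments` is a list of
    (target, [source, ...]) pairs in program order."""
    graph = {}
    for target, sources in assignments:
        for s in sources:
            graph.setdefault(s, set()).add(target)
    return graph

def forward_chain(graph, var):
    """Forward affect-chaining: every variable transitively affected by
    `var`, computed as graph reachability."""
    seen, stack = set(), [var]
    while stack:
        for w in graph.get(stack.pop(), ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen
```

Backward affect-chaining is the same reachability computation on the reversed edges.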
TRANSFER AND DISTRIBUTION OF APPLICATION PROGRAMS
Abstract: This work addresses two problems in the development of application software. One problem concerns the transfer of software from environments optimized for development to target environments oriented towards efficient execution. The second problem is how to express distribution of application software. This thesis contains three papers. The first is about a programming language extension for distributing a program over a set of workstations. The second reports on a downloading experiment aimed at the transfer of programs from development to runtime environments, and the third describes an application area where the need for these development and distribution facilities is evident.
CASE STUDIES IN KNOWLEDGE ACQUISITION, MIGRATION AND USER ACCEPTANCE OF EXPERT SYSTEMS
Abstract: In recent years, expert systems technology has become commercially mature, but widespread delivery of systems in regular use is still slow. This thesis discusses three main difficulties in the development and delivery of expert systems, namely:
- the knowledge acquisition bottleneck, i.e. the problem of formalizing the expert knowledge into a computer-based representation;
- the migration problem, where we argue that the different requirements on a development environment and a delivery environment call for systematic methods to transfer knowledge bases between the environments;
- the user acceptance barrier, where we believe that user interface issues and concerns for a smooth integration into the end-user’s working environment play a crucial role for the successful use of expert systems.
In this thesis, each of these areas is surveyed and discussed in the light of experience gained from a number of expert system projects performed by us since 1983. Two of these projects, a spot-welding robot configuration system and an antibody analysis advisor, are presented in greater detail in the thesis.
REASONING ABOUT INTERDEPENDENT ACTIONS
Abstract: This thesis consists of two papers on different but related topics.
The first paper is concerned with the use of logic as a tool to model mechanical assembly processes. A restricted 2+-dimensional world is introduced and, although this world is considerably simpler than a 3-dimensional one, it is powerful enough to capture most of the interesting geometrical problems arising in assembly processes. The geometry of this 2+-dimensional world is axiomatized in first order logic. A basic set of assembly operations is identified, and these operations are expressed in a variant of dynamic logic which is modified to attack the frame problem.
The second paper presents a formalism for reasoning about systems of sequential and parallel actions that may interfere or interact with each other. All synchronization of actions is implicit in the definitions of the actions and no explicit dependency information exists. The concept of action hierarchies is defined, and the couplings between the different abstraction levels are implicit in the action definitions. The hierarchies can be used both top-down and bottom-up and thus support both planning and plan recognition in a more general way than is usual.
ON CONTROL STRATEGIES AND INCREMENTALITY IN UNIFICATION-BASED CHART PARSING
Abstract: This thesis is a compilation of three papers dealing with aspects of context-free- and unification-based chart parsing of natural language. The first paper contains a survey and an empirical comparison of rule-invocation strategies in context-free chart parsing. The second paper describes a chart parser for a unification-based formalism (PATR) which is control-strategy-independent in the sense that rule invocation, search, and parsing direction are parametrized. The third paper describes a technique for incremental chart parsing (under PATR) and outlines how this fits into continued work aimed at developing a parsing system which is both interactive and incremental.
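As a toy stand-in for the chart parsers discussed above, a minimal CKY recognizer for a grammar in Chomsky normal form shows the chart idea: every cell records the categories derivable over a span, and larger spans are built from smaller ones. This is not the PATR parser of the thesis; the grammar and names are invented for illustration:

```python
def cky(words, grammar, lexicon, start='S'):
    """Minimal CKY recognizer: chart[i][k] holds every category that can
    span words[i:k]. `grammar` lists binary rules (A, (B, C)); `lexicon`
    maps a category to the set of words it covers."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {cat for cat, ws in lexicon.items() if w in ws}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):           # split point
                for a, (b, c) in grammar:
                    if b in chart[i][j] and c in chart[j][k]:
                        chart[i][k].add(a)
    return start in chart[0][n]
```

A chart parser generalizes this scheme with active (dotted) edges, which is what makes the rule-invocation and search strategies of the thesis parametrizable.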
A SOFTWARE SYSTEM FOR DEFINING AND CONTROLLING ACTIONS IN A MECHANICAL SYSTEM
Abstract: This thesis deals with the subject of technical systems where machinery, usually of a mechanical character, is controlled by a computer system. Sensors in the system provide information about a machine’s current state, and are crucial for the controlling computer. The thesis presents an architecture for such a software system and then describes the actual implementation along with some examples.
DIAGNOSING FAULTS USING KNOWLEDGE ABOUT MALFUNCTIONING BEHAVIOR
Abstract: Second generation expert systems presume support for deep reasoning, i.e. the modelling of causal relationships rather than heuristics only. Such an approach benefits from more extensive inference power, improved reusability of knowledge, and a better potential for explanations. This thesis presents a method for diagnosis of faults in technical devices, based on the representation of knowledge about the structure of the device and the behavior of its components. Characteristic of our method is that components are modelled in terms of states related to incoming and outgoing signals, where both normal and abnormal states are described. A bidirectional simulation method is used to derive possible faults, single as well as multiple, which are compatible with observed symptoms.
The work started from experiences with a shallow expert system for diagnosis of separator systems, with a main objective to find a representation of knowledge which promoted reusability of component descriptions. The thesis describes our modelling framework and the method for fault diagnosis.
Our results so far indicate that reusability and maintainability are improved, for instance since all knowledge is allocated to components rather than to the structure of the device. Furthermore, our approach seems to allow more reliable fault diagnosis than other deep models, due to the explicit modelling of abnormal states. Another advantage is that constraints do not have to be stated separately, but are implicitly represented in the simulation rules.
SUPPORTING DESIGN AND MANAGEMENT OF EXPERT SYSTEM USER INTERFACES
Abstract: This thesis is concerned with user interface aspects of expert systems, and in particular tools for the design and management of such user interfaces. In User Interface Management Systems (UIMSs) in general, the user interface is seen as a separate structure. We investigate the possibilities of treating an expert system user interface as separate from the reasoning process of the system, and the consequences thereof.
We propose that an expert system user interface can be seen as a combination of two different structures: the surface dialogue, comprising mainly lexical and syntactical aspects, and the session discourse, which represents the interaction between user and system on a discourse level. For the management of these two structures, a tool consisting of two modules is outlined: the surface dialogue manager and the session discourse manager. Proposed architectures for these two modules are presented and discussed. The thesis also outlines further steps towards a validation of the proposed approach.
ON ADAPTIVE SORTING IN SEQUENTIAL AND PARALLEL MODELS
Abstract: Sorting is probably the most well-studied problem in computer science. In many applications the elements to be sorted are not randomly distributed, but are already nearly ordered. Most existing algorithms do not take advantage of this fact. In this thesis, the problem of utilizing existing order in the input sequence, yielding adaptive sorting algorithms, is explored. Different measures of existing order are proposed, all motivated by geometric interpretations of the input. Furthermore, several adaptive sorting algorithms, sequential as well as parallel, are provided.
The thesis consists of three papers. The first paper studies the local insertion sort algorithm of Mannila, and proposes some significant improvements. The second provides an adaptive variant of heapsort, which is space efficient and uses simple data structures. In the third paper, a cost-optimal adaptive parallel sorting algorithm is presented. The model of computation is the EREW PRAM.
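One classic adaptive algorithm, given here only as background to the measures mentioned above (it is not one of the thesis's algorithms), is natural merge sort: it first splits the input into maximal ascending runs, Runs being one common measure of existing order, and then merges them pairwise, so nearly sorted inputs need few merge levels.

```python
def runs(seq):
    """Split a non-empty sequence into maximal ascending runs; the fewer
    the runs, the more presorted the input."""
    out, cur = [], [seq[0]]
    for x in seq[1:]:
        if x >= cur[-1]:
            cur.append(x)
        else:
            out.append(cur)
            cur = [x]
    out.append(cur)
    return out

def merge(a, b):
    """Standard two-way merge of sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def natural_mergesort(seq):
    """Adaptive sort: merge the initial runs pairwise until one remains.
    A fully sorted input is recognized in a single pass."""
    if not seq:
        return []
    level = runs(seq)
    while len(level) > 1:
        merged = [merge(level[k], level[k + 1])
                  for k in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            merged.append(level[-1])
        level = merged
    return level[0]
```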
DYNAMIC CONFIGURATION IN A DISTRIBUTED ENVIRONMENT
Abstract: This thesis describes an implementation of the PEPSy paradigm, distinguishes between the different types of changes occurring in a distributed system, and examines how the many complicating issues of distribution affect our ability to perform these changes.
We also compare our implementation with known systems from both the distributed programming and software engineering communities.
The thesis includes a description of two tools for configuring and reconfiguring distributed systems, a list of facilities and constructs deemed necessary and desirable for reconfiguring distributed systems, an enumeration of the different aspects of change in distributed systems, and a short evaluation of the programming language Conic used in the implementation.
DESIGN OF A MULTIPLE VIEW PRESENTATION AND INTERACTION MANAGER
Abstract: This thesis describes the design model of a presentation and interaction manager for an advanced information system, based on concepts developed in the domain of User Interface Management Systems - primarily, the separation of presentation and interaction components from application semantics and data. We show our design to be in many ways an extension of that common in UIMSs; significantly, we apply presentation separation to data, as well as programs; we allow presentation and interaction methods to be selected dynamically at run-time, which gives rise to the concept of multiple views on the same information, or application semantics; and, we may adapt to the capabilities of different computer systems. We present the components of the presentation manager, including the methods used for specifying the user interface in terms of both presentation and interaction; and the support provided for application programs. We also present the LINCKS advanced information system of which our presentation manager is a component, and demonstrate how it affects our design.
A STUDY IN DOMAIN-ORIENTED TOOL SUPPORT FOR KNOWLEDGE ACQUISITION
Abstract: Knowledge acquisition is the process of bridging the gap between human expertise and representations of domain knowledge suited for storage as well as reasoning in a computer. This gap is typically large, which makes the knowledge acquisition process extensive and difficult.
In this thesis, an iterative two-stage methodology for knowledge acquisition is advocated. In the first stage, a domain-oriented framework with a conceptual model of the domain is developed. In the second stage, that framework together with supporting tools are actively used by the domain expert for building the final knowledge base. The process might be iterated when needed.
This approach has been tested for explorative planning of protein purification. Initially, an expert system was hand-crafted using conventional knowledge acquisition methods. In a subsequent project, a compatible system was developed directly by the domain expert using a customized tool. Experiences from these projects are reported in this thesis, together with a discussion of our methodological approach.
THE DEEP GENERATION OF TEXT IN EXPERT CRITIQUING SYSTEMS
Abstract: An expert critiquing system differs from most first-generation expert systems in that it allows the user to suggest his own solution to a problem and then receive expert feedback (the critique) on his proposals. A critique may be presented in different ways - textually, graphically, in tabular form, or a combination of these. In this report we discuss textual presentation. A generalized approach to text generation is presented, with particular attention to producing the deep structure of a critiquing text.
The generation of natural language falls into two generally accepted phases: deep generation and surface generation. Deep generation involves selecting the content of the text and the level of detail to be included, i.e. deciding what to say and how much information to include. Surface generation involves choosing the words and phrases to express the content determined by the deep generator. In this report we discuss the deep generation of a critique.
We present expert critiquing systems and the results of an implementation. Then we review recent advances in text generation which suggest more generalized approaches to the production of texts and we examine how they can be applied to the construction of a critique.
Central considerations in the deep generation of a text involve establishing the goals the text is to achieve (e.g. providing the user with the necessary information on which to base a decision), determining the level of detail of the information to be included in the text, and organizing the various parts of the text to form a cohesive unit. We discuss the use of Speech Act Theory as a means of expressing the goals of the text, the use of a user model to influence the level of detail, and the use of Rhetorical Structure Theory for the organization of the text. Initial results from the text organization module are presented.
CONTRIBUTIONS TO THE DECLARATIVE APPROACH TO DEBUGGING PROLOG PROGRAMS
Abstract: Logic programs have the characteristic that their intended semantics can be expressed declaratively or operationally. Since the two semantics coincide, programmers may find it easier to adopt the declarative view when writing the program. But this causes a problem when the program is to be debugged. The actual semantics of a logic program is dependent on the specific implementation on which the program is run. Although the actual semantics is of operational nature it is usually different from the theoretical operational semantics. Hence debugging may require a comparison of the actual (operational) semantics of a program and its intended declarative semantics.
The idea of declarative debugging, first proposed by Shapiro under the term algorithmic debugging, is a methodology which leads to detecting errors in a logic program through knowledge about its intended declarative semantics. Current Prolog systems do not employ declarative diagnosis as an alternative to the basic tracer. This is partly due to the fact that Shapiro’s declarative debugging system dealt only with pure Prolog programs, and partly due to practical limitations of the suggested methods and algorithms. This thesis consists of three papers. In these papers we point out practical problems with the use of basic declarative debugging systems, and present methods and algorithms which make the framework applicable to a wider range of Prolog programs. We introduce the concept of assertions, which can ease communication between the user and the debugging system by reducing the number of necessary interactions, and introduce new debugging algorithms adapted to this extended notion. Further, we extend the basic debugging scheme to cover some built-in features of Prolog, and report on practical experience with a prototype declarative debugging system which incorporates the extensions.
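The basic algorithmic-debugging loop that such work builds on can be sketched in a few lines: traverse the computation tree of an incorrect result, asking an oracle (the user, who holds the intended declarative semantics) about subresults, until a node is found whose result is wrong while all of its children are correct. A minimal Python illustration with an invented sorting bug:

```python
class Node:
    """One node of the computation (proof) tree: a goal, the result the
    program produced for it, and its child computations."""
    def __init__(self, goal, result, children=()):
        self.goal, self.result, self.children = goal, result, list(children)

def find_bug(node, oracle):
    """Top-down algorithmic debugging. Precondition: the oracle rejects
    `node`'s result. Descend into the first incorrect child; if every
    child is correct, this node's own clause is the buggy one."""
    for child in node.children:
        if not oracle(child.goal, child.result):
            return find_bug(child, oracle)
    return node.goal
```

For a fabricated trace of `sort([2,1])` returning `[2,1]`, with a correct subcall `sort([1])` and a faulty `insert(2,[1])` returning `[2,1]`, the dialogue pinpoints the `insert` clause.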
TEMPORAL INFORMATION IN NATURAL LANGUAGE
Abstract: The subject of this thesis is temporal information: how it is expressed and conveyed in natural language. When faced with the task of processing temporal information in natural language computationally, a number of challenges have to be met. The linguistic units that carry temporal information must be recognized and their semantic functions decided upon. Certain temporal information is not explicitly stated by grammatical means and must be deduced from contextual knowledge and from discourse principles depending on the type of discourse.
In this thesis, a grammatical and semantic description of Swedish temporal expressions is given. The context dependency of temporal expressions is examined, and the necessity of a conceptual distinction between phases and periods is argued for. Furthermore, it is argued that the Reichenbachian notion of reference time is unnecessary in the temporal processing of texts. Instead, the general contextual parameters speech time/utterance situation (ST/US) and discourse time/temporal focus (DT/TF) are defended. An algorithm for deciding the temporal structure of discourse is presented, where events are treated as primary individuals.
A SYSTEMATIC APPROACH TO ABSTRACT INTERPRETATION OF LOGIC PROGRAMS
Abstract: The notion of abstract interpretation facilitates a formalized process of approximating the meanings of programs. Such approximations provide a basis for inferring properties of programs. After having been used mainly in the area of compiler optimization for traditional, imperative languages, it has recently also attracted researchers working with declarative languages.
This thesis provides a systematic framework for developing abstract interpretations of logic programs. The work consists of three major parts which together provide a basis for practical implementations of abstract interpretation techniques. Our starting point is a new semantic description of logic programs which extracts the set of all reachable internal states in a possibly infinite collection of SLD-derivations. This semantics is called the base interpretation. Computationally the base interpretation is of little interest since it is not, in general, effectively computable. The aim of the base interpretation is rather to facilitate the construction of abstract interpretations which approximate it. The second part of this work provides systematic methods for constructing such abstract interpretations from the base interpretation. The last part of the thesis concerns efficient computation of approximate meanings of programs. We provide some simple yet efficient algorithms for computing such meanings.
The thesis also provides a survey of earlier work done in the area of abstract interpretation of logic programs and contains a comparison between that work and the proposed solution.
HORN CLAUSE LOGIC WITH EXTERNAL PROCEDURES: TOWARDS A THEORETICAL FRAMEWORK
Abstract: Horn clause logic has certain properties which limit its usefulness as a programming language. In this thesis we concentrate on three such limitations: (1) Horn clause logic is not intended for the implementation of algorithms. Thus, if a problem has an efficient algorithmic solution it may be difficult to express this within the Horn clause formalism. (2) To work with a predefined structure like integer arithmetic, one has to axiomatize it by a Horn clause program. Thus, functions of the structure have to be represented as predicates of the program. (3) Instead of re-implementing existing software modules, it is clearly better to re-use them. To this end, support for combining Horn clause logic with other programming languages is needed.
When extending the Horn clause formalism, there is always a trade-off between general applicability and purity of the resulting system. There have been many suggestions for solving some of the problems (1) to (3). Most of them use one of the following strategies: (a) to allow new operational features, such as access to low-level constructs of other languages; (b) to introduce new language constructs, and to support them by a clean declarative semantics and a complete operational semantics.
In this thesis a solution to problems (1) to (3) is suggested. It combines the strategies of (a) and (b) by limiting their generality: we allow Horn clause programs to call procedures written in arbitrary languages. It is assumed, however, that these procedures are either functional or relational. The functional procedures yield a ground term as output whenever given ground terms as input. Similarly, the relational procedures either succeed or fail whenever applied to ground terms. Under these assumptions the resulting language has a clean declarative semantics.
For the operational semantics, an extended but incomplete unification algorithm, called S-unify, is developed. By using properties of this algorithm we characterize classes of goals for which our interpreter is complete. It is also formally proved that (a slightly extended version of) S-unify is as complete as possible under the adopted assumptions.
A PROTOTYPE SYSTEM FOR LOGICAL REASONING ABOUT TIME AND ACTION
Abstract: This thesis presents the experience and results from the implementation of a prototype system for reasoning about time and action. Sandewall has defined syntax, semantics and preference relations on the interpretations of a temporal logic. The preference relations are defined so that the preferred interpretations contain a minimal number of changes not explained by actions occurring in the world, and also a minimal number of actions which do occur. An algorithm for a model-based decision procedure is also defined by Sandewall. The algorithm is given a partial description of a scenario and returns all the preferred models of the given description. The preferred models are calculated in two levels: the first searches the set of all sets of actions; the second calculates all the preferred models of the given description with respect to a given set of actions. In this work a proposed implementation of the second level is described and discussed. During the implementation of the system we discovered a flaw in the temporal logic, which led to a modification of the logic. The implemented system is based on this modified logic.
A discussion about the termination of the first level suggests that the level terminates only under very strong conditions. However, if the condition of returning all preferred models is relaxed, then the first level will terminate for an arbitrary set of formulas under the condition that there exists a preferred model with a finite set of actions. The complexity of the proposed implementation of the second level is of the order of the factorial of the number of actions in the given plan.
Finally, the AI-planner TWEAK is reviewed and we discuss the similarities in the problem-solving behavior of TWEAK and the decision procedure.
AN APPROACH TO EXTRACTION OF PIPELINE STRUCTURES FOR VLSI HIGH-LEVEL SYNTHESIS
Abstract: One of the concerns in high-level synthesis is how to efficiently exploit the potential concurrency in a design. Pipelining achieves a high degree of concurrency, and a certain structural regularity, through exploitation of locality in communication. However, pipelining cannot be applied to all designs. Pipeline extraction localizes parts of the design that can benefit from pipelining. Such extraction is a first step in pipeline synthesis. While current pipeline synthesis systems are restricted to exploitation of loops, this thesis addresses the problem of extracting pipeline structures from arbitrary designs without apparent pipelining properties. Therefore, an approach that is based on pipelining of individual computations is explored. Still, loops constitute an important special case, and can be encompassed within the approach in an efficient way. The general formulation of the approach cannot be applied directly for extraction purposes, because of a combinatorial explosion of the design space. An iterative search strategy to handle this problem is presented. A specific polynomial-time algorithm based on this strategy, using several additional heuristics to reduce complexity, has been implemented in the PiX system, which operates as a preprocessor to the CAMAD VLSI design system. The input to PiX is an algorithmic description in a Pascal-like language, which is translated into the Extended Timed Petri Net (ETPN) representation. The extraction is realized as analysis of and transformations on the ETPN. Preliminary results from PiX show that the approach is feasible and useful for realistic designs.
A THREE-VALUED APPROACH TO NON-MONOTONIC REASONING
Abstract: The subject of this thesis is the formalization of a type of non-monotonic reasoning using a three-valued logic based on the strong definitions of Kleene. Non-monotonic reasoning is the rule rather than the exception when agents, human or machine, must act where information about the environment is uncertain or incomplete. Information about the environment is subject to change due to external causes, or may simply become outdated. This implies that inferences previously made may no longer hold and in turn must be retracted along with the revision of other information dependent on the retractions. This is the variety of reasoning we would like to find formal models for.
We start by extending Kleene's three-valued logic with an "external negation" connective, where ~a is true when a is false or unknown. In addition, a default operator D is added, where D a is interpreted as "a is true by default". The addition of the default operator increases the expressivity of the language; statements such as "a is not a default" become directly representable. The logic has an intuitive model-theoretic semantics without any appeal to a fixpoint semantics for the default operator. The semantics is based on the notion of preferential entailment, where a set of sentences G preferentially entails a sentence a if and only if a preferred set of the models of G are models of a. We also show that one version of the logic belongs to the class of cumulative non-monotonic formalisms which are a subject of current interest.
A decision procedure for the propositional case, based on the semantic tableaux proof method, is described and serves as a basis for a QA-system in which it can be determined whether a sentence a is preferentially entailed by a set of premises G. The procedure is implemented.
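As a small illustration of the connectives described above, here is a sketch of the strong Kleene operators together with the external negation. The numeric encoding (false < unknown < true) and the function names are our own illustrative choices, not the thesis's notation:

```python
# Truth values ordered false < unknown < true; the encoding is illustrative.
F, U, T = 0, 1, 2

def k_and(a, b):   # strong Kleene conjunction: minimum on the truth order
    return min(a, b)

def k_or(a, b):    # strong Kleene disjunction: maximum on the truth order
    return max(a, b)

def k_not(a):      # internal negation: swaps true and false, keeps unknown
    return 2 - a

def ext_not(a):    # external negation: ~a is true iff a is false or unknown
    return T if a != T else F
```

Note that ext_not(U) yields T while k_not(U) stays U, which is what makes statements about ignorance directly expressible.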
COACHING PARTIAL PLANS: AN APPROACH TO KNOWLEDGE-BASED TUTORING
Abstract: The thesis describes a design for how a tutoring system can enhance the educational capabilities of a conventional knowledge-based system. Our approach to intelligent tutoring has been conceived within the framework of the KNOWLEDGE-LINKER project, which aims to develop tools and methods to support knowledge management and expert advice-giving for generic applications. Biochemistry, more specifically experiment planning, is the current reference domain for the project. The selected tutoring paradigm is a computer coach, based on the following central concepts and structures: instructional prototypes, an intervention strategy, teaching operators and instructional goals controlling the instructional strategy; error descriptions to model common faults; and stereotype user models to support the user modelling process. The tutoring interaction is planned using the instructional prototypes and constrained by an intervention strategy which specifies when the user should be interrupted, for which reason, and how the interruption should be handled. The set of instructional prototypes and teaching operators can be used to describe individual teaching styles within the coaching paradigm; we propose one way to represent them using discourse plans based on a logic of belief. The case data may be either generated by the coach or specified by the user, making it possible to use the coach both for instructional purposes and for job assistance.
POSTMORTEM DEBUGGING OF DISTRIBUTED SYSTEMS
Abstract: This thesis describes the design and implementation of a debugger for parallel programs executing on a system of loosely coupled processors. A primary goal has been to create a debugging environment that structurally matches the design of the distributed program. This means that the debugger supports hierarchical module structure, and communication flow through explicitly declared ports. The main advantages of our work over existing work in this area are: overview of the inter-process communication structure, a minimal amount of irrelevant information presented in the inspection tools, and support for working at different levels of detail. The debugging system consists of a trace collecting runtime component linked into the debugged program, and two window based tools for inspecting the traces. The debugger also includes animation of traced events.
SLDFA-RESOLUTION - COMPUTING ANSWERS FOR NEGATIVE QUERIES
Abstract: The notion of SLDNF-resolution gives a theoretical foundation for the implementation of logic programming languages. However, a major drawback of SLDNF-resolution is that for negative queries it cannot produce answers other than yes or no. Thus, only a limited class of negative queries can be handled. This thesis defines an extension of SLDNF-resolution, called SLDFA-resolution, that produces the same kind of answers for negative queries as for positive ones. The extension is applicable to every normal program. A proof of its soundness with respect to the completion semantics with the (weak) domain closure axiom is given.
USING CONNECTIVITY GRAPHS TO SUPPORT MAP-RELATED REASONING
Peter D. Holmes
This thesis describes how connectivity graphs can be used to support automated as well as human reasoning about certain map-related problems. Here, the term "map" is intended to denote the representation of any two-dimensional, planar surface which can be partitioned into regions of free vs. obstructed space. This thesis presents two methods for solving shortest path problems within such maps. One approach involves the use of heuristic rules of inference, while the other is purely algorithmic. Both approaches employ A* search over a connectivity graph -- a graph abstracted from the map's 2-D surface information. This work also describes how the algorithmic framework has been extended in order to supply users with graphical replies to two other map-related queries, namely visibility and localization. The technique described to solve these latter two queries is unusual in that the graphical responses provided by this system are obtained through a process of synthetic construction. This thesis finally offers outlines of proofs regarding the computational complexity of all algorithmic methods employed.
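The A* search mentioned above can be sketched generically. The adjacency-list graph encoding and the zero heuristic used in the example are our own simplifications, not the thesis's connectivity-graph representation:

```python
import heapq

def a_star(graph, start, goal, h):
    """Shortest path by A* search. `graph` maps a node to a list of
    (neighbour, edge_cost) pairs; `h` is an admissible heuristic."""
    # Frontier entries: (f = g + h, cost so far, node, path taken).
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}                      # cheapest known cost to each node
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in graph.get(node, []):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None                            # goal unreachable

# Toy graph; with h = 0 this degenerates to Dijkstra's algorithm.
toy = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)]}
```

With an informative heuristic (e.g. straight-line distance between region centroids) the same loop expands fewer nodes but returns the same optimal path.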
IMPROVING IMPLEMENTATION OF GRAPHICAL USER INTERFACES FOR OBJECT-ORIENTED KNOWLEDGE-BASES
Abstract: Second generation knowledge-based systems have raised the focus of research from rule-based to model-based systems. Model-based systems allow knowledge to be separated into target domain model knowledge and problem solving knowledge.
This work supports and builds on the hypothesis that fully object-oriented knowledge-bases provide the best properties for managing large amounts of target domain model knowledge. The ease by which object-oriented representations can be mapped to efficient graphical user interfaces is also beneficial for building interactive graphical knowledge acquisition and maintenance tools. These allow experts to incrementally enter and maintain larger quantities of knowledge in knowledge-bases without support from a knowledge engineer.
The thesis points to recent advances in the conception of knowledge-based systems. It shows the need for efficient user interfaces for management of large amounts of heterogeneous knowledge components. It describes a user interface software architecture for implementing interactive graphical knowledge-base browsers and editors for such large knowledge-bases. The architecture has been inspired by object-oriented programming and data-bases, infological theory, cognitive psychology and practical implementation work.
The goal of the user interface software architecture has been to facilitate the implementation of flexible interactive environments that support creative work. Theoretical models of the entire user interaction situation, including the knowledge-base, the user interface and the user, are described. The models indicate how theoretical comparisons of different user interface designs can be made by using certain suggested measures.
The architecture was developed in the frame of a cooperative project with the Department of Mechanical Engineering on developing a knowledge-based intelligent front end for a computer aided engineering system for damage tolerance design on aircraft structures.
AKTIVITETSBASERAD KALKYLERING I ETT NYTT EKONOMISYSTEM (ACTIVITY-BASED COSTING IN A NEW MANAGEMENT ACCOUNTING SYSTEM)
Rolf G Larsson
Abstract: Activity-Based Costing (ABC) for a New Management Accounting System is a report on a matrix model. The model combines traditional financial data from the accounting system with non-financial data from production, administration and marketing. The financial dimension is divided into cost centers at a foreman level. The two dimensions are combined at the lowest organizational level by using Cost drivers in an Activity-Based Costing technique. In doing so we create “foreman centers” where each operation is matched with a certain expenditure or income. These “foreman centers” are later accumulated into divisions and subsidiaries. The results from the matrix model can be used as measurements for:
evaluation of ex ante - ex post variance in production costs
productivity and efficiency at a foreman level
capital usage and work-in-progress evaluations
production control and control of other operations
life cycle cost-analysis
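The cost-driver mechanism described above can be illustrated with a toy allocation. All figures, activity names and product names here are hypothetical, chosen only to show the arithmetic, and are not Gunnebo data:

```python
# Hypothetical overhead pools and total cost-driver volumes per activity.
overhead = {"setup": 40000.0, "inspection": 10000.0}
driver_volume = {"setup": 200, "inspection": 500}

# Cost per driver unit for each activity.
rate = {act: overhead[act] / driver_volume[act] for act in overhead}

# Driver units consumed per product (invented figures).
usage = {"product_A": {"setup": 10, "inspection": 50},
         "product_B": {"setup": 2, "inspection": 5}}

# Activity-based cost of each product: sum of rate times consumption.
product_cost = {p: sum(rate[a] * n for a, n in consumed.items())
                for p, consumed in usage.items()}
```

Comparing `product_cost` against each product's revenue is what lets the model flag which articles are "in the red".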
Gunnebo Fastening AB invited us to test the matrix model in reality. Step by step the hypothetical model is conceptualized into a system for management accounting. Gunnebo Fastening AB produces about 6000 different types of nails per year. The matrix model shows that only about 50% of them are profitable; the rest are non-profitable articles. Customers have a vast range of discounts, which, together with other special deals, turns many of them into non-profitable customers. As the model points out which articles and customers are “in the red”, Gunnebo has shown interest in adopting the matrix model in a new system on a regular basis. The matrix model is compared with other literature in the field of management accounting. Important sources include the Harvard Business School with Professors Kaplan, Cooper and Porter, who all contributed to ABC and management accounting development in general. Another valuable source is Professor Paulsson Frenckner. The literature shows that both academics and practitioners seek ways to provide more detailed and accurate calculations of product profitability.
The report concludes with an analysis and conclusions about the matrix model. It also indicates future opportunities for the model in decision support systems (DSS).
STUDIES IN EXTENDED UNIFICATION-BASED FORMALISM FOR LINGUISTIC DESCRIPTION: AN ALGORITHM FOR FEATURE STRUCTURES WITH DISJUNCTION AND A PROPOSAL FOR FLEXIBLE SYSTEMS
Abstract: Unification-based formalisms have been used in computational and traditional linguistics for quite some time. In these formalisms the feature structure is the basic structure for representing linguistic information. However, these structures often do not suffice for describing linguistic phenomena, and various extensions to the basic structures have been proposed. These extensions constitute the subject of this thesis.
The thesis contains a survey of the extensions proposed in the literature. The survey is concluded by stating the properties that are most important if we want to build a system that can handle as many of the extensions as possible. These properties are expressiveness, flexibility, efficiency and predictability. The thesis also evaluates four existing formalisms with respect to these properties. On the basis of the evaluation we also suggest how to design a system handling multiple extensions, where the main emphasis has been placed on obtaining a flexible system.
As the main result the thesis specifies an algorithm for unifying disjunctive feature structures. Unlike previous algorithms, except Eisele & Dörre (1990), this algorithm is as fast as an algorithm without disjunction when disjunctions do not participate in the unification; it is also as fast as an algorithm handling only local disjunctions when there are only local disjunctions, and expensive only in the case of unifying full disjunctions. By this behaviour the algorithm shows one way to avoid the problem that high expressiveness also entails low efficiency. The description is given in the framework of graph unification algorithms, which makes it easy to implement as an extension of such an algorithm.
DML - A LANGUAGE AND SYSTEM FOR THE GENERATION OF EFFICIENT COMPILERS FROM DENOTATIONAL SPECIFICATION
Abstract: Compiler-generation from formal specifications of programming language semantics is an important field. Automatically-generated compilers will have fewer errors, and be easier to construct and modify. So far, few systems have succeeded in this area, primarily due to a lack of flexibility and efficiency.
DML, the Denotational Meta-Language, is an extension of Standard ML aimed at providing a flexible and convenient implementation vehicle for denotational semantics of programming languages. The main extension is a facility for declaring and computing with first-class syntactic objects. A prototype implementation of DML has been made, and several denotational specifications have been implemented successfully using it. Integrated in the system is a code generation module able to compile denotational specifications of Algol-like languages to efficient low-level code.
This thesis presents an overview of the field, and based on it the DML language is defined. The code generation method for Algol-like languages is presented in detail. Finally, the implementation of the DML system is described, with emphasis on the novel extensions for first-class syntactic objects.
LOGIC PROGRAMMING WITH EXTERNAL PROCEDURES: AN IMPLEMENTATION
Abstract: This work aims at combining logic programs with functions written in other languages, such that the combination preserves the declarative semantics of the logic program. S-Unification, as defined by S. Bonnier and J. Maluszynski, provides a theoretical foundation for this.
This thesis presents a definition and an implementation of a logic programming language, GAPLog, that uses S-unification to integrate Horn clauses with external functional procedures. The implementation is described as a scheme that translates GAPLog programs to Prolog. In particular, a call to an external function is transformed to a piece of Prolog code that will delay the call until all arguments are ground. The scheme produces reasonably efficient code if Prolog supports efficient coroutining. S-unification will cause no overhead if there are no function calls in the program.
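A toy model of the delay mechanism described above, with an unbound logic variable represented as None. This sketches the idea only and is not the actual GAPLog-to-Prolog translation; the function and variable names are our own:

```python
pending = []  # calls whose arguments are not yet ground

def ext_call(fn, args, bind_result):
    """Run the external function now if all arguments are ground (bound);
    otherwise park the call until more variables get bound."""
    if all(a is not None for a in args):
        bind_result(fn(*args))
    else:
        pending.append((fn, args, bind_result))

def retry_pending():
    """Conceptually invoked after each unification step: wake up any
    delayed calls whose arguments have become ground in the meantime."""
    still_waiting = []
    for fn, args, bind in pending:
        if all(a is not None for a in args):
            bind(fn(*args))
        else:
            still_waiting.append((fn, args, bind))
    pending[:] = still_waiting
```

In real Prolog systems this role is played by coroutining primitives that suspend a goal until its variables are instantiated.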
If the arguments are ground whenever a function call is reached in the execution process, then the dynamic check for groundness is unnecessary. To avoid the overhead caused by the groundness check, a restriction of GAPLog, called Ground GAPLog, is described, where the groundness of function calls is guaranteed. The restrictions are derived from the language Ground Prolog defined by F. Kluzniak. Many of the results for Ground Prolog also apply to Ground GAPLog. They indicate that Ground GAPLog is suitable for compilation to very efficient code.
ASPECTS OF VERSION MANAGEMENT OF COMPOSITE OBJECTS
Abstract: An important aspect of object-oriented database systems is the ability to build up composite objects from object parts. This allows modularity in the representation of objects and reuse of parts where appropriate. It is also generally accepted that object-oriented database systems should be able to handle temporal data. However, little theoretical work has been done on the temporal behaviour of composite objects, and only relatively few systems attempt to incorporate both historical information and composite objects in a multi-user environment.
We argue that the support for handling temporal information provided by other systems addresses only one of two important kinds of historical information. We describe the notions of temporal history and edit history.
In this work we also make a first step in formalizing historical information of composite objects. We identify different kinds of compositions and give formal synchronization rules between a composition and its components to induce the desired behavior of these compositions in a database setting. We also discuss the transitivity property for the part-of relation with respect to the newly defined compositions.
Finally we address the problem of maintaining consistent historical information about a composition using the historical information of its components. This problem occurs as a result of sharing objects between several compositions. We propose a solution and show an implementation in the LINCKS system.
TESTABILITY ANALYSIS AND IMPROVEMENT IN HIGH-LEVEL SYNTHESIS SYSTEMS
Abstract: With the level of integration existing today in VLSI technology, the cost of testing a digital circuit has become a very significant part of the product cost. This cost mainly comes from automatic test pattern generation (ATPG) for a design and the test execution for each product. Therefore it is worthwhile to detect parts of a design which are difficult for ATPG and test execution, and to improve these parts before applying ATPG and testing.
There are existing methods of improving the testability of a design, such as the scan path technique. However, due to the high cost of introducing scan registers for all registers in a design and the delay caused by long scan paths, these approaches are not very efficient. In this thesis, we propose a method which only selects parts of a design to be transformed based on the testability analysis. For this purpose, we
1. define a testability measurement which is able to predict the costs to be spent during the whole test procedure.
2. evaluate the result of the testability measurement, and based on this evaluation develop some strategies to identify difficult-to-test parts in a design.
3. create a class of testability improvement transformations and apply them to the difficult-to-test parts identified.
4. recalculate testability for the design with part of it transformed.
These procedures are repeated until the design criteria are satisfied. The implementation results conform to our direct observations. For the test examples discussed in the thesis, we find the best improvement of testability within the other design constraints.
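The four-step procedure above can be sketched as a generic loop. The callables are placeholders standing in for the thesis's actual testability measurement, identification strategies and transformations:

```python
def improve_testability(design, measure, find_hard_parts, transform,
                        threshold, max_iter=20):
    """Repeat: measure testability, identify difficult-to-test parts,
    transform them, and re-measure, until the design criterion is met."""
    for _ in range(max_iter):
        score = measure(design)                 # steps 1 and 4: (re)measure
        if score >= threshold:                  # design criteria satisfied
            break
        for part in find_hard_parts(design, score):   # step 2: identify
            design = transform(design, part)          # step 3: improve
    return design
```

For example, with a toy "design" given as a list of per-part testability scores, measuring by the weakest part and transforming by improving it, the loop raises every part to the threshold.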
ON THE ROLE OF EVALUATIONS IN ITERATIVE DEVELOPMENT OF MANAGERIAL SUPPORT SYSTEMS
Abstract: Iterative development of information systems means not only iterative construction of systems and prototypes. Even more important in iterative development is the role of formative evaluations. The connection between these activities is of crucial importance for ensuring that the evaluations will be able to fulfil their formative role. When this is achieved, the development process can be a process of learning, in which missing information is acquired during the development process.
For Managerial Support Systems, the fit between the user, the organization, and the system is of vital importance for the success of the system being developed. Iterative development with an emphasis on evaluations is a suitable development approach for this kind of system.
A framework for iterative development of Managerial Support Systems is constructed and discussed. The framework is assessed in real-world development projects, and experiences from participant observation are reported.
Among the findings from this explorative inquiry is that it is easy to obtain very important information through evaluations. The applicability of the approach is, however, dependent on the explicitness of the support idea and on an efficient flow of information between developers and evaluators.
INDUSTRIAL SOFTWARE DEVELOPMENT - A CASE STUDY
Abstract: There have been relatively few empirical studies of large-scale software development in Sweden. To understand the development process and be able to transfer knowledge from large-scale software development projects to academic basic education as well as to other industrial software projects, it is necessary to perform studies on-site where those systems are built.
This thesis describes the development process used at one particular Swedish company, NobelTech Systems AB, where they are trying to develop reusable software. The differences and similarities between their process and methodology and the corresponding approaches described in the literature are investigated.
The evolution of the software development process over the course of a decade is also discussed. During the 80s the company decided to make a technology shift in hardware as well as in software. They changed from minicomputer-based systems to integrated, distributed solutions, began to use Ada and the Rational Environment in the building of large complex systems, and shifted their design approach from structured to object-oriented design. Furthermore, they decided to design a system family of distributed solutions instead of specific systems to facilitate future reuse.
PREDICTABLE CYCLIC COMPUTATIONS IN AUTONOMOUS SYSTEMS: A COMPUTATIONAL MODEL AND IMPLEMENTATION
Cyclic computations under real-time constraints naturally occur in systems which use periodic sampling to approximate continuous signals for processing in a computer. In complex control systems, often with a high degree of autonomy, there is a need to combine this type of processing with symbolic computations for supervision and coordination.
In this thesis we present a computational model for cyclic time-driven computations subjected to run-time modifications initiated from an external system, and formulate conditions for predictable real-time behaviour. We introduce the dual state vector for representing periodically changing data. Computations are viewed as functions from the state represented by the vector at time t to the state one period later. Based on this model, we have implemented a software tool, the Process Layer Executive, which maintains dual state vectors and manages cyclic tasks that perform computations on vectors.
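A minimal sketch of the dual state vector idea: one copy holds the stable values read during the current period, the other receives the values computed for the next period, and the two are swapped at each period boundary. The class and method names are illustrative, not the Process Layer Executive's actual interface:

```python
class DualStateVector:
    """Double-buffered state for cyclic time-driven computations."""

    def __init__(self, initial):
        self.current = list(initial)   # stable snapshot read by all tasks
        self.next = list(initial)      # written by tasks for period t + 1

    def step(self, tasks):
        """Run one period: each task maps the state at time t (current)
        to the state one period later (next), then the vectors swap."""
        for task in tasks:
            task(self.current, self.next)
        self.current, self.next = self.next, list(self.current)
```

Because every task in a period reads the same `current` snapshot, the result is independent of task ordering within the period, which is one ingredient of predictable real-time behaviour.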
We present the results of a theoretical and practical evaluation of the real-time properties of the tool and give its overhead as a function of application dependent parameters that are automatically derivable from the application description in the Process Layer Configuration Language.
EVALUATION OF STRATEGIC INVESTMENTS IN INFORMATION TECHNOLOGY
This report tackles the implications of the strategic role of information technology (IT) for the evaluation of investments in new technologies. The purpose is to develop a strategic management perspective on IT-investments and suggest appropriate methods for evaluation of flexible manufacturing systems (FMS) and office information systems (OIS). Since information systems are interdisciplinary in nature, our efforts have been concentrated on integrating different perspectives on the proliferation of new technology in organizations with theories about planning, evaluation and development of strategic investments. Case studies, surveys, statistics and empirical evidence from the literature are used to support our findings and to expose some ideas for further research.
The strategic and managerial perspective on IT-investments is developed in the context of the role of leadership in a changing business, technological and organizational environment. The strategic perspective is derived from the integration of different theories and perspectives on the development of technology and strategies in organizations, as well as planning and evaluation in strategic investment decisions. In order to enhance our understanding of the strategic roles of IT, the rationale behind the introduction of IT-investments, their impact on the firm's profitability and their role as management support are discussed, and the pattern of technical change in organizations is described. Particular attention is paid to the integrative role of the FMS in the firm's value chain and of the OIS in supporting administrators at different organizational levels. Then we analyze the crucial role of FMS and OIS in the firm's manufacturing strategy and information system strategy, and the implications for measurement and evaluation of the effects of FMS- and OIS-projects.
In order to extend the role of evaluation and enable managers to handle strategic investment decisions, a model for strategic investment planning has been developed. The model integrates different development processes in the organization (such as strategic management, organization and system development and project control) and requires involvement of other members as well. Furthermore, we suggest a mix of qualitative and quantitative techniques and appraisal approaches for the analysis of the strategic, tactical and operational implications of FMS- and OIS-investments. Since the introduction of FMS and OIS is generally a long-term and continuous process, our approach is regarded as an attempt to create a flexible evaluation structure which can follow changes in technologies and the organization’s development strategies in a natural manner.
A TRANSFORMATIONAL APPROACH TO FORMAL DIGITAL SYSTEM DESIGN
The continuing development in electronic technology has made it possible to fit more and more functionality on a single chip, thus allowing digital systems to become increasingly complex. This has led to a need for better synthesis and verification methods and tools to manage this complexity.
Formal digital system design is one such method. This is the integrated process of proof and design that, starting from a formal specification of a digital system, generates a proof of correctness of its implementation as a by-product of the design process. Thus, by using this method, the designer can interactively transform the specification into a design that implements it, and at the same time generate a proof of its correctness.
In this thesis we present an approach to formal digital system design that we call transformational. By this we mean that we regard design as an iterative process that progresses stepwise by applying design decisions that transform the design until a satisfactory result has been reached. To be able to support both synthesis and verification we use a two-level design representation, where the first level is a design specification in logic that is used for formal reasoning, and the second level is a set of design annotations that are used to support design analysis and design checking.
We have implemented an experimental design tool based on the HOL (Higher Order Logic) proof system and the window inference package. We demonstrate the usability of our approach with a detailed account of two non-trivial examples of digital design derivations made using the implemented tool.
COMPILER GENERATION FOR PARALLEL LANGUAGES FROM DENOTATIONAL SPECIFICATION
There exist several systems for the generation of compiler front-ends from formal semantics. Systems that generate entire compilers have also started to appear. Many of these use attribute grammars as the specification formalism, but there are also systems based on operational semantics or denotational semantics. However, there are very few systems based on denotational semantics that generate compilers for parallel languages.
The goal of this thesis is to show that it is possible to automatically generate an efficient compiler for a parallel language from a denotational specification. We propose a two-level structure for the formal description. The high level uses denotational semantics, whereas the low-level part consists of an abstract machine including data-parallel operations.
This thesis concentrates on the high-level denotational part. A prototype compiler for a small Algol-like parallel language has been generated using a modified version of the DML (Denotational Meta Language) system back-end. A fixed operational semantics in the form of a low-level language that includes data-parallel operations is used as target during the generation.
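As a rough illustration of the two-level idea (a hypothetical mini-language and instruction set, not the actual DML system or its data-parallel abstract machine), denotational semantic functions can be staged so that, instead of computing values, they emit code for a fixed low-level target machine:

```python
# Minimal sketch: denotational-style semantic functions for a tiny
# expression language, staged to emit target instructions (a stack
# machine here, standing in for the data-parallel abstract machine).

def compile_expr(e):
    """E[[e]]: map an expression AST to a list of target instructions."""
    tag = e[0]
    if tag == "num":                      # E[[n]]      = PUSH n
        return [("PUSH", e[1])]
    if tag == "var":                      # E[[x]]      = LOAD x
        return [("LOAD", e[1])]
    if tag == "add":                      # E[[e1 + e2]] = E[[e1]]; E[[e2]]; ADD
        return compile_expr(e[1]) + compile_expr(e[2]) + [("ADD",)]
    raise ValueError("unknown construct: %s" % tag)

def run(code, env):
    """Operational semantics of the target machine, used for checking."""
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        elif instr[0] == "LOAD":
            stack.append(env[instr[1]])
        elif instr[0] == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]
```

The split mirrors the two-level description: `compile_expr` plays the role of the high-level denotational part, while `run` stands in for the fixed operational semantics of the target.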
PROPAGATION OF CHANGE IN AN INTELLIGENT INFORMATION SYSTEM
This thesis pursues the issue of propagation of change in an intelligent information system. Propagation of change occurs when a data base manager executes transactions repeatedly, though to different parts of a data base and possibly with some small variations. A number of operations can be performed automatically as propagation, such as: (1) merging variants of information, (2) undoing a prior change to some information without losing recent changes, (3) fixing a set of (dissimilar) information items that contain the same bug, etc.
The emphasis is on the potential problems that can arise when propagating changes, as well as presenting a solution to the problems discussed. A secondary goal is to describe the architecture and use of the surrounding system where propagation of change can occur.
Three different aspects of propagation of change are discussed in detail: determining the change, performing the propagation, and issues regarding implementing a propagation of change tool. This tool has been implemented as an integrated part of LINCKS, an intelligent information system designed and developed in Linköping.
AN ARCHITECTURE AND A KNOWLEDGE REPRESENTATION MODEL FOR EXPERT CRITIQUING SYSTEMS
The aim of an expert critiquing system (ECS) is to act as an assistant in a problem-solving situation. Once a user has presented a tentative solution to a given problem, the ECS reviews the proposed solution and then provides a critique, an evaluation of the solution. In this thesis we provide an architecture for an expert critiquing shell and present an implementation of such a shell, the AREST system. Experience from using this shell in two experimental implementations is reported. Further, we show how the critique generation process can be supported by extending standard knowledge representation structures with explicit conceptual relationships between items in the knowledge base, in the form of a simple extension of a rule-based schema. Current results in text generation have opened the way for dynamically producing multi-paragraph text. Our work is based on a theory for text organization, Rhetorical Structure Theory (RST). To remedy certain shortcomings in RST we propose improvements, e.g. rhetorical aggregates for guiding content selection and organization. In this thesis we discuss how the AREST system has been used to construct two ECSs with text generation capabilities.
SYMBOLIC MODELLING OF THE DYNAMIC ENVIRONMENTS OF AUTONOMOUS AGENTS
To interact with a dynamic environment in a reliable and predictable manner, an autonomous agent must be able to continuously sense and “understand” the environment in which it is operating, while also meeting strict temporal constraints.
In this thesis we present means to support this activity within a unified framework aimed at facilitating autonomous agent design and implementation. The central role in this approach is played by models at different levels of abstraction. These models are continuously updated on the basis of available information about the dynamic environment. We emphasize the interface between the numeric and symbolic models, and present an approach for recognizing discrete events in a dynamic environment based on sequences of observations. Furthermore, we propose a logic for specifying these event characterization procedures.
A prototype driver support system is used as a means for testing our framework on a real world application with considerable complexity. The characterization procedures are specified in the logic, and an implementation of the prototype is presented.
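The numeric-to-symbolic interface described above might be sketched as follows (a hypothetical illustration; the thresholds, the "overtake" event and the quantization are invented for the example and are not taken from the thesis):

```python
# Toy sketch of recognizing a discrete "overtake" event from a sequence
# of qualitative observations of another vehicle's relative position,
# derived from numeric sensor readings.

def quantize(rel_pos):
    """Numeric-symbolic interface: map relative longitudinal position
    (metres, assumed sign convention) to a qualitative relation."""
    if rel_pos < -2.0:
        return "behind"
    if rel_pos > 2.0:
        return "ahead"
    return "beside"

def overtake_occurred(positions):
    """An overtake here is the observation subsequence
    behind -> beside -> ahead, matched in order."""
    pattern = ["behind", "beside", "ahead"]
    i = 0
    for symbol in (quantize(p) for p in positions):
        if i < len(pattern) and symbol == pattern[i]:
            i += 1
    return i == len(pattern)
```

In the thesis the characterization procedures are specified in a logic rather than hand-coded like this; the sketch only shows the shape of the event-recognition step.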
DEPENDENCY-BASED GROUNDNESS ANALYSIS OF FUNCTIONAL LOGIC PROGRAMS
The object of study in this thesis is a class of functional logic programs, where the functions are implemented in an external functional or imperative language. The contributions are twofold:
Firstly, an operational semantics is formally defined. The key idea is that non-ground function calls selected for unification are delayed and retained in the form of constraints until their arguments become ground. With this strategy two problems arise: (1) Given a program P and an initial goal, will any delayed unifications remain unresolved after the computation? (2) For every function call f(X) in P, find a safe evaluation point for f(X), i.e. a point in P where X will always be bound to a ground term, and thus f(X) can be evaluated.
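The delaying strategy can be sketched roughly as follows (an illustrative toy, not the thesis' formal operational semantics; `Var`, `Store` and the constraint representation are invented names for the example):

```python
# Sketch of residuation: an external function call whose argument is
# still unbound is kept as a constraint and retried once the variable
# becomes ground.

class Var:
    def __init__(self, name):
        self.name = name
        self.value = None            # None means unbound

def is_ground(v):
    return not isinstance(v, Var) or v.value is not None

def deref(v):
    return v.value if isinstance(v, Var) else v

class Store:
    def __init__(self):
        self.delayed = []            # constraints: (func, arg, result_var)

    def call(self, func, arg, result):
        """Evaluate func(arg) if arg is ground, otherwise delay it."""
        if is_ground(arg):
            result.value = func(deref(arg))
        else:
            self.delayed.append((func, arg, result))

    def bind(self, var, value):
        """Bind a variable and wake constraints that became ground."""
        var.value = value
        still_delayed = []
        for func, arg, result in self.delayed:
            if is_ground(arg):
                result.value = func(deref(arg))
            else:
                still_delayed.append((func, arg, result))
        self.delayed = still_delayed
```

Problem (1) above corresponds to asking whether `store.delayed` is empty when the computation finishes; problem (2) asks, statically, where a call could be placed so it is never delayed at all.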
Secondly, we present a static groundness analysis technique which enables us to solve problems (1) and (2) in a uniform way. The analysis method is dependency-based, exploiting analogies between logic programs and attribute grammars.
TABULATED RESOLUTION FOR WELL FOUNDED SEMANTICS
This work is motivated by the need for efficient question-answering methods for Horn clause logic and its non-classical extensions - formalisms which are of great importance for the purpose of knowledge representation. The methods presented in this thesis are particularly suited for the kind of ‘‘computable specifications’’ that occur in areas such as logic programming and deductive databases.
The subject of study is a resolution-based technique, called tabulated resolution, which provides a procedural counterpart to the so-called well-founded semantics. Our study is carried out in two steps.
First we consider only classical Horn theories. We introduce a framework called the search forest which, in contrast to earlier formalizations of tabulated resolution for Horn theories, strictly separates the search space from the search. We prove the soundness and completeness of the search space and provide some basic strategies for traversing the space. An important feature of the search forest is that it clarifies the relationship between a particular tabulation technique, OLDT-resolution, and the transformational bottom-up method called ‘‘magic templates’’.
Secondly, we generalize the notion of a search forest to Horn theories extended with the non-monotonic connective known as negation as failure. The tabulation approach that we propose suggests a new procedural counterpart to the well-founded semantics which, in contrast to the already existing notion of SLS-resolution, deals with loops in an effective way. We prove some essential results for the framework, including its soundness and completeness.
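The tabling idea behind such methods can be suggested with a toy memoizing evaluator (illustrative only; real OLDT-resolution also manages goal suspension and the search strategy, which this sketch omits):

```python
# The table-based idea behind tabulated resolution, shown for a tiny
# reachability program.  Because answers are recorded in a table and
# never re-derived, evaluation terminates even on cyclic data, where
# plain SLD-resolution would loop.

def tabled_reach(edges, start):
    """reach(X) :- X = start.
       reach(Y) :- reach(X), edge(X, Y).
    `table` holds the answers proved so far."""
    table = {start}
    frontier = [start]
    while frontier:
        x = frontier.pop()
        for (a, b) in edges:
            if a == x and b not in table:   # a new answer: record it once
                table.add(b)
                frontier.append(b)
    return table
```

The cycle (3, 1) in the test below is exactly the kind of loop that tabulation handles effectively.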
SATELLITKONTOR - EN STUDIE AV KOMMUNIKATIONSMÖNSTER VID ARBETE PÅ DISTANS
The purpose of this study is to bring forward relevant theories to describe and analyse the area of remote work. Based upon these theories, the experiences of satellite work centres are analysed. In this context, a satellite work centre means a part of a company department which has been geographically separated from the rest of the department and where normal departmental functions are carried out at a distance from the company headquarters. The geographic separation requires that communication between the different work places by and large must be done with the help of information technology. Three companies participated in the study.
Satellite work centres can be studied from several perspectives. A selection of theories from the area of organisation is presented to illustrate organisational development and to describe organisational structure. Furthermore, examples are given of the interplay between technological development and changes in the company's business. Several different definitions of remote work are presented. In the literature which deals with working remotely, no distinction is made amongst the experiences of different organisational forms of remote work. Previous experiences with remote work are used in this study as examples of satellite work centres. The prerequisites for communication are central to the ability to carry out remote work. The description of communication patterns, both within and between the operational units in the companies, is therefore treated in depth. Data have been collected with the help of a communication diary. The analysis builds upon theories of communication and organisational communication, which deal with information requirements, the function of communication, communication patterns, choice of communication medium, and so forth.
As part of the study's results, several factors are discussed which should be taken into consideration by a company in order for remote work in satellite work centres to function properly. These considerations include such things as the content of the job, dissemination of information and social contact. The results can also be found in a description of the communication patterns in the study groups. The kinds of information needed are by and large simple and can for the most part be spread through computerised information systems. The most used form of communication is the telephone, but telefax and electronic mail also play a role. Three functions of communication are studied. The command function, giving instructions about tasks to be done, was responsible for a relatively small amount of the total communication. The ambiguity management function, dissemination of information to reduce insecurity, was responsible for a large portion of the information flow, both between and within organisational units. The relational function, maintaining social contact, also accounted for a significant portion of the communication. Despite geographic distance, the prerequisites for communication between those at the corporate headquarters and those at the satellite work centres are much the same as for communication within the corporate headquarters itself.
SEPARATION OF MANAGEMENT AND FINANCING, CASE STUDIES OF MANAGEMENT BUY-OUTS IN AN AGENCY THEORETICAL PERSPECTIVE
The interaction between owners, management teams and lenders in companies forms a natural part of modern industry and commerce. The purpose of this study is to investigate whether the separation of management and financing in companies causes costs. In the study, financing relates both to debt finance and to equity finance provided by parties other than members of the management team.
The study is based on an agency-theoretical perspective. According to agency theory, specific costs arise in contractual relationships, namely agency costs. In order to draw conclusions regarding the costs of separating management and financing, this study investigates the change in agency costs in the contractual relationships between the owners and the management team and between the lenders and the owner/management group, respectively, due to a management buy-out, MBO. An MBO is an acquisition of a company or part of a company, where the company's management team constitutes or forms part of the group of investors and where the acquisition normally is financed with an above-average proportion of debt finance in relation to equity finance.
The results of the study indicate that the value of a company is related to how the company is financed. In companies where the management teams do not own the company, costs seem to arise because the companies are not managed in a value-maximizing way. Costs also seem to arise because of monitoring and bonding activities in the relationship between the owners and the management team. In companies where the owner/management group do not finance the whole company but debt finance is used, costs seem to arise primarily because of scarcity of capital and deteriorated conditions in the relations with lenders and suppliers. Costs seem to arise because of monitoring and bonding activities as well.
AUDITING AND LEGISLATION - AN HISTORIC PERSPECTIVE
Laws change due to changes in society’s norms. New laws in turn alter conditions for business and society. Auditing is one of the few professions in Sweden which is governed by comprehensive legislation. Changes in the rules for auditing therefore mean changes in the conditions of an auditor’s daily work.
It was not until 1912 that individuals began to plan for a career in auditing. It was then that the first auditors were certified by the business community. Ever since then the profession has changed. Auditing ceased to be only of concern to the business community and became instead a concern for many interested parties, such as creditors, the state, and tax authorities. Their needs have been different and have therefore put different demands on auditors. The state has as a consequence changed its regulation of auditing in companies.
The purpose of this study is to describe the changes in legislation which affect Swedish auditors’ work, as well as to illustrate the reasons and consequences which were argued for by various interested parties in conjunction with these changes. The areas which are covered are the changes with regard to auditors’ education and competency, supervision and certification of auditors, the auditor’s task, sphere of activity, independence, and professional secrecy.
In this debate, there is a gap between what auditors believe they should do and what the various interest groups, such as clients and creditors, expect of them. This gap in expectations might be due to dissatisfaction with the rules, a poor understanding of what the rules are, or the fact that the rules can be interpreted in various ways. The gap could possibly be minimized by making information about the rules for auditing and their background available to the various interested parties.
VOICES IN DESIGN: ARGUMENTATION IN PARTICIPATORY DEVELOPMENT
The focus in this thesis lies on methods used to support the early phases of the information system design process. The main perspective emanates from the ”Scandinavian” approach to systems development and Participatory Design. This perspective can be characterised as resting on a ”socio-cultural” understanding that broad participation is beneficial for social development activities. Another perspective has its point of departure within the Design Rationale field. A third perspective is derived from Action Research. Specifically, the goals have been to develop an argumentation-based method to support the design process; to study how this method relates to the process of participatory design in a work-life setting; and to study how the method can come to support design visualization and documentation in such a group process.
The resulting Argumentative Design (aRD) method is derived both from theoretical influences (design theory, Design Rationale applications and the Action Design method) and from empirical evaluations and revisions. The latter were performed in the form of a case study of the interaction in a multi-disciplinary design group, analyzed using qualitative methods and activity theory. Three ”voices” in participatory development were identified to characterize the interaction in the design group: the voice of participatory design, the voice of practice and the voice of technology.
The conclusion is that the ideas of the second generation of design methodologies also fit well in the Scandinavian tradition of systems development. Both perspectives converge in the group process, where the product is seen as secondary and derived. aRD thus uses both types of theoretical arguments to push the high-level design issues forward, while making different design ideas and decisions explicit.
CONTRIBUTIONS TO A HIGH-LEVEL PROGRAMMING ENVIRONMENT FOR SCIENTIFIC COMPUTING
The usage of computers for solving real-world scientific and engineering problems is becoming more and more important. Nevertheless, the current development practice for scientific software is rather primitive. One of the main reasons for this is the lack of good high-level tools. Most scientific software is still being developed the traditional way in Fortran, especially in application areas such as machine element analysis, where complex non-linear problems are the norm.
In this thesis we present a new approach to the development of software for scientific computing and a tool which supports this approach. A key idea is that mathematical models should be expressed in a high-level modeling language which allows the user to do this in a way which closely resembles how mathematics is written with pen and paper. To facilitate the structuring of complex mathematical models the modeling language should also support object-oriented concepts.
We have designed and implemented such a language, called ObjectMath, as well as a programming environment based on it. The system has been evaluated by using it in an industrial environment. We believe that the proposed approach, supported by appropriate tools, can improve productivity and quality, thus enabling engineers and scientists to solve problems which are too complex to handle with traditional tools.
The thesis consists of five papers (four of which have previously been published in the proceedings of international conferences) dealing with various aspects of the ObjectMath language and programming environment.
ERROR RECOVERY SUPPORT IN MANUFACTURING CONTROL SYSTEMS
Instructing production equipment on a shop floor in the mid-1990s is still done using concepts from the 1950s, although the electro-mechanical devices forming logical gates and memory cells have nowadays been replaced by computers. The gates and memory cells are now presented in graphical editors, and there is support for defining new building blocks (gates) handling complex data, as well as for advanced graphical monitoring of a production process. The progress during the last 40 years concerns the implementation of the control system, not the instruction of the control system. Although this progress has increased the flexibility of the equipment, it has not provided any support for error recovery.

The contribution of this thesis is twofold. Firstly, it presents Aramis - A Robot and Manufacturing Instruction System - which extends the traditional instruction taxonomy with notions developed in computer science. Aramis uses a graphical task specification language which operates on an abstract model of the plant, a world model. The essence of the difference between Aramis and current practice is the use of abstraction layers to focus on different aspects of the instruction task, and the fact that Aramis retains much specification knowledge about the controlled plant, knowledge which can be reused for other purposes such as error recovery. Secondly, the problem of error recovery is investigated, and a proposal for how to provide recovery support in a system structured like Aramis is presented. The proposal includes the generation of a multitude of possible restart points in a task program, and it uses a planning approach to support the modification of the current state of the machinery to the ”closest” restart point.
INFORMATION TECHNOLOGY AND INDUSTRIAL LOCATION, EFFECTS UPON PRODUCTIVITY AND REGIONAL DEVELOPMENT
Companies within the service sector have begun relocating units. Information technology guarantees their ability to communicate from remote areas. In addition, companies suffered from high costs in the Stockholm area, mainly during the 1980s. This report aims to identify the effects the relocations have had on the companies and the regions, and to describe the process of finding a new site, including the financial support from regional funds. The study was based on 57 different relocations.
The study has shown that 63% of the companies have been more effective after relocation. The main reason for this has been the use of the specific advantages the new site may offer, such as lower employee turnover and lower rents. Along with the local advantages, the most successful companies have used IT not only in order to increase production, but also in order to change the organisation. With technical support, communication both within the organisation and with the customers is improved, which decreases transaction costs.
The communities to which the companies have relocated the units have been within regional support areas, which entitles the companies to governmental support. The relocated units have had a very positive impact on the communities. The reason is that these areas are very often dominated by a single industry and have a low employment rate, leaving youth and women without jobs and resulting in out-migration. The new companies usually hire people from these groups, which means that employment increases not only overall but also within a desired sector. It must however be remembered that the effect on the rest of the community is limited. Indirectly the settlements will generate goodwill and a positive atmosphere, but they will not create a large number of jobs, since the companies do not use suppliers or transportation services.
Since 1983, representatives of the government have actively encouraged companies to relocate units. Besides the centrally financed funds, there has been support from both regional and local governments. Added together, the total support for each job guaranteed for five years averages 350,000 SEK, which is far more than expected. The reason for the excess contributions is the uncoordinated and overlapping contributions from different actors at various stages of the relocation process.
Besides increasing the efficiency of the companies, IT offers local communities in remote areas new possibilities for development. Taken together, and keeping in mind that the improvements have been made within the expanding service sector, the effects may lead to national growth.
No FHS 3/94
INFORMATIONSSYSTEM MED VERKSAMHETSKVALITET - UTVÄRDERING BASERAT PÅ ETT VERKSAMHETSINRIKTAT OCH SAMSKAPANDE PERSPEKTIV
The overall research task of this work is to discuss and define the concept of business quality (verksamhetskvalitet) and to develop an evaluation method for assessing information systems with business quality; the evaluation method has also been tested empirically on two evaluation occasions. The concept of business quality and the evaluation method are based on a business-oriented and co-creative perspective on information systems and quality. Business quality is the central concept, and from it an evaluation method for information systems with business quality is defined. The concept of business quality thus becomes a synthesis of the views of quality found in the two underlying perspectives. It has been developed through a critical analysis of the two perspectives, and on the basis of this analysis criteria characterizing the concept have been deliberately selected. The concept of business quality also includes criteria and values that are not in focus in the two perspectives, but that must be included in order to obtain a quality concept that is useful in the evaluation of information systems.
No FHS 4/94
INFORMATIONSSYSTEMSTRUKTURERING, ANSVARSFÖRDELNING OCH ANVÄNDARINFLYTANDE - EN KOMPARATIV STUDIE MED UTGÅNGSPUNKT I TVÅ INFORMATIONSSYSTEMSTRATEGIER
This report identifies a number of types of responsibility concerning information systems and information system structuring. Six case studies have been carried out in organizations applying either a data-driven or a business-based information system strategy. Based on these organizations, experiences of how responsibility is allocated and realized are presented, through a comparison between the strategies' theoretical principles for the allocation of responsibility and the actual outcomes. The report also studies which possibilities for user influence each strategy offers, and how this influence is exercised in practice.
The report consists of a theoretical part, in which two information system strategies, Information Resource Management (IRM) and Business-Based System Structuring (VerksamhetsBaserad Systemstrukturering, VBS), are compared at the level of ideal types. This is followed by a review of the empirical material from the case studies mentioned above.
The results of the study show that the ideal images of the strategies differ on a number of points, but that they nevertheless lead to a series of similar consequences with regard to the allocation of responsibility and user influence. There are several reasons why it is difficult to get system owners, management and users to assume their responsibility. The consequence is that the computing department, or a similar function, has to take the actual responsibility for systems and strategy. The users' ability and willingness to take responsibility has, among other things, proved to depend on the degree of influence they had during systems development.
OBJECT VIEWS OF RELATIONAL DATA IN MULTIDATABASE SYSTEMS
In a multidatabase system it is possible to access and update data residing in multiple databases. The databases may be distributed, heterogeneous, and autonomous. The first part of the thesis provides an overview of different kinds of multidatabase system architectures and discusses their relative merits. In particular, it presents the AMOS multidatabase system architecture which we have designed with the purpose of combining the advantages and minimizing the disadvantages of the different kinds of proposed architectures.
A central problem in multidatabase systems is that of data model heterogeneity: the fact that the participating databases may use different conceptual data models. A common way of dealing with this is to use a canonical data model (CDM). Object-oriented data models, such as the AMOS data model, have all the essential properties which make a data model suitable as the CDM. When a CDM is used, the schemas of the participating databases are mapped to equivalent schemas in the CDM. This means that the data model heterogeneity problem in AMOS is equivalent to the problem of defining an object-oriented view (or object view for short) over each participating database.
We have developed such a view mechanism for relational databases. This is the topic of the second part of the thesis. We discuss the relationship between the relational data model and the AMOS data model and show, in detail, how queries to the object view are processed.
We discuss the key issues when an object view of a relational database is created, namely: how to provide the concept of object identity in the view; how to represent relational database access in query plans; how to handle the fact that the extension of types in the view depends on the state of the relational database; and how to map relational structures to subtype/supertype hierarchies in the view.
A special focus is on query optimization.
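One common way to obtain object identity over relational data is sketched below (a sketch under assumptions; this is not the AMOS implementation, and `ObjectView` and its methods are invented names): derive a surrogate object identifier from the table name and the primary-key value of each row.

```python
# Sketch: giving relational rows stable object identity in an object
# view by deriving a surrogate OID from (table name, primary-key value).

class ObjectView:
    def __init__(self, table_name, key_column, rows):
        self.table_name = table_name
        self.key_column = key_column
        self.rows = rows                 # stand-in for the relational DB
        self._proxies = {}               # OID -> proxy object, for identity

    def oid(self, row):
        """Surrogate object identifier for a row."""
        return (self.table_name, row[self.key_column])

    def objects(self):
        """The type extent: one proxy per current row, so the extension
        of the view type tracks the state of the relational database."""
        for row in self.rows:
            key = self.oid(row)
            if key not in self._proxies:     # same row -> same proxy
                self._proxies[key] = dict(row)
            yield key, self._proxies[key]
```

Caching the proxies ensures that asking for the same row twice yields the same object, which is the essence of the object-identity problem mentioned above.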
A DECLARATIVE APPROACH TO DEBUGGING FOR LAZY FUNCTIONAL LANGUAGES
Debugging programs written in lazy functional languages is difficult, and there are currently no realistic, general purpose debugging tools available. The basic problem is that computations in general do not take place in the order one might expect. Furthermore, lazy functional languages to a large extent free programmers from concerns regarding operational issues such as evaluation order, i.e. they are ‘declarative’. Debugging should therefore take place at the same, high level of abstraction. Thus, we propose to use algorithmic debugging for lazy functional languages, since this technique allows the user to focus on the declarative semantics of a program.
However, algorithmic debugging is based on tracing, and since the trace reflects the operational behaviour of the traced program, the trace should be transformed to abstract away these details if we wish to debug as declaratively as possible. We call this transformation strictification, because it makes the trace more like a trace from a strict language.
In this thesis, we present a strictifying algorithmic debugger for a small lazy functional language, and some experience from using it. We also discuss its main shortcomings, and outline a number of ideas for building a more realistic debugger. The single most pressing problem is the size of a complete trace. We propose to use a piecemeal tracing scheme to overcome this, by which only a part of the trace is stored at any one time, other parts being created on demand by re-executing the program.
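The core question-asking loop of algorithmic debugging can be sketched as follows (a minimal illustration; the thesis' debugger additionally builds and strictifies a real execution trace, which this sketch takes as given):

```python
# Sketch of algorithmic debugging: walk an evaluation tree of
# (call, result, children) nodes, asking an oracle (normally the user)
# whether each result matches the intended semantics.  A node whose
# result is wrong while all its children are right is reported buggy.

def find_bug(node, oracle):
    call, result, children = node
    if oracle(call, result):          # this reduction is correct
        return None
    for child in children:            # is some child reduction wrong?
        bug = find_bug(child, oracle)
        if bug is not None:
            return bug
    return call                       # wrong here, children fine: buggy
```

Because the user only judges declarative input/output pairs, the evaluation order never needs to be understood, which is why the trace should first be strictified.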
CREDITOR - FIRM RELATIONS: AN INTERDISCIPLINARY ANALYSIS
The thesis gives a survey of theories relevant for understanding the problems in relations between lending banks and borrowing business firms.
First, a survey of comparative financial systems is given. The main types are bank-oriented (Germany, Japan, Sweden) and market-oriented systems (USA, GB). In the bank-oriented systems the risk exposure due to high firm indebtedness is counteracted by trust in dense informal bank-firm networks. Market-oriented systems are characterized by arm's-length bank-firm relations. Legal rules hinder the banks from active long-term relations with borrowing firms. Firms are financed on the anonymous markets.
Sociology provides theory for analysis: social cohesion, norms, networks and trust. Institutional arrangements provide norms for societal cooperation that are enforced by culture. Traditional and modern society are used to exemplify two different ways of upholding social cohesion with emphasis on business relations.
Concepts from neoclassical economic theory for analyzing these relations are: agency, transaction costs, contract, and asymmetric information.
Game theory models strategic behaviour and conflict; long-term relations can be interpreted as a way of bonding partners in an n-period Prisoner's Dilemma game. A model is developed for analyzing bank-firm interaction for a firm in insolvency in a bank-oriented system.
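How repetition sustains such bonding can be illustrated with standard (not thesis-specific) Prisoner's Dilemma payoffs: against a partner who withdraws cooperation after a defection, cooperating dominates once the relationship is long enough.

```python
# Standard Prisoner's Dilemma payoffs (illustrative values, not from
# the thesis): mutual cooperation R=3, mutual defection P=1,
# unilateral defection T=5 against the sucker's payoff S=0.
R, P, T, S = 3, 1, 5, 0

def defect_forever(n):
    # Defect in round 1; a grim-trigger partner then defects for the
    # remaining n-1 rounds, so both sides collect only P thereafter.
    return T + (n - 1) * P

def cooperate(n):
    # Cooperate every round against a grim-trigger partner.
    return n * R

# cooperate(10) = 30 beats defect_forever(10) = 14: the long-term
# relation makes cooperation the rational strategy.
```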
The thesis concludes with a speculative integrative model for the development of the business community. Three models are identified and named: the Oligarchy, the War-Lords and the Business(like) Rationality. The last model is an attempt to construct a model on the advantages from both The Oligarchy (inspired by the bank-oriented systems) and the War-Lords (inspired by the market-oriented systems).
ACTIVE RULES BASED ON OBJECT RELATIONAL QUERIES - EFFICIENT CHANGE MONITORING TECHNIQUES
The role of databases is changing because of the many new applications that need database support. Applications in technical and scientific areas have a great need for data modelling and application-database cooperation. In an active database this is accomplished by introducing active rules that monitor changes in the database and that can interact with applications. Rules can also be used in databases for managing constraints over the data, support for management of long running transactions, and database authorization control.
This thesis presents work on tightly integrating active rules with a second generation Object-Oriented (OO) database system having transactions and a relationally complete OO query language. Such systems have been named Object Relational. The rules are defined as Condition Action (CA) pairs that can be parameterized, overloaded, and generic. The condition part of a rule is defined as a declarative OO query and the action as procedural statements.
Rule condition monitoring must be efficient with respect to processor time and memory utilization. To meet these goals, a number of techniques have been developed for compilation and evaluation of rule conditions. The techniques permit efficient execution of deferred rules, i.e. rules whose executions are deferred until a check phase usually occurring when a transaction is committed.
A rule compiler generates screener predicates and partially differentiated relations. Screener predicates screen physical events as they are detected in order to efficiently capture those events that influence activated rules. Physical events that pass through screeners are accumulated. In the check phase the accumulated changes are incrementally propagated to the relations that they affect in order to determine whether some rule condition has changed. Partial Differentiation is defined formally as a way for the rule compiler to automatically generate partially differentiated relations. The techniques assume that the number of updates in a transaction is small and therefore usually only some of the partially differentiated relations need to be evaluated. The techniques do not assume permanent materializations, but this can be added as an optimization option. Cost based optimization techniques are utilized for both screener predicates and partially differentiated relations. The thesis introduces a calculus for incremental evaluation based on partial differentiation. It also presents a propagation algorithm based on the calculus and a performance study that verifies the efficiency of the algorithm.
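The idea behind partial differentiation of rule conditions can be sketched on a single join; the relation encoding and function names here are illustrative only, not the thesis's calculus:

```python
# Incremental evaluation by "partial differentiation" of a join:
# when a transaction inserts a small delta into one base relation,
# only the changed tuples are combined with the other relation,
# instead of recomputing the whole derived relation.

def join(r, s):
    # r holds (a, b) tuples, s holds (b, c) tuples; join on b.
    return {(a, b, c) for (a, b) in r for (b2, c) in s if b == b2}

def delta_join_wrt_r(delta_r, s):
    # Partial differential of join(r, s) with respect to r:
    # combine only the *changed* r-tuples with s.
    return {(a, b, c) for (a, b) in delta_r for (b2, c) in s if b == b2}

r = {(1, "x"), (2, "y")}
s = {("x", 10), ("y", 20)}
old = join(r, s)
delta_r = {(3, "x")}                      # tuples inserted in a transaction
new_incremental = old | delta_join_wrt_r(delta_r, s)
```

For insertions, the incremental result coincides with full re-evaluation, but only `|delta_r| * |s|` pairs are inspected, matching the assumption that transactions touch few tuples.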
A COLLABORATIVE APPROACH TO USABILITY ENGINEERING: TECHNICAL COMMUNICATORS AND SYSTEM DEVELOPERS IN USABILITY-ORIENTED SYSTEMS DEVELOPMENT
For the last 20 years, the human-computer interaction research community has provided a multitude of methods and techniques intended to support the development of usable systems, but the impact on industrial software development has been limited. One of the reasons for this limited success is argued to be the gap between traditional academic theory generation and industrial practice.
Furthermore, technical communicators (TCs) have until recently played a subordinate role in software design, even in usability-oriented methods. Considering their close relation to the users of the developed systems, and to the usability issue itself, they constitute a hidden resource, which potentially would contribute to the benefit of more usable systems.
We formed the Delta project as a joint effort between industry and academia. The objectives of the project were to jointly develop usability-oriented method extensions, adapted for a particular industrial setting, and to account for the specialist competence of the TCs in the software development process. This thesis is a qualitative study of the development, introduction and evaluation of the Delta method extension. The analysis provides evidence in favor of a closer collaboration between system developers (SDs) and TCs. An additional outcome of the in-depth study is a proposed redefinition of the extended interface concept, taking into account the inseparability of user documentation and user interface, while providing a natural common ground for a closer collaboration between SDs and TCs.
No FHS 5/94
VARFÖR CASE-VERKTYG I SYSTEMUTVECKLING? EN MOTIV- OCH KONSEKVENSSTUDIE AVSEENDE ARBETSSÄTT OCH ARBETSFORMER
For the development of computer-based information systems, many different types of computer support now exist. In recent years there has been intensive development of various tools (software) intended to assist in systems development (SD). Tools for the later phases (realization/construction) of the SD process have existed for a long time, but computer-supported aids have now also been constructed for the earlier phases (analysis, design).
Tools that support the SD process are often called CASE tools, where CASE stands for Computer Aided Systems/Software Engineering. CASE tools supporting early phases are usually called upper-CASE and those supporting later phases lower-CASE. This report identifies a number of motives for introducing CASE tools in the earlier phases of the systems development process. It also identifies consequences of CASE tool use for work methods and work forms. Finally, it examines whether the motives that prompted an investment in CASE tools have been fulfilled.
Six case studies have been carried out in Swedish companies that develop administrative systems. Based on these case studies, experiences concerning the motives for and consequences of CASE tool use are presented. The report consists of an introductory part presenting the problem area, research method and theoretical premises, followed by the empirical material from the case studies.
The results of the study show that the principal motives for introducing CASE tools are to gain competitive advantages through shorter project times and better product quality. Nothing in the study indicates that the use of CASE tools has changed work forms; CASE tools are mainly used in individual work.
Various consequences for work methods have been identified. The CASE tools used in the case studies can primarily be classified as documentation support, mainly supporting the production of diagrams. Users of the investigated CASE tools considered that they received acceptable support for diagram design and object storage. In several of the studied cases, the motives for introduction are considered fulfilled and the tools' functionality acceptable; in other cases, the motives are considered unfulfilled and the functionality insufficient.
To succeed with a CASE introduction, it is important that a careful needs analysis is performed and that the CASE tools are subjected to extensive trials beforehand. The results of these trials should form the basis for an evaluation and assessment of the tool's capability. The absence of documented evaluations is striking in the studied cases.
A STUDY OF TRACEABILITY IN OBJECT-ORIENTED SYSTEMS DEVELOPMENT
We regard a software system as consisting not only of its source code, but also of its documented models. Traceability is defined as the ability to trace the dependent items within a model and the ability to trace the correspondent items in other models. A common use of the term traceability is requirements traceability which is the ability to trace a requirement via the different models to its implementation in the source code. Traceability is regarded as a quality factor that facilitates maintenance of a software system.
The thesis results from a case study performed on a large commercial software system developed with an object-oriented methodology, largely implemented in C++ and using a relational database. A number of concrete traceability examples collected from the project are the result of a thorough investigation of the various models produced during the project. The examples are thoroughly analyzed and discussed, forming the main contribution of this research. Work with the examples has yielded insight and knowledge regarding traceability and object-oriented modeling.
STRATEGI OCH EKONOMISK STYRNING - EN STUDIE AV SANDVIKS FÖRVÄRV AV BAHCO VERKTYG
Acquisitions of one company by another are a common feature of Swedish and international business, yet the share of failed acquisitions is high. It was noted early on that many acquisitions fail because of inadequate planning and the absence of a strategic analysis. Later research has shown that how the change process after an acquisition is managed also affects the outcome. Management control systems are a change mechanism that can be assumed to play a major role in this process. The purpose of this study is therefore to examine the role of management control systems in the change process that follows an acquisition.
A research model has been developed, one of whose theoretical premises is that corporate acquisitions should result from a strategic analysis. When the analysis leads to the acquisition being carried out, some change in the acquired company's business strategy can be assumed to follow. A modified business strategy entails new demands for information for planning, decision making and control. The acquired company's management control systems must then be designed to meet these new information requirements. The control systems can thereby be used to steer behaviour in line with the strategic direction. This is a precondition for realizing synergies and thereby improving the acquired company's results.
The model was tested in an empirical study of Sandvik's acquisition of the Bahco Verktyg group. Among the tool group's business units, Sandvik Bahco in Enköping was chosen as a suitable object of study. The change process within Sandvik Bahco after the acquisition can be characterized as linear and sequential, consisting of the phases strategy formulation and implementation. The strategic analysis led to the conclusion that Sandvik Bahco's business strategy did not need to change to any great extent. Within the existing strategy, however, changes were considered necessary in areas such as quality, delivery reliability and productivity. These changes were implemented partly by changing the design and use of Sandvik Bahco's management control systems, which made it possible to monitor the change work continuously. Although the control systems were adapted to the business strategy to a high degree, it has not been possible to establish a connection between the design of the control systems and the outcome of the acquisition.
The changes to Sandvik Bahco's management control systems can also be attributed to group-wide instructions and recommendations. Such central guidelines usually do not take the business unit's specific operations into account, which can impair the fit with the unit's business strategy. No such impaired fit could be demonstrated for Sandvik Bahco's control systems, however. This can be explained by the fact that the information needs arising from the strategic directions of the Sandvik group and of Sandvik Bahco are similar, so the demands on the group's and the business unit's control systems are much alike.
COLLAGE INDUCTION: PROVING PROGRAM PROPERTIES BY PROGRAM SYNTHESIS
The motivation behind this thesis is to formally prove programs correct. The contributions are twofold:
Firstly, a new rule of mathematical induction, called collage induction, is introduced, which treats mathematical induction as a natural generalization of the CUT rule. Conceptually the rule can be understood as follows: to prove an implication Γ ⇒ Δ, an inductively defined property p is constructed such that Γ ⇒ p. The implication p ⇒ Δ is then proved by induction according to the definition of p.
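The cut-like shape of the rule (whose symbols appear garbled in the source; Γ, Δ and ⇒ are reconstructions of the sequent notation) can be written schematically as:

```latex
% Collage induction viewed as a generalized CUT: to prove the goal
% sequent, interpolate an inductively defined property p.
\[
\frac{\Gamma \Rightarrow p
      \qquad
      p \Rightarrow \Delta \quad
      \text{(proved by induction on the definition of } p\text{)}}
     {\Gamma \Rightarrow \Delta}
\]
```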
Secondly, a program synthesis method for extracting programs from proofs in Extended Execution is generalized to allow the relations defining p to be synthesized from the proof of an induction lemma.
SPECIFICATION AND SYNTHESIS OF PLANS USING THE FEATURES AND FLUENTS FRAMEWORK
An autonomous agent operating in a dynamic environment will face a number of different reasoning problems, one of which is how to plan its actions in order to pursue its goals. For this purpose, it is important that the agent represents its knowledge about the world in a coherent, expressive and well-understood way; in our case, the temporal logics from Erik Sandewall's "Features and Fluents" framework.
However, most existing planning systems make no use of temporal logics, but have specialised representations such as the STRIPS formalism and hierarchical task networks. In order to benefit from the techniques used by these planners, it is useful to analyse and reconstruct them within the given framework. This includes making explicit the ontological and epistemological assumptions underlying the planners; representing plans as entities of the temporal logic; and reconstructing the algorithms in terms of the new representation.
In this thesis, a number of traditional planners are analysed and reconstructed in this way. The total-order planner STRIPS, the partial-order planner TWEAK, the causal-link planner SNLP, and finally the decompositional planner NONLIN are all examined. The results include reconstructions of the planners mentioned, operating on a temporal logic representation, and truth criteria for total-order and partial-order plans. There is also a discussion regarding the limitations of traditional planners from the perspective of ”Features and Fluents”, and how these limitations can be overcome.
ON CONCEPTUAL MODELLING OF MODE SWITCHING SYSTEMS
This thesis deals with fundamental issues underlying the systematic construction of behaviour models of physical systems, especially man-made engineering systems. These issues are important for the design and development of effective computer aided modelling systems providing high-level support for the difficult task of modelling. In particular, the thesis is about conceptual modelling of physical systems, i.e. modelling characterized by the explicit use of well-defined abstract physical concepts. An extensive review of conceptual modelling is presented, providing good insight into modelling in its own right and forming a useful reference for the development of computer aided modelling systems.
An important contribution of this work is the extension of the conceptual modelling framework by an ideal switch concept. This novel concept enables a uniform and systematic treatment of physical systems involving continuous as well as discrete changes. In the discussion of the switch concept, the bond graph approach to modelling is used as a specific example of a conceptual modelling approach, and the bond graph version of the switch is presented. This switch element fully complies with the standard bond graph modelling formalism.
The thesis consists of six papers. The first paper contains an extensive review of (traditional) conceptual modelling. The next three papers deal with the ideal switch concept and the remaining two discuss an application. Four of these papers have been published in proceedings of international conferences.
REASONING ABOUT CONCURRENT ACTIONS IN THE TRAJECTORY SEMANTICS
We have introduced concurrency into Sandewall's framework. The resulting formalism is capable of reasoning about interdependent as well as independent concurrent actions. Following Sandewall's systematic method, we have then applied the entailment criterion PCM to the selection of intended models of common-sense theories where concurrent actions are allowed, and proved that the criterion yields only intended models for a subset of such theories. Our work implies that most of Sandewall's results on the range of applicability of logics for sequential actions can, after the necessary generalizations, be reobtained for concurrent actions as well.
SUCCESSIV RESULTATAVRÄKNING AV PÅGÅENDE ARBETEN. FALLSTUDIER I TRE BYGGFÖRETAG
In Sweden, the external accounting of profit on work in progress has long been directly adapted to a strict interpretation of the realization principle and a strict application of the prudence principle. In view of harmonization efforts and international regulatory activity moving towards accrual (matching) accounting, it has been considered important to study the conditions for a change of principle in external accounting in Sweden as well.
The purpose of the study has been to map and describe the potential problems that applying percentage-of-completion accounting may entail. The study was conducted from an overarching systems perspective, meaning that building and construction companies are regarded as a special category of firms with characteristics of their own. The theoretical point of departure focuses on the decision usefulness of the information for external stakeholders. Consequently, the relevance and reliability of the accounting data are discussed in detail with respect to the realization and prudence principles. This discussion resulted in strong support for accrual recognition of profit on work in progress.
A frame of reference was then formulated from a selection of international recommendations and standards in which the regulators have taken a position in favour of percentage-of-completion accounting for construction contracts. The purpose was to analyse the selected recommendations and standards in order to identify and illuminate significant problem areas in applying percentage-of-completion accounting. The review resulted in a working model with detailed specifications of the structure of the problems. The working model then formed the basis for an interview guide used in the case studies, which were carried out to illuminate the identified problem areas empirically.
The selection of case companies covers a majority of the Swedish listed companies engaged in building and civil engineering contracting; the study therefore has the character of a total survey. Since three case studies were carried out, the possibilities for generalization were judged to be good. The three cases were PEAB Öst AB, a contracting company within the PEAB group, Siab, and Skanska.
The results indicated great agreement between the companies in all problem areas. Among the conclusions are that the building and construction companies in the study have the capacity, knowledge and experience to apply percentage-of-completion accounting to their fixed-price projects. It was also found that the companies have well-developed systems capacity for handling their projects with respect to costing, accounting and follow-up. The problems observed were above all attributable to forecasting, where possible deficiencies in future estimates of remaining costs were judged to have a great impact on the accuracy of the recognized profit shares. The initial phases, in which production estimates are formulated and accounts are chosen, were also seen as critical for this accuracy. Finally, the recording of incurred costs was itself seen as a potential source of error, since it can be difficult to assign costs to the right project and project part.
The implication of the results is that it should be possible to apply percentage-of-completion accounting in external reporting for the investigated category of companies as well. The study, however, touched only briefly on the tax consequences that a change of principle could entail, so these could not be analysed.
No FHS 7/95
ARBETSINTEGRERAD SYSTEMUTVECKLING MED KALKYLPROGRAM
The area of study of this report can be formulated as follows: "Using spreadsheet programs, and within the scope of their ordinary line duties, people develop applications for their own needs which, if produced with 'traditional' systems development, would require considerably larger investments of time, staff and specialist competence." This activity can be seen from a process-product perspective, where the process is called work-integrated systems development with spreadsheet programs (AIS-K, from the Swedish arbetsintegrerad systemutveckling med kalkylprogram) and the product is called a spreadsheet system. The people who carry out the activities are called user-developers.
The purpose of the report is to analyse AIS-K as a phenomenon. AIS-K has been analysed theoretically and empirically. The theoretical analysis consisted of relating AIS-K to the life-cycle model and to Nurminen's HIS model. Four empirical studies were carried out, one through reconstructed participant observation and the others through interviews and questionnaires.
The results show that work-integrated systems development is an expression of integration, partly of work tasks with systems development and partly of different systems development activities with each other. In relation to "traditional" systems development, the focus is on the work task rather than on the development work. AIS-K is characterized by stepwise refinement, interwoven activities, a declarative way of working, and the absence of standardized method use. Spreadsheet systems can be classified by the prior knowledge of spreadsheet programs they require of user-developers. AIS-K entails changed roles in development work, where the user-developer combines the developer role with some traditional user role. The purposes of spreadsheet systems can be rationalization, decision support or strategy.
COMPLEXITY OF STATE-VARIABLE PLANNING UNDER STRUCTURAL RESTRICTIONS
Computationally tractable planning problems reported in the literature have almost exclusively been defined by syntactical restrictions. To better exploit the inherent structure in problems, it is probably necessary to study also structural restrictions on the underlying state-transition graph. Such restrictions are typically hard to test since this graph is of exponential size. We propose an intermediate approach, using a state variable model for planning and defining restrictions on the state-transition graph for each state variable in isolation. We identify such restrictions which are tractable to test and we present a planning algorithm which is correct and runs in polynomial time under these restrictions.
Moreover, we present an exhaustive map over the complexity results for planning under all combinations of four previously studied syntactical restrictions and our five new structural restrictions. This complexity map considers both the bounded and unbounded plan generation problem. Finally, we extend a provably correct, polynomial-time planner to plan for a miniature assembly line, which assembles toy cars. Although somewhat limited, this process has many similarities with real industrial processes.
TOWARDS STUDENT MODELLING THROUGH COLLABORATIVE DIALOGUE WITH A LEARNING COMPANION
Eva L Ragnemalm
Understanding a student's knowledge is an important part of adapting the instruction to that individual student. In the area of Intelligent Tutoring Systems this is called Student Modelling.
A specific student modelling problem is studied in the situation where a simulator-based learning environment is used to train process operators in diagnosis. An experiment shows that the information necessary for building a student model is revealed in the dialogue between two students collaborating on diagnosing a fault.
As a side effect of this investigation a framework for describing student modelling emerged. In this framework student modelling is viewed as the process of bridging the gap between observations of the student and the system's conception of the knowledge to be taught.
This thesis proposes the use of a Learning Companion as the collaboration partner. The ensuing dialogue can then be used for student modelling. An analysis of the collaborative dialogue is presented and several important characteristics are identified. An architecture is presented and a prototype that reproduces these characteristics is described.
CONTRIBUTIONS TO PARALLEL MULTIPARADIGM LANGUAGES: COMBINING OBJECT-ORIENTED AND RULE-BASED PROGRAMMING
Today, object-oriented programming is widely used as a practical tool. For some types of complex applications, the object-oriented style needs to be complemented with other types of programming paradigms into a multiparadigm language. One candidate for such a complement is the rule-based programming paradigm. For this purpose, several object-oriented languages have been extended with rule-based features from production systems.
- We propose a loosely coupled parallel multiparadigm language based on object-orientation, features from production systems, and ideas from the joint action concept. The latter is a model for writing executable specifications, but basically it is a rule-oriented technique. It has a loose coupling between objects and actions, which is essential to extend an object-oriented language in an evolutionary way.
- Production systems have a natural potential for massively parallel execution and a general execution model. However, they have traditionally been limited to applications within the area of artificial intelligence. If the restrictions imposed by the traditional problems are eliminated, rule-based programming can become practical for a wider spectrum of applications and can utilize parallelism to a higher degree.
- The main contribution of this thesis is to investigate the possibilities of cross-fertilization between some research areas that can contribute to a language of the proposed type. These areas are object-orientation, production systems, parallel computing, and to some extent formal specification languages and database management systems.
- A prototype, intended to verify some of our ideas, has been built with the Erlang functional language and is implemented on a parallel machine.
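The recognize-act cycle on which production systems are built can be sketched minimally as follows; the rule format and conflict-resolution strategy here are illustrative, not the language proposed in the thesis:

```python
# A minimal recognize-act cycle of a production system: rules are
# (condition, action) pairs over a working memory of facts.

def run(rules, memory, max_cycles=100):
    for _ in range(max_cycles):
        fired = False
        for condition, action in rules:
            if condition(memory):
                action(memory)        # act on the first matching rule
                fired = True
                break                 # conflict resolution: textual order
        if not fired:
            return memory             # quiescence: no rule matches
    return memory

# Example: derive fact "c" once both "a" and "b" are in memory.
rules = [
    (lambda m: "a" in m and "b" in m and "c" not in m,
     lambda m: m.add("c")),
]
memory = {"a", "b"}
run(rules, memory)
```

A massively parallel execution would instead match all rules against the memory simultaneously, which is the potential the abstract alludes to.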
A PETRI NET BASED UNIFIED REPRESENTATION FOR HARDWARE/SOFTWARE CO-DESIGN
This thesis describes and defines a design representation model for hardware/software co-design. To illustrate its usefulness we show how designs captured in the representation can be repartitioned by moving functionality between hardware and software. We also describe a co-simulator which has been implemented using the representation and can be used to validate systems consisting of hardware and software components.
The term co-design implies the use of design methodologies for heterogeneous systems which emphasize the importance of keeping a system-wide perspective throughout the design process and letting design activities in different domains influence each other. In the case of hardware/software systems this means that the design of hardware and software subsystems should not be carried out in isolation from each other.
We are developing a design environment for hardware/software co-design and the objective of the work presented in this thesis has been to define the internal representation that will be used in this environment to capture designs as they evolve. The environment should support transformation-based design methods and in particular it should be possible to introduce transformations that move functionality from hardware to software and vice versa, thus allowing repartitioning of designs to be performed as a part of the normal optimization process.
Our co-design representation captures systems consisting of hardware circuits cooperating with software processes run on pre-defined processors. Its structure and semantics are formally defined, and hardware and software are represented in very similar manners, as regards both structure and semantics. Designs are represented as subsystems linked by communication channels. Each subsystem has a local controller which is represented by a Petri net. Data manipulation representation is based on datapaths in hardware and syntax trees in software. The representation is executable. It captures both abstract and concrete aspects of designs and supports transformation-based design methods. Implementation of the co-design representation has demonstrated that it can be used for several important tasks of the hardware/software co-design process.
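The token-game semantics of such a Petri net controller can be illustrated with a minimal interpreter; this is a generic sketch, not the formally defined representation of the thesis:

```python
# Minimal Petri net interpreter: a marking maps places to token
# counts; a transition (inputs, outputs) is enabled when every input
# place holds a token, and firing consumes inputs and produces outputs.

def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    m = dict(marking)                 # markings are immutable snapshots
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# A two-step local controller: start -> compute -> done.
t1 = (["start"], ["compute"])
t2 = (["compute"], ["done"])
m0 = {"start": 1}
m1 = fire(m0, t1)
m2 = fire(m1, t2)
```

In a co-design setting, hardware and software subsystems would each own such a controller, with communication channels synchronizing transitions across subsystems.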
ENVIRONMENT SUPPORT FOR BUILDING STRUCTURED MATHEMATICAL MODELS
This thesis is about two topics. It describes a high-level programming environment for scientific computing, called ObjectMath, and several contributions to this environment. It also analyses the concept of software development environment architecture, in particular with respect to the ObjectMath environment. The ObjectMath programming environment is designed to partly automate many aspects of the program development cycle in scientific computing by providing support for high-level object-oriented mathematical modelling and generation of efficient numerical implementations from such high-level models. There is a definite need for such tools, since much scientific software is still written in Fortran or C in the traditional way, manually translating mathematical models into procedural code and spending much time on debugging and fixing convergence problems. The automatic code generation facilities in the ObjectMath environment eliminate many of the problems and errors caused by this manual translation. The ObjectMath language is a hybrid language, combining computer algebra facilities from Mathematica with object-oriented constructs for single and multiple inheritance and composition. Experience from using the environment shows that such structuring concepts increase re-use and readability of mathematical models. Large object-oriented mathematical models are only a third of the size of corresponding models that are not object-oriented. The system also provides some support for visualization, both for models and for numerical results.
The topic of engineering a software development environment is very important in itself, and has been dealt with primarily from an architectural point of view. Integration of different tools and support facilities in an environment is important in order to make it powerful and easy to use. On the other hand, developing whole environments completely from scratch is very costly and time-consuming. In the ObjectMath project, we have followed an approach of building an integrated environment using mostly pre-existing tools, which turned out very well. In this thesis the integration aspects of ObjectMath are analysed with respect to three dimensions: control, data and user interface, according to a general model described by Schefström. The conclusion is that ObjectMath fits this model rather well, and that this approach should be successful in the design of future environments, if the integration issues are dealt with in a systematic way. In addition, the analysis provides some guidance regarding integration issues that could be enhanced in future versions of ObjectMath.
STRUCTURE DRIVEN DERIVATION OF INTER-LINGUAL-ARGUMENT TREES FOR MULTILINGUAL GENERATION
We show how an inter-lingual representation of messages can be exploited for natural language generation of technical documentation into Swedish and English in a system called Genie. Genie has a conceptual knowledge base of the facts considered as true in the domain. A user queries the knowledge base for the facts she wants the document to include. The responses constitute the messages, which are multi-lingually generated into Swedish and English texts.
The particular kind of conceptual representation of messages that is chosen allows for two assumptions about inter-linguality: (i) Syntactic compositionality, viz. the linguistic expression for a message is a function of the expressions obtained from the parts of the message. (ii) A message has in itself an adequate expression, which restricts the size of the input to generation. These assumptions underlie a grammar that maps individual messages to linguistic categories in three steps. The first step constructs a functor-argument tree over the target language syntax using a non-directed unification categorial grammar. The tree is an intermediate representation that includes the message and the assumptions. It lies closer to the target languages but is still language neutral. The second step instantiates the tree with linguistic material according to target language. The final step uses the categorial grammar application rule on each node of the tree to obtain the resulting basic category. It contains an immediate representation for the linguistic expression of the message, and is trivially converted into a string of words. Some example texts in the genre have been studied. Their sublanguage traits clearly enable generation by the proposed method.
The results indicate that Genie, and possibly other comparable systems that have a conceptual message representation, benefits in efficiency and ease of maintenance of the linguistic resources by making use of the knowledge-intensive multi-lingual generation method described here.
PREDICTION AND POSTDICTION UNDER UNCERTAINTY
An intelligent agent requires the capability to predict what the world will look like as a consequence of its actions. It also needs to explain present observations in order to infer previous states. This thesis proposes an approach to realize both capabilities, that is, prediction and postdiction based on temporal information. In particular, there is always some uncertainty in the knowledge about the world which the autonomous agent inhabits; therefore we handle uncertainty using probability theory. None of the previous works dealing with quantitative (or numerical) approaches has addressed the postdiction problem in designing an intelligent agent. This thesis presents a method to resolve the postdiction problem under uncertainty.
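The thesis's own formalism is not reproduced in the abstract; as a minimal probabilistic sketch of the two capabilities (all states, models and numbers below are invented for illustration), prediction pushes a belief forward through an action model, while postdiction uses Bayes' rule to explain an observation in terms of the state that produced it:

```python
# Prior belief over a two-state world.
belief = {"door_open": 0.5, "door_closed": 0.5}

# Transition model for an action "push": P(next state | current state).
transition = {
    "door_open":   {"door_open": 1.0, "door_closed": 0.0},
    "door_closed": {"door_open": 0.8, "door_closed": 0.2},
}

# Sensor model: P(observation "sees_open" | state).
sensor = {"door_open": 0.9, "door_closed": 0.1}

def predict(belief, transition):
    """Prediction: propagate the belief forward through the action model."""
    nxt = {s: 0.0 for s in belief}
    for s, p in belief.items():
        for s2, pt in transition[s].items():
            nxt[s2] += p * pt
    return nxt

def postdict(belief, likelihood):
    """Postdiction: apply Bayes' rule to infer which state explains
    the present observation."""
    unnorm = {s: belief[s] * likelihood[s] for s in belief}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

after_push = predict(belief, transition)   # belief after acting
explained = postdict(belief, sensor)       # state inferred from observing "sees_open"
```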
No FHS 8/95
METODER I ANVÄNDNING - MOT FÖRBÄTTRING AV SYSTEMUTVECKLING GENOM SITUATIONELL METODKUNSKAP OCH METODANALYS
For the development of computer-based systems, many different methods and models have existed for several decades. Method development has been ongoing from the late 1960s until today. In recent years, awareness of methods has increased, and they are used as aids for developing businesses and computer systems. Many different factors influence method use; these can be termed situational environmental factors. These factors are important to take into account when using methods. Various types of computer support exist to support method use. Tools for the later phases of the systems development process have existed for a long time, while tools for the earlier phases have become increasingly common in recent years.
Tools that support method use are called CASE tools. CASE is an abbreviation of Computer Aided Systems/Software Engineering. CASE tools that support the early phases are called UpperCASE, and tools that support later phases are called LowerCASE. An important part of a CASE tool is its method support, since this influences method use.
The thesis deals with how methods are used in systems development, what influences their use, and how the method affects the systems development work. It also presents a theoretical review of method analysis and how it can be exploited to improve method use. The results of the study show that various aspects influence method use, such as the development environment, the anchoring of the method in the organization, and various situational environmental factors. These environmental factors are more or less general and situational in character, meaning that they arise in different situations and should be considered when adapting a method to a situation. Method analysis can be used in a number of areas to improve method use. The report identifies the following areas: driving force in the organization, active method development (refinement of the method), competence development around methods, metamodels for the integration of CASE tools, and instruments for comparing methods.
No FHS 9/95
SYSTEMFÖRVALTNING I PRAKTIKEN - EN KVALITATIV STUDIE AVSEENDE CENTRALA BEGREPP, AKTIVITETER OCH ANSVARSROLLER
Interest in systems maintenance has increased in recent years, not least because a large share of total data processing costs is considered to go to maintaining existing systems. The purpose of this study is to investigate what systems maintenance actually involves. The study can be regarded as a basic business analysis of systems maintenance work, focusing on change management and its associated problems. Seven case studies have been carried out in Swedish organizations that perform systems maintenance. From these case studies, concepts and categories have been generated using a grounded theory approach. The report consists of an introductory part presenting the research questions, research method and theoretical premises, followed by the empirical material from the case studies. The results of the study show that systems maintenance today is conducted mainly from a data processing perspective; the business-oriented aspects are not sufficiently addressed. The report therefore presents an "is" definition and an "ought" definition of systems maintenance. The "is" definition expresses how systems maintenance is conducted today (in a data-processing-oriented way), while the "ought" definition expresses what systems maintenance should be from a business-oriented view. The activities of systems maintenance have been identified and are presented in the report. The boundary with adjacent activities is drawn by presenting a new life-cycle model for information systems work. A number of responsibility roles that are necessary in connection with systems maintenance have also been identified and are presented in the report.
TOWARDS A STRATEGY FOR SOFTWARE REQUIREMENTS SELECTION
The importance of clearly identifying the requirements for a software system is now widely recognized in the software engineering community. From the emerging field of requirements engineering, this thesis identifies a number of key areas. In particular, it shows that, to achieve customer satisfaction, it is essential to select a subset of all candidate requirements for actual implementation. This selection process must take into account both the value and the estimated cost of including any candidate requirement in the set to be implemented. At the moment this entire process is usually carried out in an informal manner, so there is a pressing need for techniques and a strategy to support it.
With the explicit aim of clearly identifying requirements, the use of quality function deployment (QFD) in an industrial project at Ericsson Radio Systems AB was studied and evaluated. It was found that QFD helps developers to focus more clearly on the customer's needs and to manage non-functional requirements.
To trade off maximization of value against minimization of cost when selecting requirements for implementation, a method was developed: the contribution-based method, which is based on the analytic hierarchy process (AHP). The contribution-based method was applied in two industrial projects and the results evaluated. Both studies indicate that the value and cost of candidate requirements can each vary by orders of magnitude. Thus, deciding which requirements to select for implementation is of paramount importance, and a primary determinant of customer satisfaction. The contribution-based method forms the basis of a selection strategy that will maximize customer satisfaction.
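As an illustration only (the actual contribution-based method derives its weights from AHP pairwise comparisons; the figures and the greedy value-per-cost heuristic below are invented assumptions), trading off value against cost when selecting requirements might be sketched as:

```python
# Candidate requirements with relative value and relative cost weights
# (in AHP these would come from pairwise comparisons; here they are invented).
requirements = {        # name: (relative value, relative cost)
    "R1": (0.40, 0.10),
    "R2": (0.25, 0.30),
    "R3": (0.20, 0.05),
    "R4": (0.15, 0.55),
}

budget = 0.50  # fraction of total cost we can afford to implement

# Rank candidates by value-per-cost, then select greedily within the budget.
ranked = sorted(requirements,
                key=lambda r: requirements[r][0] / requirements[r][1],
                reverse=True)
selected, spent = [], 0.0
for r in ranked:
    cost = requirements[r][1]
    if spent + cost <= budget:
        selected.append(r)
        spent += cost
# With these numbers, R1, R3 and R2 fit the budget; R4 is deferred.
```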
SCHEDULABILITY-DRIVEN PARTITIONING OF HETEROGENEOUS REAL-TIME SYSTEMS
During the development of a real-time system, the main goal is to find an implementation that satisfies the system's specified worst-case timing constraints. Often, the most cost-effective solution is a heterogeneous implementation, where some parts of the functionality are implemented in software, and the rest in hardware, using application-specific circuits. Hardware/software codesign allows the designer to describe the complete system homogeneously, and thereafter divide it into separate hardware and software parts. This thesis is a contribution to hardware/software partitioning of real-time systems. It proposes an automatic partitioning of a set of real-time tasks in order to meet their deadlines. A key issue when verifying timing constraints is the analysis of the task scheduler. Therefore, an extension of fixed-priority scheduling theory is proposed, which is suitable for heterogeneous implementations. It includes an optimal task priority assignment algorithm. The analysis uses information about the execution time of the tasks in different implementations, and a method for estimating these data is also proposed. The analysis results are used to guide the partitioning process, which is based on a branch-and-bound algorithm.
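The thesis's extension of the scheduling theory is not given in the abstract; as a generic illustration of the kind of fixed-priority schedulability test involved, the classical response-time iteration (with invented task parameters, not the thesis's heterogeneous model) looks like:

```python
import math

# Tasks as (C = worst-case execution time, T = period = deadline),
# listed highest priority first. The parameters are invented.
tasks = [
    (1, 4),
    (2, 6),
    (3, 12),
]

def response_time(i, tasks):
    """Iterate R = C_i + sum over higher-priority tasks j of ceil(R/T_j)*C_j
    until it converges (worst-case response time) or exceeds the deadline."""
    c_i, t_i = tasks[i]
    r = c_i
    while True:
        interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        r_new = c_i + interference
        if r_new == r:
            return r        # converged
        if r_new > t_i:
            return None     # deadline miss
        r = r_new

# The task set is schedulable if every task meets its deadline.
schedulable = all(response_time(i, tasks) is not None for i in range(len(tasks)))
```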
TOWARD COOPERATIVE ADVICE-GIVING SYSTEMS: THE EXPERT SYSTEMS EXPERIENCE
Expert systems have during the last fifteen years been successfully applied to a number of difficult problems in a variety of application domains. Still, the impact on the commercial market has been less than expected, and the predicted boom simply failed to occur. This thesis seeks to explain these failures in terms of a discrepancy between the tasks expert systems have been intended for and the kind of situations where they typically have been used. Our studies indicate that the established expert systems technology primarily focuses on providing expert-level solutions to comparatively well-defined problems, while most real-life applications confront a decision maker with much more ill-defined situations where the form of the argumentation rather than the explicit decision proposal is crucial. Based on several commercial case studies performed over a 10-year period, together with a review of relevant current research in decision-making theory, this thesis discusses the differences between use situations with respect to how well-defined the decision task is and what kind of support the users require. Based on this analysis, we show the need for a shift in research focus from autonomous problem solvers to cooperative advice-giving systems intended to support joint human-computer decision making. The requirements on techniques suitable to support this trend toward cooperative systems are discussed, and a tentative system architecture and knowledge representation for such systems are proposed. The thesis concludes with a research agenda for examining the costs and benefits of the suggested approach as a tool for cooperative advice-giving systems, and for determining the appropriateness of such systems for real-world application problems.
BILDER AV SMÅFÖRETAGARES EKONOMISTYRNING
The development of management control, and the current debate in the area, is largely concentrated on large companies. During a seminar series on the risks associated with starting new businesses, arranged by the Swedish National Board for Industrial and Technical Development (NUTEK), the question of financial control in small and newly started companies was raised.
Before this study it was therefore natural to ask how the theories and methods available today for management control work in small companies. The purpose of this study is to increase the understanding of how financial control is carried out in small companies. This is done by describing and analysing the financial control in a number of small companies.
The investigation comprises interviews with nine small-business managers. The central research questions are what goals the manager has, which financial tools are used, how decisions are made, and how the cooperation with external actors such as banks, auditors and consultants works.
The study shows that long-term goals are often stated in qualitative terms; more rarely are the monetary goals concretely formulated. The business is often based on the manager's practical know-how. The business idea may be either explicitly formulated or more tacit. The study further shows how complex decision making can be in the kind of business studied. In some cases a sequential and rational decision model can be traced; in other cases, irrational decision models describe the decision process. It is clear, however, that formal decision material as well as intuition and experience play an important role in decision making. The study also shows that many of the methods and tools described in the management control literature are used in the companies studied, although which tools are perceived as most important varies between the managers. In the study, the use of formal management control methods is interpreted as the outcome of a trade-off between benefits and costs in a situation where resources are very limited. It also emerges that managers have relatively few external contacts with whom the business is discussed; of these, the cooperation with the auditor is in many cases perceived as the most important.
Finally, it is argued that new methods for steering businesses toward financial goals need to be developed. Traditional management control methods may need to be complemented with methods in which the central result-creating factors of the business are identified and used for control, which may mean that non-monetary measures gain increased importance.
EFFICIENT MANAGEMENT OF OBJECT-ORIENTED QUERIES WITH LATE BINDING
To support new application areas for database systems such as mechanical engineering applications or office automation applications a powerful data model is required that supports the modelling of complex data, e.g. the object-oriented model.
The object-oriented model supports subtyping, inheritance, operator overloading and overriding. These are features to assist the programmer in managing the complexity of the data being modelled.
Another desirable feature of a powerful data model is the ability to use inverted functions in the query language, i.e. for an arbitrary function call fn(x)=y, retrieve the arguments x for a given result y.
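A hypothetical miniature of this idea (the data and names are invented; a real database system would invert the function through an index rather than a scan):

```python
# A stored function fn: person -> salary, represented by its extent.
salary = {"ada": 700, "bob": 500, "eva": 700}

def fn(x):
    """The forward function call fn(x) = y."""
    return salary[x]

def inverse_fn(y):
    """The inverted use: for fn(x) = y, retrieve all arguments x that
    yield the given result y. Here a linear scan; an optimizer would
    substitute an index lookup on the result values."""
    return sorted(x for x in salary if fn(x) == y)

inverse_fn(700)  # -> ['ada', 'eva']
```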
Optimization of database queries is important in a large database system since query optimization can reduce the execution cost dramatically. The optimization considered here is a cost-based global optimization where all operations are assigned a cost and a way of a priori estimating the number of objects in the result. To utilize available indexes the optimizer has full access to all operations used by the query, i.e. its implementation.
The object-oriented data modelling features lead to the requirement of having late bound functions in queries which require special query processing strategies to achieve good performance. This is so because late bound functions obstruct global optimization since the implementation of a late bound function cannot be accessed by the optimizer and available indexes remain hidden within the function body.
In this thesis the area of query processing is described and an approach to the management of late bound functions is presented which allows optimization of invertible late bound functions where available indexes are utilized even though the function is late bound. This ability provides a system with support for the modelling of complex relations and efficient execution of queries over such complex relations.
AN APPROACH TO AUTOMATIC CONSTRUCTION OF GRAPHICAL USER INTERFACES FOR APPLICATIONS IN SCIENTIFIC COMPUTING
Applications in scientific computing perform input and output of large amounts of data of complex structure. Since it is difficult to interpret these data in textual form, a graphical user interface (GUI) for data editing, browsing and visualization is required. The availability of a convenient graphical user interface plays a critical role in the use of scientific computation systems.
Most approaches to generating user interfaces provide some interactive layout facility together with a specialized language for describing user interaction. Realistic automated generation approaches are largely lacking, especially for applications in the area of scientific computing.
This thesis presents two approaches to automatically generating user interfaces from specifications. The first is a semi-automatic approach, that uses information from object-oriented mathematical models, together with a set of predefined elementary types and manually supplied layout and grouping information. This system is currently in industrial use for generating user interfaces that include forms, pull-down menus and pop-up windows. The current primary application is bearing simulation, which typically accepts several thousand input parameters and produces gigabytes of output data. A serious disadvantage is that some manual changes need to be made after each update of the high-level model.
The second approach avoids most of the limitations of the first generation graphical user interface generating system. We have designed a tool, PDGen (Persistence and Display Generator) that automatically creates a graphical user interface from the declarations of data structures used in the application (e.g., C++ class declarations). This largely eliminates the manual update problem. Structuring and grouping information is automatically extracted from the inheritance and part-of relations in the object-oriented model and transferred to PDGen which creates the user interface. The attributes of the generated graphical user interface can be altered in various ways if necessary.
This is one of very few existing practical systems for automatically generating user interfaces from type declarations and related object-oriented structure information.
MULTIDATABASE INTEGRATION USING POLYMORPHIC QUERIES AND VIEWS
Modern organizations need tools that support coordinated access to data stored in distributed, heterogeneous, autonomous data repositories.
Database systems have proven highly successful in managing information. In the area of information integration multidatabase systems have been proposed as a solution to the integration problem.
A multidatabase system is a system that allows users to access several different autonomous information sources. These sources may be of a widely varying nature: they can use different data models or query languages. A multidatabase system should hide these differences and provide a homogeneous interface to its users by means of multidatabase views.
Multidatabase views require the query language to be extended with multidatabase queries, i.e. queries spanning multiple information sources allowing information from the different sources to be combined and automatically processed by the system.
In this thesis we present the integration problem and study it in an object-oriented setting. Related work in the area of multidatabase systems and object views is reviewed. We show how multidatabase queries and object views can be used to attack the integration problem. An implementation strategy is described, presenting the main difficulties encountered during our work. A presentation of a multidatabase system architecture is also given.
No FiF-a 1/96
AFFÄRSPROCESSINRIKTAD FÖRÄNDRINGSANALYS - UTVECKLING OCH TILLÄMPNING AV SYNSÄTT OCH METOD
Business process thinking is today a very popular perspective when businesses are evaluated and developed. The definition of the concept of business process varies among authors, and there is uncertainty about how it should be interpreted. In the study underlying this report, the concept of business process is used for the collection of activities performed in connection with business transactions, i.e. the way in which the organization does business. The business process perspective implies a focus on the customer and a clear link to the organization's business idea. It is therefore important that this perspective is adopted when business development is carried out. A change analysis creates the preconditions for business development, in which the development of information systems is an essential aspect. Carrying out a change analysis means analysing the business in order to generate a number of change needs, for which various change measures are then proposed and evaluated. The proposed change measures are to serve as a basis for decisions on continued business development. In the study, change analysis according to the SIMM method (FA/SIMM) has been further developed into business-process-oriented change analysis (AFA) by taking into account the consequences of business process thinking. The process of (further) developing a method means grounding the method internally as well as in theory and in empirical data. This report presents the results of an action research project in which AFA has been both generated and tested empirically; the method was applied in an extensive change analysis at a medium-sized steel company. The method forms a congruent whole and its categories have been related to other theories and concepts. The method development has focused especially on business analysis and process modelling. The result of the study is a method concept, further developed with FA/SIMM as the base method, with respect to perspective, way of working, concepts and notation.
HIGH-LEVEL SYNTHESIS UNDER LOCAL TIMING CONSTRAINTS
High-level synthesis deals with the problem of transforming a behavioral description of a design into a register transfer level implementation. This enables the specification of designs at a high level of abstraction. However, designs with timing requirements cannot be implemented in this way, unless there is a way to include the timing requirements in the behavioral specification. Local timing constraints (LTCs) enable the designer to specify the time between the execution of operations and thus more closely model the external behavior of a design. This thesis deals with the modelling of LTCs and the process of high-level synthesis under LTCs.
Since high-level synthesis starts from behavioral specifications an approach to transfer LTCs from behavioral VHDL to an intermediate design representation has been adopted. In the intermediate design representation the LTCs are modelled in such a way that they can be easily analyzed and interpreted. Before the high-level synthesis process is started a consistency check is carried out to discover contradictions between the specified LTCs.
If the LTCs are consistent, a preliminary scheduling of the operations can be performed and the clock period decided. For that purpose two different approaches, based on unicycle and multicycle scheduling, have been developed. The unicycle scheduling approach assumes that the longest delay between two registers equals the clock period. Design transformations are used to change the number of serialized operations between the registers and, thus, change the clock period to satisfy the LTCs. The multicycle scheduling approach allows the longest delay between two registers to be several clock periods long. Thus, the goal is to find a reasonable clock period and a preliminary schedule that satisfy the LTCs. Furthermore, the multicycle scheduling approach makes trade-offs between speed and cost (silicon area) when deciding which modules to use to implement the operations. Both genetic algorithms and tabu search are used to solve the combinatorial optimization problem that arises during the multicycle scheduling.
If the preliminary schedule fulfills all the LTCs then module allocation and binding is performed. The goal is to perform all the operations while using as few functional modules as possible. This is achieved by module sharing. Experimental results show that the high-level synthesis performed by the proposed methods produces efficient designs.
FÖRUTSÄTTNINGAR OCH BEGRÄNSNINGAR FÖR ARBETE PÅ DISTANS - ERFARENHETER FRÅN FYRA SVENSKA FÖRETAG
Working at a distance is not a new phenomenon, but today's information technology has made this form of work available to more occupational categories than before, and interest in telework is now considerable. This investigation concerns telework in companies where several people in the same group work at a distance. In these groups the employees work at a distance both from their employer and from each other. The aim has been to identify, from the cases studied, possible preconditions and limitations for this form of work, with a particular focus on work tasks and coordination.
The empirical part of the investigation consists of studies in four companies where telework is practised. The preconditions differ somewhat between the cases: in two of them telework is full-time and in two it is part-time, and the work tasks and other conditions vary. Data were collected through qualitative interviews with teleworkers and their managers. The empirical material is related to theories of coordination and communication, and to results from a number of other studies of telework. A comparison between the cases studied is also made.
The investigation finds that the need for coordination and communication in telework is governed primarily by the tasks performed; many of the results presented therefore relate to the type of work tasks carried out. Tasks particularly suited to telework are independent tasks, tasks involving many external contacts, and tasks requiring a high degree of concentration. The coordination of telework is facilitated by a strong corporate culture and clear result variables, and routines are also required to promote the exchange of experience between colleagues. The results indicate that the need for formalized routines and control processes may increase with telework. Furthermore, the motivation of employees and managers is fundamental for telework to function well. The investigation shows that information technology can support telework in several ways, but that the vulnerability of the organization increases. An interesting question raised by the investigation is what consequences telework will have in the long term.
QUALITY FUNCTIONS FOR REQUIREMENTS ENGINEERING METHODS
The objective of this thesis is to establish which aspects of Requirements Engineering (RE) methods are considered important by their users. The thesis studies an alternative RE method (Action Design), presents evaluation results, and establishes general quality characteristics for RE methods. It is also an attempt to highlight the need to reflect on quality aspects of the use and development of methods within the field of RE.
The research is based on a grounded theory perspective where the studies together form the final results. Data were collected through interviews and focus groups. The analysis of data was done by using (1) Action Design (AD) methodology as an instrument to evaluate AD itself, (2) Quality Function Deployment to structure and rank quality characteristics, and (3) phenomenological analyses.
The results show the importance of considering social and organizational issues, user participation, project management and method customizing in the process of RE.
Further, the results suggest a need for support that integrates different methods, or parts of methods, into a suitable collection of instruments tailored to a specific project. It is also found that RE should be considered not only in the early parts of the software development cycle, but as an integrated part of the whole cycle.
The conclusion is that RE methods, besides being integrated, need to be approached differently in the future. The integrated view of RE (as a part of the entire development process) could also be a way to solve some of the current problems that are discussed in relation to requirements in software development.
THE SIMULATION OF ROLLING BEARING DYNAMICS ON PARALLEL COMPUTERS
In this thesis we consider the simulation of rolling bearing dynamics on parallel computers. Highly accurate rolling bearing models currently require unacceptably long computation times, in many cases several weeks, using sequential computers.
We present two novel methods on how to utilize parallel computers for the simulation of rolling bearing dynamics. Both approaches give a major improvement in elapsed computation time.
We also show that, if knowledge about the application domain is used, the solution of stiff ordinary differential equations can successfully be performed on parallel computers. Potential problems with scalability of the Newton iteration method in the numerical solver are addressed and solved.
This thesis consists of five papers. One paper deals with more general approaches to solving ordinary differential equations on parallel computers. The other four papers are more focused on specific solution methods, including applications to rolling bearings.
EXPLORATION OF POLYGONAL ENVIRONMENTS
Several robotic problems involve the systematic traversal of environments, commonly called exploration. This thesis presents a strategy for exploration of finite polygonal environments, assuming a point robot that has 1) no positional uncertainty and 2) an ideal range sensor that measures range in N uniformly distributed directions in the plane. The range data vector, obtained from the range sensor, corresponds to a sampled version of the visibility polygon. Edges of the visibility polygon that do not correspond to environmental edges are called jump edges and the exploration strategy is based on the fact that jump edges indicate directions of possibly unexplored regions.
This thesis describes a) the conditions under which it is possible to detect environmental edges in the range data, b) how the exploration strategy can be used in an exploration algorithm, and c) the conditions under which the exploration algorithm is guaranteed to terminate within a finite number of measurements.
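The idea behind jump-edge detection can be sketched in miniature (the scan data and the threshold are invented; a real sensor model would account for angular resolution and noise): a large discontinuity between consecutive range readings suggests a jump edge, and hence a direction of possibly unexplored space.

```python
# N range readings in uniformly distributed directions (a sampled
# visibility polygon); the values here are invented.
ranges = [2.0, 2.1, 2.1, 5.0, 5.1, 5.0, 2.2, 2.1]
THRESHOLD = 1.0  # minimum discontinuity treated as a jump edge

def jump_edges(ranges, threshold):
    """Return index pairs (i, i+1 mod N) where consecutive readings
    differ by more than the threshold, i.e. candidate jump edges."""
    n = len(ranges)
    return [(i, (i + 1) % n)
            for i in range(n)
            if abs(ranges[(i + 1) % n] - ranges[i]) > threshold]

jump_edges(ranges, THRESHOLD)  # -> [(2, 3), (5, 6)]
```

Each returned pair points toward a region the robot has not yet seen, which is exactly what the exploration strategy uses to pick its next measurement position.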
COMPILATION OF MATHEMATICAL MODELS TO PARALLEL CODE
Generating parallel code from high-level mathematical models is in its general form an intractable problem. Rather than trying to solve this problem, a more realistic approach is to solve specific problem instances for limited domains.
In this thesis, we focus our efforts on problems where the main computation is to solve ordinary differential equation systems. When solving such a system of equations, the major part of the computing time is spent in application specific code, rather than in the serial solver kernel. By applying domain specific knowledge, we can generate efficient parallel code for numerical solution.
We investigate automatic parallelisation of the computation of ordinary differential equation systems at three different levels of granularity: equation system level, equation level, and clustered task level. At the clustered task level we employ existing scheduling and clustering algorithms to partition and distribute the computation.
Moreover, an interface is provided to express explicit parallelism through annotations in the mathematical model.
This work is performed in the context of ObjectMath, a programming environment and modelling language that supports classes of equation objects, inheritance of equations, and solving systems of equations. The environment is designed to handle realistic problems.
SIMULATION AND DATA COLLECTION IN BATTLE TRAINING
To achieve realism in force-on-force battle training, it is important that the major factors of the battlefield are simulated in a realistic way. We describe an architecture for battle training and evaluation which provides a framework for integrating multiple sensors, simulators and registration equipment together with tools for analysis and presentation. This architecture is the basis for the MIND system, which is used in realistic battle training and for advanced after-action review. MIND stores the information recorded in a database which is the basis for subsequent analysis of training methods and improvement of tactics and military equipment. Data collected during battle training can support both modelling of Distributed Interactive Simulation (DIS) objects and the presented Time-delayed DIS (TDIS) approach. TDIS facilitates the training of staffs and commanders on high levels under realistic circumstances without the need for trainees and trainers on the lower unit levels. Systematic evaluation and assessment of the MIND system and its influence on realistic battle training can provide information about how to maximise the effect of the conducted battle training and how to best support other applications that use information from the system.
SOFTWARE QUALITY ENGINEERING BY EARLY IDENTIFICATION OF FAULT-PRONE MODULES
Quality improvements in terms of lower costs, shorter development times and increased reliability are not only important to most organisations but also demanded by customers. To enable management to identify problems early, and subsequently to support planning and scheduling of development processes, methods for identifying fault-prone modules are desirable. This thesis demonstrates how software metrics can form the basis for reducing development costs by identifying, at the completion of design, the most fault-prone software modules. Based on empirical data, i.e. design metrics and fault data collected from two successive releases of switching systems developed at Ericsson Telecom AB, models for predicting the most fault-prone modules were successfully developed. Apart from reporting this successful analysis, the thesis outlines a quality framework for evaluating quality efforts, provides a guide for quantitative studies, introduces a new approach to evaluating the accuracy of prediction models (Alberg diagrams), suggests a strategy for how variables can be combined, and evaluates and improves strategies suggested by others through replicated analyses.
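The idea behind an Alberg diagram can be sketched in a few lines: order the modules by the predictor and accumulate the share of total faults captured as more modules are inspected. The metric values and fault counts below are invented for illustration; the thesis's actual models are built from Ericsson design metrics:

```python
def alberg_curve(metric, faults):
    """Cumulative share of faults captured when modules are inspected
    in decreasing order of a predictor metric (Alberg-diagram idea)."""
    order = sorted(range(len(metric)), key=lambda i: -metric[i])
    total = sum(faults)
    cum, curve = 0, []
    for i in order:
        cum += faults[i]
        curve.append(cum / total)
    return curve

# Hypothetical design-metric scores and fault counts for four modules
curve = alberg_curve([10, 3, 7, 1], [8, 1, 5, 0])
# curve[0] is the fault share found in the single highest-ranked module
```

A good prediction model yields a curve that rises steeply, i.e. most faults sit in the few top-ranked modules.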
COMMENTING SYSTEMS AS DESIGN SUPPORT - A WIZARD-OF-OZ STUDY
User-interface design is an activity with high knowledge requirements, as evidenced by scientific studies, professional practice, and the amounts of textbooks and courses on the subject. A concrete example of the professional need for design knowledge is the increasing tendency of customers in industrial systems development to require style-guide compliance. The use of knowledge-based tools, capable of generating comments on an evolving design, is seen as a promising approach to providing user-interface designers with some of the knowledge they need in their work. However, there is a lack of empirical explorations of the idea.
We have conducted a Wizard-of-Oz study in which the usefulness of a commenting tool integrated in a design environment was investigated. The usefulness measure was based on the user's perception of the tool. In addition, the appropriateness of different commenting strategies was studied: presentation form (declarative or imperative) and delivery timing (active or passive).
The results show that a commenting tool is seen as disturbing but useful. Comparisons of different strategies show that comments from an active tool risk being overlooked, and that comments pointing out ways of overcoming identified design problems are the easiest to understand. The results and conclusions are used to provide guidance for future research as well as tool development.
MANAGERS' USE OF COMMUNICATION TECHNOLOGY
In recent years, the business environment of most companies has changed rapidly, while information and communication technology has undergone equally rapid development. The changed business environment can be expected to alter the work situation of, for example, managers. Since earlier studies have shown that communication is a very significant element of managerial work, the new technology could be used to meet this change in the work situation. Against this background, the purpose of this study is to create an understanding of managers' attitudes towards and use of communication technology.
The study was mainly carried out through interviews with managers in two companies. In total, sixteen managers and seven of their co-workers were interviewed. The managers interviewed experience a work situation under severe time pressure, with fragmented working days, heavy communication demands and little time for undisturbed work. Furthermore, the respondents express, for a variety of reasons, a strong preference for face-to-face communication. This preference is partly a consequence of tasks sometimes being so complex that they require a personal meeting to be handled effectively. Another very important aspect is the symbolic factors: by being personally present, the manager signals that a certain issue, unit, customer, etc. is important to the organisation.
As for the use of communication technology, the study suggests that modern technology is used to a rather small extent to reduce travel and ease the workload of the studied category of people. This seems to be because, for social, symbolic and other reasons, they want and need to meet in person. These demands and wishes for face-to-face contact appear to be so strong that not even very sophisticated technology can replace that type of contact. Instead, technological development seems rather to have increased the amount of communication, in that communication via, for example, mobile phone or electronic mail has to some extent been added to the earlier communication.
DATA MANAGEMENT IN CONTROL APPLICATIONS - A PROPOSAL BASED ON ACTIVE DATABASE SYSTEMS
Active database management systems can provide general solutions to data management problems in control applications. This thesis describes how traditional control algorithms and high-level operations in a control system can be combined by using an embedded active object-relational database management system as middleware. The embedded database stores information about the controlled environment and machinery. The control algorithms execute on a separate real-time server. Active rules in the database are used to interface the model of the environment, as stored in the database, and the control algorithms. To improve information access, the control system is tightly integrated with the database query processor.
A control-application language specifies high-level manufacturing operations, which are compiled into queries and active rules in the database. The thesis describes how the generated active rules can be organized to solve problems with rule interaction, rule cycles, and cascading rule triggering. Efficiency issues are addressed. The thesis also presents practical experience from building the integrated control system and the requirements such systems put on an embedded ADBMS.
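A minimal sketch of the event-condition-action style of active rules discussed above, with a recursion-depth guard standing in for the thesis's treatment of rule cycles and cascading triggering. The rule set and state model are hypothetical, not the actual control-application language:

```python
class Rule:
    """Event-condition-action rule."""
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

class ActiveDB:
    """Tiny ECA engine; the depth guard stands in for real cycle control."""
    def __init__(self, rules, max_depth=10):
        self.rules, self.max_depth, self.state = rules, max_depth, {}

    def signal(self, event, depth=0):
        if depth >= self.max_depth:
            raise RuntimeError("possible rule cycle")
        for r in self.rules:
            if r.event == event and r.condition(self.state):
                for next_event in r.action(self.state):  # may cascade
                    self.signal(next_event, depth + 1)

def start_machine(state):
    state["machine"] = "busy"
    return ["machine_started"]   # raised events cascade to other rules

# Hypothetical rules: a part arrival starts an idle machine; the
# machine_started event triggers no further rules here.
rules = [Rule("part_arrived", lambda s: s.get("machine") != "busy",
              start_machine),
         Rule("machine_started", lambda s: True, lambda s: [])]
db = ActiveDB(rules)
db.signal("part_arrived")
```

In a real ADBMS the condition would be a database query and the action a transaction; the cascade structure is the part the generated rules must keep under control.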
A DEFAULT EXTENSION TO DESCRIPTION LOGICS AND ITS APPLICATIONS
This thesis discusses how to extend a family of knowledge representation formalisms known as description logics with default rules. Description logics are tailored to express knowledge in problem domains of a hierarchical or taxonomical nature, that is, domains where the knowledge is easily expressed in terms of concepts, objects and relations. The proposed extension makes it possible to express "rules of thumb" in a restricted form of Reiter's default rules. We suggest that defaults of this form improve both the representational and inferential power of description logics. The default rules are used to compute the preferential instance relation, which formally expresses when it is plausible that an object is an instance of a concept. We demonstrate the usefulness of the extension by describing two applications. The first solves a configuration problem where the goal is to find a suitable wine for a given meal, with defaults used as wine recommendations. The second is a document retrieval application where default rules are used to enhance the search of WWW documents. The applications are based on an extended version of the knowledge-based system CLASSIC.
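A toy propositional rendering of the restricted normal defaults described above ("if the precondition is believed and the conclusion is consistent with current beliefs, conclude it"). Real description logics work over concepts and roles rather than atoms, so this only conveys the flavour; the wine example and the disjointness constraint are invented:

```python
def apply_defaults(facts, defaults, disjoint):
    """Apply normal defaults 'pre : concl / concl' to a fixpoint:
    if pre is believed and concl clashes with no current belief,
    add concl (a drastic simplification of Reiter's logic)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, concl in defaults:
            clash = any(frozenset({concl, f}) in disjoint for f in facts)
            if pre in facts and concl not in facts and not clash:
                facts.add(concl)
                changed = True
    return facts

# Invented wine-recommendation flavour: fish dishes default to white
# wine, but white and red wine are declared disjoint.
defaults = [("fish_dish", "white_wine")]
disjoint = {frozenset({"white_wine", "red_wine"})}
beliefs = apply_defaults({"fish_dish"}, defaults, disjoint)
```

The consistency check is what makes these rules defeasible: if a red wine is already believed, the default is simply blocked rather than causing a contradiction.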
MANAGEMENT CONTROL AND ORGANISATIONAL PASSION - AN INTERACTIVE PERSPECTIVE
This case study deals with management control in health care, where the setting is a hospital ward and its staff. The study focuses on the interaction between the budget and the activities and people working at the local level of the organisation. The budget can be regarded as the mechanism that dominates management control on the ward.
In this study, management control is viewed from an interactive perspective. By this I mean that control must be understood in a practical and concrete context where technical and social aspects are integrated. This is done by letting four concepts highlight specific contextual factors that are essential for understanding management control on the ward. The concepts are professionalism, language, measurability and ethics.
The thesis builds a theoretical frame of reference around the concepts of control and budgeting. By describing theories from different perspectives, the diversity contained in the concepts can be brought forward. At the same time, the practical reality of the ward is of great interest. The purpose is above all to show the complexity associated with the practical use of theoretical models under specific contextual conditions. By this I mean, among other things, that the study is conducted in an organisation where passion and the sense of responsibility for the task, that is, the care of the patient, in most cases take priority over economic and administrative criteria.
Three different data collection methods have been used to describe the individual case: collection of documents, participant observation and interviews. Through these methods, the role of the budget on the ward is described from the formal perspective, the observer's perspective and the interviewees' perspective. The choice of methods is an attempt at a holistic approach.
The conclusions of the study are presented in a model where the organisational forces are systematised into the categories of administrative, social and individual-based control. The model further emphasises that an understanding of control in an organisation is achieved through an understanding of each category and of the links between them.
A VALUE-BASED INDEXING TECHNIQUE FOR TIME SEQUENCES
A time sequence is a discrete sequence of values, e.g. temperature measurements, varying over time. Conventional indexes for time sequences are built on the time domain and cannot handle inverse queries on time sequences under some interpolation assumption (i.e. computing the times when the values satisfy some condition). To process an inverse query, the entire time sequence has to be scanned.
This thesis presents a dynamic indexing technique, termed the IP-index (Interpolation-index), on the value domain for large time sequences. This index can be implemented using regular ordered indexing techniques such as B-trees.
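The value-domain idea can be sketched as follows: register each segment of the sequence under the range of values it spans, so that an inverse query inspects only the segments that can contain the query value and interpolates within them. This toy version uses sorted lists and bisect in place of a B-tree, and ignores edge cases such as query values coinciding exactly with sample values:

```python
import bisect

class IPIndex:
    """Value-domain index for inverse queries on a time sequence
    (a toy rendering of the IP-index idea)."""
    def __init__(self, times, values):
        self.times, self.values = times, values
        self.bounds = sorted(set(values))
        # anchors[j] holds the segments whose value range covers the
        # elementary interval [bounds[j], bounds[j + 1])
        self.anchors = [set() for _ in self.bounds]
        for i in range(len(values) - 1):
            lo, hi = sorted((values[i], values[i + 1]))
            a = bisect.bisect_left(self.bounds, lo)
            b = bisect.bisect_left(self.bounds, hi)
            for j in range(a, b):
                self.anchors[j].add(i)

    def inverse(self, v):
        """Times where the linearly interpolated sequence equals v."""
        j = bisect.bisect_right(self.bounds, v) - 1
        if not 0 <= j < len(self.anchors):
            return []
        out = []
        for i in sorted(self.anchors[j]):
            t0, t1 = self.times[i], self.times[i + 1]
            y0, y1 = self.values[i], self.values[i + 1]
            out.append(t0 + (v - y0) / (y1 - y0) * (t1 - t0))
        return out

# Hypothetical sequence: values 0, 10, 5, 15 sampled at times 0..3
idx = IPIndex([0, 1, 2, 3], [0, 10, 5, 15])
crossings = idx.inverse(7)   # segments 0, 1 and 2 all cross value 7
```

The query cost depends on how many segments actually span the value, not on the length of the sequence, which is the point of indexing on the value domain.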
Performance measurements show that this index dramatically improves the query processing time of inverse queries compared to linear scanning. For periodic time sequences that have a limited range and precision on their value domain (most time sequences have this property), the IP-index has an upper bound for insertion time and search time.
The IP-index is useful in various applications such as scientific data analysis or medical symptom analysis. In this thesis we show how this index can be applied in the aeroplane navigation problem and dramatically improve the real-time performance.
C3 FIRE - A MICROWORLD SUPPORTING EMERGENCY MANAGEMENT TRAINING
The objective of this work has been to study how to support emergency management training using computer simulations. The work has focused on team decision making and the training of situation assessment in a tactical reasoning process. The underlying assumption is that computer simulations in decision-making training systems should contain pedagogical strategies. Our investigations started with empirical studies of an existing system for training infantry battalion staffs. In order to promote controlled studies in the area, we developed a microworld simulation system, C3Fire. By using a microworld, we can model important characteristics of the real world and create a small and well-controlled simulation system that retains these characteristics. With a microworld training system, we can create cognitive tasks similar to those people normally encounter in real-life systems. Our experimental use of C3Fire focuses on the problem of generating an information flow that will support training in situation assessment. Generated messages should contain information about the simulated world that will build up the trainees' mental pictures of the encountered situations. The behaviour of the C3Fire microworld was examined in an experimental study with 15 groups of subjects. The aim of the system evaluation of C3Fire was mainly to study the information flow from the computer simulation through the training organisation, involving role-playing training assistants, to the trained staff. The training domain, which is the co-ordination of forest fire fighting units, has been chosen to demonstrate principles rather than for its own sake.
A ROBUST TEXT PROCESSING TECHNIQUE APPLIED TO LEXICAL ERROR RECOVERY
This thesis addresses automatic lexical error recovery and tokenization of corrupt text input. We propose a technique that can automatically correct misspellings, segmentation errors and real-word errors in a unified framework that uses both a model of language production and a model of the typing behavior, and which makes tokenization part of the recovery process.
The typing process is modeled as a noisy channel where Hidden Markov Models are used to model the channel characteristics. Weak statistical language models are used to predict what sentences are likely to be transmitted through the channel. These components are held together in the Token Passing framework which provides the desired tight coupling between orthographic pattern matching and linguistic expectation.
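The noisy-channel decision rule, choosing the word w that maximizes P(w) · P(observed | w), can be sketched with a crude channel model in which likelihood decays with edit distance. This stands in for CTR's HMM channel model and Token Passing machinery, which are considerably richer; the lexicon and decay factor are invented:

```python
def edit_distance(a, b):
    """Levenshtein distance by dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def correct(token, lexicon):
    """Noisy-channel scoring: P(word) times a channel likelihood
    that decays geometrically with edit distance (invented factor)."""
    return max(lexicon,
               key=lambda w: lexicon[w] * 0.1 ** edit_distance(token, w))

# Hypothetical unigram lexicon with relative frequencies
lexicon = {"the": 0.6, "then": 0.2, "than": 0.2}
best = correct("hte", lexicon)
```

Token Passing extends this word-at-a-time picture to whole sentences, letting segmentation hypotheses compete in the same search.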
The system, CTR (Connected Text Recognition), has been tested on two corpora derived from two different applications, a natural language dialogue system and a transcription typing scenario. Experiments show that CTR can automatically correct a considerable portion of the errors in the test sets without introducing too much noise. The segmentation error correction rate is virtually faultless.
TOWARD A GROUNDED THEORY FOR SUPPORT OF COMMAND AND CONTROL IN MILITARY COALITIONS
Command and control in military operations constitutes a complex web of interrelated cognitive activities and information processes. Today, the practice of command and control is affected by simultaneous social and technological changes. Intensified research is justified in order to understand the nature of the practice and the changes, and to develop relevant theories for future evolution.
The purpose of the study is to generate theories providing new insights into traditional practices as the basis for continued research. In particular, we have studied coalition command and control during the UN operation in former Yugoslavia from the perspective of the participating Swedish forces. We conduct a qualitative analysis of interview data and apply a grounded theory approach within the paradigm of information systems research.
We have found that constraint management, for instance physical, communicative, and social constraints, dominates the command and control activities. We describe the intense communication and personal interaction and clarify the interaction between informal procedures and the traditional formal structures and rules when constraints appear. The evolving grounded theory is a recognition of the non-orderly components within command and control.
Based on the results of this study we suggest that support for constraint management, within information systems research, becomes the common framework for continued research on support for military command and control. This perspective affects the design of information systems, modelling efforts, and ultimately the doctrines for command and control. Hopefully our result will encourage cooperation between military practitioners, systems designers and researchers, and the development of adequate tools and techniques for the management of constraints and change in the military and elsewhere.
A SCALABLE DATA STRUCTURE FOR A PARALLEL DATA SERVER
Jonas S Karlsson
Modern and future applications, such as those in the telecommunications industry and real-time systems, store and manage very large amounts of information. This information needs to be accessed and searched with high performance, and it must have high availability. Databases are traditionally used for managing high volumes of data, though currently mostly administrative systems use database technology. Newer applications also need the facilities of database support, but simply applying traditional database technology to them is not enough: the high-performance demands and the required ability to scale to larger data sets are generally not met by current database systems.
Data Servers are dedicated computers which manage the internal data in a database system (DBMS). Modern powerful workstations and parallel computers are used for this purpose. The idea is that an Application Server handles user input and data display, parses the queries, and sends the parsed query to the data server that executes it. A data server, using a dedicated machine, can be better tuned in memory management than a general purpose computer.
Network multi-computers, such as clusters of workstations or parallel computers, provide a solution that is not limited to the capacity of a single computer. This provides the means for building a data server of potentially any size, and gives rise to the interesting idea of letting the system grow over time by adding more components to meet increased storage and processing demands. This creates the need for scalable solutions that allow data to be reorganized smoothly, unnoticed by the clients, i.e. the application servers accessing the data server.
In this thesis we identify the importance of appropriate data structures for parallel data servers. We focus on Scalable Distributed Data Structures (SDDSs) for this purpose, in particular LH* and our new data structure LH*LH. An overview is given of related work and of systems that have traditionally motivated the need for such data structures. We begin by discussing high-performance databases, which leads us to database machines and parallel data servers. We sketch an architecture for an LH*LH-based file storage to be used for a parallel data server. We also show performance measurements for LH*LH and present its algorithm in detail. The testbed, the Parsytec switched multi-computer, is described along with experience acquired during the implementation process.
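LH* generalises Litwin's linear hashing to a distributed setting. The core addressing computation, which a client evaluates using its (possibly outdated) view of the file level and split pointer, can be sketched as follows; N (the initial bucket count) and the example parameters are illustrative:

```python
def lh_address(key, level, split, n_base=1):
    """Linear-hashing bucket address: h_i(k) = k mod (N * 2**i).
    Buckets before the split pointer have already been split and
    therefore use the next-level hash function."""
    a = key % (n_base * 2 ** level)
    if a < split:
        a = key % (n_base * 2 ** (level + 1))
    return a

# With level 2 and split pointer 1, bucket 0 has been split into
# buckets 0 and 4, so keys hashing to 0 are rehashed at level 3.
addr = lh_address(4, level=2, split=1)   # key 4 now lives in bucket 4
```

In LH* proper, a server receiving a misdirected request forwards it and sends the client an image-adjustment message; that forwarding protocol is beyond this sketch.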
VIDEOCONFERENCING TECHNOLOGY IN DIFFERENT BUSINESS SITUATIONS - OPPORTUNITIES AND OBSTACLES
The use of information technology (IT) can create independence of place and time. It is no longer necessary to be in a particular location to work together with other people. Computer support in the form of groupware and videoconferencing systems makes it possible to be anywhere and still stay in close contact with one's co-workers. This report studies videoconferencing technology in different business situations. The purpose has been to identify opportunities and obstacles for its introduction and use.
Seven case studies have been carried out in companies in Jönköping county, all of which introduced videoconferencing technology during the period. The companies were selected among small and medium-sized enterprises that have used the technology mainly for product development, that is, the video equipment has served as a working tool for cooperation in geographically separated groups. Data were collected through qualitative interviews with people in the seven companies who have experience of the technology. The empirical material was then analysed using a modified form of grounded theory.
The study finds that interest in using video meetings as a complement to face-to-face meetings is very great. The motives are shorter lead times and increased customer satisfaction through the higher quality of customer contacts gained from more frequent contact. Video meetings cannot, however, replace face-to-face meetings. Three types of meetings have been identified:
1. Personal contact (building social relationships)
3. Physical contact (one must turn the product over and feel it)
Video meetings are not suitable for types 1 and 3.
The study also shows that there are difficulties associated with introducing videoconferencing technology. Getting started requires considerable energy and motivation. The number of systems sold has not yet reached "critical mass". Despite extensive standardisation work, communication between systems of different brands still cannot be taken for granted, and the technology is in some cases experienced as unstable. The technology is still so new to these companies that they proceed by trial and error. No cases of planned change of work processes could be identified; the changes that have occurred have been ad hoc. Once the equipment is in place, however, many good ideas emerge about applications that could be used if there were more counterparts to communicate with. Video meetings are not a compelling technology: under stress, people easily fall back into old behaviour, that is, using the telephone or fax.
CREATING A COMPANY-ADAPTED SYSTEMS DEVELOPMENT MODEL - THROUGH RECONSTRUCTION, EVALUATION AND FURTHER DEVELOPMENT IN T50 COMPANIES WITHIN ABB
A systems developer works with various aids in business development and systems development. Within ABB, the T50 programme is often used to develop the business into a so-called T50 company. In some of these T50 companies, systems developers have used the Pegasus model for systems development. The Pegasus model was developed within the Pegasus project, one of the largest systems development projects carried out within ABB. This study is about how to create a company-adapted systems development model, where company-adapted means that it is based on the T50 programme and the Pegasus model. The study concerns both the creation of a systems development model and the resulting model itself, that is, both process and product. The process comprises the steps of reconstruction with modelling, evaluation and further development. The result of the process is a product: the company-adapted systems development model created in the study. This systems development model is to support the development of both the business and an information system in a T50 company.
This report presents the results of a research and development project carried out together with ABB Infosystems. The study was conducted at two T50 companies within ABB: ABB Control and ABB Infosystems in Västerås.
A DECISION-MECHANISM FOR REACTIVE AND COORDINATED AGENTS
In this thesis we present preliminary results from the development of LIBRA (LInköping Behavior Representation for Agents). LIBRA is a rule-based system for specifying the behavior of automated agents that combine reactivity to an uncertain and rapidly changing environment with coordination. A central requirement is that the behavior specification should be made by users who are not computer and AI specialists. Two application domains are considered: air-combat simulation and a simulated soccer domain from the perspective of the RoboCup competition. The behavior of an agent is specified in terms of prioritized production rules organized in a decision tree. Coordinated behaviors are encoded in the decision trees of the individual agents. The agents initiate tactics depending on the situation and recognize the tactics that other team members have initiated in order to take their part in them. What links individual behavior descriptions together are explicit communication and common means of describing the situation the agents find themselves in.
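The prioritized-production-rule idea can be sketched as first-match selection over rules ordered by priority. The soccer-flavoured rules below are invented for illustration; LIBRA's actual decision trees and coordination machinery are richer:

```python
def decide(rules, situation):
    """Return the action of the highest-priority production rule whose
    condition holds in the situation (first match wins)."""
    for _, condition, action in sorted(rules, key=lambda r: r[0]):
        if condition(situation):
            return action
    return "idle"

# Hypothetical soccer-flavoured rules; lower number = higher priority
rules = [(1, lambda s: s["has_ball"] and s["near_goal"], "shoot"),
         (2, lambda s: s["has_ball"], "pass"),
         (3, lambda s: True, "move_to_ball")]
action = decide(rules, {"has_ball": True, "near_goal": False})
```

Because the rules are totally ordered, the specification stays readable to non-programmers: more specific situations simply get higher-priority rules.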
No FiF-a 34
NETWORK-ORIENTED CHANGE ANALYSIS - PERSPECTIVES AND METHODS AS SUPPORT FOR UNDERSTANDING AND DEVELOPING BUSINESS RELATIONSHIPS AND INFORMATION SYSTEMS
Understanding and developing business activities conducted in cooperation between several companies involves problems that are less prominent in analyses within a single organisation. In inter-organisational business development, for example when developing inter-organisational information systems, the expected effects are not as predictable as in internal business development.
In the analysis of cooperating businesses, as in all analysis, the analyst is guided both by perspectives and by the methods used. With this starting point, the thesis focuses on how perspective- and method-driven analysis can be performed in cooperating businesses in order to direct the analyst's attention also to relationships and aspects of network dynamics.
The theoretical foundation consists of an action and business-act perspective based on Change Analysis according to the SIMM method, and a network perspective based on the so-called Uppsala school. Through this deductive approach, and with empirical support from two studies conducted in the tourism industry, the method variant Network-Oriented Change Analysis according to the SIMM method has been developed, based on a perspective-shifting strategy. The empirical studies also contribute to an increased understanding of communication and development in cooperating businesses.
CAFE: TOWARDS A CONCEPTUAL MODEL FOR INFORMATION MANAGEMENT IN ELECTRONIC MAIL
Managing information in the form of text is the main concern of this thesis. In particular, we investigate the text made available daily through the increasing use of e-mail. We describe the design and implementation of a conceptual model, CAFE (Categorization Assistant For E-mail). We also present the results of a case study and a survey that motivate the model.
The case study examines the effect of the computer screen on people's structural build-up of categories for e-mail messages. Three different representations of categories are used: desktop, tree, and mind map. Cognitive science theories have served as an inspiration and motivation for the study. The survey presents a selection of currently available e-mail clients and the state of the art and trends in e-mail management.
Our conceptual model provides support for organization, searching, and retrieval of information in e-mail. Three working modes are available for satisfying the user's various needs in different situations: the Busy mode for intermittent usage at times of high stress, the Cool mode for continuous usage at the computer, and the Curious mode for sporadic usage when exploring and (re-)organizing messages when more time is at hand.
A prototype implementation has been developed. Each mode required a different information retrieval and filtering technique: Busy uses the text-based Naive Bayesian machine learning algorithm, Cool uses common e-mail filtering rules, and Curious uses a combination of clustering techniques known as SCATTER/GATHER. Preliminary tests of the prototype have proved promising.
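The Busy mode's classifier can be sketched as a multinomial Naive Bayes over word counts with Laplace smoothing. The training messages and folder labels below are invented; CAFE's actual feature extraction is not specified here:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes over word counts with Laplace smoothing."""
    def fit(self, docs, labels):
        self.counts = defaultdict(Counter)   # per-label word counts
        self.prior = Counter(labels)         # per-label document counts
        self.vocab = set()
        for doc, y in zip(docs, labels):
            words = doc.lower().split()
            self.counts[y].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        best, best_lp = None, -math.inf
        for y in self.prior:
            lp = math.log(self.prior[y] / sum(self.prior.values()))
            total = sum(self.counts[y].values())
            for w in doc.lower().split():
                # add-one smoothing keeps unseen words from zeroing scores
                lp += math.log((self.counts[y][w] + 1)
                               / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = y, lp
        return best

# Invented training messages standing in for two mail folders
nb = NaiveBayes().fit(["meeting agenda monday", "budget meeting notes",
                       "cheap pills offer", "offer expires now"],
                      ["work", "work", "junk", "junk"])
label = nb.predict("monday budget agenda")
```

Such a classifier is cheap to retrain incrementally, which suits the intermittent, high-stress usage pattern the Busy mode targets.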
THE BANK'S TERMS IN LOAN AGREEMENTS WHEN GRANTING CREDIT FOR HIGHLY LEVERAGED CORPORATE ACQUISITIONS
UNIQUE KERNEL DIAGNOSIS
The idea of using logic in computer programs to perform systematic diagnosis was introduced early in the history of computing; systems using punch cards and rulers are described as early as the mid-1950s. Within applied artificial intelligence, the problem of diagnosis made its definite appearance in the form of expert systems during the 1970s. This research eventually introduced model-based diagnosis into the field of artificial intelligence during the mid-1980s. Two main approaches to model-based diagnosis evolved: consistency-based diagnosis and abductive diagnosis. Later, kernel diagnosis complemented these two approaches. Unique kernel diagnosis is my contribution to model-based diagnosis within artificial intelligence.
Unique kernel diagnosis addresses the problem of ambiguous diagnoses: situations where several possible diagnoses exist with no possibility to determine which one describes the actual state of the device being diagnosed. A unique kernel diagnosis can by definition never be ambiguous. A unique kernel diagnosis can be computed using the binary decision diagram (BDD) data structure by methods presented in this thesis. This computational method seems promising in many practical situations, even though the BDD data structure is known to be exponential in size, with respect to the number of state variables, in the worst case. Model-based diagnosis in the form of consistency-based, abductive and kernel diagnosis is known to be an NP-complete problem. A formal analysis of the computational complexity of the problem of finding a unique kernel diagnosis reveals that it is in P^NP.
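The ambiguity problem that unique kernel diagnosis addresses shows up already in a textbook-style example: two inverters in series with an unexpected output admit two minimal consistency-based diagnoses, so no single diagnosis can be singled out. The brute-force enumeration below is purely illustrative; the thesis computes with BDDs instead:

```python
from itertools import product

def candidate_diagnoses(components, consistent):
    """All ok/faulty assignments consistent with model + observation
    (brute-force consistency-based diagnosis)."""
    out = []
    for bits in product([True, False], repeat=len(components)):
        health = dict(zip(components, bits))
        if consistent(health):
            out.append(health)
    return out

# Two inverters A and B in series, input 0, observed output 1.  If both
# gates work the output would be 0, so "both ok" is inconsistent; a
# faulty gate's output is unconstrained (weak fault model).
def consistent(h):
    return not (h["A"] and h["B"])

def fault_set(h):
    return {c for c in h if not h[c]}

cands = candidate_diagnoses(["A", "B"], consistent)
minimal = [h for h in cands
           if not any(fault_set(h) > fault_set(g) for g in cands)]
# Two minimal diagnoses remain ({A faulty} and {B faulty}): ambiguous,
# so no unique kernel diagnosis exists for this observation.
```

Further measurements (e.g. probing the signal between the gates) would be needed to collapse the candidates to a unique diagnosis.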
INFORMATION TECHNOLOGY AND DRIVING FORCES IN THE AUDIT PROCESS - A STUDY OF FOUR AUDIT FIRMS
Developments in information technology at the end of the twentieth century have changed the conditions for auditing, and audit firms have changed their audit processes accordingly. Audit steps that were previously not practically feasible can now be carried out because the capacity to store, sort and transfer information has increased dramatically. Information technology has come to play a prominent role in efforts to increase efficiency without reducing quality. However, audit work has not automatically become more efficient merely because the use of information technology has increased. One question addressed in this work is how the audit firms have handled the new conditions that information technology has created. The purpose of the study is to increase the understanding of how and why global audit firms use information technology in the audit process today. To fulfil this purpose, four case studies have been carried out at the four largest audit firms in Sweden. The work is based on a systems approach with a qualitative orientation.
The audit firms' driving forces for using information technology in the audit process are primarily to reduce costs through quality improvements and productivity gains.
Information technology strengthens the possibilities of verifying and examining a business transaction after the fact. The trend is to examine ever smaller transactions, ever further from the primary sources. Two of the audit firms studied mainly use the possibilities of information technology to automate existing routines and working methods, while the other two try to work from new concepts. The firms that have chosen to focus primarily on automating existing routines and work steps will in the longer term also need to change their audit processes. Firms that have chosen to invest in a changed view of auditing, and introduced information technology on the basis of these ideas, will probably in the short term need to automate parts of the existing audit process. These two development paths will presumably more or less converge in the future.
WHAT DOES THE DOG COST? MODELS FOR INTERNAL ACCOUNTING
More and more industrial companies offer customised products. The products are developed within the framework of long-term customer relationships, which has implications for how the financial information systems should be designed. The thesis proposes models for internal accounting that support a focus on both customers and products. In addition, it studies how customers affect the company's costs. The literature on accounting and costing, which has been developed primarily to allocate costs to products, is analysed on the premise that both products and customers are essential objects of follow-up. The proposed models are illustrated through an analysis of cost data from Paroc AB. The thesis also discusses how the proposed models can support costing in various customer-related decision situations.
Both customer and product should be used as cost objects in the accounting of industrial companies that work with a high degree of customisation. This increases the possibilities of analysing costs, compared with focusing on products alone. Furthermore, categorising costs according to how the consumed resources were acquired can provide valuable information that is independent of the object of analysis. The thesis distinguishes between performance-dependent, capacity-dependent and sunk costs. Using customer and product as cost objects in the accounting system does, however, complicate establishing the levels of cost centres proposed in the ABC literature. It is better to use the traditional division of cost objects into levels according to the step-down costing method.
Customer-related activities are often intended to create value in later periods. Forward-looking activities can be reported as 'good costs' if they are not considered investments in the formal sense. This means that the costs can be analysed in later periods without having to value investments or depreciate assets. Investments in customers often create intangible assets. In valuing intangible assets, it is essential to study the possibilities for alternative use. The thesis proposes a categorization of assets into specific, restricted and unrestricted assets.
No FiF-a 32
KNOWLEDGE USE AND KNOWLEDGE DEVELOPMENT AMONG BUSINESS CONSULTANTS - EXPERIENCES FROM AN R&D COLLABORATION
Understanding how knowledge can be used and developed is important for everyone working with knowledge development. This applies not least to researchers, who often hope and believe that their research results will somehow contribute to the development of society. My work has focused on how business consultants can develop and adapt professionally oriented practical knowledge by collaborating with researchers and business consultants from other disciplines.
Much of the knowledge developed by researchers in information systems development is ultimately intended to be used by practitioners who work daily with the issues we address in our research projects. It therefore feels both important and natural to develop knowledge that helps us better understand how systems developers and other business consultants work. We need to develop knowledge about the practice to which business consultants belong, that is, what systems developers and other business developers do, and how they use and adapt different types of knowledge to support their actions. We must understand how systems developers work and reason.
One way to gain better knowledge of the rationality that governs the practice of business consultants is to work with active business developers who use both research-based and practice-based knowledge to support their professional work. For three years I have observed and worked together with two business consultants, and have thereby developed an increased understanding of how knowledge can be translated, developed and used by consultants who work with business development in various ways.
The results of the study describe and relate the circumstances, actions and consequences surrounding the knowledge development of business consultants. Knowledge in use is translated and adapted to the specific situation and to the user's pre-understanding and frame of reference, which also means that the knowledge develops and changes.
No FiF-a 37
ORGANIZATIONS' KNOWLEDGE ACTIVITIES - A CRITICAL STUDY OF "KNOWLEDGE MANAGEMENT"
Developing, capturing and reusing knowledge are central to the progress and development of organizations. Knowledge management (KM) thus plays an important role for and in organizations. Among other things, KM aims to make employees' knowledge explicit in order to manage, develop and disseminate it in a way that benefits the organization. Successful KM has the potential to increase an organization's capacity to act, and consequently also the value creation and competitiveness of its operations. Nevertheless, knowledge is an abstract organizational asset that is difficult to handle. Moreover, although a great deal has been written about KM, it can be difficult for organizations to understand what it entails and how to work with it in practice.
In order to increase the understanding of KM, I have studied and critically analysed part of the existing literature in the field. Based on this analysis, a number of research questions have been formulated. To bridge some of the ambiguities identified in the literature review, and to answer the research questions of the thesis, support has been sought in other theories, including epistemology and theories of how organizations can be viewed. In addition, the handling of and views on knowledge have been studied through a case study at a consulting company in the IT industry. Based on the literature analysis, the grounding in other theory, and the empirical data of the thesis, I present my view of organizations' knowledge activities (my term for knowledge management).
The results of the thesis work include a developed and refined conceptual framework for organizational knowledge activity (KM). This includes a classification of the concept of organizational knowledge and its relation to organizational action. The thesis also classifies a number of common situations for knowledge creation (learning), which in turn are related to organizations' core activities and knowledge activities respectively. One of the main contributions is a model of organizational knowledge activity. The model includes the central preconditions, actions and results of the knowledge activity, as well as its relation to the core activity. With this thesis I aim to contribute to an increased understanding of what knowledge activities are about and what needs to be considered in order to develop a successful knowledge activity.
No FiF-a 40
WEB-BASED BUSINESS PROCESSES - POSSIBILITIES AND LIMITATIONS
Today's literature on web-based business processes is often opportunity- and future-oriented. There is thus a risk that the literature gives an overly one-sided picture of the research area. To obtain a more nuanced picture of the area, I pose the overall research question: What possibilities and limitations do web-based business processes entail?
To answer this question, a triangulating approach and open-ended empirical studies are used, in order to avoid a blocking pre-understanding. I conduct two case studies at companies that both sell to consumers exclusively via the Internet. The case study companies are NetShop and BuyOnet, where NetShop is anonymized. These companies were chosen to differ on several points in order to obtain comparative material for analysis. Perhaps the main difference is that NetShop sells a physical product and BuyOnet a digital one. The methodological source of inspiration is grounded theory, but data collection and analysis are supported by generic theories. To support the data analysis, I have also developed an information system. Initially, the producer is analysed with respect to its preconditions and how the web-based business process is carried out. Then the effects of web-based business are mapped by analysing customers' inquiries and the producer's responses. For each case study, the possibilities and limitations entailed by web-based business are also analysed. The results of the case studies are then compared with respect to the aspects that emerged. Finally, to arrive at a more general result regarding the possibilities and limitations of web-based business processes, a concluding discussion is conducted with its starting point in theory.
The results are characterized as empirically and theoretically grounded tendencies. Among the results, the web-based business model, categories characterizing the business, and the enabling and limiting relations of various web-based aspects to one another stand out.
TOWARDS BEHAVIORAL MODEL FAULT ISOLATION FOR OBJECT ORIENTED CONTROL SYSTEMS
We use a system model expressed in a subset of the Unified Modeling Language to perform fault isolation in large object-oriented control systems. Due to the severity of the failures considered and the safety-critical nature of the system, we cannot perform fault isolation online. We therefore perform post-mortem fault isolation, which has implications for the information available: the temporal order in the error log cannot be trusted. In our previous work we used a structural model for fault isolation. In this thesis we provide a formal framework and a prototype implementation of an approach that benefits from a behavioral model. This creates opportunities for more sophisticated reasoning, at the cost of a more detailed system model. We use a model checker to reason about causal dependencies among the events of the modeled system. The model checker reasons about temporal dependencies among the events in the system model and the scenario at hand, allowing conclusions about the causal relations between the events of the scenario. This knowledge can then be transferred to the corresponding fault in the system, allowing us to pinpoint the cause of a system failure among a set of potential causes.
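As a toy illustration of the idea (not the thesis' actual UML-based framework; all event and state names are invented), one can treat the log as an unordered event set and ask which candidate fault can causally account for every logged event:

```python
from collections import deque

# Hypothetical behavioral model: states with labeled transitions (event names).
# A candidate fault "explains" the log if every remaining logged event can
# occur on some path after it; the log is treated as an unordered set, since
# its temporal order is untrusted.

TRANSITIONS = {  # state -> [(event, next_state), ...]
    "ok":       [("pump_fault", "degraded"), ("sensor_fault", "blind")],
    "degraded": [("low_pressure", "alarm")],
    "blind":    [("stale_reading", "alarm")],
    "alarm":    [("shutdown", "halted")],
    "halted":   [],
}

def events_reachable_from(state):
    """Collect every event that can occur on some path from `state`."""
    seen_states, seen_events = {state}, set()
    queue = deque([state])
    while queue:
        s = queue.popleft()
        for event, nxt in TRANSITIONS[s]:
            seen_events.add(event)
            if nxt not in seen_states:
                seen_states.add(nxt)
                queue.append(nxt)
    return seen_events

def isolate(log, candidates):
    """Return the candidate faults whose downstream behavior covers the log."""
    explained = []
    for fault in candidates:
        for event, nxt in TRANSITIONS["ok"]:
            if event == fault and log <= events_reachable_from(nxt):
                explained.append(fault)
    return explained

log = {"low_pressure", "shutdown"}
print(isolate(log, ["pump_fault", "sensor_fault"]))  # -> ['pump_fault']
```

A real model checker would of course reason over a far richer temporal logic than this plain reachability check, but the pruning principle is the same.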
XML-BASED FRAMEWORKS FOR INTERNET COMMERCE AND AN IMPLEMENTATION OF B2B E-PROCUREMENT
It is not easy to apply XML in e-commerce development to achieve interoperability in heterogeneous environments. One of the reasons is the multitude of XML-based Frameworks for Internet Commerce (XFIC), or industrial standards. This thesis surveys 15 frameworks: ebXML, eCo Framework, UDDI, SOAP, BizTalk, cXML, ICE, Open Applications Group, RosettaNet, Wf-XML, OFX, VoiceXML, RDF, WSDL and xCBL.
This thesis provides three models for systematically understanding how the 15 frameworks meet the requirements of e-commerce. A hierarchical model is presented to show the purpose and focus of the various XFIC initiatives. A relationship model shows the cooperative and competitive relationships among the XFIC. A chronological model traces the development of XFIC over time. In addition, the thesis offers guidelines on how to apply XFIC in e-commerce development.
We have also implemented a B2B e-procurement system. It not only demonstrates the feasibility of open-source software and freeware, but also validates the complementary roles of XML and Java: XML describes content, while Java automates the processing of XML documents (session handling). Auction-based dynamic pricing is also realized as a feature of interest. Moreover, the implementation shows the suitability of e-procurement for educational purposes in e-commerce development.
No FiF-a 47
FORMS OF COLLABORATION IN WEB-BASED IMAGINARY ORGANIZATIONS: INFORMATION SYSTEMS ARCHITECTURE AND ACTOR COLLABORATION AS PRECONDITIONS FOR THE BUSINESS PROCESS
The Internet shops created during the late 1990s were characterized by rapid establishment, and their overall goal was to capture as large a market share as possible. This high-priority demand for speed was one of the reasons why these companies often engaged partners to handle parts of the operation. In this way imaginary organizations are created, offering goods and services to their customers via the web. These companies focus on their core competence and let it be complemented by the competences of other, external actors. To succeed, they must offer their customers at least the same prices, quality and service as a traditional shop, which requires operations with high process efficiency and process quality. This thesis focuses on the factors that are decisive for achieving these desired effects, which also means that shortcomings and problems will be addressed.
Two case studies have examined how the various information systems of these imaginary organizations, and the information systems architecture they form, support the business process in order to achieve the desired effects. The organizations' actors, and the actor structure they form, have also been studied to clarify their support for the business process.
The results of the study show that the interplay between the imaginary organization's information systems and actors is of central importance for achieving these effects. The study also points to problems that exist in this context. Good interplay between information systems requires high quality in that interplay, which is also of great importance for achieving trust and confidence in the collaboration between actors within the imaginary organization. To achieve high changeability and adaptability in the imaginary organization's processes, it is necessary to focus on system structuring, both internal and external. The study also points to the rationalization effects that can be achieved through extensive use of modern information technology.
DOMAIN KNOWLEDGE MANAGEMENT IN INFORMATION-PROVIDING DIALOGUE SYSTEMS
In this thesis a new concept called domain knowledge management for information-providing dialogue systems is introduced. Domain knowledge management includes issues related to the representation and use of domain knowledge as well as access to background information sources, issues that have previously been incorporated in dialogue management.
The work on domain knowledge management reported in this thesis can be divided in two parts. On a general theoretical level, knowledge sources and models used for dialogue management, including domain knowledge management, are studied and related to the capabilities they support. On a more practical level, domain knowledge management is examined in the contexts of a dialogue system framework and a specific instance of this framework, the ÖTRAF system. In this system domain knowledge management is implemented in a separate module, a Domain Knowledge Manager.
The use of a specialised Domain Knowledge Manager has a number of advantages. The first is that dialogue management becomes more focused, as it only has to consider dialogue phenomena, while domain-specific reasoning is handled by the Domain Knowledge Manager. Secondly, porting a system to new domains is facilitated, since domain-related issues are separated out into specialised domain knowledge sources. The third advantage of a separate module for domain knowledge management is that domain knowledge sources can easily be modified, exchanged, and reused.
CONTROL OF INVESTMENTS IN DIVISIONALIZED COMPANIES - A GROUP PERSPECTIVE
The thesis describes how larger divisionalized companies, from a group perspective, control investments of strategic importance. Part of the background to the research is that several researchers have demonstrated the traditional investment research's lack of a holistic view. The reason investment research has been considered to lack this holistic view is that it has largely focused on the investment decision, and primarily on the investment calculation, even though the investment decision is undoubtedly not a phenomenon separate from the rest of the company. Another fact overlooked by many researchers is that corporate management does not control investments in the various divisions by ranking and selecting individual investments, but by influencing the rules of the game for the investment process.
The overall aim of the research is to explain, from a group perspective, how divisionalized companies control investments of strategic importance. The thesis takes as its starting point that the group strategy can be expected to influence how investment control is designed and used. The study can be divided into four parts: a theoretical frame of reference, a conceptual framework, an empirical study in four groups, and analysis and conclusions. The theoretical frame of reference forms the basis for the conceptual framework, which describes how investment control can be expected to be designed and used in groups with different group-strategic orientations. The empirical study was conducted through interviews with those responsible for investment control at group level in four groups: Investment AB Bure, Finnveden AB, Munksjö AB and Svedala Industri AB.
The conclusions of the thesis are summarized in a typology for investment control. According to the typology, there are four alternative ways to design and use investment control. The main variables in the typology are "group strategy" and "dominant investment type". Both are variables whose importance for investment control is demonstrated throughout the thesis.
SECURE AND SCALABLE E-SERVICE SOFTWARE DELIVERY
Due to the complexity of software and end-user operating environments, software management in general is not an easy task for end-users. In the context of e-service, what end-users buy is the service package. Generally speaking, they should not have to be concerned with how to get the required software and how to make it work properly on their own sites. On the other hand, service providers would not like to have their service-related software managed in a non-professional way, which might cause problems when providing services.
E-service software delivery is the starting point in e-service software management. It is the functional foundation for performing further software management tasks, e.g., installation, configuration, activation, and so on.
This thesis concentrates on how to deliver e-service software to a large number of geographically distributed end-users. Special emphasis is placed on the issues of efficiency (in terms of total transmission time and consumed resources), scalability (in terms of the number of end-users), and security (in terms of confidentiality and integrity). In the thesis, we propose an agent-based architectural model for e-service software delivery, aiming at automating involved tasks, such as registration, key management, and recipient status report collection. Based on the model, we develop a multicast software delivery system, which provides a secure and scalable solution to distributing software over publicly accessible networks. By supplying end-users with site information examination, the system builds a bridge towards further software management tasks. We also present a novel strategy for scalable multicast session key management in the context of software delivery, which can efficiently handle a dynamic reduction in group membership of up to 50% of the total. An evaluation is provided from the perspective of resource consumption due to security management activities.
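The abstract does not detail the thesis' own session-key strategy; as background, here is a minimal sketch (in Python, all names invented) of the classic logical key hierarchy (LKH) baseline that scalable multicast rekeying schemes are usually measured against:

```python
import math

# Classic logical key hierarchy (LKH) baseline: members sit at the leaves of
# a binary key tree and hold every key on their leaf-to-root path. When a
# member leaves, exactly those keys are compromised and must be replaced,
# costing O(log2 n) rekey messages instead of re-keying all n members.

def compromised_keys(leaf_index, n_leaves):
    """Node ids (heap numbering, root = 1) of the keys a departing member holds."""
    node = n_leaves + leaf_index      # leaf position in a complete binary tree
    path = []
    while node >= 1:
        path.append(node)
        node //= 2
    return path

def rekey_message_count(n_leaves):
    """Each replaced key is sent encrypted under its two children's keys."""
    return 2 * int(math.log2(n_leaves))

print(compromised_keys(5, 8))     # -> [13, 6, 3, 1]
print(rekey_message_count(1024))  # -> 20
```

Handling a bulk departure of up to half the group efficiently, as the abstract claims, requires batching such path updates rather than rekeying per member; the specific batching strategy is the thesis' contribution and is not reproduced here.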
ACCOUNTING IN THE SHADOW OF A BANKING CRISIS - THE VALUATION OF REAL ESTATE
The fundamental question of this thesis is whether external accounting information can have distorting economic consequences through its influence on the behaviour of economic actors. A starting point is that questions about the economic consequences of accounting come to a head during financial crises. This thesis studies accounting at the time of the Swedish banking crisis of the 1990s.
As a consequence of bank customers' inability to pay, the value of the banks' claims came to depend on the value of pledged collateral. The collateral largely consisted of real estate, which is why the question of how to value real estate in the accounts became important.
The thesis describes and analyses the regulations of Finansinspektionen (the Swedish Financial Supervisory Authority) concerning the valuation of properties taken over for the protection of claims and of properties serving as collateral for doubtful claims. The study covers the regulations, their development and their application, concentrating on the crisis years 1990-1993.
During the crisis, the regulations were gradually developed and supplemented to meet the needs of the time. The thesis questions the suggestion, made in some contexts, that the regulations led to an undervaluation of the banks' real estate holdings. The regulations were, however, vaguely formulated, which caused problems of interpretation and application throughout the period. Finansinspektionen's work on the regulations must nevertheless be considered to have contributed to bringing questions of real estate valuation into focus in connection with the banks' annual accounts during the crisis years.
EMPLOYEE STOCK OPTION PLANS: A STUDY OF SWEDISH LISTED COMPANIES
An important strategic question for companies today is how to recruit, motivate and retain employees. It is becoming more important to consider the incentive programs in the valuation process of companies. Recent studies show that employee stock option plans are more commonly used in Swedish companies than earlier. However, there are few studies about how employee stock options influence company performance and affect employees.
The purpose of this thesis is to increase the awareness of what the introduction of an employee stock option plan means, both from a management perspective and from the employees' perspective. There are many different kinds of stock option plans, and the plans can vary in terms of type of options, time to expiry, exercise price and tax consequences. This study started with a pre-study of the types of employee stock option plans used in Swedish companies. A closer study was then carried out at four companies in different industries with different stock option plans.
The motives for introducing employee stock option plans can be divided into five categories: personnel motives, incentive motives, salary motives, accounting motives and tax motives. The case studies show how the motives for option plans can depend on different circumstances within the companies. Further, the study shows that the consequences of the stock option plans vary according to factors such as motives, the design of the stock option plan, share price performance and other contextual factors. Contextual factors that could have an effect are the company's business, organisational structure, corporate culture, experiences of employee stock options in the industry, the employees' education and tax rules. The consequences for the company also depend on how the employees react to the options.
To be able to estimate what an employee stock option plan means for the company, all these factors must be taken into consideration. Furthermore, one must take into account the costs of the stock options, such as dilution effects, hedging costs, personnel costs and the costs of designing the program.
No FiF-a 51
INFORMATION SECURITY IN ORGANIZATIONS: CONCEPTS AND MODELS SUPPORTING AN UNDERSTANDING OF INFORMATION SECURITY AND ITS MANAGEMENT IN ORGANIZATIONS
Organizations' security problems in connection with information systems and information technology (IS/IT) have received considerable attention in recent years. This thesis aims to provide an increased understanding of information security: the concept itself, its management in organizations, and its importance to organizations. To reach this increased understanding of information security and its management, the thesis largely consists of conceptual reasoning. The main knowledge contributions of the thesis are:
- A critical review and revision of the conceptual framework that dominates the field of information security and its management in Sweden.
- A generic model (the ISV model) of organizations' management of information security.
The ISV model describes the basic preconditions, activities, results and consequences that can be linked to the management of information security in organizations. The information security field is viewed from a perspective rooted in Scandinavian information systems research. An important characteristic of this perspective is that IS/IT is considered in an organizational context in which, among other things, people's roles and activities form an important part.
The study builds on theoretical and empirical studies conducted in parallel and in interaction. The theoretical studies have mainly consisted of literature studies and conceptual modelling, which have been confronted with empirical material gathered chiefly through a case study at a municipality in Bergslagen.
A PETRI NET BASED MODELING AND VERIFICATION TECHNIQUE FOR REAL-TIME EMBEDDED SYSTEMS
Luis Alejandro Cortes
Embedded systems are used in a wide spectrum of applications ranging from home appliances and mobile devices to medical equipment and vehicle controllers. They are typically characterized by their real-time behavior and many of them must fulfill strict requirements on reliability and correctness.
In this thesis, we concentrate on aspects related to the modeling and formal verification of real-time embedded systems.
First, we define a formal model of computation for real-time embedded systems based on Petri nets. Our model can capture important features of such systems and allows representing them at different levels of granularity. Our modeling formalism has a well-defined semantics, so that it supports a precise representation of the system, the use of formal methods to verify its correctness, and the automation of different tasks along the design process.
Second, we propose an approach to the formal verification of real-time embedded systems represented in our modeling formalism. We use model checking to prove whether certain properties, expressed as temporal logic formulas, hold with respect to the system model. We introduce a systematic procedure for translating our model into timed automata, so that it is possible to use available model checking tools. Various examples, including a realistic industrial case, demonstrate the feasibility of our approach in practical applications.
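As background to the formalism, here is a minimal sketch (plain Python, all names invented) of the classic token-game semantics that Petri-net-based models build on; the thesis' model additionally carries timing information and is verified via translation to timed automata:

```python
# Minimal untimed Petri net kernel: places hold tokens, and a transition may
# fire when every place in its preset holds enough tokens; firing consumes
# the preset tokens and produces the postset tokens.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (pre, post) token maps

    def add_transition(self, name, pre, post):
        self.transitions[name] = (pre, post)

    def enabled(self, name):
        pre, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in pre.items())

    def fire(self, name):
        assert self.enabled(name), f"{name} is not enabled"
        pre, post = self.transitions[name]
        for p, n in pre.items():
            self.marking[p] -= n
        for p, n in post.items():
            self.marking[p] = self.marking.get(p, 0) + n

# A producer/consumer toy: produce adds a token to a buffer, consume drains it.
net = PetriNet({"idle": 1, "buffer": 0})
net.add_transition("produce", pre={"idle": 1}, post={"idle": 1, "buffer": 1})
net.add_transition("consume", pre={"buffer": 1}, post={})

net.fire("produce")
net.fire("produce")
net.fire("consume")
print(net.marking)  # -> {'idle': 1, 'buffer': 1}
```

Model checking then amounts to exploring the reachable markings of such a net (plus clock constraints, in the timed case) and checking temporal logic properties over them.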
A DYNAMIC PERSPECTIVE ON INDIVIDUAL DIFFERENCES IN HEURISTIC COMPETENCE, INTELLIGENCE, MENTAL MODELS, GOALS AND CONFIDENCE IN CONTROL OF THE MICROWORLD MORO
Theories predicting performance in human control of complex dynamic systems must assess how decision makers capture and utilise knowledge to achieve and maintain control. Traditional problem-solving theories and corresponding measures, such as Raven's matrices, have been applied to predict performance in complex dynamic systems. While these tests assume stable properties of decision makers in order to predict control performance in decision-making tasks, they have been shown to provide only a limited degree of prediction in human control of complex dynamic systems. This paper reviews theoretical developments from recent empirical studies and tests the theoretical predictions of a model of dynamic decision making using a complex dynamic microworld, Moro. The requirements for control of the microworld are analysed in study one. Theoretical predictions from the reviewed theory and results from study one are tested in study two. In study three, additional hypotheses are derived by including metacognitive dynamics to explain anomalies found in study two. A total of 21 hypotheses are tested. Results indicate that a number of metacognitive processes play an important role in determining the outcome when predicting human control of complex, dynamic, opaque systems. Specifically, the results show that we cannot expect a lower risk of failure in such systems from people with high problem-solving capabilities when these people also express higher goals. Further research should explore the relative contribution of task characteristics, to determine the conditions under which these metacognitive processes of decision makers take a dominant role over problem-solving capabilities, enabling improved decision-maker selection and support.
AUTOMATIC PARALLELIZATION OF SIMULATION CODE FROM EQUATION BASED SIMULATION LANGUAGES
Modern state-of-the-art equation-based object-oriented modeling languages such as Modelica have enabled easy modeling of large and complex physical systems. When such complex models are to be simulated, simulation tools typically perform a number of optimizations on the underlying set of equations of the modeled system, with the goal of better simulation performance through a smaller and less complex equation system. The tools then typically generate efficient code to obtain fast execution of the simulations. However, as modeled systems grow more complex, the number of equations and variables increases. Parallel computing can therefore be exploited to simulate these large, complex systems efficiently.
This thesis presents the construction of an automatic parallelization tool that produces an efficient parallel version of the simulation code by building a data-dependency graph (task graph) from the simulation code and applying efficient scheduling and clustering algorithms to it. Various scheduling and clustering algorithms, adapted to the requirements of this type of simulation code, have been implemented and evaluated. The scheduling and clustering algorithms presented and evaluated can also be used for functional dataflow languages in general, since they operate on a task graph with dataflow edges between nodes.
Results are given in the form of speedup measurements and task-graph statistics produced by the tool. The conclusion drawn is that some of the algorithms investigated and adapted in this work give reasonable measured speedups for some specific Modelica models; e.g., a model of a thermofluid pipe gave a speedup of about 2.5 on 8 processors in a PC cluster. However, future work lies in finding a good algorithm that works well in general.
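The abstract does not fix a particular algorithm; as an illustration of the general approach, here is a bare-bones list scheduler over a task graph (Python, all names and numbers invented), representative of the family of heuristics such tools adapt. Communication costs are ignored for brevity:

```python
# List scheduling: order tasks by "bottom level" (longest cost path to an
# exit node), then greedily assign each task to the processor on which it
# can start earliest, respecting data dependencies.

def bottom_levels(succ, cost):
    levels = {}
    def bl(t):
        if t not in levels:
            levels[t] = cost[t] + max((bl(s) for s in succ[t]), default=0.0)
        return levels[t]
    for t in cost:
        bl(t)
    return levels

def list_schedule(succ, cost, n_procs):
    pred = {t: set() for t in cost}
    for t, ss in succ.items():
        for s in ss:
            pred[s].add(t)
    bl = bottom_levels(succ, cost)
    order = sorted(cost, key=lambda t: -bl[t])   # highest bottom level first
    proc_free = [0.0] * n_procs                  # when each processor is free
    finish, schedule = {}, {}
    for t in order:
        ready = max((finish[p] for p in pred[t]), default=0.0)
        p = min(range(n_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[p], ready)
        finish[t] = start + cost[t]
        proc_free[p] = finish[t]
        schedule[t] = (p, start)
    return schedule, max(finish.values())

# Diamond-shaped task graph: a -> {b, c} -> d
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
cost = {"a": 1.0, "b": 2.0, "c": 2.0, "d": 1.0}
schedule, makespan = list_schedule(succ, cost, 2)
print(makespan)  # -> 4.0
```

On two processors the independent tasks b and c run in parallel, giving a makespan of 4.0 instead of the sequential 6.0; clustering algorithms refine this by grouping tasks to avoid communication, which this sketch omits.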
FUZZY CONTROL FOR AN UNMANNED HELICOPTER
The overall objective of the Wallenberg Laboratory for Information Technology and Autonomous Systems (WITAS) at Linköping University is the development of an intelligent command and control system, containing vision sensors, which supports the operation of an unmanned aerial vehicle (UAV) in both semi- and full-autonomy modes. One of the UAV platforms of choice is the APID-MK3 unmanned helicopter from Scandicraft Systems AB. The intended operational environment is over widely varying geographical terrain with traffic networks and vehicle interaction of variable complexity, speed, and density.
The present version of APID-MK3 is capable of autonomous take-off, landing, and hovering, as well as of autonomously executing pre-defined point-to-point flight, where the latter is executed at low speed. This is enough for performing missions such as site mapping, surveillance, and communications, but for the operational environment mentioned above, higher speeds are desired. In this context, the goal of this thesis is to explore the possibilities of achieving stable ‘‘aggressive’’ manoeuvrability at high speed and to test a variety of control solutions in the APID-MK3 simulation environment.
The objective of achieving ‘‘aggressive’’ manoeuvrability concerns the design of attitude/velocity/position controllers that act on much larger ranges of the body attitude angles by utilizing the full range of the rotor attitude angles. In this context, a flight controller should achieve tracking of curvilinear trajectories at relatively high speeds in a manner robust to external disturbances. Take-off and landing are not considered here, since APID-MK3 already has dedicated control modules that realize these flight modes.
With this goal in mind, we present the design of two different types of flight controllers: a fuzzy controller and a controller based on a gradient descent method. Common to both are model-based design, the use of nonlinear control approaches, and an inner- and outer-loop control scheme. The performance of these controllers is tested in simulation using the nonlinear model of APID-MK3.
PREDICTION AS A KNOWLEDGE REPRESENTATION PROBLEM: A CASE STUDY IN MODEL DESIGN
The WITAS project aims to develop technologies that enable an Unmanned Aerial Vehicle (UAV) to operate autonomously and intelligently, in applications such as traffic surveillance and remote photogrammetry. Many of the necessary control and reasoning tasks, e.g. state estimation, reidentification, planning and diagnosis, involve prediction as an important component. Prediction relies on models, and such models can take a variety of forms. Model design involves many choices, with many alternatives for each choice, and each alternative carries advantages and disadvantages that may be far from obvious. In spite of this, and of the important role of prediction in so many areas, the problem of predictive model design is rarely studied on its own.
In this thesis, we examine a range of applications involving prediction and try to extract a set of choices and alternatives for model design. As a case study, we then develop, evaluate and compare two different model designs for a specific prediction problem encountered in the WITAS UAV project. The problem is to predict the movements of a vehicle travelling in a traffic network. The main difficulty is that the uncertainty in predictions is very high, due to two factors: predictions have to be made on a relatively large time scale, and we have very little information about the specific vehicle in question. To counter this uncertainty, as much use as possible must be made of knowledge about traffic in general, which puts emphasis on the knowledge-representation aspect of predictive model design.
The two model designs we develop differ mainly in how they represent uncertainty: the first uses a coarse, schema-based representation of likelihood, while the second, a Markov model, uses probability. Preliminary experiments indicate that the second design has better computational properties, but also some drawbacks: model construction is data intensive and the resulting models are somewhat opaque.
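As an illustration of the second design, the core of a Markov model over a traffic network is a transition matrix that is propagated forward in time. The network, its node labels, and the transition probabilities below are invented for illustration; a real model would estimate them from traffic observations.

```python
import numpy as np

# Hypothetical 4-node road network: P[i][j] is the probability that a
# vehicle at node i moves to node j within one time step. A real model
# would estimate these values from observed traffic data.
P = np.array([
    [0.1, 0.6, 0.3, 0.0],
    [0.0, 0.2, 0.5, 0.3],
    [0.4, 0.0, 0.1, 0.5],
    [0.2, 0.3, 0.0, 0.5],
])

def predict(start_node, steps):
    """Probability distribution over nodes after `steps` transitions."""
    dist = np.zeros(len(P))
    dist[start_node] = 1.0
    for _ in range(steps):
        dist = dist @ P   # one step of the Markov chain
    return dist

dist = predict(0, steps=3)   # where might the vehicle be in 3 steps?
```

The prediction is a distribution rather than a single position, which is exactly what makes the probabilistic design attractive under high uncertainty.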
ON THE INSTRUMENTS OF GOVERNANCE - A LAW & ECONOMICS STUDY OF CAPITAL INSTRUMENTS IN LIMITED LIABILITY COMPANIES
The foundation of this thesis is the connection between corporate finance and corporate governance. Corporate finance has predominantly been analysed with financial economics models, which have not recognised significant intrinsic features of capital instrument design. In this thesis, the principles of corporate governance are utilised to remedy these shortcomings, elaborating the control contents of capital instrument design.
The methodology of this thesis is derived from law & economics. Traditionally, the methodology encompasses an economic ordering of legal subject matter, but according to an integrated version of the methodology, legal and economic analytical models may be used on an equal standing. Certain residual discrepancies between legal and economic reasoning are noted in the thesis.
The capital instrument design is explored in an analysis of rationale and composition. The rationale of capital instruments is derived from the preferred state of the company technique, as it is understood in company law and agency theory. The composition is analysed in three levels - mechanistic, contractual and structural - based on a conjecture that governance rights counterbalance control risks.
The conclusions include that capital instruments are designed to establish flexibility and balance in the company technique. The governance rights are similar in both equity and debt instruments, which enables a condensed description of capital instrument design. The holders are empowered by the capital instruments and may use their governance rights to allocate and reduce their risks, adapting the company into a balanced structure of finance and governance.
No FiF-a 58
LOCAL ELECTRONIC MARKETPLACES: INFORMATION SYSTEMS FOR LOCATION-BOUND COMMERCE
Interest in the various manifestations of electronic business is often focused on global and national efforts, not least since attention in recent years has turned to Internet-based applications and their implications. This attention has also included predictions in which changed conditions threaten the existence of traditional intermediaries. The present thesis also concerns this type of application, but focuses on a situation where precisely such traditional intermediaries, in the form of small and medium-sized retail companies, have chosen to also meet their local customers via the Internet. It is an arena for these meetings that is the focus of the thesis: the local electronic marketplace. In addition to opportunities for retail trade, such a marketplace also offers services, a medium for discussion, and the dissemination of information that is not necessarily business-oriented.
The work was carried out as a pilot study and two subsequent case studies, centred on the activities of two Swedish websites with the described orientation. The thesis conveys knowledge about the composition of actors on a local electronic marketplace, their activities, and the interplay between them. The focus is on how information systems (IS) can support the actors' various purposes. The results of the thesis thus concern the design and functionality of an enabling marketplace IS. This knowledge is intended for application in the described situation of location-bound electronic business.
MANAGEMENT CONTROL AND STRATEGY - A CASE STUDY OF PHARMACEUTICAL DRUG DEVELOPMENT
How are formal management controls designed and used in research & development (R&D)? The purpose of this study is to explain how such systems are designed and used in formulating and implementing strategies in a pharmaceutical product development organisation. The study uses a contingency approach to investigate how the control system is adjusted to the business strategy of the firm. A case study was conducted in AstraZeneca R&D where strategic planning, budgeting, project management, goals and objective systems and the reward systems were studied.
Managers, external investors and researchers increasingly recognize the strategic importance of R&D activities. This has inspired researchers and practitioners to develop formal systems and methods for controlling R&D activities. There is, however, previous research in which resistance towards using formal control systems to manage R&D was observed. This contrasts with the general perception of management control systems as important in implementing and formulating strategies.
The results of this study show that formal management control has an important role in managing R&D. The study also explains how the control system is adjusted to the business strategy of the studied firm. Different control systems (e.g. budget, project management) were found to be designed and used in different ways. This implies that it is not meaningful to discuss whether the entire control system of a firm is tight or loose and/or used interactively or diagnostically. Rather, the systems may demonstrate combinations of these characteristics. The control systems of the studied firm were found to be used differently in the project and functional dimensions. The control systems were also designed and used in different ways at different organisational levels. Comprehensive and rather detailed studies of control systems are called for in order to understand how they are designed and used in organisations. Such studies may explain some contradictory results in previous research on how control systems are adjusted to business strategy.
DEBUGGING AND STRUCTURAL ANALYSIS OF DECLARATIVE EQUATION-BASED LANGUAGES
A significant part of the software development effort is spent on detecting deviations between software implementations and specifications, and subsequently locating the sources of such errors. This thesis illustrates that it is possible to identify a significant number of errors during static analysis of declarative object-oriented equation-based modeling languages that are typically used for system modeling and simulation. Detecting anomalies in the source code without actually solving the underlying system of equations provides a significant advantage: a modeling error can be corrected before trying to get the model compiled or embarking on a computationally expensive symbolic or numerical solution process. The overall objective of this work is to demonstrate that debugging based on static analysis techniques can considerably improve the error location and error correcting process when modeling with equation-based languages.
A new method is proposed for debugging of over- and under-constrained systems of equations. The improved approach described in this thesis is to perform the debugging process on the flattened intermediate form of the source code and to use filtering criteria generated from program annotations and from the translation rules. Each time an error is detected in the intermediate code and an error-fixing solution is elaborated, the debugger queries for the original source code before presenting any information to the user. In this way, the user is exposed to the original language source code and not burdened with additional information from the translation process or required to inspect the intermediate code.
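The structural core of detecting over- and under-constrained equation systems can be sketched as a maximum matching between equations and the variables they reference: equations left unmatched indicate over-constraint, variables left unmatched indicate under-constraint. The sketch below runs on an invented toy system in Python, not on actual Modelica intermediate code.

```python
def maximum_matching(eq_vars):
    """eq_vars: one set of variable names per equation. Returns a dict
    mapping equation index -> matched variable (augmenting-path search)."""
    match_var = {}                        # variable -> equation
    def try_assign(eq, seen):
        for v in eq_vars[eq]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_var or try_assign(match_var[v], seen):
                match_var[v] = eq
                return True
        return False
    for eq in range(len(eq_vars)):
        try_assign(eq, set())
    return {eq: v for v, eq in match_var.items()}

# Invented flattened system: equation i references these variables.
system = [{"x", "y"}, {"y"}, {"y"}]       # 3 equations, 2 unknowns
matching = maximum_matching(system)
over_constrained = [eq for eq in range(len(system)) if eq not in matching]
under_constrained = {v for eq in system for v in eq} - set(matching.values())
```

Here the third equation cannot be matched to any free variable, flagging the system as over-constrained before any numerical solution is attempted.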
We present the design and implementation of debugging kernel prototypes, tightly integrated with the core of the optimizer module of a Modelica compiler, including details of the novel framework required for automatic debugging of equation-based languages.
This thesis establishes that structural static analysis performed on the underlying system of equations from object-oriented mathematical models can effectively be used to statically debug real Modelica programs. Most of our conclusions developed in this thesis are also valid for other equation-based modeling languages.
HIGH-LEVEL TEST GENERATION AND BUILT-IN SELF-TEST TECHNIQUES FOR DIGITAL SYSTEMS
The technological development is enabling production of increasingly complex electronic systems. All those systems must be verified and tested to guarantee correct behavior. As the complexity grows, testing is becoming one of the most significant factors that contribute to the final product cost. The established low-level methods for hardware testing are no longer sufficient, and more work has to be done at abstraction levels higher than the classical gate and register-transfer levels. This thesis reports on one such work that deals in particular with high-level test generation and design for testability techniques.
The contribution of this thesis is twofold. First, we investigate the possibilities of generating test vectors at the early stages of the design cycle, starting directly from the behavioral description and with limited knowledge about the final implementation architecture. We have developed for this purpose a novel hierarchical test generation algorithm and demonstrated the usefulness of the generated tests not only for manufacturing test but also for testability analysis.
The second part of the thesis concentrates on design for testability. As testing of modern complex electronic systems is a very expensive procedure, special structures for simplifying this process can be inserted into the system during the design phase. We have proposed for this purpose a novel hybrid built-in self-test architecture, which makes use of both pseudorandom and deterministic test patterns, and is appropriate for modern system-on-chip designs. We have also developed methods for optimizing hybrid built-in self-test solutions and demonstrated the feasibility and efficiency of the proposed technique.
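The hybrid idea can be sketched as follows: an on-chip linear feedback shift register (LFSR) generates cheap pseudorandom patterns, and a small stored deterministic set covers the faults the pseudorandom phase misses. The seed, tap positions and the "hard fault" patterns below are invented for illustration.

```python
def lfsr_patterns(seed, taps, width, count):
    """Generate pseudorandom test patterns with a Fibonacci LFSR
    (the seed must be nonzero, or the register locks up at zero)."""
    state = seed
    patterns = []
    for _ in range(count):
        patterns.append(state)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1   # XOR the tapped bits
        state = ((state << 1) | feedback) & ((1 << width) - 1)
    return patterns

# Hybrid test set: cheap pseudorandom patterns first, then a small
# stored deterministic set for the remaining hard-to-detect faults.
pseudorandom = lfsr_patterns(seed=0b1001, taps=(3, 0), width=4, count=10)
deterministic = [0b0000, 0b1111]    # invented "hard fault" patterns
hybrid = pseudorandom + deterministic
```

The optimization problem the thesis addresses is precisely the trade-off visible here: more LFSR patterns cost test time, while more deterministic patterns cost on-chip storage.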
No FiF-a 61
META - METHOD FOR METHOD CONFIGURATION: A RATIONAL UNIFIED PROCESS CASE
The world of systems engineering methods is changing as rigorous ‘off-the-shelf’ systems engineering methods become more popular. One example of such a systems engineering method is Rational Unified Process. In order to cover all phases of a software development process, and a wide range of project types, such methods need to be of an impressive size. Thus, the need for configuring such methods in a structured way is increasing accordingly. In this thesis, method configuration is considered as a particular kind of method engineering that focuses on tailoring a standard systems engineering method. We propose a meta-method for method configuration based on two fundamental values: the rationality of the standard systems engineering method, and reuse. A conceptual framework is designed, introducing the concepts Configuration Package and Configuration Template. A Configuration Package is a pre-made ideal method configuration suitable for a delimited characteristic of a (type of) software artifact, a (type of) software development project, or a combination thereof. Configuration Templates with different characteristics are built by combining a selection of Configuration Packages and are used as a base for a situational method. The aim of the proposed meta-method is to ease the burden of configuring the standard systems engineering method in order to reach an appropriate situational method.
PERFORMANCE AND AVAILABILITY TRADE-OFFS IN FAULT-TOLERANT MIDDLEWARE
Distributing the functionality of an application is common practice. Systems that are built with this feature in mind also have to provide high levels of dependability. One way of assuring the availability of services is to tolerate faults in the system, thereby avoiding failures. Building distributed applications is not an easy task. Providing fault tolerance is even harder.
Using middleware as a mediator between hardware and operating systems on the one hand, and high-level applications on the other, is one solution to these difficult problems. Middleware can help application writers by providing automatic generation of code supporting e.g. fault tolerance mechanisms, and by offering interoperability and language independence.
For over twenty years, the research community has been producing results in this area. However, experimental studies of different platforms are mostly performed using simple, purpose-built applications. Also, especially in the case of CORBA, there is no fault-tolerant middleware that conforms fully to the standard and has been well studied in terms of trade-offs.
This thesis presents a fault-tolerant CORBA middleware built and evaluated using a realistic application running on top of it. It also contains results from experiments with an alternative infrastructure implementing a robust fault-tolerant algorithm using basic CORBA. In the first infrastructure, a problem is the existence of single points of failure; on the other hand, overheads and recovery times fall within acceptable ranges. When using the robust algorithm, the problem of single points of failure disappears, but memory usage grows, and overheads as well as recovery times can become quite long.
SCHEDULABILITY ANALYSIS OF REAL-TIME SYSTEMS WITH STOCHASTIC TASK EXECUTION TIMES
Systems controlled by embedded computers have become indispensable in our lives and can be found in avionics, the automotive industry, home appliances, medicine, the telecommunication industry, mechatronics, the space industry, etc. Fast, accurate and flexible performance estimation tools giving feedback to the designer in every design phase are a vital part of a design process capable of producing high-quality designs of such embedded systems.
In the past decade, the limitations of models considering fixed task execution times have been acknowledged for large application classes within soft real-time systems. A more realistic model considers the tasks having varying execution times with given probability distributions. No restriction has been imposed in this thesis on the particular type of these functions. Considering such a model, with specified task execution time probability distribution functions, an important performance indicator of the system is the expected deadline miss ratio of tasks or task graphs.
This thesis proposes two approaches for obtaining this indicator in an analytic way. The first is exact, while the second provides an approximate solution trading accuracy for analysis speed. While the first approach can efficiently be applied to monoprocessor systems, it can handle only very small multiprocessor applications due to its complexity. The second approach, however, can successfully handle realistic multiprocessor applications. Experiments show the efficiency of the proposed techniques.
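The thesis obtains the expected deadline miss ratio analytically; for intuition, the same indicator can also be estimated by simulation. The sketch below is an illustrative assumption, not the thesis's method: a single periodic task with an invented uniform execution-time distribution, jobs queueing FIFO on one processor.

```python
import random

def miss_ratio(sample_exec_time, period, deadline, jobs=20000, seed=1):
    """Estimate the deadline miss ratio of one periodic task with
    stochastic execution times on one processor; if a job overruns,
    the next job starts late (simple FIFO backlog)."""
    rng = random.Random(seed)
    finish = 0.0
    misses = 0
    for k in range(jobs):
        release = k * period
        start = max(release, finish)         # wait for the previous job
        finish = start + sample_exec_time(rng)
        if finish > release + deadline:
            misses += 1
    return misses / jobs

# Invented task: execution time uniform on [2, 6], period 5, deadline 5.
ratio = miss_ratio(lambda rng: rng.uniform(2.0, 6.0), period=5.0, deadline=5.0)
```

Even though the mean execution time (4) is below the period (5), a nonzero miss ratio results, which is why fixed-execution-time models are too optimistic for soft real-time systems.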
GOOD TO USE!: USE QUALITY OF MULTI-USER APPLICATIONS IN THE HOME
Traditional models of usability are not sufficient for software in the home, since they are built with office software in mind. Previous research suggests that social issues, among other things, separate software in homes from software in offices. To explore this further, the use qualities to design for in software for face-to-face meetings at home were contrasted with those of such systems at offices. They were studied using a pluralistic model of use quality with roots in socio-cultural theory, cognitive systems engineering, and architecture. The research approach was interpretative design cases. Observations, situated interviews, and workshops were conducted at a Swedish bank, and three interactive television appliances were designed and studied in simulated home environments. It is concluded that the use qualities to design for in infotainment services on interactive television are laid-back interaction, togetherness among users, and entertainment. This is quite different from bank office software, which is usually characterised not only by traditional usability criteria such as learnability, flexibility, effectiveness, efficiency, and satisfaction, but also by professional face management and ante-use. Ante-use comprises the events and activities that precede the actual use and that set the ground for whether the software will have quality in use or not. Furthermore, practices for how to work with use quality values, use quality objectives, and use quality criteria in the interaction design process are suggested. Finally, future research in the design of software for several co-present users is proposed.
MODELING AND SIMULATION OF CONTACTING FLEXIBLE BODIES IN MULTIBODY SYSTEMS
This thesis summarizes the equations, algorithms and design decisions necessary for dynamic simulation of flexible bodies with moving contacts. The assumed general shape function approach is also presented. The approach is expected to be computationally less expensive than FEM approaches and easier to use than other reduction techniques. Additionally, the described technique enables studies of the residual stress release during grinding of flexible bodies.
The overall software system design of the flexible multibody simulation system BEAST is presented, and the specifics of the flexible-body modeling are specifically addressed. An industrial application example is also described in the thesis. The application presents some results from a case where the developed system is used for simulation of flexible ring grinding with material removal.
PDEMODELICA - TOWARDS A HIGH-LEVEL LANGUAGE FOR MODELING WITH PARTIAL DIFFERENTIAL EQUATIONS
This thesis describes initial language extensions to the Modelica language that define a more general language called PDEModelica, with built-in support for modeling with partial differential equations (PDEs). Modelica® is a standardized modeling language for object-oriented, equation-based modeling. It also supports component-based modeling, where existing components with modified parameters can be combined into new models. The aim of the language presented in this thesis is to maintain the advantages of Modelica and also add partial differential equation support.
Partial differential equations can be defined using a coefficient-based approach, where a predefined PDE is modified by changing its coefficient values. Language operators to directly express PDEs in the language are also discussed. Furthermore, domain geometry description is handled and language extensions to describe geometries are presented. Boundary conditions, required for a complete PDE problem definition, are also handled.
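The coefficient-based approach can be pictured as instantiating a generic PDE template whose coefficients the modeler overrides; the particular template below is an illustrative assumption, not necessarily the one predefined in PDEModelica:

```latex
\frac{\partial u}{\partial t} = \nabla \cdot \left( c\,\nabla u \right) + a\,u + f
```

Setting c = 1, a = 0 and f = 0 gives the heat equation, while nonzero a or f introduce reaction or source terms, all without writing new equation code.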
A prototype implementation is described as well. The prototype includes a translator written in the relational meta-language, RML, and interfaces to external software such as mesh generators and PDE solvers, which are needed to solve PDE problems. Finally, a few examples modeled with PDEModelica and solved using the prototype are presented.
SECURE EXECUTION ENVIRONMENT FOR JAVA ELECTRONIC SERVICES
Private homes are becoming increasingly connected to the Internet in fast and reliable ways. These connections pave the way for networked services, i.e. services that gain their value through their connectivity. Examples of such electronic services (e-services) are services for remote control of household appliances, home health care or infotainment.
Residential gateways connect the private home with the Internet and are the home access point and one execution platform for e-services. Potentially, a residential gateway runs e-services from multiple providers. The software environment of such a residential gateway is a Java execution environment where e-services execute as Java threads within the Java virtual machine. The isolation of these Java e-services from each other and from their execution environment is the topic of this thesis.
Although the results of this thesis can be applied to most Java servers—e.g. Java-enabled web browsers, web servers, JXTA, JINI—this work focuses on e-services for the private home and their execution platform. Security for the private home as a prerequisite for end user acceptance is the motivation for this approach.
This thesis establishes requirements that prevent e-services on the Java execution platform from harming other e-services on the same or other network nodes and that prevent e-services from harming their underlying execution environment. Some of the requirements can be fulfilled by using the existing Java sandbox for access control. Other requirements, concerned with availability of e-services and network nodes, need a modified Java environment that supports resource control and e-service-specific access control. While some of the requirements result in implementation guidelines for Java servers, and in particular for the e-service environment, other requirements have been implemented as a proof of concept.
CONTRIBUTIONS TO PROGRAM- AND SPECIFICATION-BASED TEST DATA GENERATION
Software testing is complex and time consuming. One way to reduce the testing effort is to generate test data automatically. In the first part of this thesis we consider a framework by Gupta et al. for generating tests from programs. In short, their approach consists of a branch predicate collector, which derives a system of linear inequalities representing an approximation of the branch predicates for a given path in the program. This system is solved using their constraint solver, called the Unified Numerical Approach (UNA). In this thesis we show that, in contrast to traditional optimization methods, the number of iterations of the UNA is not bounded by the size of the solved system; instead it depends on how the input is composed. That is, even for very simple systems consisting of one variable we can easily get more than a thousand iterations. We also give a formal proof that the UNA does not always find a mixed integer solution when there is one. Finally, we suggest using a traditional optimization method instead, such as the simplex method in combination with branch-and-bound and/or a cutting-plane algorithm as a constraint solver.
In the second part we study a specification-based approach for generation of software tests developed by Meudec. Briefly, tests are generated by an automatic partitioning strategy based on partition rules. An important step in the process is to reduce the number of generated subdomains and find a minimal partition. However, we have found that Meudec’s algorithm does not always produce a minimal partition. In this work we present an alternative solution to the minimal partition problem by formulating it as an integer programming problem. By doing so, we can use well known optimization methods to solve this problem.
A more efficient way to derive a minimal partition would be using Meudec’s conjectured two-step reduction approach: vertex merging and minimal path coverage. Failing to find a general solution to either of the steps, Meudec abandoned this approach. However, in this work we present an algorithm based on partial expansion of the partition graph for solving the first step. Furthermore, our work in partial expansion has led to new results: we have determined an upper bound on the size of a minimal partition. In turn, this has led to a stronger definition of our current minimal partition algorithm. In some special cases we can also determine lower bounds.
ADAPTIVE SEMI-STRUCTURED INFORMATION EXTRACTION
The number of domains and tasks where information extraction tools can be used needs to be increased. One way to reach this goal is to construct user-driven information extraction systems where novice users are able to adapt them to new domains and tasks. To accomplish this goal, the systems need to become more intelligent and able to learn to extract information without need of expert skills or time-consuming work from the user.
The type of information extraction system that is in focus for this thesis is semi-structural information extraction. The term semi-structural refers to documents that not only contain natural language text but also additional structural information. The typical application is information extraction from World Wide Web hypertext documents. By making effective use of not only the link structure but also the structural information within each such document, user-driven extraction systems with high performance can be built.
The extraction process contains several steps where different types of techniques are used. Examples of such types of techniques are those that take advantage of structural, pure syntactic, linguistic, and semantic information. The first step that is in focus for this thesis is the navigation step that takes advantage of the structural information. It is only one part of a complete extraction system, but it is an important part. The use of reinforcement learning algorithms for the navigation step can make the adaptation of the system to new tasks and domains more user-driven. The advantage of using reinforcement learning techniques is that the extraction agent can efficiently learn from its own experience without need for intensive user interactions.
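A minimal sketch of how reinforcement learning (here tabular Q-learning, one standard algorithm of that family) could drive the navigation step: the page graph, rewards, and parameters are invented, and page 3 stands in for a target page containing the sought information.

```python
import random

# Invented page graph: each state is a page, each action follows one
# outgoing link; reaching page 3 (the target) yields reward 1.
links = {0: [1, 2], 1: [3, 2], 2: [0, 3], 3: []}
TARGET = 3

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s, outs in links.items() for a in range(len(outs))}
    for _ in range(episodes):
        s = 0
        while s != TARGET:
            acts = list(range(len(links[s])))
            if rng.random() < eps:             # explore a random link
                a = rng.choice(acts)
            else:                              # exploit current estimate
                a = max(acts, key=lambda x: Q[(s, x)])
            s2 = links[s][a]
            reward = 1.0 if s2 == TARGET else 0.0
            nxt = max((Q[(s2, b)] for b in range(len(links[s2]))), default=0.0)
            Q[(s, a)] += alpha * (reward + gamma * nxt - Q[(s, a)])
            s = s2
    return Q

Q = q_learn()   # the direct link 1 -> 3 now outranks the detour 1 -> 2
```

The agent learns purely from its own navigation experience, which is exactly the property that makes the adaptation user-driven rather than expert-driven.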
An agent-oriented system was designed to evaluate the approach suggested in this thesis. Initial experiments showed that the training of the navigation step and the approach of the system was promising. However, additional components need to be included in the system before it becomes a fully-fledged user-driven system.
A DYNAMIC PROGRAMMING APPROACH TO OPTIMAL RETARGETABLE CODE GENERATION FOR IRREGULAR ARCHITECTURES
In this thesis we address the problem of optimal code generation for irregular architectures such as Digital Signal Processors (DSPs). Code generation consists mainly of three tasks: instruction selection, instruction scheduling and register allocation. These tasks have been shown to be NP-hard for most architectures and most situations.
A common approach to code generation consists of solving each task separately, i.e. in a decoupled manner, which is easier from an engineering point of view. Decoupled, phase-based compilers produce good code quality for regular architectures, but when applied to DSPs the resulting code is of significantly lower performance due to strong interdependencies between the different tasks.
We report on a novel method for fully integrated code generation based on dynamic programming. It handles the most important tasks of code generation in a single optimization step and produces an optimal code sequence. Our dynamic programming algorithm is applicable to small yet nontrivial problem instances with up to 50 instructions per basic block if data locality is not an issue, and up to 20 instructions if we take data locality on irregular processor architectures into account.
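As a much-simplified illustration of the dynamic programming principle, restricted here to instruction selection alone (the thesis's algorithm additionally integrates scheduling and register allocation), with an invented instruction set and invented costs:

```python
# Expression tree as nested tuples; leaves are variable names.
tree = ("add", ("mul", "a", "b"), "c")

# Invented instruction costs; "mac" is a multiply-accumulate that
# covers an add fed by a mul in a single instruction.
COSTS = {"load": 1, "add": 1, "mul": 2, "mac": 2}

def best_cost(node):
    """Dynamic programming over subtrees: the cheapest covering of a
    node is the best over all instruction patterns matching its root."""
    if isinstance(node, str):                  # leaf: load the variable
        return COSTS["load"]
    op, left, right = node
    best = COSTS[op] + best_cost(left) + best_cost(right)
    # special pattern: add(mul(x, y), z) covered by one MAC instruction
    if op == "add" and isinstance(left, tuple) and left[0] == "mul":
        best = min(best, COSTS["mac"] + best_cost(left[1])
                         + best_cost(left[2]) + best_cost(right))
    return best

total = best_cost(tree)   # MAC covering: 2 + three loads = 5
```

The DSP-typical MAC pattern beats the separate add and mul (cost 5 versus 6), illustrating why decisions must be made jointly rather than greedily per operator.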
In order to obtain a retargetable framework we developed a first version of a structured hardware description language, ADML, which is based on XML. We implemented a prototype framework of such a retargetable system for optimal code generation.
As far as we know from the literature, this is the first time that the main tasks of code generation are solved optimally in a single and fully integrated optimization step that additionally considers data placement in registers.
No FiF-a 62
Development of a projectivity model: on organisations' ability to apply the project work form
In today's business organisations, projects are not carried out solely to bring about changes in organisation, working methods or infrastructure. Market volatility and customer-specific demands for complex products mean that projects are also carried out within ordinary operations to handle temporary, complex one-off tasks in the form of both customer orders and product development. The project work form can increase commitment and cooperation across organisational boundaries, but it is common for organisations to also experience problems with their projects. A large share of these problems can be assumed to depend on the organisation's ability to apply the project work form, i.e. the organisation's projectivity. The overall research question of the thesis is: How can an organisation's projectivity be described in a model? Starting from Ericsson Infotech's projectivity model, the aim of the research has been to develop a new projectivity model that can be applied in the continued development of a method for projectivity analysis. An explorative study has been carried out in five stages, in which the validation of model versions was an important element. The result of the work is partly a refined project concept with a clear distinction between project task and project work form, partly a multidimensional projectivity model (the MDP model) with four dimensions: project functions, triads, influencing factors, and organisational learning. The results of the thesis are intended to form a basis for future research in the area, for example continued development of a projectivity analysis or organisational learning through the application of project models.
USER EXPERIENCE OF SPOKEN FEEDBACK IN MULTIMODAL INTERACTION
The area of multimodal interaction is growing fast, and is showing promising results in making interaction more efficient and robust. These results are mainly based on better recognizers and on studies of how users interact with particular multimodal systems. However, little research has been done on users’ subjective experience of using multimodal interfaces, which is an important aspect for the acceptance of multimodal interfaces. The work presented in this thesis focuses on how users experience multimodal interaction, and on what qualities are important for the interaction. Traditional user interfaces on the one hand, and speech and multimodal interfaces on the other, are often described as having different interaction characters (handlingskaraktär). Traditional user interfaces are often seen as tools, while speech and multimodal interfaces are often described as dialogue partners. Researchers have ascribed different qualities as important for performance and satisfaction for these two interaction characters. These claims are examined by studying how users react to a multimodal timetable system, in which spoken feedback was used to make the interaction more human-like. A Wizard-of-Oz method was used to simulate the recognition and generation engines in the timetable system for public transportation. The results from the study showed that users experience the system as having an interaction character, and that spoken feedback influences that experience. The more spoken feedback the system gives, the more users experience the system as a dialogue partner. The evaluation of the qualities of interaction showed that users preferred either no spoken feedback or elaborated spoken feedback; limited spoken feedback only distracted the users.
VISUALIZATION OF DYNAMIC MULTIBODY SIMULATION - WITH SPECIAL REFERENCE TO CONTACTS
This thesis describes the requirements for creating a complete multibody visualization system. The complete visualization process includes everything from data storage to image rendering, and what is needed for a meaningful user-to-data interaction. Other topics covered in this thesis are 2D data packing for parallel simulation and remote simulation control.
System modeling is an important aspect in multibody simulation and visualization. An object oriented approach is used for the multibody model, its basic simulation data structures, and for the visualization system. This gives well structured models and supports both efficient computation and visualization without additional transformations.
The large amounts of data and time steps require data compression. A compression algorithm specially designed for numerical data with varying step size is used for all time-varying data. All data is organized in blocks, which allows fast selective data access during animation. The demands on a multibody simulation tool focusing on contact analysis represent a special challenge in the field of scientific visualization. This is especially true for multidimensional time-varying data, i.e. two-dimensional surface-related data.
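The block organization can be sketched with a simple delta-encoding scheme (an illustrative stand-in, not the thesis's actual compression algorithm): each block starts with an absolute value, so any block can be decoded independently when the animation seeks.

```python
import struct

BLOCK = 4   # samples per block; real systems would use far larger blocks

def compress(samples):
    """Delta-encode time-varying samples into independently decodable
    blocks: each block stores its first value absolutely and the rest
    as deltas, so animation can seek to any block without decoding all."""
    blocks = []
    for i in range(0, len(samples), BLOCK):
        chunk = samples[i:i + BLOCK]
        deltas = [chunk[0]] + [b - a for a, b in zip(chunk, chunk[1:])]
        blocks.append(struct.pack(f"<{len(deltas)}f", *deltas))
    return blocks

def decompress_block(blocks, k):
    """Decode block k alone, without touching any other block."""
    vals = struct.unpack(f"<{len(blocks[k]) // 4}f", blocks[k])
    out = [vals[0]]
    for d in vals[1:]:
        out.append(out[-1] + d)
    return out

data = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
blocks = compress(data)
second = decompress_block(blocks, 1)   # seek straight to the second block
```

Smoothly varying simulation data produces small deltas, which is what makes a subsequent entropy-coding stage effective; the per-block absolute value is the price paid for random access.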
A surface data structure is presented which is designed for efficient data storage, contact calculation, and visualization. Its properties include an object-oriented multibody modeling approach, memory allocation on demand, fast data access, effective data compression, and support for interactive visualization.
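As a sketch of the block-organised storage idea described above, the following fragment shows how organising time-varying samples into blocks allows selective decoding during animation. The class name, block size, and the simple delta coding are illustrative assumptions; the thesis's actual compression algorithm for data of varying step size is more elaborate.

```python
class BlockStore:
    """Time-varying samples stored in fixed-size blocks; each block is
    delta-encoded, so only the block holding a requested step must be
    decoded -- the rest of the history stays compressed."""

    def __init__(self, block_size=64):
        self.block_size = block_size
        self.blocks = []                       # list of (first_value, deltas)

    def append(self, value):
        # start a new block when the current one is full
        if not self.blocks or len(self.blocks[-1][1]) == self.block_size - 1:
            self.blocks.append((value, []))
        else:
            first, deltas = self.blocks[-1]
            prev = first + sum(deltas)         # last value stored in this block
            deltas.append(value - prev)        # store only the difference

    def get(self, step):
        # selective access: decode one block, not the whole history
        first, deltas = self.blocks[step // self.block_size]
        return first + sum(deltas[: step % self.block_size])
```

During animation, `get` touches a single block, which is what makes scrubbing through long simulations cheap.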
Contact stresses between two surfaces penetrate the material underneath the surface. These stresses need to be stored during simulation and visualized during animation. We classify these stresses as sub-surface stresses, i.e., stresses in a thin volume layer underneath the surface.
A sub-surface data structure has been created. It has all the good properties of the surface data structure and additional capabilities for visualization of volumes.
In many application fields the simulation process is computation-intensive, and fast, remotely located computers, e.g. parallel computers or workstation clusters, are needed to obtain results in reasonable time. An application is presented which addresses the major problems related to data transfers over networks, unified access to different remote systems, and administration across different organizational domains.
TOWARDS UNANTICIPATED RUNTIME SOFTWARE EVOLUTION
For some software systems with high availability requirements, it is not acceptable to shut the system down when a new version of it is to be deployed. An alternative is to use unanticipated runtime software evolution, which means making changes to the software system while it is executing. We propose a classification of unanticipated runtime software changes. Our classification consists of a code change aspect, a state change aspect, an activity aspect and a motivation aspect. The purpose of the classification is to gain a greater understanding of the nature of such changes, and to facilitate an abstract view of them. We also present a case study, where historical changes to an existing software system have been categorized according to the classification. The data from the case study gives an indication that the Java Platform Debugger Architecture, a standard mechanism in Java virtual machines, is a viable technical foundation for runtime software evolution systems.
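The kind of runtime code change classified above can be illustrated in miniature. The thesis targets Java and the Java Platform Debugger Architecture; the following Python analogue is our own construction, not the thesis's mechanism. In the classification's terms it is a code change without a state change: a live object picks up a new method implementation with no restart.

```python
class Service:
    def handle(self, x):
        return x + 1                 # version 1 of the behaviour

svc = Service()                      # "long-running" object with state
v1 = svc.handle(1)                   # system keeps serving requests

def handle_v2(self, x):              # new version, prepared while running
    return x + 2

Service.handle = handle_v2           # the runtime code change itself;
v2 = svc.handle(1)                   # object identity and state untouched
```

A real runtime-evolution system must additionally decide *when* the swap is safe, which is what the thesis's activity aspect and validity discussion concern.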
We also discuss taxonomies of unanticipated runtime software evolution and propose an extension to the concept of validity of runtime changes.
ADAPTIVE QOS-AWARE RESOURCE ALLOCATION FOR WIRELESS NETWORKS
Wireless communication networks are facing a paradigm shift. From providing only voice communication, new generations of wireless networks are designed to provide different types of multimedia communications together with different types of data services, and aim to integrate seamlessly into the larger Internet infrastructure.
Some of these applications and services have strong resource requirements in order to function properly (e.g. videoconferences); others are flexible enough to adapt to whatever is available (e.g. FTP). Also, different services (or different users) might have different importance levels and should be treated accordingly. Providing resource assurance and differentiation is often referred to as quality of service (QoS). Moreover, due to the constrained and fluctuating bandwidth of the wireless link, and user mobility, wireless networks represent a class of distributed systems with a higher degree of unpredictability and dynamic change as compared to their wireline counterparts.
In this thesis we study how novel resource allocation algorithms can improve the behaviour (the offered QoS) of dynamic unpredictable distributed systems, such as a wireless network, during periods of overload. This work concerns both low level enforcement mechanisms and high-level policy dependent optimisation algorithms.
First, we propose and evaluate adaptive admission control algorithms for controlling the load on a processor in a radio network controller. We use feedback mechanisms inspired by automatic control techniques to prevent CPU overload, and policy-dependent deterministic algorithms to provide service differentiation.
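A minimal sketch of such a feedback-based admission controller follows; the gains, setpoint, and load model are illustrative assumptions, not the thesis's algorithm. Each control period the measured CPU load is compared against a utilisation setpoint, and the fraction of admitted requests is adjusted PI-style.

```python
def simulate_admission(setpoint=0.8, kp=0.5, ki=0.1, steps=50):
    """Toy feedback loop for overload control: measure CPU load, then
    adjust the admission fraction with proportional + integral terms so
    the load tracks the setpoint. Returns the load trajectory."""
    admit_fraction, integral = 1.0, 0.0
    offered = 1.0                      # load if every request were admitted
    history = []
    for _ in range(steps):
        load = offered * admit_fraction          # measured utilisation
        error = setpoint - load
        integral += error                        # accumulated error (I term)
        admit_fraction = min(1.0, max(0.0,
            admit_fraction + kp * error + ki * integral))
        history.append(load)
    return history
```

With these gains the loop damps the initial overload and settles at the setpoint; service differentiation would then decide *which* requests fill the admitted fraction.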
Second, we propose and evaluate a QoS-aware bandwidth admission control and allocation algorithm for the radio link in a network cell. The acceptable quality levels for a connection are specified using bandwidth-dependent utility functions, and our scheme aims to maximise system-wide utility. The novelty of our approach is that we take into account bandwidth reallocations, which arise as a consequence of the dynamic environment, and their effects on the accumulated utility of the different connections.
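The utility-maximising allocation idea can be sketched as follows. This is a simplification under stated assumptions: utilities are given as discrete (bandwidth, utility) levels and a greedy marginal-utility rule is used, which maximises total utility only for concave utilities; the thesis's scheme additionally accounts for reallocations and their cost.

```python
def allocate(capacity, utilities):
    """Greedy utility-based allocation. utilities[i] is a list of
    (bandwidth, utility) levels for connection i, increasing in both.
    Everyone starts at the lowest level; then the upgrade with the best
    utility gain per extra bandwidth unit is granted until capacity
    runs out. Returns the bandwidth granted to each connection."""
    n = len(utilities)
    level = [0] * n
    used = sum(u[0][0] for u in utilities)       # lowest levels must fit
    if used > capacity:
        raise ValueError("even the lowest quality levels do not fit")
    while True:
        best, best_ratio, best_db = None, 0.0, 0
        for i, levels in enumerate(utilities):
            if level[i] + 1 < len(levels):
                db = levels[level[i] + 1][0] - levels[level[i]][0]
                du = levels[level[i] + 1][1] - levels[level[i]][1]
                if used + db <= capacity and du / db > best_ratio:
                    best, best_ratio, best_db = i, du / db, db
        if best is None:                         # no affordable upgrade left
            break
        level[best] += 1
        used += best_db
    return [utilities[i][level[i]][0] for i in range(n)]
```

When capacity shrinks (e.g. the radio link degrades), rerunning the allocator triggers exactly the reallocations whose utility effects the thesis studies.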
MANAGEMENT INFORMATION SYSTEMS IN PROCESS-ORIENTED HEALTHCARE ORGANISATIONS
The aim of this thesis work was to develop a management information system model for process-oriented healthcare organisations. The study explores two questions: “What kinds of requirements do healthcare managers place on information systems?” and “How can the work and information systems of healthcare managers and care providers be incorporated into process-oriented healthcare organisations?”
The background to the study was the process orientation of Swedish healthcare organisations. The study was conducted at the paediatric clinic of a county hospital in southern Sweden. Organisational process was defined as “a sequence of work procedures that jointly constitute complete healthcare services”, while a functional unit was the organisational venue responsible for a certain set of work activities.
A qualitative research method, based on a developmental circle, was used. The data was collected from archives, interviews, observations, diaries and focus groups. The material was subsequently analysed in order to categorise, model and develop small-scale theories about information systems.
The study suggested that computer-based management information systems in process-oriented healthcare organisations should: (1) support medical work; (2) integrate clinical and administrative tools; (3) facilitate the ability of the organisation to measure inputs and outcomes.
The research effort concluded that various healthcare managers need the same type of primary data, though presented in different ways. Professional developers and researchers have paid little attention to the manner in which integrated administrative, financial and clinical systems should be configured in order to ensure optimal support for process-oriented healthcare organisations. Thus, it is important to identify the multiple roles that information plays in such an organisation.
FEEDFORWARD CONTROL IN DYNAMIC SITUATIONS
This thesis proposal discusses control of dynamic systems and its relation to time. Although much research has been done concerning control of dynamic systems and decision making, little research exists on the relationship between time and control. Control is defined as the ability to keep a target system/process in a desired state. In this study, properties of time such as fast, slow, overlapping, etc., should be viewed as a relation between the variety of a controlling system and a target system. It is further concluded that humans have great difficulties controlling target systems that have slow-responding processes or "dead" time between action and response. This thesis proposal suggests two different studies to address the problem of human control over slow-responding systems and dead time in organisational control.
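The difficulty with dead time can be demonstrated with a toy simulation, entirely our own construction: the same proportional feedback that behaves well on an immediately responding process overshoots and oscillates when the process responds only after a delay.

```python
from collections import deque

def run(delay, k=0.5, target=1.0, steps=40):
    """Proportional feedback on a simple accumulating process whose
    input takes `delay` steps to act. Returns the state trajectory."""
    s, pending, traj = 0.0, deque([0.0] * delay), []
    for _ in range(steps):
        pending.append(k * (target - s))   # controller acts on what it sees now
        s += pending.popleft()             # ...but the plant feels an old action
        traj.append(s)
    return traj
```

With `delay=0` the state approaches the target monotonically; with `delay=3` the controller keeps commanding corrections it cannot yet observe, so the state overshoots past the target, which mirrors the human-control difficulty the proposal describes.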
SCHEDULING AND OPTIMISATION OF HETEROGENEOUS TIME/EVENT-TRIGGERED DISTRIBUTED EMBEDDED SYSTEMS
Day by day, we are witnessing a considerable increase in the number and range of applications that entail the use of embedded computer systems. This increase is closely followed by growth in the complexity of applications controlled by embedded systems, often involving strict timing requirements, as in the case of safety-critical applications. Efficient design of such complex systems requires powerful and accurate tools that support the designer from the early phases of the design process.
This thesis focuses on the study of real-time distributed embedded systems and, in particular, we concentrate on a certain aspect of their real-time behavior and implementation: the time-triggered (TT) and event-triggered (ET) nature of the applications and of the communication protocols. Over the years, TT and ET systems have usually been considered independently, assuming that an application was entirely ET or TT. However, the growing complexity of current applications has generated the need for intermixing TT and ET functionality. This development has led us to the identification of several interesting problems that are approached in this thesis. First, we focus on the elaboration of a holistic schedulability analysis for heterogeneous TT/ET task sets which interact according to a communication protocol based on both static and dynamic messages. Second, we use the holistic schedulability analysis in order to guide decisions during the design process. We propose a design optimisation heuristic that partitions the task set and the messages into the TT and ET domains, maps and schedules the partitioned functionality, and optimises the communication protocol parameters. Experiments have been carried out in order to measure the efficiency of the proposed techniques.
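Holistic schedulability analyses of this kind build on classic fixed-point response-time iteration. As a hedged illustration of that underlying step (plain independent fixed-priority preemptive tasks, not the heterogeneous TT/ET analysis of the thesis):

```python
def response_time(tasks):
    """Classic response-time analysis. tasks = [(C, T)] with worst-case
    execution time C and period T, sorted highest priority first.
    Iterates R = C + sum(ceil(R / T_j) * C_j) over higher-priority tasks
    until a fixed point is reached. Returns the worst-case response
    times, or None if some task misses its deadline (R > T)."""
    results = []
    for i, (c, t) in enumerate(tasks):
        r = c
        while True:
            # -(-r // tj) is ceiling division for positive integers
            interference = sum(-(-r // tj) * cj for cj, tj in tasks[:i])
            r_new = c + interference
            if r_new > t:
                return None            # unschedulable
            if r_new == r:
                break                  # fixed point found
            r = r_new
        results.append(r)
    return results
```

A holistic analysis chains such iterations across tasks and messages, feeding communication delays back into the task equations until the whole system converges.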
No FiF-a 65
CUSTOMER COMMUNICATION AT A DISTANCE - A STUDY OF THE ROLE OF THE COMMUNICATION MEDIUM IN BUSINESS TRANSACTIONS
Previously, the most common, and often only, way to acquire goods of various kinds was to visit a shop and there select and pay for the products we needed. The ways of acquiring products have changed, however. In recent years it has become more common to shop at a distance. What began with mail order has increasingly been complemented by commerce via the web, and the ways of communicating between companies and customers have multiplied.
Many companies offer their customers several different communication media, such as e-mail, fax and telephone. The starting point for the study has been that both customers and companies choose, consciously or unconsciously, to use different communication media when carrying out business transactions. The main purpose of the thesis is to contribute knowledge that companies can use to make more well-founded decisions about which communication media should be included in their strategies for customer communication. In order to assess how different communication media affect customer and company, the media must be viewed through the eyes of both parties. To illuminate this, a case study has been conducted in which both of these perspectives on different communication media were examined.
What clearly emerges from the study is that all of the communication media studied have both advantages and disadvantages. The factors that mainly influenced both the customer's and the company's choice of communication medium were the communicative act to be performed (e.g. an order or an inquiry) and the time factor: the point in time and the time required to carry it out.
One conclusion that can be drawn from this study is that companies with a heterogeneous customer base, or with a customer base that is not well segmented, should offer their customers several different communication media, so as not to exclude certain customer categories from interacting with the company in a way that suits them.
TOWARDS ASPECTUAL COMPONENT-BASED REAL-TIME SYSTEM DEVELOPMENT
Increasing complexity of real-time systems and demands for enabling their configurability and tailorability are strong motivations for applying new software engineering principles, such as aspect-oriented and component-based software development. The integration of these two techniques into real-time systems development would enable: (i) efficient system configuration from the components in the component library based on the system requirements, (ii) easy tailoring of components and/or a system for a specific application by changing the behavior (code) of the component through aspect weaving, and (iii) enhanced flexibility of real-time and embedded software through the notions of system configurability and component tailorability.
In this thesis we focus on applying aspect-oriented and component-based software development to real-time system development. We propose a novel concept of aspectual component-based real-time system development (ACCORD). ACCORD introduces the following into real-time system development: (i) a design method that assumes the decomposition of the real-time system into a set of components and a set of aspects, (ii) a real-time component model denoted RTCOM that supports aspect weaving while enforcing information hiding, (iii) a method and a tool for performing worst-case execution time analysis of different configurations of aspects and components, and (iv) a new approach to modeling real-time policies as aspects.
We present a case study of the development of a configurable real-time database system, called COMET, using ACCORD principles. In the COMET example we show that applying ACCORD has an impact on real-time system development by providing efficient configuration of the real-time system. Thus, it could be a way to improve reusability and flexibility of real-time software, and to modularize crosscutting concerns.
In connection with the development of ACCORD, we identify criteria that a design method for component-based real-time systems needs to address. The criteria include a well-defined component model for real-time systems, aspect separation, support for system configuration, and analysis of the composed real-time system. Using the identified set of criteria we provide an evaluation of ACCORD. In comparison with other approaches, ACCORD provides a distinct classification of crosscutting concerns in the real-time domain into different types of aspects, and provides a real-time component model that supports weaving of aspects into the code of a component, as well as a tool for temporal analysis of the woven system.
SWEDISH BANKS' ACCOUNTING CHOICES WHEN PROVISIONING FOR EXPECTED LOAN LOSSES - A STUDY AT THE INTRODUCTION OF NEW ACCOUNTING RULES
On 1 January 2002, new rules were introduced in Sweden concerning accounting provisions for expected loan losses in banks. Earlier banking crises had raised the question of whether traditional rules for individual loan-loss provisioning tended to delay the recognition of doubtful receivables and thereby destabilise the financial system. The major change in the new rules is the requirement to assess the need for collective provisioning in cases where something has occurred with a negative impact on the credit quality of a group of loan receivables that are to be valued individually, but where the deteriorated credit quality cannot yet be traced in the behaviour of individual borrowers. The rules thus aim to reduce the time from the occurrence of an event with a negative effect on the credit quality of a group of loans until that event leads to an increased provision for expected loan losses.
The present empirical study of the major Swedish banking groups' financial reporting during 2002, together with interviews with representatives of these banks, shows that the introduction of the new rules on collective provisioning did not bring about the expected increase in the banks' total reserves. Instead, a reallocation took place from previously made individual and general reserves to collective reserves. Considerable disagreement over the interpretation of the rules regarding the meaning of the concept "occurred event", differing views on the need for new rules, and uncertainty among the banks, the Swedish Financial Supervisory Authority (Finansinspektionen) and the external auditors about the meaning of the rules resulted in substantial differences in the Swedish banks' reporting at the end of 2002. The study further shows, in line with the studies presented in the frame of reference, that active accounting choices are made when assessing the provision for expected loan losses, and that these choices can be assumed to be influenced by a number of existing incentives. From the banks' external reporting it is difficult for a reader to understand how the banks determine the collective reserve, which can be assumed to make it harder to "see through" the reporting and to increase the risk that any earnings management will have negative consequences for resource allocation.
DESIGNING FOR USE IN A FUTURE CONTEXT - FIVE CASE STUDIES IN RETROSPECT
This thesis presents a framework – Use Oriented Service Design – for how design can be shaped by people’s future communications needs and behaviour. During the last ten years we have seen the telecom industry go through several significant changes. It has been re-regulated into much more of an open market and, as a result of this, other actors and role-holders have entered the market place and taken up the competition with traditionally monopolistic telecom players. Systems and applications are opening up in order to support interoperability. The convergence between the telecom and IT sector with respect to technology, market and business models is continuing. In this process, we have seen a continuous development which involves a change of focus: from the user interface towards the services and from users towards usage situations. The Use Oriented Service Design approach (UOSD for short) addresses this change.
In UOSD three different design views are explored and analysed: the needs view, the behavioural view, and the technical R & D view.
UOSD was developed with the specific aim of helping companies proactively meet the requirements that a future use context will place on their service offerings. Two gaps are defined and bridged: the needs gap and the product gap. The needs gap defines a set of needs that is not met in a current context of study. Three different needs categories are addressed: needs that users can easily articulate, needs that users can articulate only by indirect means and, finally, needs users can neither foresee nor anticipate. The second gap, the product gap, provides a measure of the enabling power of a company's technical initiatives. Technology as it is applied, or as it readily can be applied to meet a set of defined needs, together with planned R & D initiatives, will predict the company's ability to meet a future use context.
An Integrated Prototyping Environment (IPE) was defined and partly developed to support four modes of operation: collection, analysis, design and evaluation. IPE consists of a collection & analysis module, a sketching & modelling module and a module for prototyping & simulation. It also provides an access port that supports communication with an external development environment.
The thesis reflects the evolution from before the widespread introduction of the web to today's pervasive computing, and is based on work done within both research and industrial settings. In the first part of the thesis, the UOSD framework is presented together with a background and a discussion of some key concepts. Part two of the thesis includes five case studies, of which the first two represent a more traditional human factors work approach and its application in an industrial context. The three remaining studies exemplify the industrial application of UOSD as it is presented in this thesis.
No FiF-a 69
INFORMATION TECHNOLOGY FOR LEARNING AND ACQUIRING OF WORK KNOWLEDGE AMONG PRODUCTION WORKERS
This thesis is about information technology for learning and acquiring work knowledge among production workers in a manufacturing company. The focus is on production or factory workers in workplaces where the work has a routine character. The thesis builds upon a research project aiming at developing an information system for learning and acquiring work knowledge among production workers. The system manages manufacturing-related operational disturbances, and production workers use the system to learn from operational disturbances in such a way that workers do the job grounded in knowledge of prior disturbances. The thesis covers intervention measures aiming at integrating learning and work by developing an information system. The thesis presents and elaborates on the process and outcome of such a development. The empirical work in this thesis is based on an action case study research approach.
The thesis proposes three interrelated aspects concerning the use of information technology for learning and acquiring work knowledge among production workers: (a) the work practice, (b) learning and acquiring of work knowledge, and (c) information systems.
These aspects must be considered as a coherent whole when seeking to integrate learning and work (i.e. to create a learning environment). The work practice sets the scope for workplace learning (to what extent learning takes place at work). The scope for learning is related to, for example, machinery and equipment, management and the organizing principle of work. Learning and acquiring of work knowledge concerns the ways in which workers learn about the job. Information systems must be in alignment with the practice and with the ways workers learn and acquire work knowledge.
TOWARDS FINE-GRAINED BINARY COMPOSITION THROUGH LINK TIME WEAVING
This thesis presents ideas for a system composing software components in binary form. Binary components are important since most off-the-shelf components on a mature component market can be expected to be delivered in binary form only. The focus in this work is to enable efficient composition and bridging of architectural mismatch between such components.
The central result is a model for describing binary components and their interactions. This model supports invasive composition, i.e., the code of the components themselves can be transformed for more efficient adaptation. The model is also designed to be independent of the source and binary language of the individual components. It supports unforeseen composition, by finding interaction points between software objects and making them available for modification. Therefore, it can be used to insert variability in places where it was not originally intended.
In addition to the model, an architecture for a composition system is presented. In this architecture, language-dependent parts of the composition process are separated into specific modules. Thus, the central parts of the architecture become language-independent, allowing complex composition operators to be defined and reused for a multitude of languages.
INCREASING THE AUTOMATION OF RADIO NETWORK CONTROL
The efficient utilization of radio frequencies is becoming more important with new technology, new telecom services and a rapidly expanding market. Future systems for radio network management are therefore expected to contain more automation than today’s systems.
This thesis describes a case study performed at a large European network operator. The first purpose of this study was to identify and describe elements in the current environment of telecommunication radio network management, in order to draw conclusions about the impact of a higher degree of automation in future software systems for radio network management.
The second purpose was to identify specific issues for further analysis and development.
Based on a case study comprising eight full-day observations and eleven interviews with the primary user category and their colleagues on other teams, this thesis:
- Describes the work environment by presenting findings regarding task performance and the use of knowledge, qualities of current tools and the expected qualities of new technology.
- Concludes, based on the empirical findings, that full automation is not feasible at this time, and that a supervisory control system including both a human operator and a machine is therefore the best solution.
- Describes the design considerations for such a supervisory control system for this domain.
- Introduces, based on the finding that users allocate function in order to learn about a tool, the concept of adaption through praxis as a way of introducing a supervisory control system which includes automation.
- In conclusion, discusses research issues for future studies in this area.
SECURITY AND EFFICIENCY TRADEOFFS IN MULTICAST GROUP KEY MANAGEMENT
An ever-increasing number of Internet applications, such as content and software distribution, distance learning, multimedia streaming, teleconferencing, and collaborative workspaces, need efficient and secure multicast communication. However, efficiency and security are competing requirements and balancing them to meet the application needs is still an open issue.
In this thesis we study the efficiency versus security requirements tradeoffs in group key management for multicast communication. The efficiency is in terms of minimizing the group rekeying cost and the key storage cost, while security is in terms of achieving backward secrecy, forward secrecy, and resistance to collusion.
We propose two new group key management schemes that balance efficiency against resistance to collusion. The first scheme is a flexible category-based scheme and addresses applications where a user categorization can be done based on the users' accessibility to the multicast channel. As shown by the evaluation, this scheme has a low rekeying cost and a low key storage cost for the controller but, in certain cases, requires a high key storage cost for the users. In an extension to the basic scheme we alleviate this latter problem.
For applications where the user categorization is not feasible, we devise a cluster-based group key management scheme. In this scheme the resistance to collusion is measured by an integer parameter. The communication and the storage requirements for the controller depend on this parameter too, and they decrease as the resistance to collusion is relaxed. The results of the analytical evaluation show that our scheme allows a fine-tuning of security versus efficiency requirements at runtime, which is not possible with the previous group key management schemes.
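For context, the rekeying-cost side of this tradeoff can be made concrete with the classic logical key hierarchy (LKH) baseline, which is not one of the thesis's schemes: arranging n members in a balanced key tree of degree d reduces the encrypted messages needed to rekey after one member leaves from roughly n - 1 (flat scheme, every remaining member rekeyed individually) to roughly d*log_d(n) - 1.

```python
def lkh_rekey_messages(n, d=2):
    """Encrypted messages needed to rekey after a single leave in a
    balanced logical key hierarchy of degree d over n members: every key
    on the leaver's path to the root is replaced, and each replacement
    is distributed to the d subtrees below it (minus the leaver)."""
    if n <= 1:
        return 0
    depth, size = 0, 1
    while size < n:            # height of the balanced degree-d key tree
        size *= d
        depth += 1
    return d * depth - 1
```

For n = 1024 and d = 2 this is 19 messages instead of 1023, which is the kind of gap that category- and cluster-based schemes then trade against storage and collusion resistance.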
No FiF-a 71
EFFEKTANALYS AV IT-SYSTEMS HANDLINGSUTRYMME
The purpose of IT-system design is to change or support users' actions by making certain actions possible to perform and others impossible. This is done by assigning the system certain properties in the development process that are to enable and constrain certain types of action. The result is a designed action space. The control that designers have had over their design is lost once the application comes into use. Effects then arise in the user's use of the system that the designers cannot control. One effect of use is the user's experienced action space and the consequences of that experience. A designer is thus partly responsible for the possibilities and constraints that have been implemented as functions in the system. An IT system can be seen as a deputy that communicates to the users what the designer intended, and the users can only communicate with the designer's deputy, not with the designer. Effects of the IT system's design can therefore be identified in the user's experience of the IT system. But how does one go about studying the effects of an IT system's design? This thesis presents the development of an approach (effect analysis), with accompanying analysis models (the D.EU.PS. model and phenomenon analysis) of IT-system use, that can be used to study the effects of a designed action space. This is done by focusing on users' experiences of the IT system. The work is carried out in a pilot study and two subsequent case studies. The D.EU.PS. model is used to classify IT-system functionality and offers practical support for assessing specific properties of an IT system. It also contributes an understanding of what designers intend and what users experience. The concept of action space is made concrete in the thesis by characterising its properties.
By properties I mean such things as influence the use of the IT system in its action context and the experience of the IT system's action space.
EXPERIMENTS IN INDIRECT FAULT INJECTION WITH OPEN SOURCE AND INDUSTRIAL SOFTWARE
Software fault injection is a technique in which faults are injected into a program and the response of the program is observed. Fault injection can be used to measure the robustness of the program as well as to find faults in the program, and thereby indirectly contributes to increased robustness. The idea behind software fault injection is that the better the system handles the faults, the more robust the system is. There are different ways of injecting faults, for example, by changing a variable value to a random value or by changing the source code to mimic programmer errors. The thesis presents an overview of fault injection in hardware and software. The thesis deals with a special case of fault injection, i.e., indirect fault injection. This means that the faults are injected into one module and the response is observed in another module that communicates with the first one. The thesis presents two experiments designed to measure the effect of the fault model used when faults are injected with the indirect fault injection method. The first experiment was conducted on open source software. The result from the experiment was not entirely conclusive, but there are indications that the fault model does matter; this needs, however, to be examined further. Therefore, a second experiment is designed and presented. The second experiment is conducted on larger, industrial software. The goal of both experiments is to find out whether or not the results of fault injection are affected by how the injected faults are generated. The second experiment shows the feasibility of using fault injection in industrial-strength software. The thesis concludes with the proposal for a PhD thesis on a suite of different experiments.
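The indirect fault injection idea can be sketched as follows; the modules, the fault model (replace the producer's output with a random, possibly ill-typed value), and the trial count are illustrative assumptions, not the thesis's experimental setup. Faults are injected into one module's output, and survival is observed in the module that consumes it.

```python
import random

def robust_consumer(x):
    """Module under observation: guards against bad input from the
    producing module and degrades gracefully."""
    if not isinstance(x, (int, float)) or x < 0:
        return 0.0                      # graceful handling of corrupted input
    return x ** 0.5

def indirect_fault_injection(consumer, trials=100, seed=1):
    """Replace the producer's output with an injected fault and observe
    whether the *other* module survives. Returns the survival ratio."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        faulty = rng.choice([rng.uniform(-1e9, 1e9), None, "garbage"])
        try:
            consumer(faulty)            # response observed in the consumer
            survived += 1
        except Exception:
            pass                        # crash: fault not contained
    return survived / trials
```

Comparing survival ratios under different fault generators is precisely the "does the fault model matter" question the two experiments address.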
TOWARDS FORMAL VERIFICATION IN A COMPONENT-BASED REUSE METHODOLOGY
Embedded systems are becoming increasingly common in our everyday lives. As technology progresses, these systems become more and more complex. Designers handle this increasing complexity by reusing existing components (Intellectual Property blocks). At the same time, the systems must still fulfill strict requirements on reliability and correctness.
This thesis proposes a formal verification methodology which smoothly integrates with component-based system-level design using a divide-and-conquer approach. The methodology assumes that the system consists of several reusable components. Each of these components is already formally verified by its designers and is considered correct, given that the environment satisfies certain properties imposed by the component. What remains to be verified is the glue logic inserted between the components. Each such glue logic is verified one at a time using model checking techniques.
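The glue-logic verification step rests on model checking. As a hedged sketch of the underlying idea only (explicit-state reachability over a finite model with an invariant property, a simplification of the thesis's techniques):

```python
def check_invariant(initial, step, invariant, max_states=10_000):
    """Explore every state reachable from `initial` via the successor
    function `step`, checking `invariant` in each. Returns (True, None)
    if the invariant holds everywhere, or (False, witness_state) with
    the first violating state found."""
    seen, frontier = set(), [initial]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        if not invariant(s):
            return False, s                   # counterexample state
        if len(seen) > max_states:
            raise RuntimeError("state space too large for this sketch")
        frontier.extend(step(s))              # enqueue successor states
    return True, None
```

In the methodology, the model would be the glue logic composed with the interface properties the neighbouring components impose, and the checked properties would typically be temporal rather than simple invariants.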
The verification methodology as well as the underlying theoretical framework and algorithms are presented in the thesis. Experimental results have shown the efficiency of the proposed methodology and demonstrated that it is feasible to apply it to real-life examples.
No FiF-a 73
ESTABLISHING AND MAINTAINING IMPROVEMENT ACTIVITIES - THE NEED FOR COORDINATION AND INTERACTION WHEN CHANGING SYSTEMS DEVELOPMENT PRACTICES
Det har sedan länge konstaterats att det är komplicerat och problematiskt att utveckla informationssystem. Det har exempelvis visat sig att de informationssystem som utvecklats ibland inte överensstämmer med de mål som den användande organisationen har. Informationssystemen har därtill en tendens av att inte bli färdiga i tid eller inom budget. Informationssystemsutveckling kan således betecknas som en komplex verksamhet vilken återkommande måste förändras och utvecklas för att kunna fungera framgångsrikt.
Att medvetet arbeta med att förbättra systemutvecklingsverksamheten har sedan länge varit ett fenomen som fokuserats i forskning. Resultatet av forskningen har inneburit att metoder, modeller och strategier för hur förbättringsarbete skall bedrivas har utvecklats. Ett tillvägagångssätt för att genomföra dessa förbättringsintentioner är att organisera arbetet i en temporär förbättringsverksamhet och därtill frigöra denna verksamhet från den ordinarie systemutvecklingsverksamheten. Härigenom skapas ett förbättringsprojekt som genomförs på en separerad arena. Projektet har som syfte att utarbeta förbättringar som sedan skall implementeras i systemutvecklingsverksamheten. De problem som kan uppstå vid denna organisering innebär att projektet kan hamna i ett »vakuum« vilket innebär att förbättringsintentionerna ej får utväxling i form av en förbättrad systemutvecklingsverksamhet.
In this thesis I have studied project-organized improvement work in the light of this problem. The overall aim of the study has been to develop advice on how a successful project-organized improvement practice is established and sustained. To reach this result, I built an understanding of how project-organized improvement work is carried out by following an improvement programme at a small IT company for three years. I have been able to map the problems and strengths that arise during this kind of improvement work. I have used this empirical material to test and further develop a practice-theoretically grounded vision of how successful project-based improvement work should be established and sustained. The primary result of the research is a knowledge contribution in the form of advice that highlights the need for, and supports the possibility of, interaction in and coordination of project-organized improvement work in systems development contexts.
DESIGN AND DEVELOPMENT OF RECOMMENDER DIALOGUE SYSTEMS
The work in this thesis addresses the design and development of multimodal recommender dialogue systems for the home context-of-use. In the design part, two investigations of multimodal recommendation dialogue interaction in the home context are reported. The first study gives implications for the design of dialogue system interaction, including personalization and a three-entity multimodal interaction model accommodating dialogue feedback, in order to make the interaction more efficient and successful. In the second study, a dialogue corpus of movie recommendation dialogues is collected and analyzed, providing a characterization of such dialogues. We identify three initiative types that need to be addressed in a recommender dialogue system implementation: system-driven preference requests, user-driven information requests, and preference volunteering. Through the process of dialogue distilling, a dialogue control strategy covering the system-driven preference requests from the corpus is arrived at.
In the development part, an application-driven development process is adopted where re-usable generic components evolve through the iterative and incremental refinement of dialogue systems. The Phase Graph Processor (PGP) design pattern is one such evolved component, suggesting a phase-based control of dialogue systems. PGP is a generic and flexible micro-architecture accommodating the frequent change of requirements inherent in agile, evolutionary system development. As PGP has been used in a series of previous information-providing dialogue system projects, a standard phase graph has been established that covers the second initiative type: user-driven information requests. The phase graph is incrementally refined to provide user preference modeling, thus addressing the third initiative type, and multimodality as indicated by the user studies. In the iterative development of the multimodal recommender dialogue system MADFILM, the phase graph is coupled with the dialogue control strategy in order to cater for the seamless integration of the three initiative types.
A STUDY OF CALL CENTRE LOCATIONS IN A SWEDISH RURAL REGION
The business economy is undergoing structural changes as we move towards more information based businesses. Most studies of industrial location have, however, focused on manufacturing activities, and there is a lack of knowledge about the determinants of the location of information based and geographically independent activities. Traditional location theories have to be complemented with factors that take these types of businesses into consideration. A focus on information based and geographically independent organisations, such as call centres, has the potential to fuel research into industrial location.
The general aim of this thesis is, from a business perspective, to explore and identify a number of factors that are of importance for call centre locations in a specific region. More specifically, the thesis deals with the fact that the development and use of information and communication technology, organisational prerequisites in the form of changed organisational structures and management of organisations, and also more individually related aspects nowadays seem to play an important role both for how businesses are organised and for where they are located. The thesis is mainly based on a case study of a Swedish rural region that has been successful in its efforts to attract and develop call centre activities.
First, it is shown that the call centre concept is full of nuance, and researchers as well as practitioners use the concept differently. In order to enhance and balance discussions about call centres, and to facilitate the comparison of research findings, ten characteristics that are regarded as useful for discriminating among call centre activities are presented. Second, the importance of distinguishing location choices for information based businesses from location choices for more traditional service and manufacturing businesses is an important finding, and examples that support this are given. A third finding is that even though call centres are often regarded as geographically independent, the proximity that cluster formations can offer seems to be of importance also for this type of business. It is, however, more a matter of horizontal integration than of the vertical integration often present in manufacturing businesses. Finally, call centres seem to offer opportunities for regions and localities that wish to create work opportunities and achieve positive regional development, and this applies especially to rural or peripheral areas. However, in order to be successful there are many interacting factors that have to be considered and dealt with, and it is important to note that it often takes time to build up a positive climate for call centre businesses in a region, i.e. different regional actors can and have to do much more than just call for call centres.
No FiF-a 74
DECIDING ON USING APPLICATION SERVICE PROVISION IN SMES
The use of external providers for the provision of information and communication technology (ICT) in small and medium-sized enterprises (SMEs) is expected to increase. At the end of the 1990s the concepts of Application Service Provision (ASP) and Application Service Providers (ASPs) were introduced. This is described as one way for SMEs to provide themselves with software applications. However, the concept has not taken off. This study examines what influences the decision to use or not to use ASP. The research question is: How do SMEs decide on using an Application Service Provider for the provision and maintenance of ICT? To answer the question, decision-making processes in SMEs have been investigated in an interpretive case study. The study consisted mainly of semi-structured interviews conducted with three different ASPs and customers related to them, together with a questionnaire to the customers of one of the service providers. The analysis was then made as a within-case analysis, consisting of detailed write-ups for each site. The interviews and a literature survey of the ASP concept, and of theories that have been used to explain the ASP decision-making process, generated seven constructs. From the presented and discussed theories, models and constructs, seven propositions were formulated. These propositions were used for the analysis and presentation of the findings in the study. The main conclusion of the study is the disparity of views on what affects the adoption or non-adoption of the ASP concept. The service providers describe the decision as a wish from the prospective customer to decrease costs and increase the predictability of costs. The customers, on the other hand, describe it as a wish to increase accessibility; the cost perspective is found to be secondary.
LANGUAGE MODELLING AND ERROR HANDLING IN SPOKEN DIALOGUE SYSTEMS
Language modelling for speech recognition is an area of research currently divided between two main approaches: stochastic and grammar-based, each preferred for its respective strengths and weaknesses. At the same time, dialogue systems researchers are becoming aware of the potential value of handling recognition failures better to improve the user experience. This work aims to bring these two areas of interest together by investigating how language modelling approaches can be used to improve the way in which speech recognition errors are handled.
Three practical ways of combining approaches to language modelling in spoken dialogue systems are presented. Firstly, it is demonstrated that a stochastic language model-based recogniser can be used to detect out-of-vocabulary material in a grammar-based system with high precision. Ways in which the technique could be used are discussed. Then, two approaches to providing users with recognition failure assistance are described. In the first, poor recognition results are re-recognised with a stochastic language model, and a decision tree classifier is then used to select a context-specific help message. The approach thereby improves on traditional approaches, where only general help is provided on recognition failure. A user study shows that the approach is well-received. The second differs from the first in its use of layered recognisers and a modified dialogue, and uses Latent Semantic Analysis for the classification part of the task. Decision-tree classification outperforms Latent Semantic Analysis in the work presented here, though it is suggested that there is the potential to improve LSA performance such that it may ultimately prove superior.
RULE EXTRACTION - THE KEY TO ACCURATE AND COMPREHENSIBLE DATA MINING MODELS
The primary goal of predictive modeling is to achieve high accuracy when the model is applied to novel data. For certain problems this requires the use of complex techniques, such as neural networks, resulting in opaque models that are hard or impossible to interpret. For some domains this is unacceptable, since the model needs to be comprehensible. To achieve comprehensibility, accuracy is often sacrificed by using simpler models; a tradeoff termed the accuracy vs. comprehensibility tradeoff. In this thesis the tradeoff is studied in the context of data mining and decision support. The suggested solution is to transform high-accuracy opaque models into comprehensible models by applying rule extraction. This approach is contrasted with standard methods generating transparent models directly from the data set. Using a number of case studies, it is shown that the application of rule extraction generally results in higher accuracy and comprehensibility.
Although several rule extraction algorithms exist and there are well-established evaluation criteria (i.e. accuracy, comprehensibility, fidelity, scalability and generality), no existing algorithm meets all criteria. To counter this, a novel algorithm for rule extraction, named G-REX (Genetic Rule EXtraction), is suggested. G-REX uses an extraction strategy based on genetic programming, where the fitness function directly measures the quality of the extracted model in terms of accuracy, fidelity and comprehensibility, thus making it possible to explicitly control the accuracy vs. comprehensibility tradeoff. To evaluate G-REX, experience is drawn from several case studies where G-REX has been used to extract rules from different opaque representations, e.g. neural networks, ensembles and boosted decision trees. The case
studies fall into two categories: an extensively studied data mining problem in the marketing domain, and several well-known benchmark problems. The results show that G-REX, with its high flexibility regarding the choice of representation language and inherent ability to handle the accuracy vs. comprehensibility tradeoff, meets the proposed criteria well.
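The abstract states that G-REX's fitness function directly measures accuracy, fidelity and comprehensibility. A rough sketch of what such a weighted fitness might look like follows; the weights, parameter names and the use of rule count as a comprehensibility penalty are our assumptions, not G-REX's actual definition.

```python
# Illustrative fitness for a candidate extracted rule set (not G-REX itself).
# accuracy: agreement with the true labels; fidelity: agreement with the
# opaque model's predictions; comprehensibility: penalty on rule-set size.
def fitness(rule_preds, true_labels, opaque_preds, rule_count,
            w_acc=1.0, w_fid=1.0, w_size=0.1):
    n = len(true_labels)
    accuracy = sum(r == t for r, t in zip(rule_preds, true_labels)) / n
    fidelity = sum(r == o for r, o in zip(rule_preds, opaque_preds)) / n
    return w_acc * accuracy + w_fid * fidelity - w_size * rule_count
```

Raising the size weight trades accuracy for smaller, more comprehensible rule sets, which is the kind of explicit control over the tradeoff the abstract refers to.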
COMPUTATIONAL MODELS OF SOME COMMUNICATIVE HEAD MOVEMENTS
Speech communication normally involves not only speech but also face and head movements. In the present investigation, the visual correlates of focal accent in read speech and of confirmation in Swedish are studied, and a computational model of the movements is hypothesized. Focal accent signals “new” information in speech and is realized by means of the fundamental frequency manifestation and prolonged segment durations. The head movements are recorded with the Qualisys MacReflex motion tracking system simultaneously with the speech signal. The results show that head movements co-occurring with the signalling of focal accent in the speech signal reach their extreme values at the primary stressed syllable of the word carrying focal accent, independent of the word accent type in Swedish. It should be noted that focal accent in Swedish has its fundamental frequency manifestation at the secondary stressed vowel in words carrying word accent II. The nod signalling confirmation is realized as a damped oscillation of the head. The head movements in both cases may be simulated by a second-order linear system, and the different patterns are two of the three possible solutions to the equations.
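The closing claim can be illustrated with a minimal simulation: a second-order linear system x'' + 2ζωx' + ω²x = 0 has three solution regimes (underdamped, critically damped, overdamped), and a damped nodding oscillation corresponds to the underdamped case. The parameter values below are illustrative, not taken from the thesis.

```python
# A minimal sketch of the hypothesized model: a damped second-order linear
# system  x'' + 2*zeta*omega*x' + omega**2 * x = 0,  integrated with
# semi-implicit Euler. Parameter values are illustrative, not from the thesis.
def simulate(zeta, omega, x0=1.0, v0=0.0, dt=0.001, steps=5000):
    x, v = x0, v0
    trace = [x]
    for _ in range(steps):
        a = -2 * zeta * omega * v - omega ** 2 * x  # damping + restoring terms
        v += a * dt
        x += v * dt
        trace.append(x)
    return trace

nod = simulate(zeta=0.1, omega=2 * 3.14159)   # underdamped: damped nod-like oscillation
slow = simulate(zeta=2.0, omega=2 * 3.14159)  # overdamped: returns without oscillating
```

The underdamped solution oscillates while decaying, matching the damped head oscillation described for confirmation nods; the overdamped and critically damped cases return to rest without oscillating.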
INTRA-FAMILY INFORMATION FLOW AND PROSPECTS FOR COMMUNICATION SYSTEMS
Today, information and communication technology is used not only for professional tasks but also for private ones. In this thesis, the use of such technology for managing family information flow is investigated. Busy family life today, with school, work and leisure activities, makes coordination and synchronisation a burden. The ways in which cell-phones and the Internet support these tasks are investigated, together with proposals for future technology.
The problems of coordination and synchronisation were found to be managed by a bulletin board placed at a central point in the home. Besides the bulletin board, we found that calendars, shopping lists, and to-do lists are important. The families we investigated in field studies were all intensive users of both the Internet and cell-phones.
Since the bulletin board played such an important role in family life, we equipped families with cameras to track what happened at those places with the help of photo diaries. The field studies revealed that each family had its own unconscious procedure for managing the flow of notes on the bulletin board.
With technology, new problems will emerge. We investigated how notes on typical family bulletin boards may be visualised on a computer screen, and compared click-expand, zoom-pan and bifocal interfaces. The click-expand interface was substantially faster for browsing, and also easier to use.
An advantage of information and communication technology is that it may provide possibilities for multiple interfaces to the same information, not only from different terminals but also from different places. At home, a digital refrigerator door or a mobile web tablet; at work or at school, a conventional computer; when on the move, a cell-phone or a PDA. A system architecture for these possibilities is presented.
ON THE VALUE OF CUSTOMER LOYALTY PROGRAMS - A STUDY OF POINT PROGRAMS AND SWITCHING COSTS
The increased prevalence of customer loyalty programs has been followed by an increased academic interest in such schemes. This is partly because the Internet has made competition ‘one click away’. It is also because information technology has made it more economical for firms to collect customer information by way of loyalty programs. Point programs are a type of loyalty program in which firms reward customers for repeat purchases or the sum spent, in order to impose switching costs on them. Researchers have paid attention to how to measure the value of such schemes, and previous research shows disagreement about what determines the value of point programs.
The main aim of this thesis is to explore dimensions of point programs and analyse these dimensions with regard to loyalty program value. A particular aim is to discuss and define the concepts of customer loyalty and customer loyalty program. A better understanding of these concepts is necessary in order to better understand what determines loyalty program value.
Six dimensions of point programs are explored: value of choice, reward valuation, alliances, consumer arbitrage, non-linearity and the principal-agent relation. A theoretical model of loyalty program value to the firm is developed. In the model, loyalty program value is a function of the following parameters: the customer’s subjective value of rewards, the customer’s subjective best alternative forgone, the firm’s marginal cost of producing rewards and the firm’s ability to exploit the customer switching costs induced.
The most interesting findings from analysing the dimensions are: a) researchers seem not to have distinguished between the non-linearity of the point function and the non-linearity of the reward function of point programs; I suggest that the non-linearity of the reward function does not necessarily have to depend on the non-linearity of the point function; b) previous research points out that customers’ cash value of rewards depends on the number of reward alternatives (value of choice); I also suggest that how multidimensional and demand-inelastic each reward is affects the customer’s value of choice in point programs; c) I also propose that principal-agent relations and consumer arbitrage may affect a firm’s ability to exploit customer switching costs by raising prices. Generally, raising prices has been suggested as the firm’s strategy for exploiting customer switching costs. I propose that firms may not want to charge loyal customers higher prices and that one important value of
customer lock-in might be the reduced uncertainty of future cash flows.
No FiF-a 77
DESIGN WORK IN DIALOGUE - CHARACTERIZING THE INTERACTION BETWEEN USERS AND DEVELOPERS IN A SYSTEMS DEVELOPMENT PROCESS
Developing IT systems is not merely a matter of technology-oriented development; it means transforming and changing the business communication of actors within a professional role. Developing and introducing an IT system into an organization changes the organization itself. This has led modern systems development to include, in various ways, the active participation of the system's future users. Research on user participation in systems development has also been conducted, giving rise among other things to method development. Relatively little research, however, has focused on how users and developers actually interact during systems development work.
My thesis is an inductive study of the dialogue between the actors in the systems development process; more specifically, my analysis seeks to identify factors in the interaction that have a positive influence on the design work. The material for studying this interaction between systems developers and users in IT design consists of a number of video-recorded meetings between the actors in a systems development project for patient record and case management in elderly care. The project was run on an action research basis, in which researchers from Linköping University collaborated with an elderly care unit in a municipality in central Sweden. In this IT development, the researchers acted in the role of systems developers. The participants from the municipal care unit, care assistants and managers, represented the users of the system.
I call the result of my analysis of a number of conversation sequences from these meetings design-promoting characteristics of the dialogue between the actors. Some of the characteristics uncovered by the analysis are: exploiting and synthesizing different professional skills; searching for solutions by asking questions; using the language of the work practice; reflecting on the basis of work experience; developing mutual understanding; and refocusing on the theme of the discourse. I have then related these results to principles for dialogues.
CONTRIBUTIONS TO MANAGEMENT AND VALIDATION OF NON-FUNCTIONAL REQUIREMENTS
Non-functional requirements (NFRs) are essential when considering software quality, since they express the required quality of the intended software. It is generally hard to elicit NFRs and to specify them in measurable terms, and most software development methods applied today focus on functional requirements (FRs). Moreover, NFRs are relatively unexplored in the literature, and knowledge regarding their real-world treatment is particularly rare.
A case study and a literature survey were performed to provide this kind of knowledge, which also served as a problem inventory to outline future research activities. An interview series with practitioners at two large software development organizations was carried out. As a major result, it was established that too few NFRs are considered in development and that they are stated in vague terms. Moreover, it was observed that organizational power structures strongly influence the quality of the forthcoming software, and that processes need to be well suited for dealing with NFRs.
Among several options, it was chosen to explore how processes can be made better suited to handling NFRs by adding information about actual feature use. A case study was performed in which the feature use of an interactive product management tool was measured indirectly from the log files of an industrial user, and the approach was also applied to the problem of requirements selection. The results showed that the idea is feasible and that quality aspects can be effectively addressed by considering actual feature use.
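Measuring feature use indirectly from log files can be sketched roughly as follows; the log format and feature names are invented for illustration and are not taken from the thesis or its tool.

```python
import re
from collections import Counter

# Illustrative only: count feature invocations in application log lines to
# build a usage profile. The "INVOKE <feature>" format is an assumption.
LOG_LINES = [
    "2004-03-01 10:02 INVOKE export_report",
    "2004-03-01 10:05 INVOKE export_report",
    "2004-03-01 10:07 INVOKE edit_item",
    "2004-03-01 10:09 WARN   disk low",      # non-feature lines are ignored
]

def feature_use(lines):
    counts = Counter()
    for line in lines:
        m = re.search(r"INVOKE (\w+)", line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

The resulting usage profile (here `export_report` used twice, `edit_item` once) could then inform which features, and which quality requirements on them, deserve priority in requirements selection.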
An agenda for continued research comprises: further studies in system usage data acquisition, modelling of NFRs, and comparing means for predicting feasibility of NFRs. One strong candidate is weaving high-level requirement models with models of available components.
LARGE VOCABULARY SHORTHAND WRITING ON STYLUS KEYBOARD
We present a novel text entry method for pen-based computers. We view the trace obtained by connecting the letter keys comprising a word on a stylus keyboard as a pattern. This pattern can be matched against a user’s pen trace, invariant of scale and translation. The patterns hence become an efficient form of shorthand gestures, allowing users to perform them with eyes-free, open-loop motor actions. This can result in higher text entry speed than optimized stylus keyboards, currently the fastest known text entry technique for pen computers. The approach supports a gradual and seamless skill transition from novices tracing the letter keys to experts articulating the shorthand gestures. Hence the ratio between the learning effort and the efficiency of using the system can be said to be optimized at any given point in time
in the user’s experience with the technique. This thesis describes the rationale, architecture and algorithms behind a stylus keyboard augmented with a high-capacity gesture recognition engine. We also report results from an Expanding Rehearsal Interval (ERI) experiment which indicate that users can acquire about 15 shorthand gestures per 45-minute training session. Empirical expert speed estimates of the technique indicate text entry speeds much higher than any previously known pen-based text entry system for mobile computers.
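Scale- and translation-invariant matching of a pen trace against an ideal key-to-key pattern can be sketched as follows. This is our reading of the idea, not the thesis implementation, and it assumes both traces have already been resampled to the same number of points.

```python
import math

# Sketch of scale- and translation-invariant trace matching: centre each
# trace on its centroid, scale to unit size, then compare point by point.
def normalize(trace):
    n = len(trace)
    cx = sum(x for x, _ in trace) / n
    cy = sum(y for _, y in trace) / n
    centred = [(x - cx, y - cy) for x, y in trace]
    scale = max(math.hypot(x, y) for x, y in centred) or 1.0
    return [(x / scale, y / scale) for x, y in centred]

def trace_distance(a, b):
    return sum(math.dist(p, q) for p, q in zip(normalize(a), normalize(b))) / len(a)

pattern = [(0, 0), (1, 0), (1, 1)]     # ideal key-to-key trace for a word
user = [(10, 10), (12, 10), (12, 12)]  # same shape, translated and scaled
```

A small `trace_distance` means the pen trace matches the word's pattern regardless of where and how large it was drawn; a recogniser in this style would pick the vocabulary word whose pattern minimises the distance.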
SAFETY-ORIENTED COMMUNICATION IN MOBILE NETWORKS FOR VEHICLES
Accident statistics indicate that every year a large number of casualties and extensive property losses are recorded due to traffic accidents. Consequently, efforts are directed towards developing passive and active safety systems that help reduce the severity of crashes or prevent vehicles from colliding with each other. Within the development of these systems, technologies such as sensor systems, computer vision and vehicular communication are considered important. Vehicular communication is defined as the exchange of data between vehicles, and is considered a key technology for traffic safety due to its ability to provide vehicles with information that cannot be acquired by other means (e.g. radar and video systems). However, due to the current early stage in the development of in-vehicle safety systems, the applicability of communication for improving traffic safety is still an open issue. Furthermore, due to the specificity of the environment in which
vehicles travel, the design of communication systems that provide an efficient exchange of safety-related data between vehicles poses a series of major technical challenges.
In this thesis we focus on the development of a communication system that provides support for in-vehicle active safety systems such as collision warning and collision avoidance.
We begin by studying the applicability of communication for supporting the development of effective active safety systems. Within our study, we investigate different safety aspects of traffic situations. For performing such investigations we develop ECAM, a temporal reasoning system for modeling and analyzing accident scenarios. This system gives us the possibility of analyzing relations between events that occur in traffic and their possible consequences. We use ECAM for analyzing the degree of accident prevention that can be achieved by applying crash countermeasures based on communication in specific traffic scenarios.
By acknowledging the potential of communication for traffic safety, we further focus in the thesis on the design of a safety-oriented vehicular communication system. We propose a new solution for vehicular communication in the form of a distributed communication protocol that allows the vehicles to organize the network in an ad-hoc, decentralized manner. For disseminating information, we develop an anonymous context-based broadcast protocol that requires the receivers to determine whether they are the intended destination of sent messages based on knowledge about their momentary situation in traffic. We further design a vehicular communication platform that provides an implementation framework for the communication system, and integrates it within a vehicle. Investigations of the communication performance, evaluating metrics such as transmission delay, send errors, packet collisions and information filtering, indicate that the proposed vehicular communication system is able to provide a reliable and timely exchange of data between vehicles.
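The anonymous context-based broadcast idea, where a message names no addresses and each receiver decides from its own momentary situation whether the message concerns it, can be sketched as follows. The message fields and the relevance predicate are illustrative assumptions, not the thesis protocol.

```python
# Sketch of receiver-side, context-based filtering: each vehicle applies the
# relevance predicate to its own state on reception of a broadcast message.
def is_intended(msg, state):
    dx, dy = state["x"] - msg["x"], state["y"] - msg["y"]
    within = (dx * dx + dy * dy) ** 0.5 <= msg["radius"]
    return within and state["road"] == msg["road"]

warning = {"x": 0.0, "y": 0.0, "radius": 300.0, "road": "E4"}  # e.g. a crash site
near = {"x": 100.0, "y": 50.0, "road": "E4"}   # within range: processes the warning
far = {"x": 900.0, "y": 0.0, "road": "E4"}     # out of range: discards it
```

Because the sender never needs to know who the receivers are, the scheme suits an ad-hoc, decentralized network where membership changes continuously.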
INTERACTING WITH COMMAND AND CONTROL SYSTEMS: TOOLS FOR OPERATORS AND DESIGNERS
Command and control is central in all distributed tactical operations, such as rescue operations and military operations. It takes place in a complex system of humans and artefacts, striving to reach common goals. The command and control complexity springs from several sources, including dynamism, uncertainty, risk, time pressure, feedback delays and interdependencies. Stemming from this complexity, the thesis approaches two important and related problem areas in command and control research: on a general level, the problems facing the command and control operators and the problems facing the designers in the associated systems development process.
We investigate the specific problem of operators losing sight of the overall perspective when working with large maps in geographical information systems with limited screen area. To approach this problem, we propose high-precision input techniques that reduce the need for zooming and panning in touch-screen systems, and informative unit representations that make better use of the screen area available. The results from an experimental study show that the proposed input techniques are as fast and accurate as state-of-the-art techniques without the need to resort to zooming. Furthermore, results from a prototype design show that the proposed unit representation reduces on-screen clutter and makes use of off-screen units to better exploit the valuable screen area.
Developing command and control systems is a complex task with several pitfalls, including getting stuck in exhaustive analyses and overrated reliance on rational methods. In this thesis, we employ a design-oriented research framework that acknowledges creative and pragmatic ingredients to handle the pitfalls. Our approach adopts the method of reconstruction and exploration of mission histories from distributed tactical operations as a means for command and control analysis. To support explorative analysis of mission histories within our framework, we propose tools for communication analysis and tools for managing metadata such as reflections, questions, hypotheses and expert comments. By using these tools together with real data from live tactical operations, we show that they can manage large amounts of data, preserve contextual data, support navigation within data, make original data easily accessible, and strengthen the link between metadata and supporting raw data. Furthermore, we show that by using these tools, multiple analysts, experts, and researchers can exchange comments on both data and metadata in a collaborative and explorative investigation of a complex scenario.
MAINTAINING DATA CONSISTENCY IN EMBEDDED DATABASES FOR VEHICULAR SYSTEMS
The amount of data handled by real-time and embedded applications is increasing. This calls for data-centric approaches when designing embedded systems, where data and its meta-information (e.g., temporal correctness requirements) are stored centrally. The focus of this thesis is on efficient data management, especially maintaining data freshness and guaranteeing the correct age of data.
The contributions of our research are updating algorithms and concurrency control algorithms using data similarity. The updating algorithms keep data items up-to-date and can adapt the number of updates of data items to state changes in the external environment. Further, the updating algorithms can be extended with a relevance check allowing unnecessary calculations to be skipped. The adaptability and skipping of updates have positive effects on CPU utilization, and freed CPU resources can be reallocated to, e.g., more extensive diagnosis of the system. The proposed multiversion concurrency control algorithms guarantee that calculations read data that is correlated in time.
Performance evaluations show that updating algorithms with a relevance check give significantly better performance compared to well-established updating approaches, i.e., the applications use more fresh data and are able to complete more tasks in time. The proposed multiversion concurrency control algorithms perform better than HP2PL and OCC and can at the same time guarantee the correct age of data items, which HP2PL and OCC cannot. Thus, from the perspective of the application, more precise data is used to achieve a higher data quality overall, while the number of updates is reduced.
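The update-with-relevance-check idea can be sketched as follows; this is an illustration of the principle, not the thesis algorithms, and the conversion function and similarity bound are invented for the example.

```python
# Illustrative relevance check: a derived data item is recomputed only when
# its base value has drifted beyond a similarity bound since the last update,
# so sufficiently similar states skip the calculation and reuse the value.
class DerivedItem:
    def __init__(self, compute, bound):
        self.compute = compute       # possibly expensive derivation function
        self.bound = bound           # similarity bound on the base value
        self.base_at_update = None
        self.value = None
        self.updates = 0

    def read(self, base):
        if self.base_at_update is None or abs(base - self.base_at_update) > self.bound:
            self.value = self.compute(base)  # relevant change: recompute
            self.base_at_update = base
            self.updates += 1
        return self.value                    # similar state: reuse cached value

item = DerivedItem(compute=lambda t: t * 1.8 + 32, bound=0.5)
values = [item.read(t) for t in [20.0, 20.1, 20.3, 21.0, 21.2]]  # 5 reads, 2 recomputations
```

Skipped recomputations are exactly the freed CPU resources the abstract mentions, available for reallocation to other tasks such as diagnosis.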
A STUDY IN INTEGRATING MULTIPLE BIOLOGICAL DATA SOURCES
Life scientists often have to retrieve data from multiple biological data sources to solve their research problems. Although many data sources are available, they vary in content, data format, and access methods, which often vastly complicates the data retrieval process. The user must decide which data sources to access and in which order, how to retrieve the data and how to combine the results – in short, the task of retrieving data requires a great deal of effort and expertise on the part of the user.
Information integration systems aim to alleviate these problems by providing a uniform (or even integrated) interface to biological data sources. The information integration systems currently available for biological data sources use traditional integration approaches. However, biological data and data sources have unique properties which introduce new challenges, requiring the development of new solutions and approaches.
This thesis is part of the BioTrifu project, which explores approaches to integrating multiple biological data sources. First, the thesis describes properties of biological data sources and existing systems that enable integrated access to them. Based on this study, requirements for systems integrating biological data sources are formulated, and the challenges involved in developing such systems are discussed. Then, the thesis presents a query language and a high-level architecture for the BioTrifu system that meet these requirements. An approach to generating a query plan in the presence of alternative data sources and ways to integrate the data is then developed. Finally, the design and implementation of a prototype for the BioTrifu system are presented.
HIGH-LEVEL TECHNIQUES FOR BUILT-IN SELF-TEST RESOURCES OPTIMIZATION
Abdil Rashid Mohamed
Design modifications to improve testability usually introduce large area overhead and performance degradation. One way to reduce the negative impact associated with improved testability is to take testability as one of the constraints during high-level design phases, so that systems are optimized not only for area and performance but also from the testability point of view. This thesis deals with the problem of optimizing testing-hardware resources by taking into account testability constraints at high levels of abstraction during the design process.
Firstly, we have provided an approach to the problem of optimizing built-in self-test (BIST) resources at the behavioral and register-transfer levels under testability and testing time constraints. Testing problem identification and BIST enhancement during the optimization process are assisted by symbolic testability analysis. Further, concurrent test sessions are generated while taking into account signature analysis registers' sharing conflicts as well as controllability and observability constraints.
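The session-generation idea can be sketched with a toy heuristic (purely illustrative; the thesis relies on symbolic testability analysis, not this simplification): two tests that share a resource such as a signature analysis register conflict and cannot run in the same concurrent session, so tests are greedily packed into the first session with no resource overlap.

```python
# Toy greedy grouping of tests into concurrent test sessions: tests that
# share a resource (e.g. a signature analysis register, "SAR") must be
# placed in different sessions. Test names and resource sets are made up.

def build_sessions(tests):
    """tests: dict mapping test name -> set of resources it occupies."""
    sessions = []                                   # each: {"tests": [...], "used": set}
    for name, resources in tests.items():
        for session in sessions:
            if not (session["used"] & resources):   # no sharing conflict here
                session["tests"].append(name)
                session["used"] |= resources
                break
        else:                                       # conflicts everywhere: new session
            sessions.append({"tests": [name], "used": set(resources)})
    return sessions

tests = {
    "t1": {"SAR1"},
    "t2": {"SAR2"},
    "t3": {"SAR1", "SAR2"},   # conflicts with both t1 and t2
}
sessions = build_sessions(tests)   # t1 and t2 run concurrently; t3 runs alone
```

Fewer sessions means shorter total testing time, which is why conflict-aware grouping matters under testing time constraints.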
Secondly, we have introduced the problem of BIST resource insertion and optimization while taking wiring area into account. Testability improvement transformations have been defined and deployed in a hardware overhead minimization technique used during BIST synthesis. The technique is guided by the results of symbolic testability analysis and inserts a minimal amount of BIST resources into the design to make it fully testable, taking into consideration both the cost of BIST components and the wiring overhead. Two design space exploration approaches have been proposed: a simulated annealing based algorithm and a greedy heuristic. Experimental results show that considering wiring area during BIST synthesis yields smaller final designs than when the wiring impact is ignored. The greedy heuristic uses our behavioral and register-transfer level BIST enhancement metrics to guide BIST synthesis so that the number of testability improvement transformations performed on the design is reduced.
CONTRIBUTION TO META-MODELING TOOLS AND METHODS
Highly integrated domain-specific environments are essential for the efficient design of complex physical products. However, developing such design environments is today a resource-consuming, error-prone and largely manual process. Meta-modeling and meta-programming are the key to the efficient development of such environments.
The ultimate goal of our research is the development of a meta-modeling approach and its associated meta-programming methods for the synthesis of model-driven product design environments that support modeling and simulation. Such environments include model editors, compilers, debuggers and simulators. This thesis presents several contributions towards this vision, in the context of the Modelica framework. First, we have designed a meta-model for the object-oriented declarative modeling language Modelica, which facilitates the development of tools for the analysis, checking, querying, documentation, transformation and management of Modelica models. We have used XML Schema for the representation of this meta-model, named ModelicaXML. Next, we have focused on the automatic composition, refactoring and transformation of Modelica models, extending the invasive composition environment COMPOST to handle Modelica models described in ModelicaXML.
The Modelica language semantics has already been specified in the Relational Meta-Language (RML), an executable meta-programming system based on the Natural Semantics formalism. Using such a meta-programming approach to manipulate ModelicaXML, it is possible to automatically synthesize a Modelica compiler. However, such a task is difficult without support for debugging. To address this issue we have developed a debugging framework for RML, based on abstract syntax tree instrumentation in the RML compiler together with efficient tools for visualizing complex data structures and proof trees.
Our contributions have been implemented within OpenModelica, an open-source Modelica framework. The evaluations performed using several case studies show the efficiency of our meta-modeling tools and methods.
ON THE INFORMATION EXCHANGE BETWEEN PHYSICIANS AND SOCIAL INSURANCE OFFICERS IN THE SICK LEAVE PROCESS: AN ACTIVITY THEORETICAL PERSPECTIVE
Fidel Vascós Palacios
In Sweden, there has been a substantial increase in the number of people on long-term sick leave. This phenomenon has awakened researchers' interest in understanding its causes. So far, no simple and unambiguous explanation has been found. However, previous studies indicate that it may be caused by a combination of factors such as the state of the national economy, an ageing labour force in Sweden, and inefficiencies in the information exchange and cooperation among the participants in the sick leave process. This thesis deals with the information exchange between two of these participants, namely physicians from district health care centres and insurance officers from the Social Insurance Office.
The information exchange between these two parties is a critical aspect of the sick leave process and has been reported in the scientific literature as problematic. Nevertheless, most earlier studies of the interaction between physicians and officers have been purely descriptive, of a quantitative nature, and lacking a common theoretical basis for analysing it.
In this thesis, a philosophical theoretical framework, namely Activity Theory (AT), is applied to gain insight into the interconnection between physicians and insurance officers and the problems of their information exchange. Based on concepts from AT, the elements that form the structure of these players' work actions are identified and used to provide a picture of the interconnection between the parties and to uncover some reasons for the failure of their information exchange. Additionally, an activity theoretical perspective on the participation of these players in the sick leave process is provided.
The analysis in this thesis shows that physicians and insurance officers form a fragmented division of labour of a common collective activity: the sick leave process. In this process physicians provide the officers with a tool of their work: information for decision-making. Physicians provide this information through the sickness certificate, which sometimes does not carry the information necessary for the officers to do their work. This failure is partly a result of the complexity of the
VIRTUAL LEARNING ENVIRONMENTS IN HIGHER EDUCATION. A STUDY OF STUDENTS' ACCEPTANCE OF EDUCATIONAL TECHNOLOGY
Virtual Learning Environments (VLEs) are fundamental tools for flexible learning in higher education, used in distance education as well as to complement teaching on campus (blended learning). VLEs imply changing roles for both teachers and students. The general aim of this thesis is to explore and analyse students' acceptance of VLEs in a blended learning environment. In the explorative part of the study, data were collected by means of a questionnaire distributed to students at two schools at Jönköping University College. Quantitative data were processed with factor analysis and multiple regression analysis, and additional qualitative data with content analysis. The conceptual-analytical part of the study aimed at identifying perspectives that could describe critical and relevant aspects of the process of implementation and acceptance. Literature from Organisation Theory, Management and Information Systems Research was analysed. A retrospective analysis of the explorative findings, by means of the theoretical framework from the conceptual-analytical part of the study, focused on explaining the findings.
This thesis gives rise to three main conclusions. First, organisational factors seem to have a stronger impact on students' acceptance of VLEs in a blended learning environment than user factors. Second, implementation models from Information Systems Research and Organisation Theory contribute to our understanding of students' acceptance of VLEs by providing concepts that describe the implementation process on both the individual and the organisational level. Third, the theoretical models of the Unified Theory of Acceptance and Use of Technology and Innovation Diffusion Theory are able to explain differences in students' acceptance of VLEs. The Learning Process Perspective provides concepts for studying the possibilities of learning about the VLE in formal and informal ways. Finally, a research model for students' acceptance of VLEs in a blended learning environment is presented.
INTEGRATION OF ORGANIZATIONAL WORKFLOWS AND THE SEMANTIC WEB
The Internet and the Web provide an environment for business-to-business commerce in a virtual world where distance is less of an issue. Providers can advertise their products globally and consumers from all over the world gain access to these products. However, the heterogeneous and continually changing environment leads to several problems related to finding suitable providers that can satisfy a consumer's needs. The Semantic Web aims to alleviate these problems.
By allowing software agents to communicate and understand the information published on the Web, the Semantic Web enables new ways of doing business and consuming services. Semantic Web technology will provide an environment where the comparison of different business contracts will be made in a matter of minutes, new contractors may be discovered continually and the organizations' routines may be automatically updated to reflect new forms of cooperation.
Organizations, however, do not necessarily use the Semantic Web infrastructure to communicate internally. Consequently, to be able to gain new advantages from using Semantic Web technology, this new technology should be integrated into existing routines.
In this thesis, we propose a model for integrating the usage of the Semantic Web into an organization's work routines. We provide a general view of the model as well as an agent-based view. The central component of the model is an sButler, a software agent that mediates between the organization and the Semantic Web. We describe an architecture for one of the important parts of the sButler, the process instance generation, and focus on its service retrieval capability. Further, we show the feasibility of our approach with a prototype implementation, and discuss an experiment.
No FiF-a 85
STANDARDISERING SOM GRUND FÖR INFORMATIONSSAMVERKAN OCH IT-TJÄNSTER - EN FALLSTUDIE BASERAD PÅ TRAFIKINFORMATIONSTJÄNSTEN RDS-TMC
In today's society, ever higher demands are placed on cooperation and information exchange between different people, organisations and information systems (IS). This means that the development and use of IS tend to become increasingly complex. Standardisation can play an important role in this context, both in managing the increasing complexity and in facilitating the development of new IS and IT services. Standardisation here creates the conditions for communicating information effectively between different people, organisations and IS. The type of standards in focus in this thesis contains conceptual descriptions of functionality, message structures and information models; the thesis refers to these as "conceptual" standards.
The question, however, is whether the standards being developed really contribute to more effective information interoperation, and whether developed standards are actually implemented correctly. The thesis aims to describe and create an understanding of how standards are used in connection with IS and the delivery of IT services, and of the effects this has. The thesis is based on a case study carried out at the Swedish Road Administration (Vägverket), focusing on the traffic information service RDS-TMC (Radio Data System - Traffic Message Channel).
The thesis identifies and characterises three usage processes: the system development, service delivery and system maintenance processes. It also describes how conceptual standards are used in, and affect, these processes.
The thesis also shows that conceptual standards constitute descriptions of a system architecture at the conceptual level. This means that conceptual standards are of great importance for the system development process and for the IT-strategic decisions of the organisations affected by this type of standard.
The thesis further describes how conceptual standards affect the information interoperation between different IS and actors during service delivery. Service delivery is affected in that the standard's descriptions are implemented in the information system architecture (ISA) used to deliver the service. The standards also influence what information should be created and how messages should be communicated at the instance level during service delivery.
Accident models are essential for all efforts in safety engineering. They influence the investigation and analysis of accidents, the assessment of systems and the development of precautions. Looking at accident statistics, the trend for Swedish roads is not pointing towards increased safety. Instead, the number of fatalities and accidents remains stable, and the number of injuries is increasing. This thesis proposes that this deterioration of road safety is due to the utilization of inadequate traffic accident models. The purpose of the thesis is to develop an adequate traffic accident model. This is done in two steps. The first step is to identify a proper type of general accident model. The second step is to adapt the general accident model to road traffic. Two reviews are made for this purpose. The first review identifies different categories of accident models. The second review surveys eleven existing traffic accident models. The results of these surveys suggest that an adequate accident model for modern road safety should be based on the systemic accident model. Future work will focus on the development of a risk assessment method for road traffic based on the systemic accident model.
No FiF-a 86
ATT MODELLERA UPPDRAG - GRUNDER FÖR FÖRSTÅELSE AV PROCESSINRIKTADE INFORMATIONSSYSTEM I TRANSAKTIONSINTENSIVA VERKSAMHETER
An information system should support the organisation of which it is a part. System development therefore calls for a business analysis to develop knowledge about the current operations, and theories and models are important tools for capturing the essential aspects in such an analysis. This work aims to show how assignments, as one such essential aspect, can be modelled. In an organisation, actors perform actions, and relations between actors are established through assignments: role assignments between a "manager" and another actor, product assignments between actors within the organisation, and product assignments between the organisation and its customers.
The work is based on two action research cases, a mail-order company and an e-commerce company, in which process mappings were carried out. The organisations mapped are transaction-intensive, that is, they handle many orders at a low margin per order, which makes them complex and dependent on IT systems. This complexity has required models at different levels of generalisation (practice, process and action): an organisation can be seen as a practice consisting of processes, which in turn are built up of actions. The work has resulted in a theory of thinking in terms of assignments, and of how assignments can be described in different models so as to create congruence between theory and model.
One conclusion of the work is that modelling organisations requires an interplay between theories and methods as well as between different levels of generalisation. Focusing on assignments makes explicit the different expectations that different actors hold, and creates the conditions for developing operations and information systems that meet those expectations.
AFFÄRSSTRATEGIER FÖR SENIORBOSTADSMARKNADEN
Demographic development in Sweden is moving towards a population with an ever higher average age. According to Swedish population forecasts, by 2025 almost one in four Swedes will be over 65. The older part of the population is a well-off group with relatively large real economic assets. Attitude surveys of tomorrow's pensioners indicate that this group places higher demands on housing design, related services, and health and social care than earlier generations. Several studies show an increased willingness and ability to pay for alternative service and housing forms. At the same time, various market actors are trying to position products and services within one of the housing market's niches, defined here as the senior housing market. Within this market, a particular segment has been identified in which, in addition to senior housing, service-, health- and social-care-related services are offered. Against this background, the thesis's research question has been formulated as follows: what creates a strong market position for an actor on the senior housing market with integrated service, health and social care?
The starting point has been a likely scenario in which private initiatives will increasingly contribute to future housing solutions aimed at society's senior and elderly population groups. The aim of the thesis has been partly to identify the success factors that can be assumed to underlie a strong market position, and partly to create a typology of different business strategies. An industry analysis in the thesis shows that the senior housing market is a niche market of marginal scope. The empirical investigation was designed as a field study, carried out in the form of, among other things, a pilot study and an interview study. The interview study took place in the autumn of 2004, with site visits and interviews with representatives of eleven selected case study organisations. The market actors' success factors were identified on the basis of a number of predefined criteria. The processing and analysis model constructed for this purpose, and used to analyse the field study's empirical material, is based on studies in the strategy field. The model has been inspired by researchers such as Miles & Snow (1978), Porter (1980) and Gupta & Govindarajan (1984), and further builds on assumptions about the importance of resources and competencies for strategy formulation. Service management, and in particular the composition of services, is another area taken into account. The analysis model is built around five dimensions: environment, strategy, resources, service concept and competition. The identified success factors are based on the interview study's two most successful actors. The result has been formulated as a number of strategic choices, which can be summarised in the concepts: differentiation, focus, integration, cooperation, control, business development, core competence and resources.
The thesis shows that actors running successful operations on the senior housing market largely follow what Porter (1980) defined as a differentiation strategy with focus. The thesis has also resulted in a business strategy typology for the senior housing market. These tentative conclusions have been formulated as four strategic ideal types: administrators, concept builders, entrepreneurs and idealists.
BEYOND IT AND PRODUCTIVITY - HOW DIGITIZATION TRANSFORMED THE GRAPHIC INDUSTRY
This thesis examines how IT and the digitization of information have transformed processes in the graphic industry. The aim is to show how critical production processes have changed as the information in these processes has been digitized. Furthermore, it considers whether this has influenced changes in productivity, while also identifying other significant benefits that have occurred as a result of the digitization. The debate concerning the productivity paradox is one important starting point for the thesis. Previous research on this phenomenon has mainly used different types of statistical databases as empirical sources. In this thesis, though, the graphic industry is instead studied from a mainly qualitative and historical process perspective.
The empirical study shows that digitization of information flows in the graphic industry began in the 1970s, but the development and use of digitized information took off in the early 1980s. Today almost all types of material in the industry, for example text and pictures, exist in digital form, and the information flows are thereby more or less completely digitized. A common demand in the industry is that the information produced should be adaptable to the different channels in which it may be presented. The consequences of the use of IT and the digitization of information flows are identified in this thesis as different outcomes, effects, and benefits. The outcomes are identified directly from the empirical material, whilst the resulting effects are generated based on theories about IT and business value. The benefits are in turn generated from a summarization of the identified effects.
Identified effects caused by IT and the digitization of information include the integration and merging of processes; vanishing professions; a reduced number of operators involved; decreased production time; increased production capacity; increased amount and quality of communication; and increased quality of produced originals. One conclusion drawn from the analysis is that investments in and use of IT have positively influenced changes in productivity. This conclusion is based on the appearance of different automational effects, which in turn have had a positive influence on factors that may be part of a productivity index. In addition to productivity, other benefits, based mainly on informational effects, are identified. These benefits include an increased capacity to handle and produce information, increased integration of customers in the production processes, increased physical quality of produced products, and options for management improvements in the production processes. The conclusions indicate that it is not always the most obvious benefit, such as productivity, that is of greatest significance when IT is implemented in an industry.
BEYOND IT AND PRODUCTIVITY - EFFECTS OF DIGITIZED INFORMATION FLOWS IN GROCERY DISTRIBUTION
During the last decades organizations have made large investments in Information Technology (IT). The effects of these investments have been studied in business and academic communities over the years. A large amount of research has been conducted on the relation between the investments in IT and productivity growth. Researchers have however found it difficult to present a clear-cut answer; an inability defined as the productivity paradox.
Within the Impact of IT on Productivity (ITOP) research program, the relevance of the productivity measure as an indicator of the value of IT is questioned. Over the years, IT has replaced physical interfaces with digital ones, thereby enabling new ways to process information. A retrospective research approach is therefore applied, in which the effects of digitized information flows are studied within specific organizational settings.
In this thesis the effects of digitized information flows within Swedish grocery distribution are studied. A comprehensive presentation of the development is first conducted and three focal areas are thereafter presented. The study identifies a number of effects from the digitization of information flows. The effects are analyzed according to a predefined analytical framework. The effects are divided into five categories and are thereafter evaluated when it comes to potential for generating value.
The study shows that the digitization of information flows has generated numerous, multifaceted effects. Automational, informational, transformational, consumer surplus and other effects are observed. They are difficult to evaluate using a single indicator. Specific indicators that are closely related to the effects can however be defined. The study also concludes that the productivity measure does not capture all positive effects generated by digitized information flows.
BEYOND IT AND PRODUCTIVITY - EFFECTS OF DIGITIZED INFORMATION FLOWS IN THE LOGGING INDUSTRY
The IT and productivity paradox has been the subject of considerable research in recent decades. Many previous studies, based mainly on macroeconomic statistics or on aggregated company data, have reached disparate conclusions. Consequently, the question whether IT investments contribute to productivity growth is still heavily debated. More recent research, however, has indicated that IT contributes positively to economic development but that this contribution is not fully revealed when only productivity is measured.
To explore the issue of IT and productivity further, the ITOP (Impact of IT On Productivity) research program was launched in 2003. An alternative research approach is developed with the emphasis on the microeconomic level and information flows in processes in specific industry segments. In the empirical study, the development of information flows is tracked over several decades. Effects of digitized information flows are hereby identified and quantified in order to determine their importance in terms of productivity.
The purpose of this study is to explore effects of information technology by studying digitized information flows in key processes in the logging industry. The research shows that several information flows in the logging process have been digitized leading to new ways to capture, use, spread, process, refine and access information throughout the logging process. A large variety of effects have also been identified from this development.
The results show that only a minor part of the effects identified have a direct impact on productivity and thus that a large number of significant effects do not. Effects with a major direct impact on productivity include increased efficiency in timber measurement registration, lower costs of timber accounting and increased utilization of harvesters and forest resources. Other significant effects with no direct impact on productivity are related to a more open timber market, increased timber customization, control, decision-making and access to information, as well as skill levels and innovation. The results thus demonstrate that it is questionable whether conventional productivity measures are sufficient for measuring the impact of IT.
ROLE AND IDENTITY – EXPERIENCE OF TECHNOLOGY IN PROFESSIONAL SETTINGS
In order to make technology easier to handle for its users, the field of Human-Computer Interaction is increasingly dependent on an understanding of the individual user and the context of use. By investigating the relation between the user and the technology, this thesis explores how roles and professional identity affect and interact with the design, use and views of the technology used.
By studies in two different domains, involving clinical medicine and media production respectively, professional identities were related to attitudes towards technology and ways of using computer-based tools. In the clinical setting, neurosurgeons and physicists using the Leksell GammaKnife for neurosurgical dose planning were studied. In the media setting, the introduction of new media technology to journalists was in focus. The data collection includes interviews, observations and participatory design oriented workshops. The data collected were analyzed with qualitative methods inspired by grounded theory.
In the study of the Leksell GammaKnife, two different approaches to the work, the tool and development activities were identified, depending on professional identity. Whether the user was a neurosurgeon or a physicist had a significant impact both on how he or she viewed his or her role in the clinical setting, and on how he or she defined necessary improvements and general safety issues. In the case of the media production tool, the study involved a participatory design development process. Here it was shown that both the identities and the roles of the individual participants affect how they want to use new technology for different tasks.
INCREASING THE STORAGE CAPACITY OF RECURSIVE AUTOASSOCIATIVE MEMORY BY SEGMENTING DATA
Recursive Auto-associative Memory (RAAM) has been proposed as a connectionist solution for handling structured representations. However, it fails to scale up to representing long sequences of data. In order to overcome this, a number of different architectures and representations have been proposed. It is argued here that part of the problem of, and solution to, storing long sequences in RAAM is data representation. It is proposed that dividing the sequences to be stored into smaller, individually compressed segments reduces the problem of storing long sequences.
This licentiate thesis investigates different strategies for segmenting the sequences. Several segmentation strategies are identified and organized into a taxonomy, and a number of experiments based on them are performed with the aim of clarifying if, and how, segmentation affects the storage capacity of RAAM. The results show that the probability that a sequence of a specific length stored in RAAM can be correctly recalled increases by up to 30% when the sequence is divided into segments. The performance increase is explained by the fact that segmentation reduces the depth at which a symbol is encoded in RAAM, which in turn reduces a cumulative error effect during decoding of the symbols.
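The depth argument can be made concrete with a small sketch (illustrative only; RAAM itself is a trained autoencoder network and is not modelled here): encoding a sequence of length n one symbol at a time buries the first symbol under n-1 merges, while compressing segments of length k first and then combining the segment codes caps the worst-case depth at roughly (k-1) plus the number of segment merges.

```python
# Illustrative nesting-depth calculation for sequentially encoded sequences.
# Encoding s1..sn left to right, ((s1 s2) s3)..., puts s1 at depth n-1;
# segmenting first keeps every symbol much shallower. The numbers here
# model only the nesting depth, not RAAM's learned encoding itself.

def depth_unsegmented(n):
    # the first symbol passes through n-1 merge (compression) steps
    return n - 1

def depth_segmented(n, k):
    # compress ceil(n/k) segments of length <= k individually, then
    # combine the segment codes left to right in the same fashion
    segments = -(-n // k)                 # ceiling division
    return (k - 1) + (segments - 1)       # within-segment + between-segment depth

depth_unsegmented(16)    # the first symbol is 15 merges deep
depth_segmented(16, 4)   # 3 merges inside its segment + 3 across segments
```

Since each merge step adds reconstruction error during decoding, halving or quartering the depth of the deepest symbol directly attacks the cumulative error effect described above.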
TOWARDS DETACHED COMMUNICATION FOR ROBOT COOPERATION
This licentiate thesis deals with communication among cooperating mobile robots. Until recently, most robotics research focused on developing single robots to accomplish a given task. We now appear to have reached a point where the need for multiple, cooperating robots is increasing, since certain things are simply not possible to do with a single robot. The major reasons, as described in this thesis, that make the use of multiple robots particularly interesting are distribution (it may be impossible to be in two places at the same time), parallelism (major speed improvements can be achieved by using many robots simultaneously), and simplicity (several, individually simpler, robots might be more feasible than a single, more complex robot). The field of cooperative robotics is multi-faceted, integrating a number of distinct fields such as the social sciences, life sciences, and engineering. As a consequence, several sub-areas within cooperative robotics can be identified, and these are described here as well. To achieve coordinated behaviour within a multi-robot team, communication can be used to reduce the need for individual sensing (because of, for instance, calculation complexity), and two different explicit approaches to this have been identified. As the survey presented here shows, the first of these approaches has already been extensively investigated, whereas research covering the second approach within the domain of adaptive multi-robot systems has been very limited. This second path is chosen, and preliminary experiments are presented that indicate the usefulness of more complex representations for accomplishing cooperation.
More specifically, this licentiate thesis presents initial experiments that will serve as a starting point where the role and relevance of the ability to communicate using detached representations in planning and communication about future actions and events will be studied. Here, an unsupervised classifier is found to have the basic characteristics needed to initiate future investigations. Furthermore, two projects are presented that in particular serve to support future research; a robot simulator and an extension turret for remote control and monitoring of a physical, mobile robot. Detailed descriptions of planned future investigations are also discussed for the subsequent PhD work.
TOWARDS DEPENDABLE VIRTUAL COMPANIONS FOR LATER LIFE
As we grow older, we become more vulnerable to certain reductions in quality of life. Caregivers can help; however, human care is limited and will become even scarcer in the near future. This thesis addresses the problem by contributing to the development of electronic assistive technology, which has the potential to effectively complement human support. In particular, we follow the vision of a virtual companion for later life – an interactive computer-based entity capable of assisting its elderly user in multiple situations in everyday life.
Older adults will only benefit from such technology if they can depend on it and it does not intrude into their lives against their will. Assuming a software engineering view on electronic assistive technology, this thesis thus formulates both dependability requirements and ethical guidelines for designing virtual companions and related technology (such as smart homes).
By means of an iterative development process (the thesis covers the first iteration), a component-based design framework for defining dependable virtual companions is formed. Personalised applications can be generated efficiently by instantiating our generic architecture with a number of special-purpose interactive software agents. Scenario-based evaluation of a prototype confirmed the basic concepts of the framework, and led to refinements.
The final part of the thesis concerns the actual framework components and the applications that can be generated from them. From a field study with elders and experts, we construct a functional design space of electronic assistive technology applications. It relates important needs of different older people to appropriate patterns of assistance. As an example application, the feasibility of driving support by vehicular communication is studied in depth.
Future iterations with real-world experiments will refine our design framework further. If it is found to scale to the dynamic diversity of older adults, then work can begin on the ultimate project goal: a toolkit on the basis of the framework that will allow semi-automatic generation of personalised virtual companions with the involvement of users, caregivers, and experts.
DECISION-MAKING IN THE REQUIREMENTS ENGINEERING PROCESS: A HUMAN-CENTRED APPROACH
Complex decision-making is a prominent aspect of requirements engineering and the need for improved decision support for requirements engineers has been identified by a number of authors. A first step toward better decision support in requirements engineering is to understand decision-makers’ complex decision situations. To gain a holistic perspective on the decision situation from a decision-maker's perspective, a decision situation framework has been created. The framework evolved through a literature analysis of decision support systems and decision-making theories. The decision situation of requirements engineers has been studied at Ericsson Microwave Systems and is described in this thesis. Aspects of decision situations are decision matters, decision-making activities, and decision processes. Another aspect of decision situations is the factors that affect the decision-maker. A number of interrelated factors have been identified. Each factor consists of problems and these are related to decision-making theories. The consequences of this for requirements engineering decision support, represented as a list of desirable high-level characteristics, are also discussed.
SYSTEM-ON-CHIP TEST SCHEDULING AND TEST INFRASTRUCTURE DESIGN
There are several challenges that have to be considered in order to reduce the cost of System-on-Chip (SoC) testing, such as test application time, chip area overhead due to hardware introduced to enhance the testing, and the price of the test equipment.
In this thesis the test application time and the test infrastructure hardware overhead of multiple-core SoCs are considered and two different problems are addressed. First, a technique that makes use of the existing bus structure on the chip for transporting test data is proposed. Additional buffers are inserted at each core to allow test application to the cores and test data transportation over the bus to be performed asynchronously. This decoupling of test data transportation from test application makes it possible to test cores concurrently while test data is transported in sequence. A test controller is introduced, which is responsible for invoking test transportations on the bus. The hardware cost introduced by the buffers and the test controller is minimized under a designer-specified test time constraint. This problem has been solved optimally using a Constraint Logic Programming formulation, and a tabu search based heuristic has also been implemented to quickly generate near-optimal solutions.
Second, a technique to broadcast tests to several cores is proposed, and the possibility of using overlapping test vectors from different tests in a SoC is explored. The overlapping tests serve as alternatives to the original, dedicated tests for the individual cores and, if selected, they are broadcast to the cores so that several cores are tested concurrently. This technique allows the existing bus structure to be reused; however, dedicated test buses can also be introduced in order to reduce the test time. Our objective is to minimize the test application time while a designer-specified hardware constraint is satisfied. Again, Constraint Logic Programming has been used to solve the problem optimally.
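As a rough illustration of the trade-off this second technique addresses, the sketch below brute-forces a tiny invented instance of test selection: every candidate test covers some set of cores, takes some test time, and costs some test-bus hardware, and the goal is to cover all cores within a hardware budget at minimum total test time. All names and numbers are illustrative; the thesis solves the real problem with Constraint Logic Programming, not exhaustive search.

```python
from itertools import combinations

# Each test covers a set of cores, takes `time` units when run, and its
# bus/route hardware costs `hw` units. Selected tests run in sequence, so
# total time is the sum of selected test times; every core must be covered
# and total hardware must stay within the designer's budget.
TESTS = {
    "t_a":  {"cores": {"A"},      "time": 10, "hw": 1},
    "t_b":  {"cores": {"B"},      "time": 12, "hw": 1},
    "t_c":  {"cores": {"C"},      "time": 9,  "hw": 1},
    "t_ab": {"cores": {"A", "B"}, "time": 15, "hw": 2},  # broadcast test
    "t_bc": {"cores": {"B", "C"}, "time": 14, "hw": 2},  # broadcast test
}
CORES = {"A", "B", "C"}

def best_selection(hw_budget):
    """Exhaustively try every test subset; return (time, tests) for the
    fastest one that covers all cores within the hardware budget."""
    best = None
    names = list(TESTS)
    for k in range(1, len(names) + 1):
        for sel in combinations(names, k):
            covered = set().union(*(TESTS[t]["cores"] for t in sel))
            hw = sum(TESTS[t]["hw"] for t in sel)
            time = sum(TESTS[t]["time"] for t in sel)
            if covered == CORES and hw <= hw_budget:
                if best is None or time < best[0]:
                    best = (time, sel)
    return best

result = best_selection(3)
```

On this instance a broadcast test covering cores B and C is selected together with the dedicated test for A, which beats testing every core with its own dedicated test.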
Experiments using benchmark designs have been carried out to demonstrate the usefulness and efficiency of the proposed techniques.
POLICY AND IMPLEMENTATION ASSURANCE FOR SOFTWARE SECURITY
To build more secure software, accurate and consistent security requirements must be specified. We have investigated current practice in a field study of eleven requirement specifications for IT systems. The overall conclusion is that security requirements are poorly specified for three reasons: inconsistency in the selection of requirements, inconsistency in the level of detail, and almost no requirements on standard security solutions.
To build more secure software we specifically need assurance requirements on code. One way to achieve implementation assurance is to use effective methods and tools that resolve or warn about known vulnerability types in code. We have investigated the effectiveness of four publicly available tools for run-time prevention of buffer overflow attacks. Our comparison shows that the best tool is effective against only 50% of the attacks, and there are six attack forms that none of the tools can handle. We have also investigated the effectiveness of five publicly available compile-time intrusion prevention tools. The test results show high rates of false positives for the tools based on lexical analysis and low rates of true positives for the tools based on syntactic and semantic analysis.
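To make the distinction concrete, here is a deliberately naive checker in the spirit of the lexical-analysis tools discussed above (a hypothetical sketch, not one of the evaluated tools): it flags calls to classically unsafe C functions by pattern matching on the raw text, which catches real problems but, because it ignores syntax, also fires on occurrences inside comments.

```python
import re

# Flag calls to classically unsafe C functions by pure text matching.
UNSAFE_CALLS = re.compile(r"\b(strcpy|strcat|gets|sprintf)\s*\(")

def scan(source):
    """Return (line number, function name) for each suspect call site."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for m in UNSAFE_CALLS.finditer(line):
            hits.append((lineno, m.group(1)))
    return hits

code = '''\
char buf[16];
strcpy(buf, user_input);
/* example in a comment: strcpy(dst, src) is unsafe */
'''
warnings = scan(code)
```

The warning on line 3 is a false positive (the call occurs inside a comment), illustrating why tools based purely on lexical analysis tend to show high false-positive rates.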
As a first step toward a more effective and generic solution we propose dependence graphs decorated with type and range information as a way of modeling and pattern matching security properties of code. These models can be used to characterize both good and bad programming practice. They can also be used to visually explain code properties to the programmer.
TRANSLATIONS OF A MANAGEMENT MODEL - A STUDY OF THE INTRODUCTION OF THE BALANCED SCORECARD IN A COUNTY COUNCIL
The field of management accounting has changed to some extent in step with newer technology, flatter organisations and increased competition. New techniques have been introduced in response to this change. These have often been given acronyms, and some have been marketed, launched and spread across the world as products. The content of these models can, however, vary as they are introduced into different organisations. The overall aim of the study is to understand how models are consciously and unconsciously reshaped as they come into use in an organisation. The study uses a case study approach, where the case is a Swedish county council that receives and shapes the Balanced Scorecard. A translation perspective has been used to understand what happens when a tool, whose parts are more or less open to interpretation, is introduced and shaped by the organisation's various actors through a series of negotiations. Initially, the development of the BSC is studied, which shows that the concept is not homogeneous but has been translated into different variants as the model has reached new environments and new organisations. The introduction of the BSC in the organisation is then described and analysed from the translation perspective. The analysis points to several aspects that influenced the translation process and, by extension, the model's final form, including how and in what form the BSC entered the organisation, which actors engaged in the model's development at an early stage, which problems the model was initially meant to solve, and how well technical elements succeeded in stabilising its intended use.
ALIGNING AND MERGING BIOMEDICAL ONTOLOGIES
Due to the explosion of the amount of biomedical data, knowledge and tools that are often publicly available over the Web, a number of difficulties are experienced by biomedical researchers. For instance, it is difficult to find, retrieve and integrate information that is relevant to their research tasks. Ontologies and the vision of a Semantic Web for life sciences alleviate these difficulties. In recent years many biomedical ontologies have been developed and many of these ontologies contain overlapping information. To be able to use multiple ontologies they have to be aligned or merged. A number of systems have been developed for aligning and merging ontologies and various alignment strategies are used in these systems. However, there are no general methods to support building such tools, and there exist very few evaluations of these strategies. In this thesis we give an overview of the existing systems. We propose a general framework for aligning and merging ontologies. Most existing systems can be seen as instantiations of this framework. Further, we develop SAMBO (System for Aligning and Merging Biomedical Ontologies) according to this framework. We implement different alignment strategies and their combinations, and evaluate them in terms of quality and processing time within SAMBO. We also compare SAMBO with two other systems. The work in this thesis is a first step towards a general framework that can be used for comparative evaluations of alignment strategies and their combinations.
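As a hint of what a single alignment strategy may look like, the sketch below implements one of the simplest: suggesting term pairs from two ontologies whose names are similar strings. This is an illustrative stand-in, not SAMBO's implementation; SAMBO combines and evaluates several strategies, including ones that use structure and auxiliary knowledge.

```python
import difflib

def align(terms1, terms2, threshold=0.8):
    """Suggest candidate term pairs whose names are similar strings.
    Returns (term1, term2, score) for every pair above the threshold."""
    suggestions = []
    for t1 in terms1:
        for t2 in terms2:
            score = difflib.SequenceMatcher(None, t1.lower(), t2.lower()).ratio()
            if score >= threshold:
                suggestions.append((t1, t2, round(score, 2)))
    return suggestions

# Spelling variants align; structurally different names do not.
pairs = align(["Haemoglobin", "Cell Membrane"], ["Hemoglobin", "Membrane of Cell"])
```

String matching alone misses the second pair even though the terms are synonymous, which is one reason combinations of strategies are evaluated in the thesis.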
DESCRIPTIVE TYPES FOR XML QUERY LANGUAGE XCERPT
The thesis presents a type system for a substantial fragment of the XML query language Xcerpt. The system is descriptive; the types associated with Xcerpt constructs are sets of data terms and approximate the semantics of the constructs. A formalism of Type Definitions, related to XML schema languages, is adopted to specify such sets. The type system is presented as typing rules which provide a basis for the type inference and type checking algorithms used in a prototype implementation. Correctness of the type system with respect to the formal semantics of Xcerpt is proved, and the exactness of the result types inferred by the system is discussed. The usefulness of the approach is illustrated by example runs of the prototype on Xcerpt programs.
Given a non-recursive Xcerpt program and types of the data to be queried, the type system is able to infer a type of the results of the program. If additionally a type specification of program results is given, the system is able to prove type correctness of a (possibly recursive) program. Type correctness means that the program produces results of the given type whenever it is applied to data of the given type. Non-existence of a correctness proof suggests that the program may be incorrect. Under certain conditions (on the program and on the type specification), the program is actually incorrect whenever the proof attempt fails.
SAMPLING-BASED PATH PLANNING FOR AN AUTONOMOUS HELICOPTER
Per Olof Pettersson
Many of the applications that have been proposed for future small unmanned aerial vehicles (UAVs) are at low altitude in areas with many obstacles. A vital component for successful navigation in such environments is a path planner that can find collision free paths for the UAV. Two popular path planning algorithms are the probabilistic roadmap algorithm (PRM) and the rapidly-exploring random tree algorithm (RRT). Adaptations of these algorithms to an unmanned autonomous helicopter are presented in this thesis, together with a number of extensions for handling constraints at different stages of the planning process. The result of this work is twofold: First, the described planners and extensions have been implemented and integrated into the software architecture of a UAV. A number of flight tests with these algorithms have been performed on a physical helicopter and the results from some of them are presented in this thesis. Second, an empirical study has been conducted, comparing the performance of the different algorithms and extensions in this planning domain. It is shown that with the environment known in advance, the PRM algorithm generally performs better than the RRT algorithm due to its precompiled roadmaps, but that the latter is also usable as long as the environment is not too complex. The study also shows that simple geometric constraints can be added in the runtime phase of the PRM algorithm, without a big impact on performance. It is also shown that postponing the motion constraints to the runtime phase can improve the performance of the planner in some cases.
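To give a flavour of the second of these algorithms, here is a minimal 2-D RRT sketch with circular obstacles (illustrative only; the thesis adapts the algorithm to a helicopter in 3-D with motion constraints). For brevity, only sampled points are collision-checked, not the edges between them; a real planner checks both.

```python
import math, random

def rrt(start, goal, obstacles, step=0.5, max_iters=10000, goal_bias=0.05, seed=0):
    """Minimal 2-D RRT in a 10x10 world: grow a tree from `start` until
    `goal` is reached. `obstacles` is a list of (cx, cy, r) circles.
    Returns the path as a list of points, or None on failure."""
    rng = random.Random(seed)
    def free(p):  # point is outside every obstacle circle
        return all(math.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in obstacles)
    parent = {start: None}
    nodes = [start]
    for _ in range(max_iters):
        # Sample a random point, occasionally biased towards the goal.
        q = goal if rng.random() < goal_bias else (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(nodes, key=lambda n: math.hypot(n[0] - q[0], n[1] - q[1]))
        d = math.hypot(q[0] - near[0], q[1] - near[1]) or 1e-9
        new = (near[0] + step * (q[0] - near[0]) / d,
               near[1] + step * (q[1] - near[1]) / d)
        if not free(new):
            continue  # reject extensions that land inside an obstacle
        parent[new] = near
        nodes.append(new)
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < step:
            parent[goal] = new
            path, n = [], goal
            while n is not None:  # walk back to the root
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0), obstacles=[(5.0, 5.0, 2.0)])
```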
ADAPTIVE REAL-TIME ANOMALY DETECTION FOR SAFEGUARDING CRITICAL NETWORKS
Critical networks require defence in depth incorporating many different security technologies including intrusion detection. One important intrusion detection approach is called anomaly detection where normal (good) behaviour of users of the protected system is modelled, often using machine learning or data mining techniques. During detection new data is matched against the normality model, and deviations are marked as anomalies. Since no knowledge of attacks is needed to train the normality model, anomaly detection may detect previously unknown attacks.
In this thesis we present ADWICE (Anomaly Detection With fast Incremental Clustering) and evaluate it in IP networks. ADWICE has the following properties:
(i) Adaptation - Rather than making use of extensive periodic retraining sessions on stored off-line data to handle changes, ADWICE is fully incremental, making very flexible on-line training of the model possible without destroying what is already learnt. When subsets of the model are no longer useful, those clusters can be forgotten.
(ii) Performance - ADWICE is linear in the number of input data points, thereby greatly reducing training time compared to alternative clustering algorithms. Training time as well as detection time is further reduced by the use of an integrated search index.
(iii) Scalability - Rather than keeping all data in memory, only compact cluster summaries are used. The linear time complexity also improves scalability of training.
We have implemented ADWICE and integrated the algorithm in a software agent. The agent is part of the Safeguard agent architecture, developed to perform network monitoring, intrusion detection and correlation, as well as recovery. We have also applied ADWICE to publicly available network data to compare our approach with related work. The evaluation resulted in a high detection rate at a reasonable false-positive rate.
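The following toy sketch conveys the general idea behind clustering-based anomaly detection with compact cluster summaries; it is a simplified illustration, not the ADWICE algorithm itself (ADWICE additionally uses a search index, cluster radii and forgetting). Each cluster is summarised by a count and a linear sum, training folds a point into the nearest summary, and a point far from every centroid is flagged as an anomaly.

```python
import math

class IncrementalAnomalyDetector:
    """Toy clustering-based anomaly detector (illustrative names and
    thresholds). Normality is modelled as compact cluster summaries
    (count, linear sum) built incrementally; a point is an anomaly if
    it is far from every cluster centroid."""

    def __init__(self, radius=1.0):
        self.radius = radius
        self.clusters = []  # each cluster: [count, [sum_x0, sum_x1, ...]]

    def _nearest(self, x):
        """Return (cluster, distance) for the closest cluster centroid."""
        best, best_d = None, float("inf")
        for c in self.clusters:
            centroid = [s / c[0] for s in c[1]]
            d = math.dist(x, centroid)
            if d < best_d:
                best, best_d = c, d
        return best, best_d

    def train(self, x):
        # Incremental training: fold x into the nearest cluster summary,
        # or start a new cluster if none is close enough.
        c, d = self._nearest(x)
        if c is not None and d <= self.radius:
            c[0] += 1
            c[1] = [s + xi for s, xi in zip(c[1], x)]
        else:
            self.clusters.append([1, list(x)])

    def is_anomaly(self, x):
        _, d = self._nearest(x)
        return d > self.radius

det = IncrementalAnomalyDetector(radius=1.0)
for p in [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]:  # "normal" data
    det.train(p)
normal = det.is_anomaly((0.1, 0.1))   # near the first cluster
attack = det.is_anomaly((9.0, 0.0))   # far from both clusters
```

Because only the summaries are kept, memory stays small and a point can be folded into the model in time proportional to the number of clusters, reflecting the adaptation and scalability properties listed above.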
IMPLEMENTATION METHODOLOGY IN ACTION – A STUDY OF AN ENTERPRISE SYSTEMS IMPLEMENTATION METHODOLOGY
Enterprise Systems create new opportunities but also new challenges and difficulties for implementers and users. The clear distinction between the development and the implementation of Enterprise Systems software seems to influence not only the characteristics of methodologies but also how implementers use Enterprise Systems implementation methodologies. The general aim of this thesis is to study an Enterprise Systems implementation methodology, namely SAP’s AcceleratedSAP.
An exploratory case research approach is employed and is initiated with the development of a framework which integrates current views on Method in Action and Information Systems Development with insights from Enterprise Systems research. The theoretically grounded framework outlines the characteristics of the implementation methodology recommended by SAP and used by implementers in Enterprise Systems implementations. The framework is enhanced by an empirical study.
Findings add a number of insights to the body of knowledge in the Information Systems field and the Enterprise Systems implementation methodology. For example, the Implementation Methodology in Action framework developed in this study outlines a set of components which influence the use of an implementation methodology, and implementers’ actions which occur through the use of an implementation methodology. The components have varying characteristics and exert a significant influence on the effectiveness of implementation methodology use, which may explain differences in implementers’ actions and consequently in the outcomes of the Enterprise Systems implementation processes. The notion of implementation methodology in action, as articulated in this study, integrates two complementary views, i.e. a technology view focusing on a formalised aspect and a structural view focusing on a situational aspect, emphasising different features of the implementation methodology.
PUBLIC AND NON-PUBLIC GIFTING ON THE INTERNET
This thesis contributes to the knowledge of how computer-mediated communication and information sharing work in large groups and networks. In more detail, the research question put forward is: in large sharing networks, what concerns do end-users have regarding to whom to provide material? A theoretical framework of gift-giving was applied to identify, label and classify qualitative end-user concerns with provision. The data collection was performed through online ethnographical research methods in two large sharing networks, one music-oriented and one photo-oriented. The methods included forum message elicitation, online interviews, application use and observation. The data collection yielded a total of 1360 relevant forum messages. Apart from this, there are also 27 informal interview logs, field notes and samples of user profiles and sharing policies. The qualitative analysis led to a model of relationships based on the observation that many users experienced conflicts of interest between various groups of receivers and that these conflicts, or social dilemmas, evoked concerns regarding public and non-public provision of material. The groups of potential recipients were often at different relationship levels. The levels ranged from the individual (ego), to the small group of close peers (micro), to a larger network of acquaintances (meso), to the anonymous larger network (macro). It is argued that an important focal point for the analysis of cooperation and conflict is situated in the relations between these levels. Deepened studies and analysis also revealed needs to address dynamic recipient groupings, the need to control the level of publicness of both digital material and its metadata (tags, contacts, comments and links to other networks), and that users often refrained from providing material unless they felt able to control its direction.
A central conclusion is that public and non-public gifting need to co-emerge in large sharing networks and that non-public gifting might be an important factor for the support of continued provision of goods in sustainable networks and communities.
THE USE OF CASE-BASED REASONING IN A HUMAN-ROBOT DIALOG SYSTEM
As long as there have been computers, one goal has been to be able to communicate with them using natural language. It has turned out to be very hard to implement a dialog system that performs as well as a human being in an unrestricted domain, hence most dialog systems today work in small, restricted domains where the permitted dialog is fully controlled by the system.
In this thesis we present two dialog systems for communicating with an autonomous agent:
The first system, the WITAS RDE, focuses on constructing a simple and failsafe dialog system, including a graphical user interface with multimodality features, a dialog manager, a simulator, and development infrastructures that provide the services needed for the development, demonstration, and validation of the dialog system. The system has been tested during an actual flight, connected to an unmanned aerial vehicle.
The second system, CEDERIC, is a successor of the dialog manager in the WITAS RDE. It is equipped with a built-in machine learning algorithm that enables it to learn new phrases and dialogs over time from past experiences; hence the dialog is not necessarily fully controlled by the system. It also includes a discourse model that keeps track of the dialog history and topics, resolves references and maintains subdialogs. CEDERIC has been evaluated through simulation tests and user tests with good results.
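A minimal sketch of the case-based reasoning cycle behind such learning might look as follows (hypothetical names and a crude word-overlap similarity; CEDERIC's actual algorithm and representations are more elaborate): past utterance-action cases are retained, and a new utterance reuses the action of its most similar stored case.

```python
def similarity(a, b):
    """Word-overlap (Jaccard) similarity between two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class CaseBase:
    """Tiny case-based reasoning sketch: retain (utterance, action)
    cases; handle a new utterance by reusing the action of the most
    similar stored case, so new phrasings can be learnt over time."""

    def __init__(self):
        self.cases = []  # list of (utterance, action) pairs

    def retain(self, utterance, action):
        self.cases.append((utterance, action))

    def reuse(self, utterance, min_sim=0.3):
        """Return the action of the best-matching case, or None."""
        if not self.cases:
            return None
        best = max(self.cases, key=lambda c: similarity(c[0], utterance))
        return best[1] if similarity(best[0], utterance) >= min_sim else None

cb = CaseBase()
cb.retain("fly to the tower", "goto(tower)")
cb.retain("land on the road", "land(road)")
action = cb.reuse("please fly to the red tower")  # matches the first case
```

A phrase below the similarity threshold yields no action, at which point a real system would ask for clarification and retain the resolved case.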
MANAGING COMPETENCE DEVELOPMENT PROGRAMS IN A CROSS-CULTURAL ORGANISATION – WHAT ARE THE BARRIERS AND ENABLERS?
During the past decade, research on competence development and cross-cultural organisation has been acknowledged both in academic circles and by industrial organisations. Cross-cultural organisations that have emerged through globalisation are a manifestation of the growing economic interdependence among countries. In cross-cultural organisations, competence development has become an essential strategic tool for taking advantage of the synergy effects of globalisation. The objective of this thesis is to examine how competence development programs are conducted and to identify barriers and enablers for the success of such programs, especially in a cross-cultural organisation.
To identify the processes involved in managing competence development programs in a cross-cultural organisation, a case study method was chosen. A total of 43 interviews were held and 33 surveys administered among participants, facilitators and managers in competence development programs at four units of IKEA Trading Southeast Asia, located in Thailand, Vietnam, Malaysia and Indonesia, respectively. In addition to the observations made on these four competence development programs, a study of the literature in related research areas was conducted. The interviews were held and the survey data collected in 2003 and 2004.
In the findings, the barriers identified were cultural differences, assumptions, language, and mistrust; the enablers were cultural diversity, motivation, management commitment, and communication. The conclusions are that competence development is a strategic tool for cross-cultural organisations and that it is extremely important to identify barriers to, and enablers of, successful competence development, and to eliminate the barriers and support the enablers right from the early stages of competence development programs.
No FiF a 90
A PRACTICE PERSPECTIVE ON THE MANAGEMENT OF SOFTWARE COMPONENTS
The development and maintenance of an information system constantly face new demands and conditions. Development must take place in less time and with increased productivity. From a maintenance point of view, IT systems must be quickly adaptable to changes in business and technology, while at the same time maintaining high quality and security. All this requires new ways of working and of organising IT operations. One of these new ways of working concerns the management of software components. The basic idea of this approach is that the development and maintenance of IT systems should not be based on developing new software, but on reusing existing software components.
The research process has had a qualitative approach with inductive and deductive elements. Data was collected through document studies and interviews. The management of software components has been studied at the internal IT departments of two government agencies. The purpose has been to map out what component management entails and in what ways the work of the IT departments has changed. Component management is described from a practice perspective, which means that the conditions, actions, results and clients of the IT operations are analysed.
The result of the thesis is a practice theory for component management. The practice "component management" consists of four sub-practices: component acquisition, component maintenance, component-based systems development and component-based systems maintenance. The product of this practice is usable IT systems. The thesis discusses different ways of organising this practice, as well as the basic conditions needed to carry it out. The purpose of the practice theory presented is to show how internal IT operations can be conducted to meet the new demands for efficiency, changeability, quality and security placed on them.
A FRAMEWORK FOR THE STRATEGIC MANAGEMENT OF INFORMATION TECHNOLOGY
Strategy and IT research has been discussed extensively during the past 40 years. Two scientific disciplines, Management Science (MS) and Management Information Science (MIS), investigate the importance of IT as a competitive factor. However, although much research is available in both disciplines, it is still difficult to explain how to manage IT to enable competitive advantages. One reason is that MS research focuses on strategies and competitive environments but avoids the analysis of IT. Another reason is that MIS research focuses on IT as a competitive factor but avoids the analysis of the competitive environment. Consequently, there is a gap in knowledge in the understanding of the strategic management of information technology (SMIT).
The strategic analysis of IT as a competitive factor is important for achieving the competitive advantages of IT. This thesis explores factors related to strategy and IT that should be considered for the strategic analysis of IT as a competitive factor, and proposes a framework for SMIT. The research is conducted by means of a qualitative analysis of theoretical data from the disciplines of MS and MIS. Data is explored to find factors related to SMIT.
The results of the analysis show that the strategic management of information technology is a continuous process of evaluation, change, and alignment between factors such as competitive environment, competitive strategies (business and IT strategies), competitive outcome, and competitive factors (IT). Therefore, the understanding of the relationships between these factors is essential in order to achieve the competitive advantages of using IT.
This thesis contributes to strategic management research by clarifying the relationships between strategic management, competitive environment, and IT as competitive factor into a holistic framework for strategic analysis. The framework proposed is valuable not only for business managers and for IT managers, but also for academics. The framework is designed to understand the relationship between competitive elements during the process of strategic analysis prior to the formulation of competitive strategies. Moreover, it can also be used as a communication tool between managers, in order to achieve alignment among company strategies. To academics, this thesis presents the state-of-the-art related to strategic management research; it can also be a valuable reference for strategic managers, as well as researchers interested in the strategic management of IT.
SCHEDULING AND OPTIMIZATION OF FAULT-TOLERANT EMBEDDED SYSTEMS
Safety-critical applications have to function correctly even in the presence of faults. This thesis deals with techniques for tolerating the effects of transient and intermittent faults. Re-execution, software replication, and rollback recovery with checkpointing are used to provide the required level of fault tolerance. These techniques are considered in the context of distributed real-time systems with non-preemptive static cyclic scheduling.
Safety-critical applications have strict time and cost constraints, which means that not only must faults be tolerated, but the constraints must also be satisfied. Hence, efficient system design approaches that take fault tolerance into consideration are required.
The thesis proposes several design optimization strategies and scheduling techniques that take fault tolerance into account. The design optimization tasks addressed include, among others, process mapping, fault tolerance policy assignment, and checkpoint distribution.
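As a small worked illustration of the checkpoint distribution task, consider a simplified worst-case model (invented for this example, not the thesis's exact formulation): splitting a process into more segments shortens the re-execution after a fault but adds checkpointing overhead, so an optimal number of checkpoints exists and can be found by simple search.

```python
def worst_case_time(P, n, k, chk=2.0, rec=1.0):
    """Worst-case schedule length for a process of length P split into n
    equal segments with a checkpoint after each segment, under at most k
    transient faults. Each fault re-executes one segment (P / n) after a
    recovery overhead `rec`; every checkpoint costs `chk`. Simplified,
    illustrative model only."""
    return P + n * chk + k * (P / n + rec)

def best_checkpoints(P, k, n_max=20):
    """Brute-force search for the checkpoint count minimising the model."""
    return min(range(1, n_max + 1), key=lambda n: worst_case_time(P, n, k))

n = best_checkpoints(P=100.0, k=2)  # few checkpoints: long re-execution;
                                    # many checkpoints: high overhead
```

With P = 100, k = 2, chk = 2 and rec = 1, the model is minimised at ten checkpoints; the thesis optimises checkpoint distribution jointly with process mapping and fault tolerance policy assignment.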
Dedicated scheduling techniques and mapping optimization strategies are also proposed to handle customized transparency requirements associated with processes and messages. By providing fault containment, transparency can, potentially, improve the testability and debuggability of fault-tolerant applications.
The efficiency of the proposed scheduling techniques and design optimization strategies is evaluated with extensive experiments conducted on a number of synthetic applications and a real-life example. The experimental results show that considering fault tolerance during system-level design optimization is essential when designing cost-effective fault-tolerant embedded systems.
No FiF-a 91
WORK PRACTICE-ADAPTED IT SUPPORT SYSTEMS - DESIGN THEORY AND METHOD
There are IT systems in organisations that do not function as good support for staff in carrying out their work. That IT systems do not function may be due to various shortcomings in their development. One shortcoming put forward in this thesis is that the IT system has not been developed to be adapted to the work practice. One proposal is that IT systems can become better adapted to the work practice by adopting a suitable view of IT and the work practice, and by using a design theory based on this view and a method based on the design theory. The concepts of work practice and IT support have been examined, and a proposed view for developing work practice-adapted IT systems consists of a pragmatic perspective on work practice, a contextual perspective on IT, and a systemic perspective on the relationship between work practice and IT. Actability and activity theory were assumed together to constitute a design theory covering the proposed view. An investigation has been made of how actability could be further developed with activity theory and used as a design theory for developing work practice-adapted IT systems. This has been done through a conceptual theoretical comparison between the theories. The investigation can also be seen as a theoretical grounding of actability. The requirements engineering method VIBA (Verksamhets- och Informationsbehovsanalys, i.e. analysis of business and information needs) is based on actability and has been examined as a method for developing work practice-adapted IT systems. To test activity theory as a complementary design theory to actability, it has been investigated how activity theory could be used to further develop VIBA into a method for developing work practice-adapted IT systems. The method development has been carried out by comparing VIBA with a method for work development based on activity theory. This has generated a method proposal that has then been tested in practice through application in two IT development projects. Both projects concerned the development of mobile IT support for health and social care.
The comparison between actability and activity theory showed that the theories partly had concepts in common and partly had concepts that existed only within each respective theory. This indicates that activity theory could further develop actability with respect to these concepts, both as a design theory for developing work practice-adapted IT systems and in its own right. The method proposal could be applied in practice. To confirm its usability, however, further testing and validation are required.
A BLUEPRINT FOR USING COMMERCIAL GAMES OFF THE SHELF IN DEFENCE TRAINING, EDUCATION AND RESEARCH SIMULATIONS
There are two types of simulations, those made for business and those made for pleasure. The underlying technology is usually the same, the difference being how and for what purpose the simulation is used. Often the two purposes can be combined. Nowhere is this more obvious than in the mutual benefit that exists between the military community and the entertainment business. These mutual benefits have only in recent years begun to be seriously explored.
The objective of this work is to explore how to modify and use commercial off-the-shelf video games in defence training, education and research. The work focuses on the process of modifying such games for military needs and on what to consider when doing so.
The outlined blueprint is based on studies performed with combatants from the Swedish Army. To facilitate the development of the blueprint, a great number of commercial games used by military communities around the world are evaluated. These evaluations, together with literature in the area, are used to develop a basic theoretical framework, which characterizes the approach and style throughout the work.
From a general point of view, there are two overall findings. First, there is an urgent need for more intuitive, pedagogical and powerful tools for preparation, management and evaluation of game-based simulation, especially since the real learning often takes place during the modification process rather than during the playing session. Second, there is a deficient understanding of the differences between, and the purposes of, a defence simulation and a game: defence simulations focus on actions and events, while video games focus on human reactions to actions and events.
TOWARDS AN XML DOCUMENT RESTRUCTURING FRAMEWORK
An XML document has a set of constraints associated, such as validity constraints and cross-dependencies. When changing its structure these constraints must be maintained. In some cases a restructuring involves many dependent documents; such changes should be automated to ensure consistency and efficiency.
Most existing XML tools support simple updates, restricted to a single document. Moreover, these tools often do not support concepts defined by a specific XML-application (an XML-application defines the set of valid markup symbols, e.g., tags, and their hierarchical structure). This work aims at developing a framework for XML document restructuring. The framework facilitates the realisation of document restructuring tools by providing advanced restructuring functions, i.e., it provides an environment where restructuring operations can easily be realised. To avoid restricting the framework to a specific set of XML-applications, it is designed for flexibility.
The conceptual part of this work focuses on the definition of an operation set for XML document restructuring, called the operation catalogue. The operations are adapted to a document model defined by this work. The catalogue is divided into three abstraction layers, corresponding to the concepts defined by XML, XML-applications, and XML-application policies. The layer structure facilitates extensibility by allowing new operations to be defined in terms of existing ones. In the practical part, an architecture is presented for a document restructuring framework which supports realisation of the operations presented earlier. The architecture is based on a layered approach to facilitate extensibility with new layers that contain restructuring operations and functions for an XML-application or an XML-application policy. A new layer component can be added without recompilation of existing components. To reduce resource consumption during document load and restructuring, the framework allows its user to specify, upon initialization, the set of active layer components (each layer component may perform analysis). This part also includes a prototype implementation of the presented architecture.
This work results in an event-based framework for document restructuring that is extensible with restructuring support for new XML-applications and XML-application policies. The framework is also well suited to manage inter-document issues, such as dependencies.
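A minimal sketch of what an XML-level restructuring operation looks like when applied uniformly across dependent documents (the operation, tags, and documents are hypothetical illustrations using Python's standard xml.etree, not the thesis's framework):

```python
import xml.etree.ElementTree as ET

def rename_element(root, old_tag, new_tag):
    """A generic, XML-level restructuring operation: rename every
    element with tag `old_tag` to `new_tag`, keeping structure intact."""
    for el in list(root.iter(old_tag)):
        el.tag = new_tag
    return root

# Two dependent documents that must stay consistent (hypothetical data).
doc_a = ET.fromstring("<article><sect><p>one</p></sect></article>")
doc_b = ET.fromstring("<article><sect><p>two</p></sect></article>")

# Applying the same operation to all dependent documents in one step
# is the kind of consistency an automated framework guarantees.
for doc in (doc_a, doc_b):
    rename_element(doc, "sect", "section")

print(ET.tostring(doc_a).decode())
# <article><section><p>one</p></section></article>
```

Higher catalogue layers would express operations like this in terms of the concepts of a specific XML-application rather than raw tags.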
PREREQUISITES FOR DATA SHARING IN EMERGENCY MANAGEMENT: JOINT ANALYSIS USING A REAL-TIME ROLE-PLAYING EXERCISE APPROACH
This thesis explains how semi-coordinated or separated utilization of information and communication technologies may affect collaborative work between different emergency management organizations, and their capabilities to perform joint tasks during emergency responses. Another aim is to explore the modeling of emergency management and data collection methods with respect to utilization of these technologies. The theoretical basis for the thesis consists of system science, cognitive system engineering, communication, informatics, simulation, emergency management, and command and control. Important notions are the joint cognitive systems concept and the communication infrastructure concept. The case study method and the real-time role-playing exercise approach are the main methodological approaches. On the basis of two main studies, geospatial data and related systems are studied as an example. Study I focuses on emergency management organizations’ abilities to collaborate effectively by assessing their communication infrastructure. Study II, on the other hand, highlights the emerging effects in use of data in collaborative work when responding to a forest fire scenario. The results from the studies, and from the general work conducted and presented here, show that the semi-coordinated or separated utilization of the technologies affects (a) how well the organizations can collaborate, (b) the capabilities to carry out collaborative tasks during crises and disasters, and (c) to what extent the technology can be used in real-life situations. The results also show that the joint cognitive system notion and the real-time role-playing exercise approach provided new ways to conceptualize and study the emergency management and the command and control system.
A FRAMEWORK FOR DESIGNING CONSTRAINT STORES
A constraint solver based on concurrent search and propagation provides a well-defined component model for propagators by enforcing a strict two-level architecture. This makes it straightforward for third parties to invent, implement and deploy new kinds of propagators. The most critical components of such solvers are the constraint stores through which propagators communicate with each other. Introducing stores supporting new kinds of stored constraints can potentially increase the solving power by several orders of magnitude. This thesis presents a theoretical framework for designing stores achieving this without loss of propagator interoperability.
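The two-level architecture can be caricatured in a few lines (an illustrative sketch only, not the thesis's framework): propagators are components that communicate solely through a shared store of variable domains, and the solver runs them to a fixpoint.

```python
class Store:
    """Minimal finite-domain constraint store: maps variables to sets."""
    def __init__(self, domains):
        self.domains = {v: set(d) for v, d in domains.items()}
        self.propagators = []

    def tell(self, var, allowed):
        """Narrow a variable's domain; report whether it changed."""
        new = self.domains[var] & allowed
        changed = new != self.domains[var]
        self.domains[var] = new
        return changed

    def fixpoint(self):
        """Run propagators until no domain changes (naive scheduler)."""
        changed = True
        while changed:
            changed = any(p(self) for p in self.propagators)

def less_than(x, y):
    """Propagator enforcing x < y over integer domains in the store."""
    def run(store):
        dx, dy = store.domains[x], store.domains[y]
        c1 = store.tell(x, {v for v in dx if v < max(dy)})
        c2 = store.tell(y, {v for v in dy if v > min(dx)})
        return c1 or c2
    return run

store = Store({"x": range(1, 10), "y": range(1, 10)})
store.propagators = [less_than("x", "y")]
store.fixpoint()
print(sorted(store.domains["x"]))  # → [1, 2, 3, 4, 5, 6, 7, 8]
```

Because propagators only see the store's interface, swapping in a store for a richer kind of stored constraint does not break propagator interoperability, which is the point the thesis develops formally.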
SLACK-TIME AWARE DYNAMIC ROUTING SCHEMES FOR ON-CHIP NETWORKS
Network-on-Chip (NoC) is a new on-chip communication paradigm for future IP-core based System-on-Chip (SoC), designed to remove a number of limitations of today's on-chip interconnect solutions. A NoC interconnects cores by means of a packet-switched micro-network, which improves scalability and reusability, resulting in a shorter time to market. A typical NoC will be running many applications concurrently, which results in network capacity being shared between different kinds of traffic flows. Due to the diverse characteristics of applications, some traffic flows will require real-time communication guarantees while others can tolerate even some loss of data. In order to provide different levels of Quality-of-Service (QoS) for traffic flows, the communication traffic is separated into different service classes. Traffic in NoC is typically classified into two service classes: the guaranteed throughput (GT) and the best-effort (BE) service class. The GT class offers strict QoS guarantees by setting up a virtual path with reserved bandwidth between the source (GT-producer) and destination (GT-consumer), called a GT-path. The BE class offers no strict QoS guarantees, but tries to efficiently use any network capacity which may become available from the GT traffic. The GT traffic may not fully utilize its bandwidth reservation if its communication volume varies, leading to time intervals where no GT traffic uses the bandwidth reservation. These intervals are referred to as slack-time. If the slack cannot be used, this leads to unnecessarily reduced performance of BE traffic, since a part of the available network capacity becomes blocked. This thesis deals with methods to efficiently use the slack-time for BE traffic. The contributions include three new dynamic schemes for slack distribution in NoC. First, a scheme to inform the routers of a GT-path about available slack is evaluated.
The GT-producer plans its traffic using a special playout buffer and issues control packets containing the actual amount of slack-time available. The results show that this scheme leads to decreased latency, jitter and packet drops for BE traffic. Second, an extension to this scheme is evaluated, where slack is distributed among multiple GT-paths (slack distribution in space). This opens up the possibility to balance the QoS of BE traffic flows which overlap with the GT-paths. Third, a scheme to distribute slack among the links of a GT-path (slack distribution in time) is proposed. In this approach, GT-packets arriving at a certain router along the GT-path can wait for a defined maximum amount of time. During this time, any waiting BE traffic in the buffers can be forwarded over the GT-path. The results confirm that this is especially important during high BE-traffic load, where this technique decreases the jitter of BE traffic considerably.
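A simplified way to picture slack use (illustrative slot numbers and packet names, not the thesis's protocol): the GT-path reserves every time slot on a link, the GT-producer's playout buffer knows which slots will actually carry GT packets, and the remaining slots are slack that may be granted to waiting BE packets.

```python
def distribute_slack(reserved_slots, gt_schedule, be_queue):
    """For each reserved slot, send the scheduled GT packet if any;
    otherwise the slot is slack and a waiting BE packet may use it."""
    link = []
    be = list(be_queue)
    for slot in range(reserved_slots):
        if slot in gt_schedule:
            link.append(("GT", gt_schedule[slot]))
        elif be:
            link.append(("BE", be.pop(0)))   # slack used by best-effort
        else:
            link.append(("idle", None))      # unused slack
    return link

# GT traffic with varying volume: only slots 0, 1 and 5 carry GT packets.
gt = {0: "g0", 1: "g1", 5: "g2"}
print(distribute_slack(8, gt, ["b0", "b1", "b2"]))
```

Without slack distribution, slots 2-4, 6 and 7 would be blocked for BE traffic even though no GT packet uses them.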
MODELLING USER TASKS AND INTENTIONS FOR SERVICE DISCOVERY IN UBIQUITOUS COMPUTING
Ubiquitous computing (Ubicomp) increases in proliferation. Multiple and ever growing in numbers, computational devices are now at the users' disposal throughout the physical environment, while simultaneously being effectively invisible. Consequently, a significant challenge is service discovery. Services may for instance be physical, such as printing a document, or virtual, such as communicating information. The existing solutions, such as Bluetooth and UPnP, address part of the issue, specifically low-level physical interconnectivity. Still absent are solutions for high-level challenges, such as connecting users with appropriate services. In order to provide appropriate service offerings, service discovery in Ubicomp must take the users' context, tasks, goals, intentions, and available resources into consideration. It is possible to divide the high-level service-discovery issue into two parts; inadequate service models, and insufficient common-sense models of human activities.
This thesis contributes to service discovery in Ubicomp, by arguing that in order to meet these highlevel challenges, a new layer is required. Furthermore, the thesis presents a prototype implementation of this new service-discovery architecture and model. The architecture consists of hardware, ontology layer, and common-sense layer. This work addresses the ontology and common-sense layers. Subsequently, implementation is divided into two parts; Oden and Magubi. Oden addresses the issue of inadequate service models through a combination of service-ontologies in concert with logical reasoning engines, and Magubi addresses the issue of insufficient common-sense models of human activities, by using common-sense models in combination with rule engines. The synthesis of these two stages enables the system to reason about services, devices, and user expectations, as well as to make suitable connections to satisfy the users’ overall goal.
Designing common-sense models and service ontologies for a Ubicomp environment is a non-trivial task. Despite this, we believe that if correctly done, it might be possible to reuse at least part of the knowledge in different situations. With the ability to reason about services and human activities it is possible to decide if, how, and where to present the services to the users. The solution is intended to off-load users in diverse Ubicomp environments as well as provide a more relevant service discovery.
ONTOLOGY AS CONCEPTUAL SCHEMA WHEN MODELLING HISTORICAL MAPS FOR DATABASE STORAGE
Sweden has an enormous treasure in its vast number of large-scale historical maps from a period of 400 years made for different purposes, that we call map series. The maps are also very time and regional dependent with respect to their concepts. A large scanning project by Lantmäteriverket will make most of these maps available as raster images. In many disciplines in the humanities and social sciences, like history, human geography and archaeology, historical maps are of great importance as a source of information. They are used frequently in different studies for a variety of problems. A full and systematic analyse of this material from a database perspective has so far not been conducted. During the last decade or two, it has been more and more common to use data from historical maps in GIS-analysis. In this thesis a novel approach to model these maps is tested. The method is based on the modelling of each map series as its own ontology, thus focusing on the unique concepts of each map series. The scope of this work is a map series covering the province of Gotland produced during the period 1693-1705. These maps have extensive text descriptions concerned with different aspects of the mapped features. Via a code marking system they are attached to the maps. In this thesis a semantic analysis and an ontology over all the concepts found in the maps and text descriptions are presented. In our project we model the maps as close to the original structure as possible with a very data oriented view. Furthermore; we demonstrate how this ontology can be used as a conceptual schema for a logical E/R database schema. The Ontology is described in terms of the Protégé meta-model and the E/R schema in UML. The mapping between the two is a set of elementary rules, which are easy for a human to comprehend, but hard to automate. The E/R schema is implemented in a demonstration system. 
Examples of some different applications which are feasibly to perform by the system are presented. These examples go beyond the traditional use of historical maps in GIS today.
NAVIGATION FUNCTIONALITIES FOR AN AUTONOMOUS UAV HELICOPTER
This thesis was written during the WITAS UAV Project where one of the goals has been the development of a software/hardware architecture for an unmanned autonomous helicopter, in addition to autonomous functionalities required for complex mission scenarios. The algorithms developed here have been tested on an unmanned helicopter platform developed by Yamaha Motor Company called the RMAX.
The character of the thesis is primarily experimental and it should be viewed as developing navigational functionality to support autonomous flight during complex real-world mission scenarios. This task is multidisciplinary since it requires competence in aeronautics, computer science and electronics.
The focus of the thesis has been on the development of a control method to enable the helicopter to follow 3D paths. Additionally, a helicopter simulation tool has been developed in order to test the control system before flight-tests.
The thesis also presents an implementation and experimental evaluation of a sensor fusion technique based on a Kalman filter applied to a vision based autonomous landing problem. Extensive experimental flight-test results are presented.
USER-CENTRIC CRITIQUING IN COMMAND AND CONTROL: THE DKEXPERT AND COMPLAN APPROACHES
This thesis describes two approaches for using critiquing as decision support for military mission planning. In our work, we have drawn both from human-centered research and from decision support systems research for military mission planning when devising approaches for knowledge acquisition and decision support for mission planning.
Our two approaches build on a common set of requirements developed as a consequence of literature analyses as well as interview studies. In short, these requirements state that critiquing systems should be developed with transparency, ease of use and integration into the traditional work flow in mind. The use of these requirements is illustrated in two approaches to decision support in two different settings: a collaborative real-time war-gaming simulation and a planning tool for training mission commanders.
Our first approach is demonstrated by the DKExpert system, in which end-users can create feedback mechanisms for their own needs when playing a two-sided war-game scenario in the DKE simulation environment. In DKExpert, users can choose to trigger feedback during the game by instructing a rule engine to recognize critical situations. Our second approach, ComPlan, builds on the insights on knowledge and planning representation gained from DKExpert and introduces an explicit representation of planning operations, thereby allowing for better analysis of planning operations and user-controlled feedback. ComPlan also demonstrates a design for critiquing support systems that respects the traditional work practice of mission planners while allowing for intelligent analysis of military plans.
EMBODIED SIMULATION AS OFF-LINE REPRESENTATION
This licentiate thesis argues that a key to understanding the embodiment of cognition is the “sharing” of neural mechanisms between sensorimotor processes and higher-level cognitive processes as described by simulation theories. Simulation theories explain higher-level cognition as (partial) simulations or emulations of sensorimotor processes through the re-activation of neural circuitry also active in bodily perception, action, and emotion. This thesis develops the notion that simulation mechanisms have a particular representational function, as off-line representations, which contributes to the representation debate in embodied cognitive science. Based on empirical evidence from neuroscience, psychology and other disciplines as well as a review of existing simulation theories, the thesis describes three main mechanisms of simulation theories: re-activation, binding, and prediction. The possibility of using situated and embodied artificial agents to further understand and validate simulation as a mechanism of (higher-level) cognition is addressed through analysis and comparison of existing models. The thesis also presents some directions for further research on modeling simulation as well as the notion of embodied simulation as off-line representation.
SYSTEM-ON-CHIP TEST SCHEDULING WITH DEFECT-PROBABILITY AND TEMPERATURE CONSIDERATIONS
Electronic systems have become highly complex, which results in a dramatic increase of both design and production cost. Recently a core-based system-on-chip (SoC) design methodology has been employed in order to reduce these costs. However, testing of SoCs has been facing challenges such as long test application time and high temperature during test. In this thesis, we address the problem of minimizing test application time for SoCs and propose three techniques to generate efficient test schedules.
First, a defect-probability driven test scheduling technique is presented for production test, in which an abort-on-first-fail (AOFF) test approach is employed and a hybrid built-in self-test architecture is assumed. Using an AOFF test approach, the test process can be aborted as soon as the first fault is detected. Given the defect probabilities of individual cores, a method is proposed to calculate the expected test application time (ETAT). A heuristic is then proposed to generate test schedules with minimized ETATs.
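Under one simple cost model (an assumption for illustration, not necessarily the thesis's exact formulation: sequential testing, independent defects, a fault detected at the end of a core's test), the ETAT can be computed directly, and the computation shows why cores with high defect probability should be scheduled early:

```python
def etat(schedule):
    """Expected test application time under abort-on-first-fail.
    schedule: list of (test_time, defect_probability) per core,
    in test order; defects are assumed independent."""
    expected, p_reach = 0.0, 1.0   # p_reach = prob. all earlier cores passed
    for t, p in schedule:
        expected += p_reach * t    # this core's test runs iff we reach it
        p_reach *= 1.0 - p
    return expected

# Scheduling the high-defect-probability core first lowers the ETAT:
risky_first = [(10, 0.5), (10, 0.1)]
safe_first = [(10, 0.1), (10, 0.5)]
print(etat(risky_first), etat(safe_first))  # 15.0 vs 19.0
```

A scheduling heuristic can use such an estimate as its cost function when ordering and grouping core tests.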
Second, a power-constrained test scheduling approach using test set partitioning is proposed. It assumes that, during the test, the total amount of power consumed by the cores being tested in parallel has to be lower than a given limit. A heuristic is proposed to minimize the test application time, in which a test set partitioning technique is employed to generate more efficient test schedules.
Third, a thermal-aware test scheduling approach is presented, in which test set partitioning and interleaving are employed. A constraint logic programming (CLP) approach is deployed to find the optimal solution. Moreover, a heuristic is also developed to generate near-optimal test schedules especially for large designs to which the CLP-based algorithm is inapplicable.
Experiments based on benchmark designs have been carried out to demonstrate the applicability and efficiency of the proposed techniques.
COMPONENTS, SAFETY INTERFACES AND COMPOSITIONAL ANALYSIS
Component-based software development has emerged as a promising approach for developing complex software systems by composing smaller independently developed components into larger component assemblies. This approach offers means to increase software reuse and to achieve higher flexibility and shorter time-to-market through the use of off-the-shelf components (COTS). However, the use of COTS in safety-critical systems is largely unexplored.
This thesis addresses the problems appearing in component-based development of safety-critical systems. We aim at efficient reasoning about safety at the system level while adding or replacing components. For safety-related reasoning it does not suffice to consider correctly functioning components in their intended environments; the behaviour of components in the presence of single or multiple faults must also be considered. Our contribution is a formal component model that includes the notion of a safety interface, which describes how the component behaves with respect to violation of a given system-level property in the presence of faults in its environment. This approach also provides a link between formal analysis of components in safety-critical systems and the traditional engineering processes supported by model-based development.
We also present an algorithm for deriving safety interfaces given a particular safety property and fault modes for the component. The safety interface is then used in a method proposed for compositional reasoning about component assemblies. Instead of reasoning about the effect of faults on the composed system, we suggest analysing fault tolerance through pairwise analysis based on safety interfaces.
The framework is demonstrated as a proof-of-concept in two case studies: a hydraulic system from the aerospace industry and an adaptive cruise controller from the automotive industry. The case studies have shown that a more efficient system-level safety analysis can be performed using the safety interfaces.
QUESTION CLASSIFICATION IN QUESTION ANSWERING SYSTEMS
Question answering systems can be seen as the next step in information retrieval, allowing users to pose questions in natural language and receive succinct answers. In order for a question answering system as a whole to be successful, research has shown that the correct classification of questions with regards to the expected answer type is imperative. Question classification has two components: a taxonomy of answer types, and a machinery for making the classifications.
This thesis focuses on five different machine learning algorithms for the question classification task: k nearest neighbours, naïve Bayes, decision tree learning, sparse network of winnows, and support vector machines. These algorithms have been applied to two different corpora, one of which has been used extensively in previous work and was constructed for a specific agenda. The other corpus is drawn from a set of users' questions posed to a running online system. The results showed that the performance of the algorithms on the different corpora differs both in absolute terms and with regard to their relative ranking. On the novel corpus, naïve Bayes, decision tree learning, and support vector machines perform on par with each other, while on the biased corpus there is a clear difference between them, with support vector machines being the best and naïve Bayes being the worst.
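As a toy illustration of one of the five algorithms, here is k nearest neighbours over a bag-of-words representation, applied to a hypothetical mini-corpus with expected-answer-type labels (none of this data or code comes from the thesis):

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector as a word-count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(question, training, k=3):
    """Predict the expected answer type by majority vote
    among the k most similar training questions."""
    scored = sorted(training,
                    key=lambda ex: cosine(bow(question), bow(ex[0])),
                    reverse=True)
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]

# Hypothetical training questions labelled with answer types.
train = [
    ("who wrote hamlet", "PERSON"),
    ("who is the president of france", "PERSON"),
    ("where is the eiffel tower", "LOCATION"),
    ("where was napoleon born", "LOCATION"),
    ("when did the war end", "DATE"),
    ("when was the telephone invented", "DATE"),
]
print(knn_classify("who invented the telephone", train))  # PERSON
```

The example query also hints at the error sources discussed below: its surface words overlap heavily with a DATE question, so a tiny corpus or an ambiguous formulation can easily mislead any of the algorithms.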
The thesis also presents an analysis of questions that are problematic for all learning algorithms. The errors can roughly be divided as due to categories with few members, variations in question formulation, the actual usage of the taxonomy, keyword errors, and spelling errors. A large portion of the errors were also hard to explain.
INFORMATION DEMAND AND USE: IMPROVING INFORMATION FLOW WITHIN SMALL-SCALE BUSINESS CONTEXTS
Whilst the amount of information readily available to workers in information- and knowledge-intensive business and industrial contexts only seems to increase with every day, those workers still have difficulties finding relevant and needed information, as well as storing, distributing, and aggregating such information. Yet, whilst there exist numerous technical, organisational, and practical approaches to remedy the situation, the problems seem to prevail.
This publication describes the first part of the author's work on defining a methodology for improving the flow of work-related information with respect to the information demand of individuals and organisations. After a prefatory description of the perceived problems concerning information flow in modern organisations, a number of initial conjectures regarding information demand and use in small-scale business contexts are defined based on a literature study. With this as the starting point, the author sets out to identify, through an empirical investigation performed in three different Swedish organisations during 2005, how individuals within organisations in general, and these three in particular, use information with respect to such organisational aspects as roles, tasks, and resources, as well as spatio-temporal aspects. The results from the investigation are then used to validate the conjectures and to draw a number of conclusions on which both a definition of information demand and the initial steps towards defining a methodology for information demand analysis are based. Lastly, a short discussion of the applicability of the results in continued work is presented together with a description of such planned work.
DEDUCTIVE PLANNING AND COMPOSITE ACTIONS IN TEMPORAL ACTION LOGIC
Temporal Action Logic is a well established logical formalism for reasoning about action and change that has long been used as a formal specification language. Its first-order characterization and explicit time representation make it a suitable target for automated theorem proving and the application of temporal constraint solvers. We introduce a translation from a subset of Temporal Action Logic to constraint logic programs that takes advantage of these characteristics to make the logic applicable, not just as a formal specification language, but in solving practical reasoning problems. Extensions are introduced that enable the generation of action sequences, thus paving the way for interesting applications in deductive planning. The use of qualitative temporal constraints makes it possible to follow a least-commitment strategy and construct partially ordered plans. Furthermore, the logical language and the logic program translation are extended with the notion of composite actions that can be used to formulate and execute scripted plans with conditional actions, non-deterministic choices, and loops. The resulting planner and reasoner is integrated with a graphical user interface in our autonomous helicopter research system and applied to logistics problems. Solution plans are synthesized together with monitoring constraints that trigger the generation of recovery actions in cases of execution failures.
RESTORING CONSISTENCY AFTER NETWORK PARTITIONS
The software industry is facing a great challenge. While systems get more complex and distributed across the world, users are becoming more dependent on their availability. As systems increase in size and complexity so does the risk that some part will fail. Unfortunately, it has proven hard to tackle faults in distributed systems without a rigorous approach. Therefore, it is crucial that the scientific community can provide answers to how distributed computer systems can continue functioning despite faults.
Our contribution in this thesis concerns a special class of faults which occurs when network links fail in such a way that parts of the network become isolated; such faults are termed network partitions. We consider the problem of how systems that have integrity constraints on data can continue operating in the presence of a network partition. Such a system must act optimistically while the network is split and then perform some kind of reconciliation to restore consistency afterwards.
We have formally described four reconciliation algorithms and proven them correct. The novelty of these algorithms lies in the fact that they can restore consistency after network partitions in a system with integrity constraints and that one of the protocols allows the system to provide service during the reconciliation. We have implemented and evaluated the algorithms using simulation and as part of a partition-tolerant CORBA middleware. The results indicate that it pays off to act optimistically and that it is worthwhile to provide service during reconciliation.
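The optimistic approach can be caricatured in a few lines (an illustration of the idea only; the thesis's algorithms are formally specified and proven, and this toy resolution policy is an assumption): each partition logs the operations it applied optimistically, and reconciliation replays the merged log against the integrity constraint, rejecting operations that would violate it.

```python
def reconcile(initial, logs, constraint):
    """Replay operations from all partitions in timestamp order,
    installing each one only if the integrity constraint still holds."""
    merged = sorted((op for log in logs for op in log), key=lambda op: op[0])
    state, rejected = initial, []
    for ts, delta in merged:
        if constraint(state + delta):
            state += delta
        else:
            rejected.append((ts, delta))
    return state, rejected

# Integrity constraint: the shared account balance may never go negative.
partition_a = [(1, +50), (4, -80)]   # ops applied optimistically in A
partition_b = [(2, -60), (3, +10)]   # ops applied optimistically in B
state, rejected = reconcile(10, [partition_a, partition_b],
                            lambda balance: balance >= 0)
print(state, rejected)  # 10 [(4, -80)]
```

Each partition accepted its own operations in isolation; only at reconciliation does the withdrawal at timestamp 4 turn out to violate the constraint and get rejected (or, in a real system, compensated).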
TOWARDS INDIVIDUALIZED DRUG DOSAGE - GENERAL METHODS AND CASE STUDIES
Progress in individualized drug treatment is of increasing importance, promising to avoid much human suffering and reduce medical treatment costs for society. The strategy is to maximize the therapeutic effects and minimize the negative side effects of a drug on an individual or group basis. To reach this goal, interactions between the human body and different drugs must be further clarified, for instance by using mathematical models. Whether clinical studies or laboratory experiments are used as primary sources of information greatly influences the possibilities of obtaining data. This must be considered both prior to and during model development, and different strategies must be used. The character of the data may also restrict the level of complexity of the models, thus limiting their usage as tools for individualized treatment. In this thesis work, two case studies have been made, each with the aim to develop a model for a specific human-drug interaction. The first case study concerns treatment of inflammatory bowel disease with thiopurines, whereas the second is about treatment of ovarian cancer with paclitaxel. Although both case studies make use of similar amounts of experimental data, model development depends considerably on prior knowledge about the systems, the character of the data and the choice of modelling tools. All these factors are presented for each of the case studies along with current results. Further, a system for classifying different but related models is also proposed, with the intention that an increased understanding will contribute to advancement in individualized drug dosage.
A VISUAL QUERY LANGUAGE SERVED BY A MULTI-SENSOR ENVIRONMENT
A problem in modern command and control situations is that much data is available from different sensors. Several sensor data sources also require that the user has knowledge about the specific sensor types to be able to interpret the data.
To alleviate the working situation for a commander, we have designed and constructed a system that will take input from several different sensors and subsequently present the relevant combined information to the user. The users specify what kind of information is of interest at the moment by means of a query language. The main issues when designing this query language have been that (a) the users should not have to have any knowledge about sensors or sensor data analysis, and (b) that the query language should be powerful and flexible, yet easy to use. The solution has been to (a) use sensor data independence and (b) have a visual query language.
A visual query language was developed with a two-step interface. First, the users pose a “rough”, simple query that is evaluated by the underlying knowledge system. The system returns the relevant information that can be found in the sensor data. Then, the users can refine the result by setting conditions on it. These conditions are formulated by specifying attributes of objects or relations between objects.
The problem of uncertainty in spatial data (i.e., location and area) has been considered. The question of how to represent potential uncertainties is dealt with. An investigation has been carried out to find which relations are practically useful when dealing with uncertain spatial data.
The query language has been evaluated by means of a scenario. The scenario was inspired by real events and was developed in cooperation with a military officer to assure that it was fairly realistic. The scenario was simulated using several tools where the query language was one of the more central ones. It proved that the query language can be of use in realistic situations.
SAFETY, SECURITY, AND SEMANTIC ASPECTS OF EQUATION-BASED OBJECT-ORIENTED LANGUAGES AND ENVIRONMENTS
During the last two decades, interest in computer-aided modeling and simulation of complex physical systems has grown significantly. The recent possibility of creating acausal models, using components from different domains (e.g., electrical, mechanical, and hydraulic), enables new opportunities. Modelica is one of the most prominent equation-based object-oriented (EOO) languages that support such capabilities, including the ability to simulate both continuous- and discrete-time models, as well as mixed hybrid models. However, there are still many remaining challenges when it comes to language safety and simulation security. The problem area concerns detecting modeling errors at an early stage, so that faults can be isolated and resolved. Furthermore, to give guarantees for the absence of faults in models, the need for precise language specifications is vital, both regarding type systems and dynamic semantics. This thesis includes five papers related to these topics. The first paper describes the informal concept of types in the Modelica language, and proposes a new concrete syntax for more precise type definitions. The second paper provides a new approach for detecting over- and under-constrained systems of equations in EOO languages, based on a concept called structural constraint delta. That approach makes use of type checking and a type inference algorithm. The third paper outlines a strategy for using abstract syntax as a middle way between a formal and informal language specification. The fourth paper suggests and evaluates an approach for secure distributed co-simulation over wide area networks. The final paper outlines a new formal operational semantics for describing physical connections, which is based on the untyped lambda calculus. A kernel language is defined, in which real physical models are constructed and simulated.
INVASIVE INTERACTIVE PARALLELIZATION
While examining the strengths and weaknesses of contemporary approaches to parallelization, this thesis suggests a form of parallelizing refactoring -- Invasive Interactive Parallelization -- that aims at addressing a number of weaknesses of contemporary methods. Our ultimate goal is to make parallelization more user and developer friendly. While admitting that the approach adds complexity at certain levels (in particular, it can be said to reduce code understandability), we conclude that it provides a remedy for a number of problems found in contemporary methods. As the main contribution, the thesis discusses the benefits we see with the approach, introduces a set of parallelization categories a typical parallelization consists of, and shows how the method can be realized with abstract syntax tree transformations. The thesis also presents a formal solution to the problem of automated round-trip software engineering in aspect-weaving systems.
A HOLISTIC APPROACH TO USABILITY EVALUATIONS OF MIXED REALITY SYSTEMS
The main focus of this thesis is the study of user centered issues in Mixed and Augmented Reality (AR) systems. Mixed Reality (MR) research is in general a highly technically oriented field with few examples of user centered studies. The few studies focusing on user issues found in the field are all based on traditional Human Computer Interaction (HCI) methodologies. Usability methods used in MR/AR research are mainly based on usability methods used for graphical user interfaces, sometimes in combination with usability for Virtual Reality (VR) applications. MR/AR systems and applications differ from standard desktop applications in many ways, but particularly in one crucial respect: they are intended to be used as mediators or amplifiers of human action, often in physical interaction with the surroundings. The differences between MR/AR systems and desktop computer display based systems create a need for a different approach to both development and evaluation of these systems. To understand the potential of MR/AR systems in real world tasks the technology must be tested in real world scenarios. This thesis describes a theoretical basis for approaching the issue of usability evaluations in MR/AR applications. It also includes results from two user studies conducted in a hospital setting where professionals tested an MR/AR system prototype.
A MODEL AND IMPLEMENTATION OF A SECURITY PLUG-IN FOR THE SOFTWARE LIFE CYCLE
Currently, security is frequently considered late in the software life cycle. It is often bolted on late in development, or even during deployment or maintenance, through activities such as add-on security software and penetration-and-patch maintenance. Even if software developers aim to incorporate security into their products from the beginning of the software life cycle, they face an exhaustive amount of ad hoc, unstructured information without any practical guidance on how and why this information should be used and what the costs and benefits of using it are. This is due to a lack of structured methods.
In this thesis we present a model for secure software development and an implementation of a security plug-in that deploys this model in the software life cycle. The model is a structured unified process, named S3P (Sustainable Software Security Process), and is designed to be easily adaptable to any software development process. S3P provides the formalism required to identify the causes of vulnerabilities and the mitigation techniques that address these causes to prevent vulnerabilities. We present a prototype of the security plug-in implemented for the OpenUP/Basic development process in the Eclipse Process Framework. We also present the results of the evaluation of this plug-in. The work in this thesis is a first step towards a general framework for introducing security into the software life cycle and for supporting software process improvements to prevent the recurrence of software vulnerabilities.
MOBILITY AND ROUTING IN A DELAY-TOLERANT NETWORK OF UNMANNED AERIAL VEHICLES
Technology has reached a point where it has become feasible to develop unmanned aerial vehicles (UAVs), that is, aircraft without a human pilot on board. Given that future UAVs can be autonomous and cheap, applications of swarming UAVs become possible. In this thesis we have studied a reconnaissance application using swarming UAVs and how these UAVs can communicate the reconnaissance data. To guide the UAVs in their reconnaissance mission we have proposed a pheromone-based mobility model that in a distributed manner guides the UAVs to areas not recently visited. Each UAV has a local pheromone map that it updates based on its reconnaissance scans. The information in the local map is regularly shared with a UAV’s neighbors. Evaluations have shown that the pheromone logic is very good at guiding the UAVs in their cooperative reconnaissance mission in a distributed manner.
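The pheromone mechanism described above can be sketched roughly as follows. This is a minimal illustrative model, not the thesis's actual implementation; the class name, decay factor, grid representation, and merge rule are all assumptions.

```python
class PheromoneMap:
    """Grid of visit-recency pheromone; a higher value means the cell was
    visited more recently. Illustrative sketch of a pheromone mobility model."""

    def __init__(self, width, height, decay=0.95):
        self.decay = decay
        self.grid = [[0.0] * width for _ in range(height)]

    def step(self):
        # Pheromone evaporates each time step, so old visits fade away.
        for row in self.grid:
            for x in range(len(row)):
                row[x] *= self.decay

    def scan(self, x, y):
        # A reconnaissance scan marks the cell as freshly visited.
        self.grid[y][x] = 1.0

    def merge(self, other):
        # Sharing with a neighbouring UAV: keep the most recent
        # information per cell (assumed merge rule).
        for y, row in enumerate(other.grid):
            for x, v in enumerate(row):
                self.grid[y][x] = max(self.grid[y][x], v)

    def next_move(self, x, y):
        # Move towards the neighbouring cell least recently visited.
        candidates = [(nx, ny) for nx, ny in
                      [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
                      if 0 <= ny < len(self.grid) and 0 <= nx < len(self.grid[0])]
        return min(candidates, key=lambda c: self.grid[c[1]][c[0]])
```

Because each UAV only decays its own map, scans locally, and merges maps when meeting neighbours, the coordination is fully distributed, matching the abstract's description.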
Analyzing the connectivity of the UAVs we found that they were heavily partitioned, which meant that contemporaneous communication paths generally could not be established. This means that traditional mobile ad hoc network (MANET) routing protocols like AODV, DSR and GPSR will generally fail. By using node mobility and the store-carry-forward principle of delay-tolerant routing, the transfer of messages between nodes is still possible. In this thesis we propose location aware routing for delay-tolerant networks (LAROD). LAROD is a beacon-less geographical routing protocol for intermittently connected mobile ad hoc networks. Using static destinations we have shown in a comparative study that LAROD has almost as good a delivery rate as an epidemic routing scheme, but at a substantially lower overhead.
This thesis addresses computer game play activities from the perspective of embodied and situated cognition. From such a perspective, game play can be divided into the physical handling of the game and the players' understanding of it. Game play can also be described in terms of three different levels of situatedness: "high-level" situatedness, the contextual "here and now", and "low-level" situatedness. Moreover, theoretical and empirical implications of such a perspective have been explored in more detail in two case studies.
COMPLETING THE PICTURE - FRAGMENTS AND BACK AGAIN
Better methods and tools are needed in the fight against child pornography. This thesis presents a method for file type categorisation of unknown data fragments, a method for reassembly of JPEG fragments, and the requirements put on an artificial JPEG header for viewing reassembled images. To enable empirical evaluation of the methods a number of tools based on the methods have been implemented.
The file type categorisation method identifies JPEG fragments with a detection rate of 100% and a false positive rate of 0.1%. The method uses three algorithms: Byte Frequency Distribution (BFD), Rate of Change (RoC), and 2-grams. The algorithms are designed for different situations, depending on the requirements at hand.
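Of the three algorithms, the Byte Frequency Distribution idea can be illustrated with a minimal sketch: build an average byte histogram (a centroid) from known fragments of a file type, then accept a new fragment if its histogram is close enough to the centroid. The function names, the Manhattan distance, and the threshold scheme are illustrative assumptions, not the thesis's exact method.

```python
from collections import Counter

def byte_frequency(fragment: bytes):
    """Normalised byte-frequency histogram (256 bins) of a fragment."""
    counts = Counter(fragment)
    n = len(fragment)
    return [counts.get(b, 0) / n for b in range(256)]

def build_centroid(fragments):
    """Average the histograms of known fragments of one file type."""
    hists = [byte_frequency(f) for f in fragments]
    return [sum(col) / len(hists) for col in zip(*hists)]

def distance(hist, centroid):
    # Manhattan distance between a histogram and the centroid.
    return sum(abs(a - b) for a, b in zip(hist, centroid))

def classify(fragment, centroid, threshold):
    """Accept the fragment as the centroid's file type if close enough."""
    return distance(byte_frequency(fragment), centroid) <= threshold
```

The threshold trades detection rate against false positives, which is the tuning knob behind figures such as the 100% / 0.1% rates quoted above.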
The reconnection method correctly reconnects 97% of a Restart (RST) marker enabled JPEG image fragmented into 4 KiB pieces. When dealing with fragments from several images at once, the method is able to correctly connect 70% of the fragments in the first iteration.
Two parameters in a JPEG header are crucial to the quality of the image: the size of the image and the sampling factor (actually factors) of the image. The size can be found using brute force, and the sampling factors only take on three different values. Hence it is possible to use an artificial JPEG header to view all or parts of an image. The only requirement is that the fragments contain RST markers.
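The brute-force search can be sketched as enumerating candidate image widths together with the few possible sampling-factor configurations, each candidate yielding one artificial header to try. The specific width range, step, and factor values below are assumptions for illustration; the thesis does not state them here.

```python
# Assumed sampling-factor configurations (illustrative values only;
# the thesis states merely that three different values occur).
SAMPLING_FACTORS = [(1, 1), (2, 1), (2, 2)]

def candidate_headers(max_width):
    """Yield (width, sampling_factors) pairs to try as artificial headers.
    JPEG encodes the image in 8-pixel blocks, so only multiples of 8
    are plausible widths in this simplified sketch."""
    for width in range(16, max_width + 1, 8):
        for factors in SAMPLING_FACTORS:
            yield (width, factors)
```

Each candidate would be prepended to the fragment data and rendered; a wrong width shears the decoded image, so the correct one can be picked visually or by a simple image-quality heuristic.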
The results of the evaluations of the methods show that it is possible to find, reassemble, and view JPEG image fragments with high certainty.
DYNAMIC ABSTRACTION FOR INTERLEAVED TASK PLANNING AND EXECUTION
It is often beneficial for an autonomous agent that operates in a complex environment to make use of different types of mathematical models to keep track of unobservable parts of the world or to perform prediction, planning and other types of reasoning. Since a model is always a simplification of something else, there always exists a tradeoff between the model’s accuracy and feasibility when it is used within a certain application due to the limited available computational resources. Currently, this tradeoff is to a large extent balanced by humans for model construction in general and for autonomous agents in particular. This thesis investigates different solutions where such agents are more responsible for balancing the tradeoff for models themselves in the context of interleaved task planning and plan execution. The necessary components for an autonomous agent that performs its abstractions and constructs planning models dynamically during task planning and execution are investigated and a method called DARE is developed that is a template for handling the possible situations that can occur such as the rise of unsuitable abstractions and need for dynamic construction of abstraction levels. Implementations of DARE are presented in two case studies where both a fully and partially observable stochastic domain are used, motivated by research with Unmanned Aircraft Systems. The case studies also demonstrate possible ways to perform dynamic abstraction and problem model construction in practice.
TERRAIN OBJECT RECOGNITION AND CONTEXT FUSION FOR DECISION SUPPORT
A laser radar can be used to generate three-dimensional data about the terrain at very high resolution. The development of new support technologies to analyze these data is critical to their effective and efficient use in decision support systems, due to the large amounts of data that are generated. Adequate technology in this regard is currently not available, and the development of new methods and algorithms to this end is an important goal of this work.
A semi-qualitative data structure for terrain surface modelling has been developed. A categorization and triangulation process has also been developed to replace the high-resolution 3D model with this semi-qualitative data structure. The qualitative part of the structure can also be used for detection and recognition of terrain features. The quantitative part of the structure is, together with the qualitative part, used for visualization of the terrain surface. Replacing the 3D model with the semi-qualitative structures means that a data reduction is performed.
A number of algorithms for detection and recognition of different terrain objects have been developed. The algorithms use the qualitative part of the previously developed semi-qualitative data structure as input. The approach taken is based on symbol matching and syntactic pattern recognition. Results regarding the accuracy of the implemented algorithms for detection and recognition of terrain objects are visualized.
A further important goal has been to develop a methodology for determining driveability using 3D data and other geographic data. These data must be fused with vehicle data to determine the driving properties of the terrain in the context of our operations. This fusion process is therefore called context fusion. The recognized terrain objects are used together with map data in this method. The uncertainty associated with the imprecision of data has been taken into account as well.
ASSISTANCE PLUS: 3D-MEDIATED ADVICE-GIVING ON PHARMACEUTICAL PRODUCTS
In the use of medication and pharmaceutical products, non-compliance is a major problem. One thing we can do something about is making sure consumers have the information they need. This thesis investigates how remote communication technology can be used to improve the availability of expressive advice-giving services. Special attention is given to the balancing of expressiveness and availability. A solution is presented that uses 3D visualisation in combination with audio and video communication to convey advice on complex pharmaceutical products. The solution is tested and evaluated in two user studies. The first study is broad and explorative, the second more focused and evaluative. The solution was well received by participating subjects. They welcomed the sense of personal contact that seeing the communicating party over video link produced and appreciated the expressive power and pedagogical value of the 3D materials. Herbert Clark’s theory of use of language is suggested as a framework for the analysis of the dynamics of the relationship between consumer and advisor.
AUTOMATIC PARALLELIZATION USING PIPELINING FOR EQUATION-BASED SIMULATION LANGUAGE
During the most recent decades modern equation-based object-oriented modeling and simulation languages, such as Modelica, have become available. This has made it easier to build complex and more detailed models for use in simulation. To be able to simulate such large and complex systems it is sometimes not enough to rely on the ability of a compiler to optimize the simulation code and reduce the size of the underlying set of equations to speed up the simulation on a single processor. Instead we must look for ways to utilize the increasing number of processing units available in modern computers. However, to gain any increased performance from a parallel computer the simulation program must be expressed in a way that exposes the potential parallelism to the computer. Doing this manually is not a simple task and most modelers are not experts in parallel computing. Therefore it is very appealing to let the compiler parallelize the simulation code automatically. This thesis investigates techniques for automatic translation of models in typical equation-based languages, such as Modelica, into parallel simulation code that enables high utilization of available processors in a parallel computer. The two main ideas investigated here are the following: first, to apply parallelization simultaneously to both the system equations and the numerical solver, and second, to use software pipelining to further reduce the time processors are kept waiting for the results of other processors. Prototype implementations of the investigated techniques have been developed as part of the OpenModelica open source compiler for Modelica. The prototype has been used to evaluate the parallelization techniques by measuring the execution time of test models on a few parallel architectures and to compare the results to sequential code as well as to the results achieved in earlier work. A measured speedup of 6.1 on eight processors on a shared memory machine has been reached.
It still remains to evaluate the methods for a wider range of test models and parallel architectures.
USING OBSERVERS FOR MODEL BASED DATA COLLECTION IN DISTRIBUTED TACTICAL OPERATIONS
Modern information technology increases the use of computers in training systems as well as in command-and-control systems in military services and public-safety organizations. This computerization, combined with new threats, presents a challenging complexity. Situational awareness in evolving distributed operations and follow-up in training systems depend on humans in the field reporting observations of events. The use of this observer-reported information can be largely improved by the implementation of models supporting both reporting and computer representation of objects and phenomena in operations.
This thesis characterises and describes observer model-based data collection in distributed tactical operations, where multiple, dispersed units work to achieve common goals. Reconstruction and exploration of multimedia representations of operations is becoming an established means for supporting taskforce training. We explore how modelling of operational processes and entities can support observer data collection and increase the information content of mission histories. We use realistic exercises to test the developed models, methods and tools for observer data collection, and transfer the results to live operations.
The main contribution of this thesis is the systematic description of the model-based approach to using observers for data collection. Methodological aspects of using humans to collect data for information systems, as well as modelling aspects for phenomena occurring in emergency response and communication, contribute to the body of research. We describe a general methodology for using human observers to collect adequate data for use in information systems. In addition, we describe methods and tools to collect data on the chain of medical attendance in emergency response exercises, and on command-and-control processes in several domains.
IMPLEMENTATION OF HEALTH INFORMATION SYSTEMS
Healthcare organizations now consider increased efficiency, reduced costs, improved patient care and quality of services, and safety when they are planning to implement new information and communication technology (ICT) based applications. However, in spite of enormous investment in health information systems (HIS), no convincing evidence of the overall benefits of HISs yet exists. The publishing of studies that capture the effects of the implementation and use of ICT-based applications in healthcare may contribute to the emergence of an evidence-based health informatics which can be used as a platform for decisions made by policy makers, executives, and clinicians. Health informatics needs further studies identifying the factors affecting successful HIS implementation and capturing the effects of HIS implementation. The purpose of the work presented in this thesis is to increase the available knowledge about the impact of the implementation and use of HISs in healthcare organizations. All the studies included in this thesis used qualitative research methods. A case study design and literature review were performed to collect data.
This thesis’s results highlight an increasing need to share knowledge, find methods to evaluate the impact of investments, and formulate indicators for success. It makes suggestions for developing or extending evaluation methods that can be applied to this area with a multi-actor perspective in order to understand the effects, consequences, and prerequisites that have to be achieved for the successful implementation and use of IT in healthcare. The results also propose that HIS, particularly integrated computer-based patient records (ICPR), be introduced to fulfill a large number of organizational, individual-based, and socio-technical goals at different levels. It is therefore necessary to link the goals that HIS systems are to fulfill in relation to short-term, middle-term, and long-term strategic goals. Another suggestion is that implementers and vendors should direct more attention to what has been published in the area to avoid future failures.
This thesis’s findings outline an updated structure for implementation planning. When implementing HISs in hospital and primary-care environments, this thesis suggests that such strategic actions as management involvement and resource allocation, such tactical action as integrating HIS with healthcare workflow, and such operational actions as user involvement, establishing compatibility between software and hardware, and education and training should be taken into consideration.
WORD ALIGNMENT BY RE-USING PARALLEL PHRASES
In this thesis we present the idea of using parallel phrases for word alignment. Each parallel phrase is extracted from a set of manual word alignments and contains a number of source and target words and their corresponding alignments. If a parallel phrase matches a new sentence pair, its word alignments can be applied to the new sentence. There are several advantages of using phrases for word alignment. First, longer text segments include more context and will be more likely to produce correct word alignments than shorter segments or single words. More importantly, the use of longer phrases makes it possible to generalize words in the phrase by replacing words by parts-of-speech or other grammatical information. In this way, the number of words covered by the extracted phrases can go beyond the words and phrases that were present in the original set of manually aligned sentences. We present experiments with phrase-based word alignment on three types of English–Swedish parallel corpora: a software manual, a novel and proceedings of the European Parliament. In order to find a balance between improved coverage and high alignment accuracy, we investigated different properties of generalised phrases to identify which types of phrases are likely to produce accurate alignments on new data. Finally, we have compared phrase-based word alignments to state-of-the-art statistical alignment with encouraging results. We show that phrase-based word alignments can be used to enhance statistical word alignment. To evaluate word alignments, an English–Swedish reference set for the Europarl corpus was constructed. The guidelines for producing this reference alignment are presented in the thesis.
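The core idea of re-using a parallel phrase can be sketched as follows: if the phrase's source and target word sequences both occur in a new sentence pair, its internal alignment links are transferred, shifted to the match positions. This is a simplified sketch under assumed data structures; the actual method also matches generalised phrases in which words are replaced by part-of-speech placeholders.

```python
def apply_phrase(phrase, src_sent, tgt_sent):
    """Transfer a parallel phrase's alignment links to a new sentence pair.

    phrase = (source_words, target_words, links), where links are
    (source_index, target_index) pairs internal to the phrase.
    Returns the links shifted to sentence positions, or [] if no match.
    """
    src_words, tgt_words, links = phrase
    for i in range(len(src_sent) - len(src_words) + 1):
        if src_sent[i:i + len(src_words)] != src_words:
            continue
        for j in range(len(tgt_sent) - len(tgt_words) + 1):
            if tgt_sent[j:j + len(tgt_words)] != tgt_words:
                continue
            # Shift phrase-internal links to sentence positions.
            return [(i + s, j + t) for s, t in links]
    return []
```

For example, a phrase aligning "the house" to "huset" with links (0,0) and (1,0) would, matched against a new sentence pair, yield links at the matched offsets.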
INTEGRATED SOFTWARE PIPELINING
In this thesis we address the problem of integrated software pipelining for clustered VLIW architectures. The phases that are integrated and solved as one combined problem are: cluster assignment, instruction selection, scheduling, register allocation and spilling.
As a first step we describe two methods for integrated code generation of basic blocks. The first method is optimal and based on integer linear programming. The second method is a heuristic based on genetic algorithms.
We then extend the integer linear programming model to modulo scheduling. To the best of our knowledge this is the first time anybody has optimally solved the modulo scheduling problem for clustered architectures with instruction selection and cluster assignment integrated.
We also show that optimal spilling is closely related to optimal register allocation when the register files are clustered. In fact, optimal spilling is as simple as adding an additional virtual register file representing the memory and having transfer instructions to and from this register file corresponding to stores and loads.
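The observation can be illustrated with a small sketch: memory is modelled as just another register file, so a spill store or a reload becomes an ordinary transfer instruction between register files and needs no special machinery in the allocator. The file names, sizes, and tuple encoding below are illustrative assumptions.

```python
# Register files of an assumed two-cluster VLIW, plus a virtual file
# "MEM" that models main memory (effectively unbounded capacity).
register_files = {"A": 16, "B": 16, "MEM": 10**6}

def transfer(value, src_file, dst_file):
    """An ordinary inter-file transfer instruction (illustrative encoding)."""
    if src_file == dst_file:
        return []
    return [("transfer", value, src_file, dst_file)]

def spill(value):
    # A spill store is simply a transfer into the virtual memory file.
    return transfer(value, "A", "MEM")

def reload(value):
    # A reload is the transfer back out of the virtual memory file.
    return transfer(value, "MEM", "A")
```

Because spills and reloads are now expressed in the same vocabulary as cross-cluster transfers, the integrated optimization model handles them with no extra cases.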
Our algorithm for modulo scheduling iteratively considers schedules with an increasing number of schedule slots. A problem with such an iterative method is that if the initiation interval is not equal to the lower bound, there is no way to determine whether the found solution is optimal or not. We have proven that, for a class of architectures that we call transfer free, we can set an upper bound on the schedule length. That is, we can prove when a found modulo schedule with an initiation interval larger than the lower bound is optimal.
Experiments have been conducted to show the usefulness and limitations of our optimal methods. For the basic block case we compare the optimal method to the heuristic based on genetic algorithms.
TOWARDS AN ONTOLOGY DEVELOPMENT METHODOLOGY FOR SMALL AND MEDIUM-SIZED ENTERPRISES
This thesis contributes to the research field information logistics. Information logistics aims at improving information flow and at reducing information overload by providing the right information, in the right context, at the right time, at the right place through the right channel.
Ontologies are expected to contribute to reduced information overload and to solving information supply problems. An ontology is created to form some kind of shared understanding for the involved stakeholders in the domain at hand. Using this semantic structure, applications can be built that exploit the ontology to support employees by providing only the information most relevant to them.
During the last years, there has been a growing number of cases in which industrial applications successfully use ontologies. Most of these cases, however, stem from large enterprises or IT-intensive small or medium-sized enterprises (SME). Current ontology development methodologies are not tailored to SME and their specific demands and preferences, such as that SME prefer mature technologies and show a clear preference for largely standardised solutions. The author proposes a new ontology development methodology, taking the specific characteristics of SME into consideration. This methodology was tested in an application case, which resulted in a number of concrete improvement ideas, but also in the conclusion that further specialisation of the methodology was needed, for example for a specific usage area or domain. In order to find out in which direction to specialise the methodology, a survey was performed among SME in the region of Jönköping.
The main conclusion from the survey is that ontologies can be expected to be useful for SME mainly in the area of product configuration and variability modelling. Another area of interest is document management for supporting project work. The area of information search and retrieval can also be seen as a possible application field, as many of the respondents of the survey spend much time finding and saving information.
DEADLOCK FREE ROUTING IN MESH NETWORKS ON CHIP WITH REGIONS
There is a seemingly endless miniaturization of electronic components, which has enabled designers to build sophisticated computing structures on silicon chips. Consequently, electronic systems are continuously improving with new and more advanced functionalities. Design complexity of these Systems on Chip (SoC) is reduced by the use of pre-designed cores. However, several problems related to the interconnection of cores remain. Network on Chip (NoC) is a new SoC design paradigm, which targets the interconnect problems using classical network concepts. Still, SoC cores show large variance in size and functionality, whereas several NoC benefits relate to regularity and homogeneity.
This thesis studies some network aspects which are characteristic of NoC systems. One is the issue of area wastage in NoC due to cores of various sizes. We elaborate on using oversized regions in regular mesh NoC and identify several new design possibilities. Adverse effects of regions on communication are outlined and evaluated by simulation.
Deadlock freedom is an important region issue, since it affects both the usability and performance of routing algorithms. The concept of faulty blocks, used in deadlock-free fault-tolerant routing algorithms, has similarities with rectangular regions. We have improved and adapted one such algorithm to provide deadlock-free routing in NoC with regions. This work also offers a methodology for designing topology-agnostic, deadlock-free, highly adaptive, application-specific routing algorithms. The methodology exploits information about communication among tasks of an application. This is used in the analysis of deadlock freedom, such that fewer deadlock-preventing routing restrictions are required.
A comparative study of the two proposed routing algorithms shows that the application specific algorithm gives significantly higher performance. But the fault-tolerant algorithm may be preferred for systems requiring support for general communication. Several extensions to our work are proposed, for example in areas such as core mapping and efficient routing algorithms. The region concept can be extended for supporting reuse of a pre-designed NoC as a component in a larger hierarchical NoC.
Compound Processing for Phrase-Based Statistical Machine Translation
In this thesis I explore how compound processing can be used to improve phrase-based statistical machine translation (PBSMT) between English and German/Swedish. Both German and Swedish generally use closed compounds, which are written as one word without spaces or other indicators of word boundaries. Compounding is both common and productive, which makes it problematic for PBSMT, mainly due to sparse data problems.
The adopted strategy for compound processing is to split compounds into their component parts before training and translation. For translation into Swedish and German the parts are merged after translation. I investigate the effect of different splitting algorithms for translation between English and German, and of different merging algorithms for German. I also apply these methods to a different language pair, English--Swedish. Overall the studies show that compound processing is useful, especially for translation from English into German or Swedish. But there are improvements for translation into English as well, such as a reduction of unknown words.
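The splitting step can be sketched as a recursive lookup-based splitter. This is a minimal illustration with an invented vocabulary; real splitting algorithms score competing splits (e.g. by corpus frequency) and handle linking morphemes such as the German or Swedish -s-.

```python
def split_compound(word, vocab, min_len=3):
    """Recursively split a closed compound into parts found in vocab.
    Returns a list of parts, or None if no full split exists.
    Illustrative sketch; no scoring of competing splits."""
    if word in vocab:
        return [word]
    for i in range(min_len, len(word) - min_len + 1):
        head, rest = word[:i], word[i:]
        if head in vocab:
            tail = split_compound(rest, vocab, min_len)
            if tail is not None:
                return [head] + tail
    return None

# Invented Swedish example vocabulary:
vocab = {"regn", "rock", "jacka"}
print(split_compound("regnrock", vocab))   # → ['regn', 'rock']
```

After translation into Swedish or German, a merging step performs the inverse operation, rejoining the translated parts into a single closed compound.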
I show that for translation between English and German different splitting algorithms work best for different translation directions. I also design and evaluate a novel merging algorithm based on part-of-speech matching, which outperforms previous methods for compound merging, showing the need for information that is carried through the translation process, rather than only external knowledge sources such as word lists. Most of the methods for compound processing were originally developed for German. I show that these methods can be applied to Swedish as well, with similar results.
Keywords: Scientific collaboration, workflow, provenance, search engine, query language, data integration
Science is changing. Computers, fast communication, and new technologies have created new ways of conducting research. For instance, researchers from different disciplines are processing and analyzing scientific data that is increasing at an exponential rate. This kind of research requires that the scientists have access to tools that can handle huge amounts of data, enable access to vast computational resources, and support the collaboration of large teams of scientists. This thesis focuses on tools that help support scientific collaboration.
Workflows and provenance are two concepts that have proven useful in supporting scientific collaboration. Workflows provide a formal specification of scientific experiments, and provenance offers a model for documenting data and process dependencies. Together, they enable the creation of tools that can support collaboration through the whole scientific life-cycle, from specification of experiments to validation of results. However, existing models for workflows and provenance are often specific to particular tasks and tools. This makes it hard to analyze the history of data that has been generated over several application areas by different tools. Moreover, workflow design is a time-consuming process and often requires extensive knowledge of the tools involved and collaboration with researchers with different expertise. This thesis addresses these problems.
Our first contribution is a study of the differences between two approaches to interoperability between provenance models: direct data conversion, and mediation. We perform a case study where we integrate three different provenance models using the mediation approach, and show the advantages compared to data conversion. Our second contribution serves to support workflow design by allowing multiple users to concurrently design workflows. Current workflow tools lack the ability for users to work simultaneously on the same workflow. We propose a method that uses the provenance of workflow evolution to enable real-time collaborative design of workflows. Our third contribution considers supporting workflow design by reusing existing workflows. Workflow collections for reuse are available, but more efficient methods for generating summaries of search results are still needed. We explore new summarization strategies that consider the workflow structure.
Visualisations in Service Design
Service design is a relatively new field which has its roots in design, but it also utilises knowledge from other disciplines that focus on services. The service design field can be described as a maturing field. However, much of what is considered knowledge in the field is still based on anecdotes rather than research. One such area is visualisations of insights gained throughout the service design process. The goal of this thesis is to provide a scientific base for discussions on visualisations by describing the current use of visualisations and exploring what visualisations communicate. This is done through two different studies.
The first study consists of a series of interviews with practicing service designers. The results show that all interviewees visualise the insights gained throughout the service design process. Further analysis found three main lines of argument for why the interviewees visualise: as a tool to find insights in the material, to keep empathy with users of the service, and to communicate the insights to outside stakeholders.
The second study analysed six visualisation types from actual service design projects by service design consultancies. Four different frameworks were used to analyse what visualisations did, and did not, communicate. Two of the frameworks were based on research in service design: the three reasons to visualise as stated in the interviews in study 1, and a framework for service design visualisations. The other two frameworks were adapted from other service disciplines: what differentiates services from goods (the IHIP framework), and a framework focusing on service as the base for all transactions (Service-Dominant Logic). It is found that the visualisation types in general are strong in communicating the design aspects of services, but that they have problems in representing all aspects of service as identified in the service literature.
The thesis provides an academic basis for the use of visualisations in service design. It is concluded that the service design community currently appears to see services as "not-goods", a line of thought that other service disciplines have discarded over the last ten years and replaced with a view of services as the basis for all transactions. The analysis highlights areas where there is a need to improve the visualisations to more accurately represent services.
System-Level Techniques for Temperature-Aware Energy Optimization
Energy consumption has become one of the main design constraints in today’s integrated circuits. Techniques for energy optimization, from circuit-level up to system-level, have been intensively researched.
The advent of large-scale integration with deep sub-micron technologies has led to both high power densities and high chip working temperatures. At the same time, leakage power is becoming the dominant power consumption source of circuits, due to continuously lowered threshold voltages, as technology scales. In this context, temperature is an important parameter. One aspect, of particular interest for this thesis, is the strong inter-dependency between leakage and temperature. Apart from leakage power, temperature also has an important impact on circuit delay and, implicitly, on the frequency, mainly through its influence on carrier mobility and threshold voltage. For power-aware design techniques, temperature has become a major factor to be considered. In this thesis, we address the issue of system-level energy optimization for real-time embedded systems taking temperature aspects into consideration.
We have investigated two problems in this thesis: (1) Energy optimization via temperature-aware dynamic voltage/frequency scaling (DVFS). (2) Energy optimization through temperature-aware idle time (or slack) distribution (ITD). For the above two problems, we have proposed off-line techniques where only static slack is considered. To further improve energy efficiency, we have also proposed online techniques, which make use of both static and dynamic slack. Experimental results have demonstrated that considerable improvement of the energy efficiency can be achieved by applying our temperature-aware optimization techniques. Another contribution of this thesis is an analytical temperature analysis approach which is both accurate and sufficiently fast to be used inside an energy optimization loop.
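The leakage/temperature interdependency that motivates this work can be sketched as a fixed-point computation: leakage power grows with temperature, and temperature in turn rises with total power. All constants below (thermal resistance, leakage coefficients, ambient temperature) are illustrative, not values from the thesis:

```python
import math

def steady_state_temp(p_dyn, t_amb=45.0, r_th=1.5,
                      p_leak0=2.0, beta=0.02, t_ref=25.0,
                      tol=1e-6, max_iter=1000):
    """Iterate T = T_amb + R_th * (P_dyn + P_leak(T)) to a fixed point,
    with leakage modelled as growing exponentially with temperature.
    Converges when the leakage/temperature feedback gain is below one."""
    t = t_amb
    for _ in range(max_iter):
        p_leak = p_leak0 * math.exp(beta * (t - t_ref))
        t_new = t_amb + r_th * (p_dyn + p_leak)
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t
```

An analytical temperature analysis, as proposed in the thesis, avoids such per-point iteration so that it is fast enough to sit inside an energy optimization loop.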
Exploring Biologically Inspired Interactive Networks for Object Recognition
This thesis deals with biologically-inspired interactive neural networks for the task of object recognition. Such networks offer an interesting alternative to traditional image processing techniques. Although the networks are very powerful classification tools, they are difficult to handle due to their bidirectional interactivity. This is one of the main reasons why these networks generalize poorly to novel objects. Generalization is a very important property for any object recognition system, as it is impractical for a system to learn all instances of an object class before classifying. In this thesis, we have investigated the working of an interactive neural network by fine-tuning different structural and algorithmic parameters. The performance of the networks was evaluated by analyzing the generalization ability of the trained network to novel objects. Furthermore, the interactivity of the network was utilized to simulate focus of attention during object classification. Selective attention is an important visual mechanism for object recognition and provides an efficient way of using the limited computational resources of the human visual system. Unlike most previous work in the field of image processing, in this thesis attention is considered as an integral part of object processing. Attention focus, in this work, is computed within the same network and in parallel with object recognition.
As a first step, a study into the efficacy of Hebbian learning as a feature extraction method was conducted. In a second study, the receptive field size in the network, which controls the size of the extracted features as well as the number of layers in the network, was varied and analyzed to find its effect on generalization. In a continuation study, a comparison was made between learnt (Hebbian learning) and hard coded feature detectors. In the last study, attention focus was computed using interaction between bottom-up and top-down activation flow with the aim to handle multiple objects in the visual scene. On the basis of the results and analysis of our simulations we have found that the generalization performance of the bidirectional hierarchical network improves with the addition of a small amount of Hebbian learning to an otherwise error-driven learning. We also conclude that the optimal size of the receptive fields in our network depends on the object of interest in the image. Moreover, each receptive field must contain some part of the object in the input image. We have also found that networks using hard coded feature extraction perform better than the networks that use Hebbian learning for developing feature detectors. In the last study, we have successfully demonstrated the emergence of visual attention within an interactive network that handles more than one object in the input field. Our simulations demonstrate how bidirectional interactivity directs attention focus towards the required object by using both bottom-up and top-down effects.
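The Hebbian feature-extraction idea investigated in the first study can be sketched with a normalised Hebbian (Oja-style) update rule; this is a generic single-unit illustration of the learning principle, not the hierarchical network used in the thesis:

```python
def hebbian_step(w, x, lr=0.05):
    """One normalised Hebbian update (Oja's rule): the weights are pulled
    toward frequently seen input directions while their norm stays bounded,
    so the unit gradually becomes a detector for that input feature."""
    y = sum(wi * xi for wi, xi in zip(w, x))              # unit activation
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

# Repeatedly presenting one (unit-length) input direction drives the
# weight vector toward that direction.
w = [1.0, 0.0]
x = [0.6, 0.8]
for _ in range(2000):
    w = hebbian_step(w, x)
```

In the thesis, a small amount of such Hebbian learning is added to an otherwise error-driven training regime, which is what improves generalization.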
In general, the findings of this thesis will increase understanding about the working of biologically-inspired interactive networks. Specifically, the studied effects of the structural and algorithmic parameters that are critical for the generalization property will help develop these and similar networks and lead to improved performance on object recognition tasks. The results from the attention simulations can be used to increase the ability of networks to deal with multiple objects in an efficient and effective manner.
Dealing with Missing Mappings and Structure in a Network of Ontologies
With the popularity of the World Wide Web, a large amount of data is generated and made available through the Internet every day. To integrate and query this huge amount of heterogeneous data, the vision of the Semantic Web has been recognized as a possible solution. One key technology for the Semantic Web is ontologies. Many ontologies have been developed in recent years. Meanwhile, due to the demand of applications using multiple ontologies, mappings between entities of these ontologies are generated as well, which leads to ontology networks consisting of ontologies and mappings between them. However, neither developing ontologies nor finding mappings between ontologies is an easy task. It may happen that the ontologies are not consistent or complete, that the mappings between these ontologies are not correct or complete, or that the resulting ontology network is not consistent. This may lead to problems when they are used in semantically-enabled applications.
In this thesis, we address two issues relevant to the quality of the mappings and the structure in the ontology network. The first issue deals with the missing mappings between networked ontologies. Assuming existing mappings between ontologies are correct, we investigate whether and how to use these existing mappings to find more mappings between ontologies. We propose and test several strategies of using the given correct mappings to align ontologies. The second issue deals with the missing structure, in particular missing is-a relations, in networked ontologies. Based on the assumption that missing is-a relations are a kind of modeling defect, we propose an ontology debugging approach to tackle this issue. We develop an algorithm for detecting missing is-a relations in ontologies, as well as algorithms which assist the user in repairing them by generating and recommending possible repairs and executing the chosen repairs. Based on this approach, we develop a system and test its use and performance.
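The detection principle can be sketched as follows: given (assumed correct) mappings into a second ontology, an is-a relation that holds between the mapped images but is not derivable in the home ontology is a candidate missing is-a relation. This is a simplified sketch of that principle, not the thesis's algorithm; the toy ontologies are illustrative:

```python
def transitive_closure(isa):
    """All (sub, super) pairs derivable from the given is-a edges."""
    closure = set(isa)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def missing_isa_candidates(isa_A, isa_B, mapping):
    """Concept pairs (a1, a2) of ontology A whose images under the mapping
    stand in a (derived) is-a relation in ontology B, while a1 is-a a2 is
    not derivable in A: candidate missing is-a relations in A."""
    cA, cB = transitive_closure(isa_A), transitive_closure(isa_B)
    candidates = set()
    for a1, b1 in mapping.items():
        for a2, b2 in mapping.items():
            if a1 != a2 and (b1, b2) in cB and (a1, a2) not in cA:
                candidates.add((a1, a2))
    return candidates
```

Repairing then means deciding which candidates to add and where, which is where the thesis's generation and recommendation algorithms come in.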
Mapping Concurrent Applications to Multiprocessor Systems with Multithreaded Processors and Network on Chip-Based Interconnections
Network on Chip (NoC) architectures provide scalable platforms for designing Systems on Chip (SoC) with a large number of cores. Developing products and applications using an NoC architecture offers many challenges and opportunities. A tool which can map an application or a set of applications to a given NoC architecture will be essential.
In this thesis we first survey current techniques and we present our proposals for mapping and scheduling of concurrent applications to NoCs with multithreaded processors as computational resources.
NoC platforms are basically a special class of Multiprocessor Embedded Systems (MPES). Conventional MPES architectures are mostly bus-based and, thus, are exposed to potential difficulties regarding scalability and reusability. There has been a lot of research on MPES development including work on mapping and scheduling of applications. Many of these results can also be applied to NoC platforms.
Mapping and scheduling are known to be computationally hard problems. A large range of exact and approximate optimization algorithms have been proposed for solving these problems. The methods include Branch-and-Bound (BB), constructive and transformative heuristics such as List Scheduling (LS), Genetic Algorithms (GA), and various types of Mathematical Programming algorithms.
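As an example of the constructive heuristics mentioned, a minimal list scheduler for a task graph on homogeneous processors can look as follows. Priority here is simply task duration (longest first); practical variants use e.g. critical-path priorities, and NoC mapping additionally accounts for communication costs, which are omitted in this sketch:

```python
def list_schedule(tasks, deps, durations, n_procs):
    """Greedy list scheduling: repeatedly pick the highest-priority ready
    task and place it on the processor that becomes free first, respecting
    precedence constraints. Returns (schedule, makespan), where schedule
    maps each task to (processor, start_time)."""
    preds = {t: set() for t in tasks}
    succs = {t: set() for t in tasks}
    for a, b in deps:                      # a must finish before b starts
        preds[b].add(a)
        succs[a].add(b)
    remaining = {t: len(preds[t]) for t in tasks}
    finish, schedule = {}, {}
    proc_free = [0.0] * n_procs
    ready = [t for t in tasks if remaining[t] == 0]
    while ready:
        ready.sort(key=lambda t: -durations[t])        # longest task first
        t = ready.pop(0)
        p = min(range(n_procs), key=proc_free.__getitem__)
        start = max([proc_free[p]] + [finish[a] for a in preds[t]])
        finish[t] = start + durations[t]
        proc_free[p] = finish[t]
        schedule[t] = (p, start)
        for s in succs[t]:
            remaining[s] -= 1
            if remaining[s] == 0:
                ready.append(s)
    return schedule, max(finish.values())
```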
Concurrent applications can capture a typical embedded system, which is multifunctional. Concurrent applications can be executed on an NoC, which provides large computational power with multiple on-chip computational resources.
Improving the time performances of concurrent applications which are running on Network on Chip (NoC) architectures is mainly correlated with the ability of mapping and scheduling methodologies to exploit the Thread Level Parallelism (TLP) of concurrent applications through the available NoC parallelism. Matching the architectural parallelism to the application concurrency for obtaining good performance-cost tradeoffs is another aspect of the problem.
Multithreading is a technique for hiding long latencies of memory accesses, through the overlapped execution of several threads. Recently, Multi-Threaded Processors (MTPs) have been designed providing the architectural infrastructure to concurrently execute multiple threads at hardware level which, usually, results in a very low context switching overhead. Simultaneous Multi-Threaded Processors (SMTPs) are superscalar processor architectures which adaptively exploit the coarse grain and the fine grain parallelism of applications, by simultaneously executing instructions from several thread contexts.
In this thesis we make a case for using SMTPs and MTPs as NoC resources and show that such a multiprocessor architecture provides better time performances than an NoC with solely General-purpose Processors (GP). We have developed a methodology for task mapping and scheduling to an NoC with mixed SMTP, MTP and GP resources, which aims to maximize the time performance of concurrent applications and to satisfy their soft deadlines. The developed methodology was evaluated on many configurations of NoC-based platforms with SMTP, MTP and GP resources. The experimental results demonstrate that the use of SMTPs and MTPs in NoC platforms can significantly speed-up applications.
Positioning Algorithms for Surveillance Using Unmanned Aerial Vehicles
Surveillance is an important application for unmanned aerial vehicles (UAVs). The sensed information often has high priority and it must be made available to human operators as quickly as possible. Due to obstacles and limited communication range, it is not always possible to transmit the information directly to the base station. In this case, other UAVs can form a relay chain between the surveillance UAV and the base station. Determining suitable positions for such UAVs is a complex optimization problem in and of itself, and is made even more difficult by communication and surveillance constraints.
To solve different variations of finding positions for UAVs for surveillance of one target, two new algorithms have been developed. One of the algorithms is developed especially for finding a set of relay chains offering different trade-offs between the number of UAVs and the quality of the chain. The other algorithm is tailored towards finding the highest quality chain possible, given a limited number of available UAVs.
Finding the optimal positions for surveillance of several targets is more difficult. A study has been performed in order to determine how the problems of interest can be solved. It turns out that very few of the existing algorithms can be used, due to the characteristics of our specific problem. For this reason, an algorithm for quickly calculating positions for surveillance of multiple targets has been developed. This enables calculation of an initial chain that is immediately made available to the user, and the chain is then incrementally optimized according to the user's desire.
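A minimal version of the single-target relay chain problem can be posed as graph search: treat candidate positions as nodes, connect pairs within communication range, and find a minimum-hop chain from base to target. The thesis's algorithms additionally handle obstacles, chain quality, and the trade-off between chain length and quality, all of which is omitted in this sketch:

```python
from collections import deque

def min_relay_chain(positions, comm_range, base, target):
    """Breadth-first search for the minimum-hop relay chain between `base`
    and `target` over the candidate UAV `positions`; two points are linked
    when they are within communication range of each other. Returns the
    chain of positions (inner positions are relay UAVs), or None."""
    def near(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= comm_range ** 2
    nodes = list(positions) + [base, target]
    prev = {base: None}
    queue = deque([base])
    while queue:
        p = queue.popleft()
        if p == target:                      # reconstruct chain back to base
            chain = []
            while p is not None:
                chain.append(p)
                p = prev[p]
            return chain[::-1]
        for q in nodes:
            if q not in prev and near(p, q):
                prev[q] = p
                queue.append(q)
    return None
```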
Contributions to Web Authentication for Untrusted Computers
Authentication methods offer varying levels of security. Methods with one-time credentials generated by dedicated hardware tokens can reach a high level of security, whereas password-based authentication methods have a low level of security since passwords can be eavesdropped and stolen by an attacker. Password-based methods are dominant in web authentication since they are both easy to implement and easy to use. Dedicated hardware, on the other hand, is not always available to the user, usually requires additional equipment and may be more complex to use than password-based authentication.
Different services and applications on the web have different requirements for the security of authentication. Therefore, it is necessary for designers of authentication solutions to address this need for a range of security levels. Another concern is mobile users authenticating from unknown, and therefore untrusted, computers. This in turn raises issues of availability, since users need secure authentication to be available, regardless of where they authenticate or which computer they use.
We propose a method for evaluation and design of web authentication solutions that takes into account a number of often overlooked design factors, i.e. availability, usability and economic aspects. Our proposed method uses the concept of security levels from the Electronic Authentication Guideline, provided by NIST.
We focus on the use of handheld devices, especially mobile phones, as a flexible, multi-purpose (i.e. non-dedicated) hardware device for web authentication. Mobile phones offer unique advantages for secure authentication, as they are small, flexible and portable, and provide multiple data transfer channels. Phone designs, however, vary and the choice of channels and authentication methods will influence the security level of authentication. It is not trivial to maintain a consistent overview of the strengths and weaknesses of the available alternatives. Our evaluation and design method provides this overview and can help developers and users to compare and choose authentication solutions.
Sustainable Interactions: Studies in the Design of Energy Awareness Artefacts
This thesis presents a collection of experimental designs that approach the problem of growing electricity consumption in homes. From the perspective of design, the intention has been to critically explore the design space of energy awareness artefacts to reinstate awareness of energy use in everyday practice. The design experiments were used as vehicles for thinking about the relationship between physical form, interaction, and social practice. The rationale behind the concepts was based on a small-scale ethnography, situated interviews, and design experience. Moreover, the thesis compares designer intention and actual user experiences of a prototype that was installed in nine homes in a residential area in Stockholm for three months. This was done in order to elicit tacit knowledge about how the concept was used in real-world domestic settings, to challenge everyday routines, and to enable both users and designers to critically reflect on artefacts and practices.
From a design perspective, contributions include design approaches to communicating energy use: visualizations for showing relationships between behaviour and electricity consumption, shapes and forms to direct action, means for turning restrictions caused by energy conservation into central parts of the product experience, and ways to promote sustainable behaviour with positive driving forces based on user lifestyles.
The general results indicate that inclusion is of great importance when designing energy awareness artefacts; all members of the household should be able to access, interact with, and reflect on their energy use. Therefore, design-related aspects such as placement and visibility, as well as how the artefact might affect the social interactions in the home, become central. Additionally, the thesis argues that these types of artefacts can potentially create awareness accompanied by negative results such as stress. A challenge for the designer is to create artefacts that communicate and direct energy use in ways that are attractive and can be accepted by all household members as a possible way of life.
Conceptualising Prototypes in Service Design
To date, service prototyping has been discussed academically as an unproblematic add-on to existing prototyping techniques, or as methods for prototyping social interaction. In fact, most of the knowledge on how services are prototyped comes from organisations and practicing design consultants. Some attempts to define service prototyping have been made, but generally without concern for how complete service experiences should or could be represented. Building on existing knowledge about prototyping, a draft of a service prototyping conceptualisation is generated. Based on the draft, the question of how to prototype holistic service experiences is raised, and in total five studies have been conducted that contribute knowledge to that overarching question. In addition, each study has its own research question. Study 1 conceptualises prototypes and prototyping in a framework, while studies 2 and 3 look at what practicing service designers say they do to prototype services and how they involve different stakeholders in the process. Study 4 examines aspects of design communication and how service experiences are communicated and used during design meetings, and study 5, finally, attempts to generate a process that can be used to evaluate the impact of location-oriented service prototypes in e.g. healthcare settings. A number of challenges for service prototyping are identified in the studies, along with the issue of who authors prototypes. The conceptualisation of prototyping is adjusted based on the studies and a framework is constructed that supports the conceptualisation. Little evidence for holistic approaches to prototyping services is found in the interviews, and service designers involve their clients primarily when prototyping. Service experiences are introduced in communication using a format termed micro-narratives. This format and the purpose of using references to previous experiences are discussed.
The thesis is concluded with a suggestion of a process for service prototyping. This process is specific for service design and attempts to support service designers in making holistic service representations when prototyping. Service prototyping requires further research.
Computer-Assisted Troubleshooting for Efficient Off-board Diagnosis
This licentiate thesis considers computer-assisted troubleshooting of complex products such as heavy trucks. The troubleshooting task is to find and repair all faulty components in a malfunctioning system. This is done by performing actions to gather more information regarding which faults may be present, or to repair components that are suspected to be faulty. The expected cost of the performed actions should be as low as possible.
The work described in this thesis contributes to solving the troubleshooting task in such a way that a good trade-off between computation time and solution quality can be made. A framework for troubleshooting is developed where the system is diagnosed using non-stationary dynamic Bayesian networks and the decisions of which actions to perform are made using a new planning algorithm for Stochastic Shortest Path Problems called Iterative Bounding LAO*.
It is shown how the troubleshooting problem can be converted into a Stochastic Shortest Path problem so that it can be efficiently solved using general algorithms such as Iterative Bounding LAO*. New and improved search heuristics for solving the troubleshooting problem by searching are also presented in this thesis.
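The Stochastic Shortest Path formulation can be made concrete with a tiny value-iteration solver for the Bellman equations V(s) = min_a [c(s,a) + Σ P(s'|s,a) V(s')]; Iterative Bounding LAO* solves the same equations with admissible bounds and heuristic search instead of full sweeps over the state space. The state/action encoding below is an illustrative toy, not the thesis's model:

```python
def ssp_value_iteration(states, actions, goal, tol=1e-9):
    """Value iteration for a Stochastic Shortest Path problem. `actions[s]`
    maps each action name to (cost, {next_state: probability}); the goal
    state has value 0 and absorbs the process. Returns the expected
    cost-to-go V(s) for every state."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s == goal:
                continue
            best = min(cost + sum(p * V[t] for t, p in nxt.items())
                       for cost, nxt in actions[s].values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```

In the troubleshooting setting, states encode beliefs about which components are faulty, actions are observations and repairs, and V gives the expected cost of an optimal troubleshooting plan.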
The methods presented in this thesis are evaluated in a case study of an auxiliary hydraulic braking system of a modern truck. The evaluation shows that the new algorithm Iterative Bounding LAO* creates troubleshooting plans with a lower expected cost faster than existing state-of-the-art algorithms in the literature. The case study shows that the troubleshooting framework can be applied to systems from the heavy vehicles domain.
Predictable Real-Time Applications on Multiprocessor Systems-on-Chip
Being predictable with respect to time is, by definition, a fundamental requirement for any real-time system. Modern multiprocessor systems impose a challenge in this context, due to resource sharing conflicts causing memory transfers to become unpredictable. In this thesis, we present a framework for achieving predictability for real-time applications running on multiprocessor system-on-chip platforms. Using a TDMA bus, worst-case execution time analysis and scheduling are done simultaneously. Since the worst-case execution times are directly dependent on the bus schedule, bus access design is of special importance. Therefore, we provide an efficient algorithm for generating bus schedules, resulting in a minimized worst-case global delay.
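The direct dependence of worst-case timing on the bus schedule can be illustrated with a small worst-case delay computation for a cyclic TDMA schedule. This is a simplified model (a transfer must fit entirely inside one of the requesting core's slots) with illustrative slot lengths, not the analysis used in the thesis:

```python
def worst_case_bus_delay(slots, core, access_len=1):
    """Worst-case delay (in bus cycles) for `core` to complete a transfer of
    `access_len` cycles under the cyclic TDMA schedule `slots`, given as
    (owner, length) pairs. The worst case arises for a request issued just
    after the core's own slot has become unusable."""
    period = sum(length for _, length in slots)
    starts, t = [], 0
    for owner, length in slots:              # slot start offsets in a period
        starts.append((t, owner, length))
        t += length
    worst = 0
    for req in range(period):                # every possible request instant
        for k in range(2 * len(slots)):      # scan up to two full periods
            s, owner, length = starts[k % len(slots)]
            s += period * (k // len(slots))
            if owner != core or length < access_len:
                continue
            begin = max(req, s)
            if begin + access_len <= s + length:   # transfer fits here
                worst = max(worst, begin + access_len - req)
                break
    return worst
```

Because this bound changes with every (owner, length) assignment, optimizing the bus schedule and the worst-case execution times together, as done in the thesis, is what makes a small global delay achievable.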
We also present a new approach considering the average-case execution time in a predictable context. Optimization techniques for improving the average-case execution time of tasks, for which predictability with respect to time is not required, have been investigated for a long time in many different contexts. However, this has traditionally been done without paying attention to the worst-case execution time. For predictable real-time applications, on the other hand, the focus has been solely on worst-case execution time optimization, ignoring how this affects the execution time in the average case. In this thesis, we show that having a good average-case global delay can be important also for real-time applications, for which predictability is required. Furthermore, for real-time applications running on multiprocessor systems-on-chip, we present a technique for optimizing for the average case and the worst case simultaneously, allowing for a good average case execution time while still keeping the worst case as small as possible. The proposed solutions in this thesis have been validated by extensive experiments. The results demonstrate the efficiency and importance of the presented techniques.
Skeleton Programming for Heterogeneous GPU-based Systems
In this thesis, we address issues associated with programming modern heterogeneous systems, focusing on a special kind of heterogeneous system that includes multicore CPUs and one or more GPUs, called GPU-based systems. We consider the skeleton programming approach to achieve high-level abstraction for efficient and portable programming of these GPU-based systems, and present our work on the SkePU library, which is a skeleton library for these systems.
We extend the existing SkePU library with a two-dimensional (2D) data type and skeleton operations, and implement several new applications using the new skeletons. Furthermore, we consider the algorithmic choice present in SkePU and implement support to specify and automatically optimize the algorithmic choice for a skeleton call on a given platform.
To show how to achieve performance, we provide a case-study on optimized GPU-based skeleton implementation for 2D stencil computations and introduce two metrics to maximize resource utilization on a GPU. By devising a mechanism to automatically calculate these two metrics, performance can be retained while porting an application from one GPU architecture to another.
Another contribution of this thesis is the implementation of runtime support for the SkePU skeleton library. This is achieved with the help of the StarPU runtime system. By this implementation, support for dynamic scheduling and load balancing for SkePU skeleton programs is achieved. Furthermore, a capability for hybrid execution, i.e. parallel execution on all available CPUs and GPUs in a system even for a single skeleton invocation, is developed.
SkePU initially supported only data-parallel skeletons. The first task-parallel skeleton (farm) in SkePU is implemented with support for performance-aware scheduling and hierarchical parallel execution by enabling all data parallel skeletons to be usable as tasks inside the farm construct.
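The skeleton idea can be sketched in a few lines: the programmer supplies only the "muscle" functions, and the skeleton hides all parallel coordination. Thread pools below stand in for SkePU's CPU/GPU backends; this is a conceptual sketch, not SkePU's C++ API:

```python
from concurrent.futures import ThreadPoolExecutor

def map_skeleton(f, data, workers=4):
    """Data-parallel map skeleton: apply f to every element in parallel.
    The execution backend (here a thread pool) is hidden from the caller."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(f, data))

def farm_skeleton(tasks, workers=4):
    """Task-parallel farm skeleton: run independent (function, args) tasks
    concurrently and collect their results in submission order."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        futures = [ex.submit(fn, *args) for fn, args in tasks]
        return [f.result() for f in futures]
```

In SkePU, each skeleton call can additionally choose among several back-end implementations (CPU, GPU, hybrid), which is what the algorithmic-selection and dynamic-scheduling work above targets.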
Experimental evaluations are carried out and presented for algorithmic selection, performance portability, dynamic scheduling and hybrid execution aspects of our work.
Complex Task Allocation for Delegation: From Theory to Practice
The problem of determining who should do what given a set of tasks and a set of agents is called the task allocation problem. The problem occurs in many multi-agent system applications where a workload of tasks should be shared by a number of agents. In our case, the task allocation problem occurs as an integral part of a larger problem of determining if a task can be delegated from one agent to another.
Delegation is the act of handing over the responsibility for something to someone. Previously, a theory for delegation including a delegation speech act has been specified. The speech act specifies the preconditions that must be fulfilled before the delegation can be carried out, and the postconditions that will be true afterward. To actually use the speech act in a multi-agent system, there must be a practical way of determining if the preconditions are true. This can be done by a process that includes solving a complex task allocation problem by the agents involved in the delegation.
In this thesis a constraint-based task specification formalism, a complex task allocation algorithm for allocating tasks to unmanned aerial vehicles and a generic collaborative system shell for robotic systems are developed. The three components are used as the basis for a collaborative unmanned aircraft system that uses delegation for distributing and coordinating the agents' execution of complex tasks.
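A deliberately simplified allocation check of the kind that could back a delegation precondition: every task must find a capable agent, and a greedy rule balances the workload. The thesis's constraint-based formalism and allocation algorithm are considerably richer; the agent names and cost model below are illustrative:

```python
def allocate(tasks, agents, capable, cost):
    """Greedy task allocation sketch: assign each task (most costly first)
    to the capable agent with the lowest accumulated workload. Returning
    None models a failed delegation precondition: some task has no capable
    agent. A real allocator would search or solve an assignment problem."""
    load = {a: 0.0 for a in agents}
    assignment = {}
    for t in sorted(tasks, key=lambda t: -cost[t]):
        candidates = [a for a in agents if t in capable[a]]
        if not candidates:
            return None
        best = min(candidates, key=lambda a: load[a])
        assignment[t] = best
        load[best] += cost[t]
    return assignment
```

In the delegation setting, a successful allocation is evidence that the preconditions of the delegation speech act can be satisfied; a None result means the delegation must be refused or renegotiated.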
Contributions to Parallel Simulation of Equation-Based Models on Graphics Processing Units
In this thesis we investigate techniques and methods for parallel simulation of equation-based, object-oriented (EOO) Modelica models on graphics processing units (GPUs). Modelica is being developed through an international effort via the Modelica Association. With Modelica it is possible to build computationally heavy models; simulating such models, however, might take a considerable amount of time. Therefore, techniques for utilizing parallel multi-core architectures for simulation are desirable. The goal in this work is mainly automatic parallelization of equation-based models, that is, it is up to the compiler, and not the end-user modeler, to make sure that code is generated that can efficiently utilize parallel multi-core architectures. Not only does the code generation process have to be altered, but the accompanying run-time system has to be modified as well. Adding explicit parallel language constructs to Modelica is also discussed to some extent. GPUs can be used to do general-purpose scientific and engineering computing. The theoretical processing power of GPUs has surpassed that of CPUs due to the highly parallel structure of GPUs. GPUs are, however, only good at solving certain problems of data-parallel nature. In this thesis we relate several contributions, by the author and co-workers, to each other. We conclude that the massively parallel GPU architectures are currently only suitable for a limited set of Modelica models. This might change with future GPU generations. CUDA, for instance, the main software platform used in the thesis for general-purpose computing on graphics processing units (GPGPU), is changing rapidly and more features are being added, such as recursion, function pointers, C++ templates, etc.; however, the underlying hardware architecture is still optimized for data-parallelism.
Selected Aspects of Navigation and Path Planning in Unmanned Aircraft Systems
Unmanned aircraft systems (UASs) are an important future technology with early generations already being used in many areas of application encompassing both military and civilian domains. This thesis proposes a number of integration techniques for combining control-based navigation with more abstract path planning functionality for UASs. These techniques are empirically tested and validated using an RMAX helicopter platform used in the UASTechLab at Linköping University. Although the thesis focuses on helicopter platforms, the techniques are generic in nature and can be used in other robotic systems.
At the control level a navigation task is executed by a set of control modes. A framework based on the abstraction of hierarchical concurrent state machines for the design and development of hybrid control systems is presented. The framework is used to specify reactive behaviors and for sequentialisation of control modes. Selected examples of control systems deployed on UASs are presented. Collision-free paths executed at the control level are generated by path planning algorithms. We propose a path replanning framework extending the existing path planners to allow dynamic repair of flight paths when new obstacles or no-fly zones obstructing the current flight path are detected. Additionally, a novel approach to selecting the best path repair strategy based on machine learning techniques is presented. A prerequisite for safe navigation in a real-world environment is an accurate geometrical model. As a step towards building accurate 3D models onboard UASs, initial work on the integration of a laser range finder with a helicopter platform is also presented.
The combination of the techniques presented provides another step towards building comprehensive and robust navigation systems for future UASs.
Increasing Autonomy of Unmanned Aircraft Systems Through the Use of Imaging Sensors
The range of missions performed by Unmanned Aircraft Systems (UASs) has been steadily growing in the past decades thanks to continued development in several disciplines. The goal of increasing the autonomy of UASs is to widen the range of tasks which can be carried out without, or with minimal, external help. This thesis presents methods for increasing specific aspects of the autonomy of UASs operating both in outdoor and indoor environments where cameras are used as the primary sensors.
First, a method for fusing color and thermal images for object detection, geolocation and tracking for UASs operating primarily outdoors is presented. Specifically, a method for building saliency maps where human body locations are marked as points of interest is described. Such maps can be used in emergency situations to increase the situational awareness of first responders or a robotic system itself. Additionally, the same method is applied to the problem of vehicle tracking. A generated stream of geographical locations of tracked vehicles increases situational awareness by allowing for qualitative reasoning about, for example, vehicles overtaking, entering or leaving crossings.
Second, two approaches to the UAS indoor localization problem in the absence of GPS-based positioning are presented. Both use cameras as the main sensors and enable autonomous indoor flight and navigation. The first approach takes advantage of cooperation with a ground robot to provide a UAS with its localization information. The second approach uses marker-based visual pose estimation where all computations are done onboard a small-scale aircraft, which additionally increases its autonomy by not relying on external computational power.
The Evolution of the Connector View Concept: Enterprise Models for Interoperability Solutions in the Extended Enterprise
People around the world who are working in companies and organisations need to collaborate, and in their collaboration use information managed by different information systems. The requirement for information systems to be interoperable is therefore apparent. While the technical problems of communicating or sharing information between different information systems have become less difficult to solve, attention has turned to other aspects of interoperability. Such aspects concern the business processes, the knowledge, the syntax and the semantics that involve the information managed by information systems.
Enterprise modelling is widely used to achieve integration solutions within enterprises and is a research area both for the integration within an enterprise (company or organisation) and the integration between several different enterprises. Enterprise modelling takes into account, in the models that are created, several of the aspects mentioned as important for interoperability.
This thesis describes research that has resulted in the connector view concept. The main contribution of this framework comprises a model structure and an approach for modelling the collaboration between several partners in an extended enterprise. The purpose of the enterprise models thus created, by using the connector view concept, is to find solutions to interoperability problems that exist in the collaboration between several enterprises.
Computational Terminology: Exploring Bilingual and Monolingual Term Extraction
Terminologies are becoming more important to modern day society as technology and science continue to grow at an accelerating rate in a globalized environment. Agreeing upon which terms should be used to represent which concepts and how those terms should be translated into different languages is important if we wish to be able to communicate with as little confusion and misunderstandings as possible.
Since the 1990s, an increasing amount of terminology research has been devoted to facilitating and augmenting terminology-related tasks by using computers and computational methods. One focus for this research is Automatic Term Extraction (ATE).
In this compilation thesis, studies on both bilingual and monolingual ATE are presented. First, two publications report on how bilingual ATE using the align-extract approach can be used to extract patent terms. The result in this case was 181,000 manually validated English-Swedish patent terms, which were to be used in a machine translation system for patent documents. A critical component of the method used is the Q-value metric, presented in the third paper, which can be used to rank extracted term candidates (TCs) in an order that correlates with TC precision. The use of Machine Learning (ML) in monolingual ATE is the topic of the two final contributions. The first ML-related publication shows that rule-induction-based ML can be used to generate linguistic term selection patterns, and in the second ML-related publication, contrastive n-gram language models are used in conjunction with SVM ML to improve the precision of term candidates selected using linguistic patterns.
Models and Tools for Distributed User Interface Development
The way we interact with computers and computer systems is constantly changing as technology evolves. A current trend is that users interact with multiple and interconnected devices on a daily basis. They are beginning to request ways and means of dividing and spreading their applications across these devices. Distributed user interfaces (DUIs) have been proposed as a means of distributing programs over multiple interconnected devices through the user interface (UI). DUIs represent a fundamental change for user-interface design, enabling new ways of developing distributed systems that, for instance, support runtime reorganization of UIs. However, developing DUIs presents a far more complex task compared to traditional UI development, due to the inherent complexity that arises from combining UI development with distributed systems. The traditional approach in software engineering and computer science to overcoming complexity is to build tools and frameworks, to allow for good code reuse and a higher level of abstraction for application programmers. Conventional GUI programming tools and frameworks are not developed to support DUIs specifically. In this thesis we explore key issues in creating new programming tools and frameworks (APIs) for DUI-based UI development. We also present and discuss the DUI framework Marve, which adds DUI support to Java Swing. A unique feature of Marve is that it is designed for industrial-scale development, extending a standard UI framework. The framework has been tested and evaluated in a project where an operator control station system was developed.
Optimizing Fault Tolerance for Real-Time Systems
For the vast majority of computer systems, correct operation is defined as producing the correct result within a time constraint (deadline). We refer to such computer systems as real-time systems (RTSs). RTSs manufactured in recent semiconductor technologies are increasingly susceptible to soft errors, which necessitates the use of fault tolerance to detect and recover from errors. However, fault tolerance usually introduces a time overhead, which may cause an RTS to violate the time constraints. Depending on the consequences of violating the deadlines, RTSs are divided into hard RTSs, where the consequences are severe, and soft RTSs, otherwise. Traditionally, worst-case execution time (WCET) analyses are used for hard RTSs to ensure that the deadlines are not violated, and average execution time (AET) analyses are used for soft RTSs. However, at design time a designer of an RTS copes with the challenging task of deciding whether the system should be a hard or a soft RTS. In such cases, focusing only on WCET analyses may result in an over-designed system, while on the other hand focusing only on AET analyses may result in a system that allows occasional deadline violations. To overcome this problem, we introduce Level of Confidence (LoC) as a metric to evaluate to what extent a deadline is met in the presence of soft errors. The advantage is that the same metric can be used for both soft and hard RTSs; thus a system designer can precisely specify to what extent a deadline is to be met. In this thesis, we address optimization of Roll-back Recovery with Checkpointing (RRC), which is a good representative of fault tolerance because it enables detection and recovery from soft errors at the cost of introducing a time overhead which impacts the execution time of tasks. The time overhead depends on the number of checkpoints that are used.
Therefore, we provide mathematical expressions for finding the optimal number of checkpoints which leads to: 1) minimal AET and 2) maximal LoC. To obtain these expressions we assume that the error probability is given. However, the error probability is not known in advance and it can even vary over runtime. Therefore, we propose two error probability estimation techniques, Periodic Probability Estimation and Aperiodic Probability Estimation, that estimate the error probability during runtime and adjust the RRC scheme with the goal of reducing the AET. By conducting experiments, we show that both techniques provide near-optimal performance of RRC.
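The thesis's own expressions are not reproduced in the abstract, but the trade-off being optimized can be illustrated with a textbook checkpointing model: more checkpoints add overhead, fewer checkpoints mean more re-execution after an error. The model, parameter values, and the use of Young's classic approximation below are illustrative assumptions, not the thesis's actual derivation.

```python
import math

def aet(n, T, tau, lam):
    """Expected completion time with n equidistant checkpoints.

    Simple model: each of the n segments (length T/n plus checkpoint
    cost tau) must run error-free; errors arrive as a Poisson process
    with rate lam, so a segment succeeds with probability
    p = exp(-lam*(T/n + tau)) and is retried from the last checkpoint
    until it does (expected 1/p tries per segment)."""
    seg = T / n + tau
    p = math.exp(-lam * seg)
    return n * seg / p

# Illustrative numbers: 100 s task, 0.5 s per checkpoint, 0.01 errors/s.
T, tau, lam = 100.0, 0.5, 0.01
best = min(range(1, 200), key=lambda n: aet(n, T, tau, lam))
young = T / math.sqrt(2 * tau / lam)   # Young's first-order approximation
print(best, round(young, 1))
```

Under this model the numerically optimal checkpoint count and the closed-form approximation land in the same region, which is the kind of agreement closed-form expressions for RRC are after.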
Mission Experience: How to Model and Capture it to Enable Vicarious Learning
Organizations for humanitarian assistance, disaster response and military activities are characterized by their special role in society to resolve time-constrained and potentially life-threatening situations. The tactical missions that these organizations conduct regularly are significantly dynamic in character, and sometimes impossible to fully comprehend and predict. In these situations, when control becomes opportunistic, the organizations are forced to rely on the collective experience of their personnel to respond effectively to the unfolding threats. Generating such experience through traditional means of training, exercising and apprenticeship, is expensive, time-consuming, and difficult to manage.
This thesis explores how and why mission experience should be utilized in emergency management and military organizations to improve performance. A multimedia approach for capturing mission experience has further been tested in two case studies to determine how the commanders’ experiences can be externalized to enable vicarious learning. These studies identify a set of technical, methodological, and ethical issues that need to be considered when externalizing mission experience, based on the two aforementioned case studies complemented by a literature review. The presented outcomes are (1) a model aligning abilities that tactical organizations need when responding to dynamic situations of different familiarity, (2) a review of the usefulness of several different data sources for externalization of commanders’ experiences from tactical operations, and (3) a review of methodological, technical, and ethical issues to consider when externalizing tactical military and emergency management operations. The results presented in this thesis indicate that multimedia approaches for capturing mission histories can indeed complement training and exercising as a method for generating valuable experience from tactical missions.
Anomaly Detection and its Adaptation: Studies on Cyber-Physical Systems
Cyber-Physical Systems (CPS) are complex systems where physical operations are supported and coordinated by Information and Communication Technology (ICT).
From the point of view of security, ICT offers new opportunities to increase vigilance and real-time responsiveness to physical security faults. On the other hand, the cyber domain carries all the security vulnerabilities typical of information systems, making security a new big challenge in critical systems. This thesis addresses anomaly detection as a security measure in CPS. Anomaly detection consists of modelling the good behaviour of a system using machine learning and data mining algorithms, and detecting anomalies when deviations from the normality model occur at runtime. Its main feature is the ability to discover kinds of attacks not seen before, making it suitable as a second line of defence.
The first contribution of this thesis addresses the application of anomaly detection as an early warning system in water management systems. We describe the evaluation of anomaly detection software integrated in a Supervisory Control and Data Acquisition (SCADA) system where water quality sensors provide data for real-time analysis and detection of contaminants. Then, we focus our attention on smart metering infrastructures. We study a smart metering device that uses a trusted platform for storage and communication of electricity metering data, and show that despite the strong core security, there is still room for the deployment of a second level of defence as an embedded real-time anomaly detector that can cover both the cyber and physical domains. In both scenarios, we show that anomaly detection algorithms can efficiently discover attacks in the form of contamination events in the first case and cyber attacks for electricity theft in the second. The second contribution focuses on online adaptation of the parameters of anomaly detection applied to a Mobile Ad hoc Network (MANET) for disaster response. Since survivability of the communication to network attacks is as crucial as the lifetime of the network itself, we devised a component that is in charge of adjusting the parameters based on the current energy level, using the trade-off between the node's response to attacks and the energy consumption induced by the intrusion detection system. Adaptation increases the network lifetime without significantly deteriorating the detection performance.
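As a toy illustration of the normality-model idea described above (and only that: the detectors evaluated in the thesis are far more sophisticated), one can learn per-feature statistics from attack-free data and flag runtime samples that deviate too far. All feature names and numbers below are invented:

```python
import statistics

class AnomalyDetector:
    """Toy normality-model detector: learn per-feature mean/stdev from
    attack-free training data, then flag a runtime sample whose z-score
    exceeds the threshold on any feature."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.stats = []            # (mean, stdev) per feature

    def fit(self, samples):
        cols = list(zip(*samples))
        self.stats = [(statistics.fmean(c), statistics.stdev(c)) for c in cols]

    def is_anomalous(self, sample):
        return any(abs(x - m) / s > self.threshold
                   for x, (m, s) in zip(sample, self.stats))

# Hypothetical water-quality readings: (pH, chlorine level).
normal = [(7.0 + 0.05 * (i % 5 - 2), 0.5 + 0.01 * (i % 3 - 1)) for i in range(60)]
det = AnomalyDetector(threshold=4.0)
det.fit(normal)
print(det.is_anomalous((7.01, 0.5)))   # close to the training data
print(det.is_anomalous((5.2, 2.4)))    # contamination-like deviation
```

The "second line of defence" property comes from the fact that only normal behaviour is modelled, so the contamination-like sample is flagged even though no attack of that kind was ever seen in training.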
Towards an Approach for Efficiency Evaluation of Enterprise Modeling Methods
Nowadays, there is a belief that organizations should keep improving different aspects of their enterprise to remain competitive in their business segment. For this purpose, it is necessary to understand the current state of the enterprise, and to analyze and evaluate it in order to figure out suitable change measures. To perform such a process in a systematic and structured way, support from powerful tools is essential. Enterprise Modeling is a field that can support improvement processes by developing models to show different aspects of an enterprise. An Enterprise Modeling Method is an important support for Enterprise Modeling. A method is comprised of different conceptual parts: Perspective, Framework, Method Component (which itself contains Procedure, Notation and Concepts), and Cooperation Principles. In an ideal modeling process, both the process and the results are of high quality. One dimension of quality which is in focus in this thesis is efficiency. The issue of efficiency evaluation in Enterprise Modeling still seems to be a rather unexplored research area.
The thesis investigates three aspects of Enterprise Modeling Methods: what efficiency means in this context, how efficiency can be evaluated, and in which phases of a modeling process efficiency could be evaluated. The contribution of the thesis is an approach for the evaluation of efficiency in Enterprise Modeling Methods, based on several case studies. The evaluation approach is constituted by efficiency criteria that should be met by (different parts of) a method. While a subset of these criteria always needs to be fulfilled in a congruent way, fulfillment of the rest of the criteria depends on the application case. To help the user in an initial evaluation of a method, a structure of driving questions is presented.
Resilience in High Risk Work: Analysing Adaptive Performance
In today’s complex socio-technical systems it is not possible to foresee and prepare for all future events. To cope with the intricacy and coupling between people, technical systems and the dynamic environment, people are required to continuously adapt. To design resilient systems, a deepened understanding of what supports and enables adaptive performance is needed. In this thesis two studies are presented that investigate how adaptive abilities can be identified and analysed in complex work settings across domains. The studies focus on understanding adaptive performance, what enables successful adaptation and how contextual factors affect the performance. The first study examines how a crisis command team adapts as it loses important functions of the team during a response operation. The second study presents a framework to analyse adaptive behaviour in everyday work where systems are working near the margins of safety. The examples that underlie the framework are based on findings from focus group discussions with representatives from different organisations, including health care, nuclear, transportation and emergency services. The main contributions of this thesis include the examination of adaptive performance and of how it can be analysed as a means to learn about and strengthen resilience. By using contextual analysis, enablers of adaptive performance and its effects on the overall system are identified. The analysis further demonstrates that resilience is not a system property but a result of situational circumstances and organisational structures. The framework supports practitioners and researchers in reporting findings, structuring cases and making sense of sharp-end adaptations. The analysis method can be used to better understand system adaptive capacities, monitor adaptive patterns and enhance current methods for safety management.
Tools for Understanding, Debugging, and Simulation Performance Improvement of Equation-based Models
Equation-based object-oriented (EOO) modelling languages provide a convenient, declarative method for describing models of cyber-physical systems. Because of the ease of use of EOO languages, large and complex models can be built with limited effort. However, current state-of-the-art tools do not provide the user with enough information when errors appear or simulation results are wrong. It is paramount that the tools give the user enough information to correct errors or understand where the problems that lead to wrong simulation results are located. However, understanding the model translation process of an EOO compiler is a daunting task that not only requires knowledge of the numerical algorithms that the tool executes during simulation, but also of the complex symbolic transformations being performed.
In this work, we develop and explore methods where the EOO tool records the transformations during the translation process in order to provide better diagnostics, explanations, and analysis. This information can be used to generate better error messages during translation. It can also be used to provide better debugging for a simulation that produces unexpected results or where numerical methods fail.
Meeting deadlines is particularly important for real-time applications. It is usually important to identify possible bottlenecks and either simplify the model or give hints to the compiler that enable it to generate faster code. When profiling and measuring execution times of parts of the model, the recorded information can also be used to find out why a particular system is slow. Combined with debugging information, it is possible to find out why a particular system of equations is slow to solve, which helps in understanding what can be done to simplify the model.
Finally, we provide a method and tool prototype suitable for speeding up simulations by compiling a simulation executable for a parallel platform by partitioning the model at appropriate places.
Towards an Ontology Design Pattern Quality Model
The use of semantic technologies, and Semantic Web ontologies in particular, has enabled many recent developments in information integration, search engines, and reasoning over formalised knowledge. Ontology Design Patterns have been proposed as useful in simplifying the development of Semantic Web ontologies by codifying and reusing modelling best practices.
This thesis investigates the quality of Ontology Design Patterns. The main contribution of the thesis is a theoretically grounded and partially empirically evaluated quality model for such patterns including a set of quality characteristics, indicators, measurement methods and recommendations. The quality model is based on established theory on information system quality, conceptual model quality, and ontology evaluation. It has been tested in a case study setting and in two experiments.
The main findings of this thesis are that the quality of Ontology Design Patterns can be identified, formalised and measured, and furthermore, that these qualities interact in such a way that ontology engineers using patterns need to make tradeoffs regarding which qualities they wish to prioritise. The developed model may aid them in making these choices.
This work has been supported by Jönköping University.
Designing Security-enhanced Embedded Systems: Bridging Two Islands of Expertise
The increasing prevalence of embedded devices and a boost in sophisticated attacks against them make embedded system security an intricate and pressing issue. New approaches to support the development of security-enhanced systems need to be explored. We realise that efficient transfer of knowledge from security experts to embedded system engineers is vitally important, but hardly achievable in current practice. This thesis proposes a Security-Enhanced Embedded system Design (SEED) approach, which is a set of concepts, methods, and tools that together aim at addressing this challenge of bridging the gap between the two areas of expertise.
First, we introduce the concept of a Domain-Specific Security Model (DSSM) as a suitable abstraction to capture the knowledge of security experts in a way that this knowledge can be later reused by embedded system engineers. Each DSSM characterises common security issues of a specific application domain in a form of security properties, which are further linked to a range of solutions.
As a next step, we complement a DSSM with the concept of a Performance Evaluation Record (PER) to account for the resource-constrained nature of embedded systems. Each PER characterises the resource overhead created by a security solution, a provided level of security, and the evaluation technique applied.
Finally, we define a process that assists an embedded system engineer in selecting a relevant set of security solutions. The process couples together (i) the use of the security knowledge accumulated in DSSMs and PERs, (ii) the identification of security issues in a system design, and (iii) the analysis of resource constraints of a system and available security solutions. The approach is supported by a set of tools that automate certain of its steps.
We use a case study from the smart metering domain to demonstrate how the SEED approach can be applied. We show that our approach adequately supports security experts in describing knowledge about security solutions in the form of formalised ontologies, and embedded system engineers in integrating an appropriate set of security solutions based on that knowledge.
Exploiting Energy Awareness in Mobile Communication
Although evolving mobile technologies bring millions of users closer to the vision of information anywhere-anytime, device battery depletion hampers the quality of experience to a great extent. The massive explosion of mobile applications, with the ensuing data exchange over the cellular infrastructure, is not only a blessing to the mobile user but also comes at a price in terms of rapid discharge of the device battery. Wireless communication is a large contributor to this energy consumption. Thus, the current call for energy economy in mobile devices poses the challenge of reducing the energy consumption of wireless data transmissions at the user end by developing energy-efficient communication.
This thesis addresses the energy efficiency of data transmission at the user end in the context of cellular networks. We argue that the design of energy-efficient solutions starts with energy awareness, and we propose EnergyBox, a parametrised tool that enables accurate and repeatable energy quantification at the user end using real data traffic traces as input. EnergyBox abstracts the underlying operating states of the wireless interfaces and allows the energy consumption to be estimated for different operator settings and device characteristics.
Next, we devise an energy-efficient algorithm that schedules the packet transmissions at the user end based on the knowledge of the network parameters that impact the handset energy consumption. The solution focuses on the characteristics of a given traffic class with the lowest quality of service requirements. The cost of running the solution itself is studied showing that the proposed cross-layer scheduler uses a small amount of energy to significantly extend the battery lifetime at the cost of some added latency.
Finally, the benefit of employing EnergyBox to systematically study the different design choices that developers face with respect to data transmissions of applications is shown in the context of location sharing services and instant messaging applications. The results show that quantifying energy consumption of communication patterns, protocols, and data formats can aid the design of tailor-made solutions with a significantly smaller energy footprint.
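The state-machine abstraction underlying a tool like EnergyBox can be sketched in a few lines. The sketch below uses a simplified 3G RRC model (DCH/FACH/IDLE with inactivity timers); the state names follow the 3G standard, but the power values and timer settings are illustrative assumptions, not EnergyBox's actual operator parametrisation:

```python
def estimate_energy(packet_times, p_idle=0.0, p_fach=0.4, p_dch=0.8,
                    t_dch_fach=5.0, t_fach_idle=12.0):
    """Simplified 3G RRC energy model in the spirit of EnergyBox.

    Any packet promotes the radio to the high-power DCH state; after
    t_dch_fach seconds of inactivity it drops to FACH, and after a
    further t_fach_idle seconds to IDLE.  Power values (watts) and
    timers are made-up illustrative numbers."""
    energy, t = 0.0, packet_times[0]
    tail = packet_times[-1] + t_dch_fach + t_fach_idle
    for nxt in list(packet_times[1:]) + [tail]:
        gap = nxt - t
        dch = min(gap, t_dch_fach)                         # time in DCH
        fach = min(max(gap - t_dch_fach, 0.0), t_fach_idle)  # time in FACH
        idle = max(gap - t_dch_fach - t_fach_idle, 0.0)      # time in IDLE
        energy += dch * p_dch + fach * p_fach + idle * p_idle
        t = nxt
    return energy

# A chatty trace (one packet per second) vs. the same data in one burst:
chatty = estimate_energy([float(s) for s in range(10)])
burst = estimate_energy([0.0, 0.1, 0.2])
print(round(chatty, 1), round(burst, 1))
```

Even this crude model reproduces the design insight mentioned above: the chatty transmission pattern keeps the radio in its high-power state far longer than the burst, which is exactly the kind of tail-energy effect that makes communication-pattern quantification worthwhile.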
Integration of Ontology Alignment and Ontology Debugging for Taxonomy Networks
Semantically-enabled applications, such as ontology-based search and data integration, take into account the semantics of the input data in their algorithms. Such applications often use ontologies, which model the application domains in question, as well as alignments, which provide information about the relationships between the terms in the different ontologies.
The quality and reliability of the results of such applications depend directly on the correctness and completeness of the ontologies and alignments they utilize. Traditionally, ontology debugging discovers defects in ontologies and alignments and provides means for improving their correctness and completeness, while ontology alignment establishes the relationships between the terms in the different ontologies, thus addressing completeness of alignments.
This thesis focuses on the integration of ontology alignment and debugging for taxonomy networks which are formed by taxonomies, the most widely used kind of ontologies, connected through alignments.
The contributions of this thesis include the following. To the best of our knowledge, we have developed the first approach and framework that integrate ontology alignment and debugging, and allow debugging of modelling defects both in the structure of the taxonomies and in their alignments. As debugging modelling defects requires domain knowledge, we have developed algorithms that employ the domain knowledge intrinsic to the network to detect and repair modelling defects.
Further, a system has been implemented and several experiments with real-world ontologies have been performed in order to demonstrate the advantages of our integrated ontology alignment and debugging approach. For instance, in one of the experiments with the well-known ontologies and alignment from the Anatomy track in Ontology Alignment Evaluation Initiative 2010, 203 modelling defects (concerning incomplete and incorrect information) were discovered and repaired.
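The central idea, that the network itself supplies domain knowledge for detecting defects, can be caricatured in a few lines: if a subsumption holds in one taxonomy and its aligned counterpart is not derivable in another, the counterpart is a candidate missing is-a relation. The data and helper names below are invented for illustration; the thesis's actual detection and repair algorithms are considerably more involved:

```python
def transitive_closure(isa):
    """Transitive closure of a set of (child, parent) is-a pairs."""
    closure, changed = set(isa), True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def candidate_missing_isa(isa1, isa2, alignment):
    """Is-a pairs derivable in taxonomy 1 whose aligned counterparts
    are not derivable in taxonomy 2."""
    c1, c2 = transitive_closure(isa1), transitive_closure(isa2)
    mapped = dict(alignment)   # equivalence mappings, taxonomy 1 -> 2
    return {(mapped[a], mapped[b]) for (a, b) in c1
            if a in mapped and b in mapped and (mapped[a], mapped[b]) not in c2}

isa_mouse = {("brain", "organ"), ("organ", "body_part")}
isa_human = {("organ", "body_part")}          # 'brain <= organ' is missing
alignment = [("brain", "brain"), ("organ", "organ"), ("body_part", "body_part")]
print(candidate_missing_isa(isa_mouse, isa_human, alignment))
```

Each candidate produced this way still needs validation by a domain expert, which is why the thesis treats detection and repair as an interactive debugging process rather than a fully automatic one.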
A Study of Chain Graph Interpretations
Probabilistic graphical models are today one of the most widely used architectures for modelling and reasoning about knowledge under uncertainty. The most widely used subclass of these models is Bayesian networks, which have found a wide range of applications both in industry and research. Bayesian networks do, however, have a major limitation, which is that only asymmetric relationships, namely cause and effect relationships, can be modelled between their variables. A class of probabilistic graphical models that has tried to solve this shortcoming is chain graphs. This is achieved by including two types of edges in the models, representing both symmetric and asymmetric relationships between the connected variables. This allows a wider range of independence models to be modelled. Depending on how the second type of edge is interpreted, this has also given rise to different chain graph interpretations.
Although chain graphs were first presented in the late eighties, the field has been relatively dormant and most research has been focused on Bayesian networks. This was the case until recently, when chain graphs received renewed interest. The research on chain graphs has thereafter extended many of the ideas from Bayesian networks, and in this thesis we study what this new surge of research has focused on and what results have been achieved. Moreover, we also discuss which areas we think are most important to focus on in further research.
Grounding Emotion Appraisal in Autonomous Humanoids
The work presented in this dissertation investigates the problem of resource management for autonomous robots. Acting under the constraint of limited resources is a necessity for every robot that should perform tasks independently of human control. Some of the most important variables and performance criteria for adaptive behavior under resource constraints are discussed. Concepts like autonomy, self-sufficiency, energy dynamics, work utility, effort of action, and optimal task selection are defined and analyzed, with the emphasis on the resource balance in interaction with a human. The primary resource for every robot is its energy. In addition to regulating its “energy homeostasis”, a robot should perform its designer’s tasks with the required level of efficiency. A service robot residing in a human-centered environment should perform social tasks like cleaning, helping elderly people or delivering goods. Maintaining a proper quality of work and, at the same time, not running out of energy represents a basic two-resource problem, which was used as a test-bed scenario in the thesis. Safety is an important aspect of any human-robot interaction. Thus, a new three-resource problem (energy, work quality, safety) is presented and also used for the experimental investigations in the thesis.
The main contribution of the thesis is the development of an affective cognitive architecture. The architecture uses top-down ethological models of action selection. The action selection mechanisms are nested into a model of human affect based on appraisal theory of emotion. The arousal component of the architecture is grounded in electrical energy processes in the robotic body and modulates the effort of movement. The provided arousal mechanism has an important functional role for the adaptability of the robot in the proposed two- and three-resource scenarios. These investigations are part of a more general goal of grounding high-level emotion substrates, the Pleasure Arousal Dominance emotion space, in homeostatic processes in humanoid robots. The development of the architecture took inspiration from several computational architectures of emotion in robotics, which are analyzed in the thesis.
Sustainability of the basic cycles of the essential variables of a robotic system is chosen as the basic performance measure for validating the emotion components of the architecture and the grounding process. Several experiments are performed with two humanoid robots, iCub and NAO, showing the role of the task selection mechanism and the arousal component of the architecture for the robot’s self-sufficiency and adaptability.
Completing the Is-a Structure in Description Logics Ontologies
The World Wide Web contains large amounts of data, and in most cases this data has no explicit structure. The lack of structure makes it difficult for automated agents to understand and use such data. A step towards a more structured World Wide Web is the idea of the Semantic Web, which aims at introducing semantics to data on the World Wide Web. One of the key technologies in this endeavour is ontologies, which provide means for modeling a domain of interest.
Developing and maintaining ontologies is not an easy task and it is often the case that defects are introduced into ontologies. This can be a problem for semantically-enabled applications such as ontology-based querying. Defects in ontologies directly influence the quality of the results of such applications as correct results can be missed and wrong results can be returned.
This thesis considers one type of defects in ontologies, namely the problem of completing the is-a structure in ontologies represented in description logics. We focus on two variants of description logics, the EL family and ALC, which are often used in practice.
The contributions of this thesis are as follows. First, we formalize the problem of completing the is-a structure as a generalized TBox abduction problem (GTAP), which is a new type of abduction problem in description logics. Next, we provide algorithms for solving GTAP in the EL family and ALC description logics. Finally, we describe two implemented systems based on the introduced algorithms. The systems were evaluated in two experiments which have shown the usefulness of our approach. For example, in one experiment using ontologies from the Ontology Alignment Evaluation Initiative, 58 and 94 detected missing is-a relations were repaired by adding 54 and 101 is-a relations, respectively, introducing new knowledge to the ontologies.
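To give a flavor of the completion problem at the level of atomic concepts only (a much simplified sketch: the thesis's algorithms handle full EL and ALC semantics, and the toy taxonomy below is invented for illustration), one can view the is-a structure as a directed graph and repair only those missing relations not already entailed by transitivity:

```python
def reachable(edges, start, goal):
    # DFS over is-a edges: does "start is-a goal" follow by transitivity?
    stack, seen = [start], set()
    while stack:
        c = stack.pop()
        if c == goal:
            return True
        if c in seen:
            continue
        seen.add(c)
        stack.extend(edges.get(c, ()))
    return False

def repair(edges, missing):
    # Naive repair in the spirit of GTAP: add each missing is-a relation
    # that is not already entailed by the current (possibly grown) structure.
    edges = {c: set(s) for c, s in edges.items()}
    added = []
    for sub, sup in missing:
        if not reachable(edges, sub, sup):
            edges.setdefault(sub, set()).add(sup)
            added.append((sub, sup))
    return edges, added

# Toy taxonomy: adding "Dog is-a Animal" also entails "Puppy is-a Animal",
# so the second missing relation needs no new edge.
tbox = {"Puppy": {"Dog"}, "Dog": {"Canine"}}
missing = [("Dog", "Animal"), ("Puppy", "Animal")]
fixed, added = repair(tbox, missing)
```

Note that this sketch only validates candidate repairs; choosing *which* relations to add (more general ones introduce more knowledge but risk errors) is the actual abduction problem studied in the thesis.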
Code Generation and Global Optimization Techniques for a Reconfigurable PRAM-NUMA Multicore Architecture
In this thesis we describe techniques for code generation and global optimization for a PRAM-NUMA multicore architecture. We specifically focus on the REPLICA architecture, a family of massively multithreaded very long instruction word (VLIW) chip multiprocessors with chained functional units and a reconfigurable emulated shared on-chip memory. The on-chip memory system supports two execution modes, PRAM and NUMA, which can be switched between at run-time. PRAM mode is considered the standard execution mode and targets mainly applications with very high thread-level parallelism (TLP). In contrast, NUMA mode is optimized for sequential legacy applications and applications with a low amount of TLP. Different versions of the REPLICA architecture have different numbers of cores, hardware threads and functional units. In order to utilize the REPLICA architecture efficiently we have made several contributions to the development of a compiler for REPLICA target code generation. It supports code generation for both PRAM mode and NUMA mode and can generate code for different versions of the processor pipeline (i.e. for different numbers of functional units). It includes optimization phases to increase the utilization of the available functional units. We have also contributed to the quantitative evaluation of PRAM and NUMA mode. The results show that PRAM mode often suits programs with irregular memory access patterns and control flow best, while NUMA mode suits regular programs better. However, for a particular program it is not always obvious which mode, PRAM or NUMA, will show the best performance. To tackle this, we contributed a case study for generic stencil computations, using machine-learning-derived cost models in order to automatically select at runtime which mode to execute in. We extended this to also cover sequences of kernels.
Energy-Efficient Computing over Streams with Massively Parallel Architectures
The rise of many-core processor architectures in the high-performance computing market answers a constantly growing need for processing power to solve more and more challenging problems, such as those in computing for big data. Fast computation is increasingly limited by the very high power required and by the management of the considerable heat produced. Many programming models compete to take profit of many-core architectures to improve both execution speed and energy consumption, each with their advantages and drawbacks. The work described in this thesis is based on the dataflow computing approach and investigates the benefits of a carefully designed pipelined execution of streaming applications, focusing in particular on off- and on-chip memory accesses. We implement classic and on-chip pipelined versions of mergesort for the SCC. We see how the benefits of the on-chip pipelining technique are bounded by the underlying architecture, and we explore the problem of fine-tuning streaming applications for many-core architectures to optimize for energy given a throughput budget. We propose a novel methodology to compute schedules optimized for energy efficiency for a fixed throughput target. We introduce Schedeval, a tool to test schedules of communicating streaming tasks under throughput constraints for the SCC. We show that streaming applications based on Schedeval compete with specialized implementations, and we use Schedeval to demonstrate performance differences between schedules that are otherwise considered equivalent by a simple model.
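The idea behind on-chip pipelining of mergesort, with merge stages consuming their inputs element by element instead of materializing intermediate arrays in off-chip memory, can be sketched with Python generators (an illustration of the general technique only, not the SCC implementation from the thesis):

```python
def stream(xs):
    # Leaf task: emit an already-sorted run element by element.
    yield from xs

def merge(left, right):
    # Pipeline stage: consume two sorted streams, emit one sorted stream.
    # Only one element per input is buffered at a time, mimicking the
    # small on-chip buffers that on-chip pipelining relies on.
    l, r = next(left, None), next(right, None)
    while l is not None and r is not None:
        if l <= r:
            yield l
            l = next(left, None)
        else:
            yield r
            r = next(right, None)
    while l is not None:
        yield l
        l = next(left, None)
    while r is not None:
        yield r
        r = next(right, None)

# A 4-leaf merge tree; every stage runs lazily, so output elements stream
# out before the inputs have been fully consumed.
runs = [[1, 5], [2, 6], [0, 7], [3, 4]]
tree = merge(merge(stream(runs[0]), stream(runs[1])),
             merge(stream(runs[2]), stream(runs[3])))
result = list(tree)  # [0, 1, 2, 3, 4, 5, 6, 7]
```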
Automatic and Explicit Parallelization Approaches for Mathematical Simulation Models
The move from single-core and single-processor systems to multi-core and many-processor systems comes with the requirement of implementing computations in a way that can utilize these multiple units efficiently. The task of writing efficient multi-threaded algorithms will not be possible without improving programming languages and compilers to provide the mechanisms to do so. Computer-aided mathematical modeling and simulation is one of the most computationally intensive areas of computer science. Even simplified models of physical systems can impose a considerable computational load on the processors at hand. Being able to take advantage of the potential computational power provided by multi-core systems is vital in this area of application. This thesis tries to address how we can take advantage of the potential computational power provided by these modern processors to improve the performance of simulations. The work presents improvements for the Modelica modeling language and the OpenModelica compiler.
Two approaches to utilizing the computational power provided by modern multi-core architectures are presented in this thesis: automatic and explicit parallelization. The first approach presents the process of extracting and utilizing potential parallelism from equation systems in an automatic way, without any need for extra effort from the modeler's/programmer's side. The thesis explains improvements made to the OpenModelica compiler and presents the accompanying task-systems library for efficient representation, clustering, scheduling, profiling and execution of complex equation/task systems with heavy dependencies. The explicit parallelization approach explains the process of utilizing parallelism with the help of the modeler or programmer. New programming constructs have been introduced to the Modelica language in order to enable modelers to write parallelized code. The OpenModelica compiler has been improved accordingly to recognize and utilize the information from these new algorithmic constructs and to generate parallel code to improve the performance of computations.
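As a sketch of what scheduling a clustered equation/task system onto a fixed number of cores can look like, the following is a generic greedy list scheduler (a textbook strategy, not the actual task-systems library from the thesis; task names and unit costs are invented):

```python
def topo(tasks, deps):
    # Depth-first topological ordering: dependencies come before dependents.
    seen, order = set(), []
    def visit(t):
        if t in seen:
            return
        seen.add(t)
        for d in deps.get(t, ()):
            visit(d)
        order.append(t)
    for t in tasks:
        visit(t)
    return order

def list_schedule(tasks, deps, cost, workers):
    # Process tasks in topological order; start each task on the worker
    # that allows the earliest start, respecting dependency finish times.
    finish, assign = {}, {}
    ready_at = [0.0] * workers
    for t in topo(tasks, deps):
        earliest = max((finish[d] for d in deps.get(t, ())), default=0.0)
        w = min(range(workers), key=lambda i: max(ready_at[i], earliest))
        start = max(ready_at[w], earliest)
        finish[t] = start + cost[t]
        ready_at[w] = finish[t]
        assign[t] = w
    return assign, max(finish.values())

# Diamond-shaped dependency graph: B and C depend on A, D depends on both.
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
cost = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 1.0}
assign, makespan = list_schedule(["A", "B", "C", "D"], deps, cost, workers=2)
# makespan == 3.0: A runs alone, B and C run in parallel, then D.
```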
Efficient Temporal Reasoning with Uncertainty
Automated Planning is an active area within Artificial Intelligence. With the help of computers we can quickly find good plans in complicated problem domains, such as planning for search and rescue after a natural disaster. When planning in realistic domains the exact duration of an action generally cannot be predicted in advance. Temporal planning therefore tends to use upper bounds on durations, with the explicit or implicit assumption that if an action happens to be executed more quickly, the plan will still succeed. However, this assumption is often false. If we finish cooking too early, the dinner will be cold before everyone is at home and can eat. Simple Temporal Networks with Uncertainty (STNUs) allow us to model such situations. An STNU-based planner must verify that the temporal problems it generates are executable, which is captured by the property of dynamic controllability (DC). If a plan is not dynamically controllable, adding actions cannot restore controllability. Therefore a planner should verify after each action addition whether the plan remains DC, and if not, backtrack. Verifying dynamic controllability of a full STNU is computationally intensive. Therefore, incremental DC verification algorithms are needed.
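STNUs extend Simple Temporal Networks (STNs) with contingent durations; the basic consistency check for a plain STN, on which DC verification builds, is a negative-cycle test in the distance graph. A minimal sketch in Python (the cooking scenario and the point numbering below are illustrative, not taken from the thesis):

```python
def stn_consistent(n, constraints):
    # An STN over n time points is consistent iff its distance graph has no
    # negative cycle, checked here with Floyd-Warshall. Each constraint
    # (u, v, w) encodes time(v) - time(u) <= w; a lower bound L on v - u is
    # encoded as the reversed edge (v, u, -L).
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in constraints:
        d[u][v] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))

# Cooking starts at point 0, ends at point 1, dinner is served at point 2.
# Duration in [20, 30] min, serving at most 10 min after cooking ends and
# at most 25 min after the start: satisfiable, e.g. t = (0, 20, 25).
ok = stn_consistent(3, [(0, 1, 30), (1, 0, -20), (1, 2, 10), (0, 2, 25)])
# Demanding dinner within 15 min of the start while serving at least 5 min
# after cooking ends creates a negative cycle: inconsistent.
bad = stn_consistent(3, [(0, 1, 30), (1, 0, -20), (2, 1, -5), (0, 2, 15)])
```

Dynamic controllability is strictly harder than this consistency test, since contingent durations are chosen by nature rather than by the executing agent; the algorithms discussed below address that harder problem.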
We start by discussing two existing algorithms relevant to the thesis. These are the very first DC verification algorithm, called MMV (by Morris, Muscettola and Vidal), and the incremental DC verification algorithm FastIDC, which is based on MMV.
We then show that FastIDC is not sound, sometimes labeling networks as dynamically controllable when they are not. We analyze the algorithm to pinpoint the cause and show how the algorithm can be modified to correctly and efficiently detect uncontrollable networks.
In the next part we use insights from this work to re-analyze the MMV algorithm. This algorithm is pseudo-polynomial and was later subsumed first by an O(n^5) algorithm and then by an O(n^4) algorithm. We show that the basic techniques used by MMV can in fact be used to create an O(n^4) algorithm for verifying dynamic controllability, with a new termination criterion based on a deeper analysis of MMV. This means that there is now a comparatively easy way of implementing a highly efficient dynamic controllability verification algorithm. From a theoretical viewpoint, understanding MMV is important since it acts as a building block for all subsequent algorithms that verify dynamic controllability. In our analysis we also discuss a change in MMV which substantially reduces the amount of regression needed in the network.
In the final part of the thesis we show that the FastIDC method can result in traversing part of a temporal network multiple times, with constraints slowly tightening towards their final values. As a result of our analysis we then present a new algorithm with an improved traversal strategy that avoids this behavior. The new algorithm, EfficientIDC, has a time complexity lower than that of FastIDC. We prove that it is sound and complete.
Automatic Verification of Parameterized Systems by Over-Approximation
This thesis presents a completely automatic verification framework for checking safety properties of parameterized systems. A parameterized system is a family of finite-state systems in which every system consists of a finite number of processes running the same algorithm in parallel. The systems in the family differ only in the number of processes, and, in general, the number of systems in a family may be unbounded. Examples of parameterized systems are communication protocols, mutual exclusion protocols, cache coherence protocols, distributed algorithms, etc.
Model-checking of finite state systems is a well-developed formal verification approach for proving properties of systems in an automatic way. However, it cannot be applied directly to parameterized systems, because the unbounded number of systems in a family means an infinite state space. In this thesis we propose to abstract an original family of systems consisting of an unbounded number of processes into one consisting of a fixed number of processes. An abstracted system is considered to consist of k+1 components: k reference processes and their environment. The transition relation of the abstracted system is an over-approximation of the transition relation of the original system; therefore, the set of reachable states of the abstracted system is an over-approximation of the set of reachable states of the original one.
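As a toy illustration of this kind of over-approximation (not the encoding used in the thesis, and with a single reference process, i.e. k = 1), one can abstract a simple mutual-exclusion protocol by tracking the environment as a set of local states that *may* be occupied, and then explore the finite abstract state space:

```python
from collections import deque

STEP = {"idle": "trying", "trying": "critical", "critical": "idle"}

def successors(state):
    # One abstract step: either the reference process moves, or some
    # environment process moves. Since exact process counts are abstracted
    # away, an environment move both keeps and drops the source state,
    # making the transition relation an over-approximation of every
    # concrete instance, whatever its number of processes.
    ref, env = state
    succs = set()
    nxt = STEP[ref]
    if nxt != "critical" or "critical" not in env:        # mutex guard
        succs.add((nxt, env))
    for s in env:
        t = STEP[s]
        if t == "critical" and (ref == "critical" or "critical" in env):
            continue                                       # guard blocks entry
        succs.add((ref, env | {t}))                        # source still occupied
        succs.add((ref, (env - {s}) | {t}))                # source vacated
    return succs

def reachable_states(init):
    # Plain BFS over the finite abstract state space.
    seen, queue = {init}, deque([init])
    while queue:
        for s in successors(queue.popleft()):
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return seen

states = reachable_states(("idle", frozenset({"idle"})))
# Mutual exclusion holds in the abstraction, hence in every concrete instance.
safe = all(not (ref == "critical" and "critical" in env) for ref, env in states)
```

Because the abstract reachable set over-approximates the concrete one, safety proved here transfers to all family members; the converse does not hold, so a violation in the abstraction may be spurious.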
A safety property is considered to be parameterized by a fixed number of processes whose relationship is at the center of attention in the property. Such processes serve as reference processes in the abstraction. We propose an encoding which allows us to perform reachability analysis for an abstraction parameterized by the reference processes.
We have successfully verified three classic parameterized systems with replicated processes by applying this method.
Page responsible: Director of Graduate Studies
Last updated: 2016-02-29