IDA Dept. of Computer and Information Science, Linköping University

IDA Technical Reports, 1989

Last updated: Tue, 02 Dec 1997 11:02:15


Ahrenberg, L. (1989). A Constraint-Based Model for Natural-Language Understanding and a Pilot Implementation. Technical Report LiTH-IDA-R-89-22, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: This paper describes and evaluates a pilot implementation of a model for natural-language understanding. An essential feature of the model is its coupling of a unification-based linguistic description with an object-oriented representation of domain and discourse knowledge. For both tasks knowledge is represented and manipulated as object descriptions in the form of attribute-value structures. These are in turn built up from constraints associated with words and constructions of the grammar and with types and attributes of the domain knowledge base.
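
The central operation, unification of attribute-value structures, can be sketched generically as follows; this is not the pilot implementation itself, and the feature names in the example are invented for illustration:

```python
# A minimal, generic sketch of unification over attribute-value structures,
# represented as nested dictionaries. Illustration of the general technique
# only, not the pilot implementation described in the report.

def unify(a, b):
    """Return the structure subsuming both a and b, or None on a clash."""
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for attr, value in b.items():
            if attr in result:
                merged = unify(result[attr], value)
                if merged is None:
                    return None          # conflicting constraints
                result[attr] = merged
            else:
                result[attr] = value     # constraint only present in b
        return result
    return a if a == b else None         # atomic values must agree

# Example with hypothetical lexical and construction constraints:
word = {"cat": "np", "agr": {"num": "sg"}}
construction = {"agr": {"num": "sg", "per": 3}}
print(unify(word, construction))   # {'cat': 'np', 'agr': {'num': 'sg', 'per': 3}}
```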

Ahrenberg, L. (1989). A Formal Field Grammar. Technical Report LiTH-IDA-R-89-46, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: The paper presents a grammatical formalism called Field-and-Category Grammar, which builds on insights and results from three quite different streams of work: the descriptive field grammars of the Danish linguist Poul Diderichsen, the functional unification-based grammars of Martin Kay and Ronald Kaplan, and the inheritance networks of Artificial Intelligence. Brief comparisons are made with other current formalisms, in particular as regards the statement of word order constraints. Finally, as an illustration of the descriptive potentials of FCGs, a grammar for a fairly large subset of Swedish is presented.

Ahrenberg, L. (1989). On the Integration of Linguistic Knowledge and World Knowledge in Natural Language Understanding. Technical Report LiTH-IDA-R-89-23, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the Nordic Conference of Text Comprehension in Man and Machine, Sigtuna, Sweden, October 27-28, 1989. (bibtex),

Abstract: No abstract available

Bilos, R. (1989). A Survey of Visibility Control Models. Technical Report LiTH-IDA-R-89-18, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: No abstract available

Bilos, R. and Fritzson, P. (1989). Experience from a Token Sequence Representation of Programs, Documents, and their Deltas. Technical Report LiTH-IDA-R-89-12, Department of Computer and Information Science, Linköping University, Sweden. Also presented at the International Workshop on Software Version and Configuration Control, Grassau, FRG, January 27-29, 1988. (bibtex),

Abstract: A primary goal of this work has been to investigate the consequences of a token-based program and document representation. This representation lies between plain text and trees in complexity. We have found that a program or a document represented as a token sequence saves on average about 50% of storage compared with a plain text representation. Another advantage is that deltas between program versions stored in source code control systems become insensitive to changes in whitespace or formatting style. Statistics from version handling, computation of deltas, and storage are presented in the paper.
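
Why token-level deltas ignore formatting changes can be illustrated with a small sketch; the regex tokenizer below is a crude stand-in, not the representation actually used in the system:

```python
# A small sketch of why deltas over token sequences ignore reformatting.
# The regex tokenizer is a placeholder, not the tokenizer used in DICE.
import difflib, re

def tokens(source):
    # identifiers/numbers, or any single non-space character (operators, punctuation)
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", source)

v1 = "if (x>0) { y := x; }\n"
v2 = "if ( x > 0 )\n{\n    y := x;\n}\n"   # same program, reformatted

line_delta  = list(difflib.unified_diff(v1.splitlines(), v2.splitlines(), lineterm=""))
token_delta = [op for op in difflib.SequenceMatcher(None, tokens(v1), tokens(v2)).get_opcodes()
               if op[0] != "equal"]

print(len(line_delta) > 0)    # True: the textual delta records the reformatting
print(token_delta)            # []: at the token level the two versions are identical
```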

Dahlbäck, N. and Jönsson, A. (1989). Empirical Studies of Discourse Representations for Natural Language Interfaces. Technical Report LiTH-IDA-R-89-33, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the 4th Conf. of the European Chapter of the Association for Computational Linguistics, Manchester, April, 1989. (bibtex),

Abstract: We present the results from a series of experiments aimed at uncovering the discourse structure of man-machine communication in natural language (Wizard of Oz experiments). The results suggest the existence of different classes of dialogue situations, requiring computational discourse representations of various complexity. Important factors seem to be the number of different permissible tasks in the system and to what extent the system takes initiative in the dialogue. We also analyse indexical expressions and especially the use of pronouns, and suggest a psychological explanation of their restricted occurrence in these types of dialogues.

Dahlhaus, E., Karpinski, M., and Lingas, A. (1989). A Parallel Algorithm for Maximum Matching in Planar Graphs. Technical Report LiTH-IDA-R-89-10, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: We present a new parallel algorithm for finding a maximum cardinality matching in an arbitrary planar bipartite graph G. Our algorithm is processor-time efficient if the size l of a maximum cardinality matching of G is large. It runs in time O((0.5n - l + sqrt(n))(log n)**4) on a CRCW PRAM with O((n**1.5) log n) processors.

Djidjev, H. N., Lingas, A., and Sack, J.-R. (1989). An O(n log n) Algorithm for Computing a Link Center in a Simple Polygon. Technical Report LiTH-IDA-R-89-09, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the 6th Symposium on Theoretical Aspects of Computer Science, Paderborn, Germany, February 1989. (bibtex),

Abstract: The problem of finding the link center of a simple n-vertex polygon P had previously been solved in quadratic time. It was posed as an open problem as to whether a faster algorithm exists for determining at least one point inside the link center. Here this question is answered affirmatively. We present an algorithm that determines, in O(n log n) time, either a triangle inside the link center or the entire link center. As a consequence, we also obtain an O(n log n) time solution to the problem of determining the link radius of P. Both results are improvements over the O(n**2) bound previously established.

Drabent, W. and Martelli, M. (1989). Strict Completion of Logic Programs. Technical Report LiTH-IDA-R-89-28, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: The paper presents a new approach to the problem of completeness of SLDNF-resolution. We propose a different reference theory that we call strict completion. This new concept of completion (comp*(P)) is based on a program transformation that, given any program, transforms it into a strict one (with the same computational behaviour), together with the usual notion of program completion. We consider it a reasonable reference theory for discussing program semantics and completeness results. The new comp*(P) is always consistent, and the completeness of all allowed programs and goals w.r.t. comp*(P) is proved.

Fjällström, P.-O. (1989). Approximation of Polygonal Lines. Technical Report LiTH-IDA-R-89-30, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: Given a polygonal line P with n vertices and a scalar value d, we present an algorithm that determines a polygonal approximation Q of P in O(n**2) time. The approximation can be characterized as follows. The vertices of Q are a minimal subsequence of the vertices of P, such that each edge of Q is either identical to an edge of P, or it replaces a connected subchain of edges of P. In the latter case, the edge must not deviate by more than distance d from the vertices of the subchain.
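
The characterization above can be made concrete with a small brute-force sketch (breadth-first search over all shortcut edges that stay within distance d of the vertices they replace); it illustrates the problem statement only and is not the O(n**2) algorithm of the report:

```python
# Sketch: Q keeps a minimal subsequence of P's vertices such that every
# shortcut edge stays within distance d of the vertices it replaces.
# Brute-force illustration only, not the report's O(n**2) algorithm.
import math
from collections import deque

def point_segment_distance(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def approximate(P, d):
    n = len(P)

    def ok(i, j):   # may edge P[i]-P[j] replace the subchain between them?
        return all(point_segment_distance(P[k], P[i], P[j]) <= d for k in range(i + 1, j))

    parent, queue = {0: None}, deque([0])     # BFS: fewest edges from vertex 0 to n-1
    while queue:
        i = queue.popleft()
        for j in range(i + 1, n):
            if j not in parent and ok(i, j):
                parent[j] = i
                queue.append(j)
    path, j = [], n - 1
    while j is not None:
        path.append(P[j]); j = parent[j]
    return path[::-1]

print(approximate([(0, 0), (1, 0.1), (2, -0.1), (3, 0), (4, 2)], d=0.2))
# [(0, 0), (3, 0), (4, 2)]
```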

Fjällström, P.-O., Katajainen, J., Levcopoulos, C., and Petersson, O. (1989). A Sublogarithmic Parallel Algorithm for Finding the Convex Hull of Sorted Points. Technical Report LiTH-IDA-R-89-29, Department of Computer and Information Science, Linköping University, Sweden. Also in BIT 30(1990), 378-384. (bibtex),

Abstract: We present a parallel algorithm for finding the convex hull of a sorted set of points in the plane. Our algorithm runs in O(log n/log log n) time using O(n log log n/log n) processors in the CRCW PRAM computational model, which is time and cost optimal. Our algorithm is based on cube root of n divide-and-conquer. Furthermore, we use a simple, pointer-based data structure.
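
For background, the sequential counterpart of the problem is simple: once the points are sorted, a monotone-chain scan finds the hull in linear time. The sketch below shows that sequential scan (assuming the points are sorted by x-coordinate), not the parallel algorithm of the report:

```python
# Linear-time convex hull of points already sorted by x (Andrew's monotone
# chain). Background illustration only, not the parallel CRCW PRAM algorithm.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_of_sorted(points):
    """points sorted by (x, y); returns hull vertices in counter-clockwise order."""
    def half(pts):
        chain = []
        for p in pts:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = half(points), half(points[::-1])
    return lower[:-1] + upper[:-1]

print(hull_of_sorted([(0, 0), (1, 3), (2, 1), (3, 4), (4, 0)]))
# [(0, 0), (4, 0), (3, 4), (1, 3)]
```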

Fjellborg, B. (1989). A General Framework for Extraction of VLSI Pipeline Structures. Technical Report LiTH-IDA-R-89-36, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the 15th Symposium on Microprocessing and Microprogramming, EUROMICRO'89, Short Notes Session, Cologne, West Germany, September 4-8, 1989. (bibtex),

Abstract: A novel approach for extraction of pipeline structures from high-level behavioral VLSI design descriptions is presented. Unlike previous work, it is not restricted to the exploitation of loops in the design, but can handle arbitrary computation structures without apparent pipelining properties. General criteria for locating pipeline structures are discussed, and an algorithm utilizing several heuristics is presented that produces multi-function pipelines for those parts of the design that are suitable for pipelining.

Fritzson, P. and Kroha, P. (1989). An Object-Oriented Database Approach to the Symbol Processing in an Incremental Compiler. Technical Report LiTH-IDA-R-89-27, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: This report introduces an object-oriented database used as a symbol table for an incremental compiler. It is argued that a pure relational model is unsuitable for this task. The model used is conceptually simple, but powerful enough to model languages of the complexity of Ada. We describe the conceptual schema in EER-model terminology, the external schema as a set of specific features of each programming language (visibility rules), and the object-oriented implementation. An important area of practical concern is addressed, namely the need for efficient implementation. The primary focus of efficiency is on storing and managing complex objects on disk. A simple but very efficient method for clustering objects on disk is described. A predictive method is used to guide the transfer of objects from disk to the heap. This clustering has a crucial influence on efficiency. Finally, it is shown how these clusters can be derived from the known (or assumed) structure of queries.

Hultman, J., Nyberg, A., and Svensson, M. (1989). A Software Architecture for Autonomous Systems. Technical Report LiTH-IDA-R-89-40, Department of Computer and Information Science, Linköping University, Sweden. Also presented at the First PROMETHEUS Scientific Workshop, Wolfsburg, 1989 and the Sixth International Symposium on Unmanned Untethered Submersible Technology, Ellicott City, Maryland, USA, 1989. (bibtex),

Abstract: A software architecture for autonomous, or partly autonomous, systems is described. We emphasize systems such as autonomous vehicles. The suggested architecture has three layers: the process layer, the rule layer, and the analysis layer. Each layer has a local state which is a function of time and is updated at regular intervals. The period from one state update to the next is called a phase for the layer. Lower layers operate with shorter phases, i.e. update their parameters with higher frequency, as is usual in automatic control systems. The analysis layer (the highest layer) uses a modified version of Explicit Temporal Logic for defining the behaviour of the autonomous system. We define this logic, called Executable Explicit Temporal Logic (EETL). How to include different planning facilities in the analysis layer is also discussed. The rule layer (the middle layer) is responsible for translating the EETL statements into a representation which can be executed. This translation, which is a sort of incremental compilation, is described. The result of the compilation is a set of rules which are passed to the process layer. The rules determine when actions should be executed in the process layer. The process layer is responsible for executing the actions which determine the behaviour of the autonomous system. The design of this layer reflects the need for real-time performance. The actions are represented as data dependencies between sensors and actuators. At each time-point a set of such actions is executed. The rules received from the rule layer determine when this action set shall be changed, i.e. they determine the control flow in the system. Hardware configuration and software tools used in an implementation of the system are also briefly discussed.

Joas, K. (1989). Interprocess Communication Primitives for a Distributed Debugger under Unix. Technical Report LiTH-IDA-R-89-48, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: By extending the low-level protocol used by the DICE system, an incremental programming environment for host-target development, it is possible to handle debugging of multiple processes distributed over multiple machines. In this document we define the interprocess communication primitives for distributed applications handled by such a system. We have chosen a message-based IPC facility, implemented using remote procedure calls in a Unix environment. The IPC primitives are defined at the procedure level. In addition to the functions related to IPC, we define functions for building a distributed system. These functions control creation and destruction of processes and communication links. They implement the PEPSy (Programming Environments for Parallel Systems) paradigm for structuring distributed systems.

Johansson, O. (1989). An Experiment with a Neural Network for Handwritten Character Recognition. Technical Report LiTH-IDA-R-89-44, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: This report shows some promising results in using neural networks for handwritten character recognition in real time. The characters were input from a digitizer and preprocessed to extract different features. These were taken as input to a backpropagation network which was trained to recognize the 29 lowercase letters of the Swedish alphabet. Its configuration was 40 input nodes, 30 hidden nodes and 29 output nodes. It was trained with 4 different versions of each character in 15 000 iterations. After that it was able to recognize 100 percent of two other separately entered versions of the alphabet. The amount of computation needed for classifying a character with this kind of network puts no restrictions on real-time performance.
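
The stated network configuration can be sketched as follows; the training data here are random placeholders standing in for the extracted features, so this only illustrates the 40-30-29 architecture and training regime, not the original experiment:

```python
# A minimal sketch of a 40-30-29 backpropagation network (sigmoid units,
# squared-error loss) trained on random placeholder data. The digitizer
# feature extraction is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0.0, 0.1, (40, 30)), np.zeros(30)   # input layer  -> hidden layer
W2, b2 = rng.normal(0.0, 0.1, (30, 29)), np.zeros(29)   # hidden layer -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = rng.random((4 * 29, 40))            # 4 versions of each of 29 letters (placeholder features)
T = np.tile(np.eye(29), (4, 1))         # one-hot targets, one output node per letter

lr, n = 0.5, len(X)
for _ in range(15000):                  # 15 000 iterations, as in the report
    H = sigmoid(X @ W1 + b1)            # hidden activations
    Y = sigmoid(H @ W2 + b2)            # output activations
    dY = (Y - T) * Y * (1 - Y)          # output deltas (squared error, sigmoid units)
    dH = (dY @ W2.T) * H * (1 - H)      # hidden deltas
    W2 -= lr * (H.T @ dY) / n; b2 -= lr * dY.mean(0)
    W1 -= lr * (X.T @ dH) / n; b1 -= lr * dH.mean(0)

print((Y.argmax(1) == T.argmax(1)).mean())   # accuracy on the placeholder training data
```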

Jönsson, A. (1989). Application-Dependent Discourse Management for Natural Language Interfaces: An Empirical Investigation. Technical Report LiTH-IDA-R-89-34, Department of Computer and Information Science, Linköping University, Sweden. Also in Papers from the Seventh Scandinavian Conference of Computational Linguistics (Nordiska Datalingvistikdagarna), Reykjavik, Iceland, April, 1989. (bibtex),

Abstract: This paper presents results from a refined analysis of a series of Wizard of Oz-experiments with focus on discourse management. The study is part of a larger project aimed at designing a general natural language interface. It is argued that a natural language interface needs different referent resolution mechanisms for different combinations of background systems and scenarios. I present three mechanisms that can be used: proximity, object hierarchy and goal-directed control. Finally I suggest the use of a Natural Language Interface Management System (NLIMS) for customization of natural language interfaces to different applications.

Jönsson, A. and Ahrenberg, L. (1989). Extensions of a Descriptor-Based Tagging System into a Tool for the Generation of Unification-Based Grammars. Technical Report LiTH-IDA-R-89-35, Department of Computer and Information Science, Linköping University, Sweden. Paper presented at the 16th International ALLC Conference and 9th ICCH Conference, Toronto, Canada, June, 1989. (bibtex),

Abstract: This paper describes an interactive tagging system (DagTag) which uses directed acyclic graphs, "dags", as tags. The motivations behind the system are explained and its use for tagging and analysis is briefly described. We then discuss how the system can be used to generate unification-based grammars tailored to particular applications.

Kamkar, M., Shahmehri, N., and Fritzson, P. (1989). Affect-Chaining and Dependency Oriented Flow Analysis Applied to Queries of Programs. Technical Report LiTH-IDA-R-89-02, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: Most of the work on static program flow analysis has been done in the context of code optimization. The situation is different for an application such as an interactive query tool for programmer support. Primarily this is because the information wanted is different from what is needed for optimization, but also because incremental flow analysis algorithms are much more relevant in this context. In this paper we introduce the concept of affect-chaining, which is the process of analysing the flow of data between variables in a program. The objective is to help the user better understand data flow and data dependencies in programs, not only during design and coding but also during test, debugging and maintenance. We present both forward and backward versions of affect-chaining analysis together with efficient algorithms. A long-term goal of the work presented in this paper is to combine results from static analysis of a program with information from the run-time state during execution of the same program. The idea is that this combination will enable an interactive query tool to answer questions about possible reasons for unexpected program behavior, and also to inform about possible consequences of a program change under consideration. Another goal is to develop better estimates of software complexity based on affect-chaining dependencies.
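
The basic idea of forward and backward affect-chaining can be illustrated as reachability in a variable-dependency graph; the graph below is a hand-made toy example, whereas the report derives such dependencies by static analysis of the program text:

```python
# Toy illustration of forward and backward affect-chaining as reachability
# in a variable-dependency graph (hand-made example, not derived from code).
def closure(graph, start):
    """All variables reachable from `start` along dependency edges."""
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        for w in graph.get(v, ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

# 'a -> b' means: the value of b is computed directly from the value of a,
# e.g. the statements  b := a + 1;  c := b * 2;  d := c - a;  e := 7;
affects = {"a": {"b", "d"}, "b": {"c"}, "c": {"d"}}
affected_by = {v: {u for u, ws in affects.items() if v in ws} for v in "abcde"}

print(closure(affects, "a"))        # forward: everything a may affect  -> b, c, d
print(closure(affected_by, "d"))    # backward: everything that may affect d -> a, b, c
```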

Karlsson, R. G. (1989). Traversing a Maze with a Robot Arm. Technical Report LiTH-IDA-R-89-01, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: No abstract available

Katajainen, J., Levcopoulos, C., and Petersson, O. (1989). Local Insertion Sort Revisited. Technical Report LiTH-IDA-R-89-20, Department of Computer and Information Science, Linköping University, Sweden. Also presented at the International Symposium on Optimal Algorithms, Varna, Bulgaria, May 29 - June 2, 1989. (bibtex),

Abstract: Two measures of presortedness, Dist(X) and logDist(X), are presented. The measures are motivated by a geometric interpretation of the input sequence X. Using these measures, an exact characterization of the local insertion sort algorithm of Mannila is achieved. Variants of local insertion sort, using many fingers (i.e., pointers into a finger search tree) instead of only one, are suggested. The number can either be fixed beforehand or dynamic, i.e., vary during the sorting process. Different strategies for moving the fingers yield different algorithms. Some general conditions which all strategies should satisfy are stated, and two such strategies are given. Furthermore, a method for designing dynamic algorithms that allocate an optimal number of fingers is provided. For any specific strategy satisfying our conditions, this method yields an optimal algorithm.

Katajainen, J. and Mäkinen, E. (1989). Tree Compression and Optimization with Applications. Technical Report LiTH-IDA-R-89-47, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: Dedicated to the memory of Markku Tamminen (1945-1989). Different methods for compressing trees are surveyed and developed. Tree compression can be seen as a trade-off problem between time and space in which we can choose different strategies depending on whether we prefer better compression results or more efficient operations in the compressed structure. Of special interest is the case where space can be saved while preserving the functionality of the operations; this is called data optimization. The general compression scheme employed here consists of separate linearization of the tree structure and the data stored in the tree. Also some applications of the tree compression methods are explored. These include the syntax-directed compression of program files, the compression of pixel trees, trie compaction, and dictionaries maintained as implicit data structures.

Kroha, P. (1989). An Extension of the Single Instruction Machine Idea. Technical Report LiTH-IDA-R-89-25, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: The idea of reducing machine instruction sets from CISC machines to RISC machines has an extreme possibility in SIC machines (single instruction machines) with only one instruction in the machine instruction set. Instead of operation-code decoding, only decoding of simple addressing modes is done through the ordinary addressing mechanism. Different types of arithmetic units are associated with certain special addresses in the address space. One Central Move Unit (CMU) supplies these arithmetic units (AMUs) with input values. This idea was first published in [Lip76]. The extension presented in this report brings the following original ideas: (1) the usage of two CMUs (Central Move Units) with a two-port memory, (2) usage of AMUs whose operations take longer than the MOVE operation of the CMU, (3) usage of an algorithm-tailored SIC machine, i.e. the number and the types of AMUs in the set can be tailored to the algorithm, (4) a possibility of parallel processing, since many of the arithmetic units can usually be scheduled by the compiler to work in parallel.
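
The memory-mapped arithmetic-unit idea can be illustrated with a toy simulator in which MOVE is the only instruction; the addresses and the adder AMU below are invented for the example and are not the design of the report:

```python
# Toy sketch of the SIC idea: MOVE is the only instruction, and writing the
# second operand to an AMU's special address makes its result appear at
# another special address. Addresses and AMU behaviour invented for the example.
MEM = [0] * 256

ADD_IN1, ADD_IN2, ADD_OUT = 240, 241, 242    # hypothetical adder AMU addresses

def move(src, dst):
    """The single instruction: MEM[dst] := MEM[src], with AMU side effects."""
    MEM[dst] = MEM[src]
    if dst == ADD_IN2:                        # writing the second operand fires the adder
        MEM[ADD_OUT] = MEM[ADD_IN1] + MEM[ADD_IN2]

# Program: z := x + y, expressed purely as MOVEs (x at 0, y at 1, z at 2).
MEM[0], MEM[1] = 17, 25
move(0, ADD_IN1)
move(1, ADD_IN2)
move(ADD_OUT, 2)
print(MEM[2])    # 42
```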

Kroha, P. and Fritzson, P. (1989). A Compiler with Scheduling for a Specialized Synchronous Multiprocessor System. Technical Report LiTH-IDA-R-89-26, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: This report presents an algorithm for scheduling parallel activities in a specialized synchronous multiprocessor system. This algorithm is implemented as part of a cross-compiler for an extended parallel single instruction machine (SIC). A SIC machine may contain multiple arithmetic processors, each associated with certain addresses in the address space. The scheduling cross-compiler initially derives a schedule including information about the number and types of processors necessary for the highest possible degree of parallelism. If too few arithmetic processors are available, a schedule for a smaller number of processors can be generated. Code generation and scheduling are presented for a one-page program example in Pascal. For this example, a speed-up of a factor of 7 was obtained for the multiprocessor system, compared to the Intel 80286 processor, and assuming the same clock cycle time.

Kuchcinski, K. and Peng, Z. (1989). Testability Analysis in a VLSI High-Level Synthesis System. Technical Report LiTH-IDA-R-89-42, Department of Computer and Information Science, Linköping University, Sweden. Also presented at the 15th Symposium on Microprocessing and Microprogramming, EUROMICRO'89, Cologne, West Germany, September 4-8, 1989, and will be published in Microprocessing and Microprogramming, The EUROMICRO Journal, 1990. (bibtex),

Abstract: This paper deals with the problem of defining testability measures for register-transfer level designs in order to incorporate them into a high-level synthesis system for VLSI circuits. First, a design representation and its observability and controllability measures are defined. Second, an algorithm for testability analysis is described. Third, it is discussed how the testability measures can be used to influence the synthesis process. Finally, some practical impact and applications of the presented approach are indicated.

Larsson, T. (1989). Mixed Functional and Temporal Hardware Verification. Technical Report LiTH-IDA-R-89-53, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of Applied Formal Methods For Correct VLSI Design, 13-16 November 1989, Leuven, Belgium. (bibtex),

Abstract: We present a higher-order description language extended with a temporal operator. The temporal extension is defined by a set of axioms and a temporal model. We give examples of descriptions of hardware circuits and of temporal constraints on these. Further, a verification method based on transformations is presented and illustrated by an example proof of a finite impulse response filter.

Levcopoulos, C., Katajainen, J., and Lingas, A. (1989). An Optimal Expected-Time Parallel Algorithm for Voronoi Diagrams. Technical Report LiTH-IDA-R-89-08, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the 1st Scandinavian Workshop on Algorithm Theory, Halmstad, Sweden, July 1988. (bibtex),

Abstract: We present a parallel algorithm which constructs the Voronoi diagram of a planar n-point set within a square window. When the points are independently drawn from a uniform distribution, the algorithm runs in O(log n) expected time on a CRCW PRAM with O(n log n) processors. The fast operation of the algorithm results from the efficiency of a new multi-level bucketing technique that is convenient for processor assignment. Concurrent writes are used only for distributing the points into their home buckets at the bottom level.

Levcopoulos, C. and Petersson, O. (1989). Heapsort - Adapted for Presorted Files. Technical Report LiTH-IDA-R-89-21, Department of Computer and Information Science, Linköping University, Sweden. Also presented at the 1989 Workshop on Data Structures and Algorithms, Ottawa, Canada, May 17-19, 1989. (bibtex),

Abstract: We provide a new sorting algorithm which is optimal with respect to several known, and new, measures of presortedness. A new such measure, called Osc(X), measures the oscillation within the input data. The measure has an interesting application in the sweep-line technique in computational geometry. Our algorithm is based on a new approach which yields space efficiency and it uses simple data structures. For example, after a linear time preprocessing step, the only data structures used are a static tree and a heap.

Levcopoulos, C. and Petersson, O. (1989). A Note on Adaptive Parallel Sorting. Technical Report LiTH-IDA-R-89-17, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: We present a cost optimal parallel algorithm for sorting presorted sequences. The measure of presortedness we consider is Rem(X), i.e., the smallest number of elements whose removal from the input sequence X leaves a sorted sequence. The algorithm sorts a sequence X of length n, with Rem(X)=r, in O(log n) time, provided O((n + r log r)/log n) processors are available, in the EREW PRAM model. This is the first PRAM sorting algorithm which is cost optimal, with respect to Rem(X). A sequential implementation of the algorithm turns out to be one of the simplest sorting algorithms which is optimal with respect to any measure of presortedness.
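
For reference, Rem(X) as defined above can be computed sequentially as n minus the length of a longest non-decreasing subsequence of X; the sketch below shows that standard computation, not the parallel sorting algorithm of the report:

```python
# Sketch of the Rem(X) measure: the fewest removals that leave a sorted
# sequence equals n minus the length of a longest non-decreasing subsequence.
# Standard sequential computation, not the parallel algorithm above.
from bisect import bisect_right

def rem(xs):
    tails = []                       # tails[k] = least possible tail value of a
    for x in xs:                     # non-decreasing subsequence of length k+1
        i = bisect_right(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(xs) - len(tails)

print(rem([1, 2, 9, 3, 4, 5]))       # 1: removing the single element 9 leaves it sorted
print(rem([3, 2, 1]))                # 2
```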

Levcopoulos, C. and Petersson, O. (1989). An Optimal Parallel Algorithm for Sorting Presorted Files. Technical Report LiTH-IDA-R-89-16, Department of Computer and Information Science, Linköping University, Sweden. A previous version presented at the 8th Conference on Foundations of Software Technology and Theoretical Computer Science, Poona, India, Dec. 1988. (bibtex),

Abstract: We present a cost optimal parallel algorithm for sorting presorted files. The measure of presortedness we consider is the number of inversions in the input file. The algorithm sorts a file of length n, with O(mn) inversions, in O(log n (log* n - log* m)) time, provided O(n log m / (log n (log* n - log* m))) processors are available, in the EREW PRAM model. This is the first PRAM sorting algorithm which is cost optimal with respect to the number of inversions. Our method uses a new approach, which can also be used to derive a simple sequential sorting algorithm that is efficient with respect to the number of inversions.
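
For reference, the measure itself (the number of out-of-order pairs) can be computed sequentially with a merge-sort count, as sketched below; this is only a way to evaluate the measure, not the parallel algorithm of the report:

```python
# Sequential counting of inversions (pairs that are out of order) with a
# merge sort. Illustrates the presortedness measure, not the parallel algorithm.
def inversions(xs):
    def sort(a):
        if len(a) <= 1:
            return a, 0
        mid = len(a) // 2
        left, nl = sort(a[:mid])
        right, nr = sort(a[mid:])
        merged, i, j, inv = [], 0, 0, nl + nr
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
                inv += len(left) - i        # right[j] precedes all remaining left elements
        merged += left[i:] + right[j:]
        return merged, inv
    return sort(list(xs))[1]

print(inversions([2, 1, 4, 3]))   # 2
print(inversions([1, 2, 3, 4]))   # 0: a sorted file has no inversions
```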

Lingas, A. (1989). An Efficient Parallel Algorithm for Planar Directed Reachability. Technical Report LiTH-IDA-R-89-11, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: We present a preprocessing of planar digraphs which enables us to solve all the reachability problems for a distinguished vertex of a planar digraph in time O((log n)**2) using a CREW PRAM with O(n log n) processors. The preprocessing takes O((log n)**6) time using a CRCW PRAM with O((n**1.5)(log n)**2) processors. Employing it, we can find the transitive closure of a planar digraph in time O((log n)**6) using a CRCW PRAM with O((n**2)log n) processors.

Lingas, A. (1989). Greedy Triangulation can be Efficiently Implemented in the Average Case. Technical Report LiTH-IDA-R-89-06, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the 14th International Workshop on Graph-Theoretic Concepts in Computer Science, Amsterdam, The Netherlands, June 1988. (bibtex),

Abstract: Let S be a set of n points uniformly distributed in a unit square. We show that the greedy triangulation of S can be computed in O(n log n) expected time (without bucketing). The best previously known upper-bound on the expected-time performance of an algorithm for the greedy triangulation was O(n**2).

Lingas, A. (1989). Subgraph Isomorphism for Connected Graphs of Bounded Valence and Bounded Separator is in NC. Technical Report LiTH-IDA-R-89-07, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of 1988 International Conference on Parallel Processing, Chicago, USA, August 1988. (bibtex),

Abstract: It is shown that the problem of subgraph isomorphism for connected graphs of bounded valence and bounded separator admits a deterministic parallel algorithm running in poly-log time and using a polynomial number of processors.

Lingas, A. (1989). Voronoi Diagrams with Barriers and the Shortest Diagonal Problem. Technical Report LiTH-IDA-R-89-04, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: Using the concept of a generalized Voronoi diagram extended to include edge barriers, it is shown that a shortest diagonal of a planar straight-line graph can be found in time O(n log n).

Lingas, A. and Syslo, M. M. (1989). A Polynomial-Time Algorithm for Subgraph Isomorphism of Two-Connected Series-Parallel Graphs. Technical Report LiTH-IDA-R-89-05, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the 15th International Colloquium on Automata, Languages and Programming, Tampere, Finland, July 1988. (bibtex),

Abstract: We present the first polynomial-time algorithm for the subgraph isomorphism problem restricted to two-connected series-parallel graphs, which uses a new decomposition technique. We also show that this problem is in random NC, and that it is in NC if the input graphs are of bounded valence.

Löwgren, J. (1989). An Architecture for Expert System User Interface Design and Management. Technical Report LiTH-IDA-R-89-50, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the ACM SIGGRAPH Symposium on User Interface Software and Technology (UIST'89), Williamsburg, Virginia, November 13-15, 1989. (bibtex),

Abstract: From a user interface point of view, expert systems differ from applications in general in that the reasoning process of the system often defines the dialogue structure. This has several advantages, but there may also be problems due to the lack of separation between functionality and user interface. This paper investigates the possibility of treating an expert system user interface as separate from the reasoning process of the system, and the consequences thereof. We propose that an expert system user interface can be seen as a combination of two different structures: the surface dialogue, comprising mainly lexical and syntactical aspects, and the session discourse, which represents the interaction between user and system at a discourse level. A proposed architecture for a software tool managing these two structures is presented and discussed, with particular emphasis on the session discourse manager.

Löwgren, J., Nordqvist, T., and Löf, S. (1989). Knowledge-Based Support for User Interface Evaluation in User Interface Management Systems. Technical Report LiTH-IDA-R-89-32, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: No abstract available

Maluszynski, J. and Näslund, T. (1989). Fail Substitutions for Negation as Failure. Technical Report LiTH-IDA-R-89-03, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the North American Conference on Logic Programming 89, Cleveland, Ohio, USA. (bibtex),

Abstract: No abstract available

Peng, Z. and Kuchcinski, K. (1989). A Multi-Level Optimization Strategy for Behavior-Entry Synthesis. Technical Report LiTH-IDA-R-89-41, Department of Computer and Information Science, Linköping University, Sweden. Also presented at the Fourth International Workshop on High-Level Synthesis, Kennebunkport, Maine, USA, October 15-18, 1989. (bibtex),

Abstract: This paper describes an optimization strategy used in a register-transfer synthesis system which takes a high-level behavioral specification as input. The synthesis system utilizes a unified design representation which captures the three elements of a VLSI system (control, communication, and computational elements) and makes trade-offs between them. The synthesis task is then carried out as a sequence of trade-off transformations. Selection of transformations is guided by the optimization strategy, which makes design decisions concerning operation scheduling, data path allocation, control allocation, and module binding simultaneously. The integration of several synthesis levels has resulted in a better chance of reaching a globally optimal solution.

Pettersson, M. (1989). Generating Interpreters from Denotational Definitions using C++ as a Meta-language. Technical Report LiTH-IDA-R-89-52, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: In this paper we present a method to automatically generate interpreters from strongly typed denotational definitions. C++ is used as a meta-language in a way that largely preserves the structure of denotational definitions in standard mathematical notation. The object-oriented features of C++ together with operator overloading and dynamic typing are used to achieve this goal. There are several advantages with this method. Compared to meta-languages such as Scheme, the strong typing of C++ eliminates many errors in denotational specifications already at compile-time. Also, denotational definitions expressed in C++ can be used together with common compiler-writing tools such as Yacc, and be compiled efficiently. Perhaps most important: this method facilitates more widespread use of denotational techniques, by embedding them within a common industrial-strength language.

Sandewall, E. (1989). Combining Logic and Differential Equations for Describing Real-World Systems. Technical Report LiTH-IDA-R-89-38, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the Conf. on Representation and Reasoning about Knowledge, Toronto, Canada, 1989. (bibtex),

Abstract: The paper shows how to combine non-monotonic temporal logic with differential calculus. A temporal logic is defined where time is real-valued and not discrete, and where real-valued, continuous parameters are used together with their derivatives. Differential equations can therefore be used directly as axioms, and need not be transformed into confluences. This logic is used for characterizing common-sense physical systems where some parameters, or some of their derivatives, are occasionally discontinuous. Differential calculus then serves for characterizing the parameters during time-segments where they are continuous, and logic is used for characterizing the parameters around the discontinuity points. Models and preferential entailment are defined for this logic. For a simple scenario example (a ball falling into a shaft) it is shown what geometrical and physical axioms are needed, and how the axioms preferentially entail the desired common-sense conclusion.

Sandewall, E. (1989). A Decision Procedure for A Theory of Actions and Plans. Technical Report LiTH-IDA-R-89-39, Department of Computer and Information Science, Linköping University, Sweden. Also published in Z. Ras et al. (eds), Methodologies for Intelligent Systems, 1989. (bibtex),

Abstract: The paper describes a decision procedure for a preferential (and therefore non-monotonic), temporal logic for reasoning about actions and plans. The procedure can be used for temporal prediction, temporal postdiction, and also for plan generation. The basic method for the procedure is to operate on a working set of partial interpretations, and to gradually strengthen these partial interpretations until they are minimal models for the given axioms.

Sandewall, E. (1989). Filter Preferential Entailment for the Logic of Action in Almost Continuous Worlds. Technical Report LiTH-IDA-R-89-37, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the International Joint Conference on Artificial Intelligence, Detroit, USA, 1989. (bibtex),

Abstract: Mechanical systems, of the kinds which are of interest for qualitative reasoning, are characterized by a set of real-valued parameters, each of which is a piecewise continuous function of real-valued time. A temporal logic is introduced which allows the description of parameters, both in their continuous intervals and around their breakpoints, and which also allows the description of actions being performed in sequence or in parallel. If axioms are given which characterize physical laws, conditions and effects of actions, and observations or goals at specific points in time, one wishes to identify sets of actions ("plans") which account for the observations or achieve the goals. The paper proposes preference criteria which should determine the model set for such axioms. It is shown that conventional preferential entailment is not sufficient. A modified condition, filter preferential entailment, is defined, in which preference conditions and axiom satisfaction conditions are interleaved.

Sandewall, E. et al. (1989). Department of Computer and Information Science Annual Research Report 1988. Technical Report LiTH-IDA-R-89-45, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: No abstract.

Santi, Ö. (1989). Retargeting of an Incremental Code Generator to MC68020. Technical Report LiTH-IDA-R-89-13, Department of Computer and Information Science, Linköping University, Sweden. (bibtex),

Abstract: Incremental programming environments are becoming increasingly popular, as the speed with which large programs can be developed increases. As opposed to earlier systems, which are mostly based on interpretation, DICE - Distributed Incremental Compiling Environment - is an incremental programming environment based solely on compilation technology. In this report the retargeting of the code generator in DICE from PDP-11/DEC-20 to the Sun workstation (MC68020) is described. This work is a part of the DICE project, which is administered by Peter Fritzson at the Department of Computer and Information Science, Linköping University. The project investigates incremental compilation of PASCAL in a distributed programming environment. The program development environment resides on a host computer, while the program under development resides on a separate target computer. The incrementality and the distributed configuration impose special constraints that must be taken into account when writing a code generator for a new machine. The administration of code and data on the target machine is discussed and analyzed. A new runtime environment in the target is provided by creating an interface to an existing one. In DICE, the traditional compile, link and load passes are integrated and interleaved.

Shahmehri, N. and Fritzson, P. (1989). Algorithmic Debugging for Imperative Languages with Side-effects. Technical Report LiTH-IDA-R-89-49, Department of Computer and Information Science, Linköping University, Sweden. Also presented at The 3rd International Workshop on Compiler Compilers, Schwerin, Germany, October 22-24, 1990. The proceedings will be published by Springer Verlag in the LNCS series. (bibtex),

Abstract: Algorithmic debugging is a technique for semi-automatic localization of program errors. So far, this technique has been limited to programs without side-effects, and has only been applied to Prolog programs. In this paper, we generalize the algorithmic debugging method to programs which may contain side-effects and which can be written in imperative languages, e.g. Pascal, or in object-oriented languages. Our method combines program transformations with results from data flow analysis to achieve this goal. Programs which contain side-effects are transformed or mapped to programs without side-effects. These transformations are guided by data flow analysis results. The conventional algorithmic debugging technique is used on the transformed or mapped program, but the debugging process is presented to the user in terms of the original program. A small prototype for a subset of Pascal has been implemented. A larger prototype including transformations is being implemented within the DICE system, a programming environment based on incremental compilation. Currently we restrict ourselves to bug localization for terminating programs. Also, side-effects related to pointers are currently not considered.
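
The conventional algorithmic debugging dialogue that the report builds on (for side-effect-free calls, in the style of Shapiro's original technique) can be sketched as follows; the oracle, normally the user, is replaced here by a stand-in function, and this is not the transformation-based method of the report:

```python
# Generic sketch of algorithmic debugging for side-effect-free calls: traverse
# the execution tree and ask whether each call returned the intended result;
# a call whose result is wrong while all its children are right is reported.
class Call:
    def __init__(self, name, args, result, children=()):
        self.name, self.args, self.result, self.children = name, args, result, list(children)

def locate_bug(call, intended):
    """Return the topmost call whose result is wrong although all its children are right."""
    if call.result == intended(call.name, call.args):     # this call behaved as intended
        return None
    for child in call.children:
        buggy = locate_bug(child, intended)
        if buggy is not None:
            return buggy
    return call

# Execution tree of a buggy square(3): it called double(3) and returned 6 instead of 9.
tree = Call("square", (3,), 6, [Call("double", (3,), 6)])

# Stand-in for the user's answers: the intended behaviour of each procedure.
intended = lambda name, args: args[0] ** 2 if name == "square" else 2 * args[0]

buggy = locate_bug(tree, intended)
print("Bug located in:", buggy.name)      # square: its child behaved correctly
```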

Wirén, M. (1989). Interactive Incremental Chart Parsing. Technical Report LiTH-IDA-R-89-24, Department of Computer and Information Science, Linköping University, Sweden. Also in Proc. of the Fourth Conference of the European Chapter of the Association for Computational Linguistics, Manchester, England, April 10-12, 1989. (bibtex),

Abstract: No abstract available

