SaS Seminars 2004
Software and Systems Research Seminar Series
Anatomy of a worst-case efficient priority queue
Date: January 20, Place: Alan Turing, Time: 15:15
University of Copenhagen
Department of Computing
Jubilee lecture: 25th anniversary as a university teacher.
Addressing MPSoC Hardware/Software platform challenges
Date: December 16, Place: Alan Turing, Time: 15:15
Prof. Luca Benini,
University of Bologna
Dept. of Electronics, Computer Science and Systems
With the fast diffusion of Multiprocessor System on Chip (MPSoC) platforms in many application areas, we need a coherent and synergistic hardware-software approach to fully exploit their huge potential. In this talk I will give an overview of the research activity carried out in this area at the University of Bologna. I will touch upon various topics such as Network on Chip (NoC) design, design flows, and programming paradigms. On these topics, I will present recent research results and directions of current investigation.
Luca Benini received the B.S. degree (summa cum laude) in electrical engineering from the University of Bologna, Italy, in 1991, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University in 1994 and 1997, respectively. He is with the Department of Electronics, Computer Science and Systems at the University of Bologna. He also holds visiting researcher positions at Stanford University and the Hewlett-Packard Laboratories, Palo Alto, CA.
GridStat : A Next-Generation Communication Infrastructure for the Electric Power Grid
Date: November 18, Place: Alan Turing, Time: 15:15
Prof. David E. Bakken, Washington State University Pullman, Washington USA
Electric power infrastructures are extremely complex. For example, the North American power grids involve almost 3500 utility organizations. The communication system coordinating and monitoring utility operations in the US was designed largely in response to the 1965 blackout in the US Northeast, and the corresponding systems in Europe are similarly obsolete. However, since then network and related technologies have improved dramatically, and factors such as deregulation are putting additional strain on the communications system. In this seminar I will overview the grid's communication system and show how it greatly limits opportunities for protection and control, as well as how it has been a factor in recent blackouts. I will also overview the requirements an improved communications infrastructure for the power grid must meet. I will then describe GridStat, our middleware communications framework for the electric power grid and other critical infrastructures. GridStat is a publish-subscribe middleware framework that is specialized for the delivery of status updates. It features optimizations that exploit the semantics of status flows (as opposed to generic event messages) and a QoS management infrastructure. It is being used in a trial deployment by Avista Utilities, an electric and gas utility.
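The key optimization the abstract mentions - exploiting the semantics of status flows so that stale updates can simply be dropped and each subscriber served at its own rate - can be sketched as follows. This is an illustrative Python sketch under assumed names (StatusBroker, subscribe, publish); it is not GridStat's actual API.

```python
class StatusBroker:
    """Sketch of a status-oriented publish-subscribe broker.

    Unlike a generic event queue, a status variable only needs its most
    recent value, so intermediate updates can be dropped and each
    subscriber can be served at its own requested rate.
    """

    def __init__(self):
        self._subscribers = {}  # variable name -> list of subscription dicts

    def subscribe(self, name, callback, interval):
        # Each subscriber states the minimum time between updates it wants.
        self._subscribers.setdefault(name, []).append(
            {"interval": interval, "callback": callback,
             "last_sent": float("-inf")})

    def publish(self, name, value, now):
        for sub in self._subscribers.get(name, []):
            # Rate filtering: forward only when enough time has elapsed;
            # skipped updates are dropped, not queued.
            if now - sub["last_sent"] >= sub["interval"]:
                sub["callback"](name, value)
                sub["last_sent"] = now

broker = StatusBroker()
received = []
broker.subscribe("bus_voltage", lambda n, v: received.append(v), interval=1.0)
for step, t in enumerate([0.0, 0.25, 0.5, 0.75, 1.0]):
    broker.publish("bus_voltage", 230.0 + step, t)
# Only the updates at t=0.0 and t=1.0 reach the subscriber.
```

A generic event broker would have to deliver or buffer all five updates; dropping the three intermediate ones is safe here precisely because a status value supersedes its predecessors.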
Dave Bakken is an Associate Professor of Computer Science in the School of Electrical Engineering and Computer Science at Washington State University (WSU). His research interests include middleware, distributed computing systems, fault tolerance, and QoS frameworks. Prior to joining WSU, he was a scientist at BBN, where he was an original co-inventor of the Quality Objects (QuO) framework. He is on sabbatical in Norway this academic year, at the University of Oslo and also 20% at Simula Research Lab.
R&D Challenges for Resilience in Ambient Intelligence
Prof. Dr.-Ing. Heinz Thielmann, Director, Fraunhofer Institute for Secure Telecooperation (SIT), Darmstadt, Germany
Date: October 14, Place: Alan Turing, Time: 15:15
The overall goal of developing a secure and dependable Information Society calls for a better understanding of the novel issues associated with the advent of innovative and pervasive ICT. The growing autonomy and intelligence of technologies and systems, together with the increasing scale and volume of their deployment, pose new challenges for security and dependability. The complexity of such challenges is further magnified by the increasing volatility and growing heterogeneity of products, applications, services, systems and processes in the digital environment, and by the inherent interdependencies resulting from the pervasiveness of technologies and systems in all aspects of our society and our economy.
In this context, the objective of Fraunhofer SIT - in line with the program of the European Commission - is to mobilise all stakeholders and relevant scientific communities in identifying and articulating the key R&D challenges in developing a secure and dependable Information Society. Whereas the thorough investigation of the R&D challenges associated with the development of a secure and dependable Information Society may be very broad and demanding, we need to focus on a limited and selected number of themes, such as:
How to meet the dependability and security requirements for unbounded computer-based systems-of-systems and networks;
How to build in resilience and dependability in dynamic and evolvable information networks and infrastructures for AmI;
How to understand and cope with interdependencies between the information infrastructure and other critical infrastructures for Society.
Challenges and concerns. The information infrastructure - public and enterprise - underpins most systems in our society. Critical systems are exposed not only to risks originating from inside, but increasingly to exogenous ones, both malicious and those arising from the interconnection with other systems. ICT is a crucial factor in these risks. The most crucial part is the so-called Critical Information Infrastructure (CII), the nervous system that connects all other networked systems. The risks induced by the information infrastructure can affect society as a whole, an industrial or geographic sector, businesses and the single citizen. But the emerging wide-spread interconnectivity brings forth interdependencies: faults and failures can be caused by external systems through mostly uncovered links. Protecting ICT systems networked in unbounded environments needs new scientific and technological developments. Most systems (e.g. power, transport) were designed in the past with a different paradigm in mind: isolated systems, clear jurisdictions and responsibilities, controlled access and interactions. Now we are interconnecting those systems with ICT, deploying networked systems with unbounded access - systems-of-systems comprising components that have not been explicitly produced for them. Behaviours simply emerge from the combination of systems, and some of them are failure-prone in unknown and unpredictable ways. This constitutes a major challenge. The protection of these infrastructures, and in particular of the information infrastructures, requires a profound understanding that is scientifically based, rigorous, and supported by appropriate technologies.
Key technological issues. We need multiple approaches that take advantage of existing models, and we need to develop those appropriate for understanding interdependencies and hidden connections at all levels of granularity (from devices to Personal Area Networks, to LANs, to WANs, to interdependent infrastructures). We need to develop technologies for protection (understanding by this the whole cycle from prevention to recovery from failures). This requires data interpretation for the observation and explanation of situations, mostly in real time and related to alert and emergency conditions. For managing the security of networked systems, it is necessary to manage all required information. A new challenge is to devise security as an intrinsic feature of systems; self-healing solutions (architectures and other needed technologies) have to be studied. In a networked society, all nodes are connected and "always on". This is a new risk situation, especially for citizens who might not be aware of the threats. The individual's domestic infrastructure is at the same time a source of vulnerability for the overall infrastructures and a source of vulnerabilities for the individual, with effects on his/her security and privacy, and changes in the consideration of time and space issues (proximity, residence, personal spaces). The monitoring of, detection of, and reaction to failure conditions have to consider the expectations and requirements of multiple stakeholders (service users, citizens, etc.). Resilience should also be applied to the control of industrial installations, where ICT is pervasive. There, systems should develop an awareness of situations that is scalable in different dimensions (space, time, rationality - what is expected from a given device), and then act according to dependability/security objectives. This needs to include the specification of local and general failure conditions.
Large industrial control is evolving, with the inclusion of ever more networked devices. Resilience is not just the result of the composition of the properties of individual components or architectures, but also of the engineering processes.
The presentation will give insight into related R&D areas and projects at Fraunhofer SIT.
Prof. Dr.-Ing. Heinz Thielmann graduated in communications engineering and data processing in 1969 at the Technical University Darmstadt (Germany), where he also received his Dr.-Ing. degree in 1973 with a thesis on "Analysis and Synthesis methods in analog and digital filtering". From 1974 to 1994 Prof. Thielmann worked in different functions at Philips: R&D for communication systems, product management, and CEO of a worldwide business unit "Network Systems". In 1994 he joined the Fraunhofer Gesellschaft and is Managing Director of the Institute for Secure Telecooperation in Darmstadt. Since 1973 Prof. Thielmann has given lectures in communications engineering, networks, IT security and general innovation management. He is an advisor to government authorities, the EU Commission and network operators, and a member of industrial supervisory boards.
Complexity Issues in System Development: Examples from Automotive Electronics
Date: September 6, Place: Alan Turing, Time: 13:15
Jakob Axelsson, Volvo Car Corporation
In this seminar, some of the major trends and drivers in the development of automotive electronics will be presented. One of the major concerns is how to deal with the increasing complexity, in particular in software. To be able to handle this, a better understanding of the nature of complexity is needed, which not only includes the system itself but also the organisation and people that participate in the development. Since the main activity of the organisation is information processing, resulting in a description of the product to be produced, the ability of humans to process information is at the heart of the problem. In this presentation, an initial attempt is made at describing some of the issues involved, and some of the strategies employed within the automotive industry to cope with the complexity.
Jakob Axelsson studied Computer Science in Linköping, Sweden, and Lausanne, Switzerland. He received an M.Sc. from Linköping University in 1993, and a Ph.D. in 1997 for a thesis on hardware/software codesign of real-time systems. He has been working at ABB Corporate Research and ABB Power Generation (now Alstom) in Baden, Switzerland, Volvo Technological Development (now Volvo Technology) and Carlstedt Research & Development in Göteborg, Sweden. He is currently with the Volvo Car Corporation in Göteborg, where he is program manager for research and advanced engineering for electrical and electronic systems. He is also an adjunct professor in software and systems engineering at Mälardalen University in Västerås. In addition, he is chairman of the board of the ARTES national graduate school in real-time and embedded systems, and was until recently president of the Swedish chapter of the International Council on Systems Engineering (INCOSE).
Xcerpt: A Deductive Query Language for the (Semantic) Web
Date: August 19, Place: Alan Turing, Time: 15:15
Sebastian Schaffert, Department of Computer Science, LMU Munich
Current Web query languages are special purpose in the sense that they are suitable either only for querying XML or semistructured data (like XQuery), or only for querying "Semantic Web" metadata, e.g. in RDF or OWL. The language Xcerpt presented in this talk, in contrast, is a deductive, rule-based query language that aims at being capable of querying both "standard" Web data and "Semantic Web" data, and even allows combining both kinds of information. Xcerpt's deductive properties also allow it to reason with such data. This talk first introduces Xcerpt's basic constructs with a focus on incomplete pattern queries for Web data, and then shows its application to a "standard" Web scenario, which is later extended with (simple) ontology reasoning. The talk furthermore introduces the event and update language XChange, which builds upon Xcerpt for its querying components, and briefly summarises other related projects.
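To make the notion of incomplete pattern queries concrete, the following Python sketch matches a query pattern that mentions only a subset of a document's children against tree-shaped data. Nested dicts stand in for XML-like terms; this illustrates the idea of breadth-incomplete patterns only, and is not Xcerpt's actual syntax or semantics.

```python
def matches(pattern, data):
    """Return True if `pattern` matches `data`, where the pattern may be
    incomplete in breadth: it only needs to mention a subset of the
    children present in the data."""
    if isinstance(pattern, dict):
        if not isinstance(data, dict):
            return False
        # Every pattern child must match; extra data children are allowed
        # (that is the "incompleteness" of the query).
        return all(key in data and matches(sub, data[key])
                   for key, sub in pattern.items())
    return pattern == data  # leaf values must agree exactly

book = {"book": {"title": "Xcerpt", "author": {"name": "S. Schaffert"},
                 "year": 2004}}
# The query omits 'author' and 'year' but still matches the document.
assert matches({"book": {"title": "Xcerpt"}}, book)
assert not matches({"book": {"title": "XQuery"}}, book)
```

Xcerpt's actual patterns go further, supporting incompleteness in depth and in order as well as variable bindings usable in construct rules.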
Trace Schemata for (Multi-language) Dynamic Analyses
Date: June 3, Place: Alan Turing, Time: 15:15
Mireille Ducassé, Professor, Technical University INSA of Rennes
In order to understand the dynamics of programs, some data about executions have to be collected, and these data are then analyzed, preferably by automated tools. This raises a number of questions, in particular: What data have to be collected? What does the analysis produce? How are the data collected? How does the analysis work? How are the collection and the analysis of data combined? In the presentation I will concentrate on the first and last questions, which are too often overlooked. I will first advocate that properly specifying what data to collect is a key issue. I will try to coin the name "trace schemata" for modeling trace information. Second, I will discuss how what to collect is deeply correlated with how collection and analysis are combined. We have shown, with different debuggers implemented for different languages, that collecting trace information driven by the analysis, in a modular way, is a good compromise. It is easy to port. With the addition of a filtering mechanism, it can be efficient because only the relevant part of the trace is collected for a given analysis. This enables rich trace schemata to be defined. Therefore, a given tracer can be re-used for several analyses, avoiding a lot of tedious work.
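The idea of collecting only the trace data a given analysis asks for - with a filter acting as a minimal "trace schema" - can be sketched in Python using the standard `sys.settrace` hook. The function names and the schema-as-predicate encoding are assumptions for illustration, not the tools discussed in the talk.

```python
import sys

def collect_trace(func, schema):
    """Run `func` and record only the trace events selected by `schema`,
    a predicate over (event, function name, line number). The analysis
    thus drives the collection: events it does not need are never stored.
    """
    trace = []

    def tracer(frame, event, arg):
        record = (event, frame.f_code.co_name, frame.f_lineno)
        if schema(*record):
            trace.append(record)
        return tracer  # keep tracing nested calls and local events

    sys.settrace(tracer)
    try:
        func()
    finally:
        sys.settrace(None)
    return trace

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# This "schema" asks only for call events inside fib, so line and return
# events (and the wrapper lambda's frames) are filtered out at the source.
calls = collect_trace(lambda: fib(4),
                      lambda event, name, line: event == "call" and name == "fib")
```

A call-counting analysis and a control-flow analysis can then share the same tracer and differ only in the schema they supply.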
Prof. Mireille Ducassé
IRISA/INSA of Rennes, France
Professor Mireille Ducassé worked for 11 years in industrial research centers, first at the "Laboratoire de Marcoussis" in France, then at the European Computer-Industry Research Centre in Germany. She completed a PhD at the University of Rennes in 1992. In 1993, she was appointed professor at the technical university INSA (Institut National des Sciences Appliquées) of Rennes. Her research is carried out within IRISA (Institut de Recherche en Informatique et Systèmes Aléatoires), which federates the computer science research of Rennes. She has been conference chair of AADEBUG 95 (international workshop on Automated Debugging) and program chair of AADEBUG 2000. Her publications can be found at http://www.irisa.fr/lande/ducasse/
Advanced Research with Autonomous Unmanned Aerial Vehicles
Date: March 25, Place: Visionen, Time: 15:15
Patrick Doherty, Professor, IDA
The emerging area of intelligent unmanned aerial vehicle (UAV) research has shown rapid development in recent years and offers a great number of research challenges for artificial intelligence and knowledge representation. For both military and civilian applications, there is a desire to develop more sophisticated UAV platforms where the emphasis is placed on intelligent capabilities and their integration in complex distributed software architectures. Such architectures should support the integration of deliberative, reactive and control functionalities in addition to the UAV's integration with larger network centric systems.
In my talk I will present some of the research and results from the WITAS UAV Project, a long term basic research project with UAVs currently being pursued at Linköping University, Sweden. Actual missions flown for an international evaluation group in Revinge, Sweden will also be shown.
Intrusion Detection: myths of the past, current state of the art and future challenges
Date: February 12, Place: Alan Turing (Estraden), Time: 15:15
Marc Dacier, Professor, Corporate Communication Department, Eurecom
In the course of 2003, a well known consulting group published a report advising its customers to postpone any investment in "Intrusion Detection Systems" (IDS) in favor of so-called "Intrusion Prevention Systems" (IPS). This led to some heated controversy in the Intrusion Detection community. During this talk, we try to clarify the situation by offering a thorough and historical review of existing IDS. We identify the remaining gap between the promises offered a few years ago and today's available solutions. We highlight the issues left open and the avenues for future research, not only in terms of detection technologies per se, but also in terms of correlation mechanisms and countermeasures. We list the most active research threads around the world and we explain why the lack of unbiased data concerning the existing attack processes hinders the evaluation of existing solutions. We end the presentation by explaining why honeynets may constitute an interesting avenue to solve that problem.
CLP(BioNet): Towards a CLP framework for the analysis of Biochemical Networks
(Invited Talk at the Annual SweConsNet meeting)
Date: January 15, Place: Alan Turing (Estraden), Time: 10:00
Yves Deville, Professor, Université Catholique de Louvain, Department of Computing Science and Engineering
Biochemical networks such as metabolic, regulatory or signal transduction pathways can be viewed as interconnected processes forming an intricate network of functional and physical interactions between molecular species in the cell. The amount of information available in such networks is increasing very rapidly. This offers the possibility of performing various analyses on the structure of the full network of pathways for one organism as well as across different organisms, and has therefore generated interest in developing databases for storing this information, and methods for analyzing such networks. Analyzing these networks remains, however, far from straightforward due to the nature of biological networks, which are often very large, heterogeneous, incomplete, or inconsistent. The analysis of biological networks is hence a challenging problem in systems biology, in bioinformatics, and in computer science.
Various forms of data models have been devised for the representation and for the analysis of biochemical networks (e.g. bipartite graphs). An object-oriented model, which is the basis of the aMAZE database for the representation of biochemical processes, will be presented. A biochemical network represented in this framework can then be transformed into a generalized graph, where nodes and arcs have attributes. Such graphs can be used for the visualization of the network as well as for its analysis.
The constraint programming framework is an attractive framework for the analysis of biochemical networks because most of the analyses can be expressed as a set of basic constraints on (extended) graphs, and various domain expertise can also be described by constraints. CLP(BioNet) is a first attempt to explicitly propose biological networks, represented by a specific form of graphs, as the underlying domain of a constraint system. Constraints, such as PathConstraint, form the basic constraints of the system. A specific analysis can then be expressed by combining basic constraints.
A first prototype of CLP(BioNet) is being developed. It is implemented in Oz. It uses finite domains and ideas from finite sets. Different graph algorithms are used to ensure the incrementality and the propagation of the constraints. This approach is also tested on real biochemical networks. The specification of analysis criteria as well as the analysis of the results is done in collaboration with biologists.
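The pruning behind a PathConstraint-style basic constraint can be sketched as follows, in Python rather than Oz and with illustrative names that are not CLP(BioNet)'s actual API: a node can participate in a path from source to target only if it is reachable from the source and can itself reach the target.

```python
def path_constraint_filter(arcs, source, target):
    """Keep only the nodes that can lie on some directed path from
    `source` to `target`: a node survives if it is reachable from the
    source and co-reachable from the target. This mimics the domain
    pruning a path constraint would perform during propagation."""
    def reachable(start, edges):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(dst for src, dst in edges if src == node)
        return seen

    forward = reachable(source, arcs)                        # from source
    backward = reachable(target, [(d, s) for s, d in arcs])  # to target
    return forward & backward

# Toy metabolic-style network: the dead-end metabolite "x" is pruned.
arcs = [("glc", "g6p"), ("g6p", "f6p"), ("g6p", "x"), ("f6p", "pyr")]
on_some_path = path_constraint_filter(arcs, "glc", "pyr")
```

In a constraint system, repeating this kind of filtering incrementally as other constraints shrink the graph is what allows path constraints to be combined with further domain expertise.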
Page responsible: Christoph Kessler
Last updated: 2012-08-17