
SaS Seminars 2007

Software and Systems Research Seminar Series


Fall 2007



Trading coverage for fault- and partition-tolerance in MANETs

Date: October 17 Place: John von Neumann Time: 10:15

Dr. Paul Ezhilchelvan, University of Newcastle upon Tyne

Abstract:

In a failure- and partition-prone network, such as a mobile ad-hoc network (MANET), complete coverage requires that a message be transmitted until all operative nodes acknowledge reception. This is prohibitively expensive when small, mobile wireless devices collaborate by forming a MANET. This talk will present an approach that intentionally sacrifices full coverage for fault- and partition-tolerance, together with savings in memory and bandwidth. I will outline two novel broadcast protocols, termed ◊Q and ◊R. The former ensures eventual quiescence (broadcast-related transmissions stop permanently) and the latter eventual relinquishing of broadcast messages (which can be discarded permanently). Using these protocols, we will explore solutions to the consensus problem, which is essential to support user collaboration. We will observe that when full coverage is not guaranteed, consensus amongst n devices cannot be guaranteed if at most f of them can crash (unnoticeably) and n ≤ 3f. We present a protocol for n > 3f and evaluate its performance through simulations. The protocol is eventually quiescent in transmitting the consensus decision and eventually relinquishes all broadcast messages used.
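As a side note, the n > 3f resilience bound mentioned in the abstract can be made concrete with a minimal Python sketch (illustrative only; the function names are ours, not from the talk):

def can_reach_consensus(n: int, f: int) -> bool:
    # Illustrative check of the n > 3f bound stated in the abstract:
    # with incomplete coverage, n devices can reach consensus only if
    # fewer than a third of them may crash unnoticeably.
    return n > 3 * f

def max_tolerable_crashes(n: int) -> int:
    # Largest f with n > 3f, i.e. floor((n - 1) / 3).
    return (n - 1) // 3

print(can_reach_consensus(10, 3))   # True:  10 > 9
print(can_reach_consensus(9, 3))    # False: 9 is not > 9
print(max_tolerable_crashes(9))     # 2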



Professor Installation Seminar:

Parallel Programming and Component Models for the Many-Core Era

Date: September 27 Place: John von Neumann Time: 15:15

Professor Christoph Kessler, IDA

Abstract:

The multicore roadmap in microprocessor design predicts a doubling of the number of cores per processor chip about every other year, while per-core performance will hardly increase any further. Programmers will very soon be faced with hundreds of hardware threads on a single processor chip. Exploiting them efficiently is the only way to keep up with a performance potential that still grows exponentially for the foreseeable future. This makes the introduction of parallel computing inevitable even in mainstream application domains. Unfortunately, automatic parallelization methods are often not applicable due to a lack of static information, not effective due to run-time overhead, or not scalable due to limitations imposed by the given sequential program structure. Hence, explicit parallel programming is likely to become a dominating programming paradigm for the foreseeable future. In addition to the ever-growing complexity of software, programmers must now master parallelism in an effective way, which creates a whole new problem dimension. (By the way - do we prepare our undergraduate students appropriately for this paradigm shift?)

Beyond the implicit shared-memory parallelism supported in many current general-purpose programming languages such as Java, future programming languages that are to exploit the upcoming Many-Core processor generation effectively will need to provide support for massive explicit parallelism, including both manual and automatic management of memory hierarchies. In this context, we advocate adopting simple and deterministic parallel programming models.
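To make "explicit and deterministic parallelism" slightly more concrete, here is a minimal, hypothetical Python sketch (our own illustration, not from the talk): the programmer explicitly splits the work into chunks, runs them on a thread pool, and combines the partial results in a fixed order, so the outcome never depends on thread scheduling.

from concurrent.futures import ThreadPoolExecutor

def parallel_sum(values, num_workers=4):
    # Explicit data parallelism: the programmer decides how to split the work.
    chunk_size = max(1, len(values) // num_workers)
    chunks = [values[i:i + chunk_size] for i in range(0, len(values), chunk_size)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        partial_sums = list(pool.map(sum, chunks))  # one task per chunk
    # Deterministic combination: partial results are reduced in chunk order.
    return sum(partial_sums)

print(parallel_sum(list(range(1_000_000))))  # always 499999500000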

In the first part of this presentation, we briefly review previous accomplishments and current trends in the design and implementation of general-purpose parallel programming languages for various parallel hardware platforms.

In the second part, we give an outlook to component models and composition techniques for explicitly parallel software. In particular, we describe a novel approach to optimized composition of explicitly parallel components.



Spring 2007



Introduction to Systems Thinking and its Application

Date: Tuesday, May 8 Place: Alan Turing Time: 9:00 - 11:00

Harold W. 'Bud' Lawson, Lawson Konsult AB

Abstract: The ability to 'think' and 'act' in terms of systems is a prerequisite to being able to structure and operate organizations and their enterprises so that they can (pro-)actively pursue their business goals or missions. Systems thinking (also called the systemic approach) evolved, through multiple contributions, into a discipline that can be applied in gaining an understanding of broader aspects of systems, including the dynamic relationships between systems in operation. Through systems and creative thinking, organizations/enterprises can learn to identify complex system problems and opportunities and to determine the need for, as well as to consider the potential effect of, system changes. The ability to act in terms of systems involves life cycle management and the usage of defined processes in order to accomplish system change. This seminar introduces systems thinking and its application. Further, the goals, structure and results of an academic course on the topic are reviewed. The course is based upon a new book in preparation for publication entitled 'A Journey Through the Systems Landscape'.

Speaker's Profile: Harold W. 'Bud' Lawson has been active in the computing and systems arena since 1958 and has broad international experience in industrial and academic environments. He received the Bachelor of Science degree from Temple University and the PhD degree from the Royal Technical University. He has contributed to several pioneering efforts in computer hardware and software technologies and has held professorial appointments at several universities in the USA, Europe and the Far East. He is currently Honorary Professor in the Swedish Graduate School of Computer Science and Academic Fellow in the Department of Systems Engineering and Engineering Management at Stevens Institute of Technology, Hoboken, NJ. He is a Fellow of the Association for Computing Machinery, a Fellow of the IEEE, and a member of AFCEA and INCOSE. He was elected architect of the ISO/IEC 15288 standard. In 2000 he was awarded the prestigious IEEE Computer Society Charles Babbage Computer Pioneer Medal for his 1964-65 invention of the pointer variable concept for programming languages. Harold Lawson is an independent consultant operating his own company, Lawson Konsult AB, and is also a consulting partner and member of the board of directors of Syntell AB.


Towards the Engineering of Modular Software for Increased Predictability

Date: Thursday, April 26 Place: Alan Turing Time: 15:15

Professor Michel Schellekens, Centre for Efficiency-Oriented Languages, University College Cork

Abstract: We focus in this talk on two main methods used in academia and industry to optimize/evaluate software: worst-case and average-case analysis. These methods can be used in a variety of contexts for optimization purposes. For instance in a Real-Time context, to efficiently budget resources, and in embedded systems, for optimizing power consumption.

A crucial property for the predictability of software is modularity, i.e. the capacity to predict the behaviour of software from the behaviour of its components. It is shown that the worst-case measure typically does not allow for exact modularity. Current Real-Time approaches to static worst-case analysis are discussed in this light. On the other hand, we show that the average-case measure does possess inherent modularity. We show how this modularity can be exploited, based on a redesign of standard data structuring operations, to track the distribution of the data states throughout a computation. This approach in turn has enabled the specification of the novel programming language MOQA, implemented in Java 5.0, and its associated timing tool DISTRI-TRACK. MOQA (MOdular Quantitative Analysis), essentially a suite of data structure operations for modular design, is guaranteed to be modular w.r.t. the average-case time measure. This is not the case for general purpose programming languages and in particular for current languages geared towards automated average-case analysis.
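As a toy numerical illustration of this modularity argument (our own example, not MOQA itself): by linearity of expectation, the average cost of running two operations in sequence is the sum of their average costs, whereas their worst cases need not occur on the same input, so worst-case costs do not compose exactly.

inputs = range(10_000)

def cost_a(x): return 1 if x % 2 == 0 else 5   # worst case: 5
def cost_b(x): return 5 if x % 2 == 0 else 1   # worst case: 5

costs_a = [cost_a(x) for x in inputs]
costs_b = [cost_b(x) for x in inputs]
composed = [a + b for a, b in zip(costs_a, costs_b)]  # cost of "a then b"

avg = lambda xs: sum(xs) / len(xs)
print(avg(composed), avg(costs_a) + avg(costs_b))   # 6.0 and 6.0: averages add up exactly
print(max(composed), max(costs_a) + max(costs_b))   # 6 and 10: worst cases do not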

The approach links several, thus far largely separate, areas together, including Semantics, Complexity, Analysis of Algorithms and Real-Time Language design. The corresponding unified foundation for algorithmic analysis has led to the solution of bottleneck problems in automated average-case timing (open problems on dynamic algorithms, first investigated by Knuth) and has given rise to novel algorithms.

The talk focuses on the intuitions underlying the approach and should be accessible to anyone with a standard undergraduate background in the Analysis of Algorithms. The talk touches on some core issues which will be discussed in the book 'A Modular Calculus for the Average Cost of Data Structuring', to appear with Springer.

Speaker's Profile: Prof. Michel Schellekens obtained his PhD from Carnegie Mellon University. Following his graduation from CMU, he joined Imperial College London as an EC Marie Curie Fellow and was a postdoctoral researcher at the University of Siegen. Currently he is an Associate Professor at the Department of Computer Science at University College Cork. As a Science Foundation Ireland Principal Investigator he leads the Centre for Efficiency-Oriented Languages, which collaborates with companies including Sun Microsystems and Synopsis. His work includes fundamental contributions to Quantitative Domain Theory, new models for Complexity Analysis and, more recently, foundations for modular static average-case analysis. He is an editor of the journals Annals of the Marie Curie Fellowship Association and Applied General Topology.


Performance issues in emulated shared memory computing

Date: Monday, March 5 Place: Alan Turing Time: 14:15 - 15:15

Dr. Martti Forsell, VTT - Technical Research Center of Finland, Oulu

Abstract: Parallel machines employing an emulated shared memory system hold great promise for scalable, easy-to-program access to fine-grained general-purpose computing, but unfortunately realizing their full potential has turned out to be very difficult. In this presentation we discuss a number of performance issues related to emulated shared memory machines, e.g. ILP-TLP co-exploitation, efficient implementation of high-level parallel language control and synchronization primitives, and active memory support, and present promising techniques to address these issues. Some simulation results are provided.

Speaker's Profile: Martti Forsell received his Ph.D. in Computer Science from the University of Joensuu, Finland, in 1997. He is currently a Senior Research Scientist at VTT - Technical Research Center of Finland, Oulu, Finland. Dr. Forsell is the inventor of the first scalable high-performance MP-SOC/NOC architecture armed with an easy-to-use general-purpose parallel application development scheme utilizing experimental e-language and compiling tools. His current research interests include parallel processor/computer architectures, systems/networks on chip, performance, area and power modeling, models of parallel computing, functionality mapping techniques, parallel languages, compiling, and optimization.



Page responsible: Christoph Kessler
Last updated: 2012-08-17