
SaS Seminars 2010

Software and Systems Research Seminar Series

Autumn 2010

Worst-Case Analysis for Power-Awareness and Resource Sharing

Dr. Jian-Jia Chen, Karlsruhe Institute of Technology, Germany

Date: Thursday Dec 16, 2010. Place: Alan Turing Time: 10:15


Embedded systems have been widely adopted and deployed in many application domains. The worst-case response time is a non-functional but important requirement that many applications must bound to meet their performance needs. This talk consists of two parts in this research direction. The first part will present how to analyze the worst-case peak temperature of work-conserving scheduling and the worst-case behavior of on-line dynamic voltage scaling (DVS) scheduling, along with customization. The second part will present how to analyze the worst-case response time for real-time tasks with shared resources in the multicore era. Different resource access arbiters and different resource access models will be discussed.
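As background for the second part, the classic uniprocessor fixed-priority recurrence R = C + Σ ⌈R/T_j⌉·C_j (without blocking terms for shared resources, which the talk extends) can be sketched as a fixed-point iteration; this is textbook response-time analysis, not the talk's own method:

```python
import math

def worst_case_response_time(task, higher_prio, deadline=float("inf")):
    """Fixed-point iteration R = C + sum(ceil(R/T_j) * C_j).

    task: (C, T) with worst-case execution time C and period T.
    higher_prio: list of (C_j, T_j) for all higher-priority tasks.
    """
    c = task[0]
    r = c
    while True:
        r_next = c + sum(math.ceil(r / t_j) * c_j for (c_j, t_j) in higher_prio)
        if r_next == r:
            return r       # converged: the worst-case response time
        if r_next > deadline:
            return None    # recurrence exceeds the deadline: unschedulable
        r = r_next

# Example task set: the analyzed task (C=3, T=20) with two higher-priority tasks.
print(worst_case_response_time((3, 20), [(1, 4), (2, 10)]))  # -> 7
```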

Speaker's Biography:

Dr. Jian-Jia Chen has been a Juniorprofessor in Department of Informatics at Karlsruhe Institute of Technology (KIT) since May, 2010. He received his Ph.D. degree from Department of Computer Science and Information Engineering, National Taiwan University, Taiwan in 2006. He received his B.S. degree from the Department of Chemistry at National Taiwan University 2001. After finishing the compulsory civil service in Dec. 2007, between Jan. 2008 and April 2010, he was a postdoc researcher at Computer Engineering and Networks Laboratory (TIK) in Swiss Federal Institute of Technology (ETH) Zurich, Switzerland. His research interests include real-time systems, embedded systems, reliable and dependable systems, energy-efficient scheduling, power-aware designs, temperature-aware scheduling, and distributed computing. He has received two best paper awards, and published more than 70 papers in international journals and conferences. He has served as a committee member in Members-at-Large in ACM SIGDA Low-Power Technical Committee since Aug. 2010, TPC members in several international conferences in real-time and embedded systems, such as RTSS, RTAS, RTCSA, DATE, ICCAD, etc., and Guest Editor in IEEE Transactions on Industrial Informatics (TII) and ACM Transactions on Embedded Computing Systems (TECS).

Attacking the Performance/Productivity Challenge: Computer Synthesis of Computational Programs

Prof. Markus Püschel, ETH Zürich, Switzerland

Date: Monday Dec 6, 2010. Place: Alan Turing Time: 10:15


Writing fast software has become extraordinarily difficult. For optimal performance, programs and their underlying algorithms have to be adapted to take full advantage of the platform's parallelism, memory hierarchy, and available instruction set. To make things worse, the best implementations are often platform-dependent and platforms are constantly evolving, which quickly renders libraries obsolete.

In this talk we present Spiral (www.spiral.net), a domain-specific program generation system for important functionality used in signal processing, communication, and scientific computing including linear transforms and filters, Viterbi decoders, and basic linear algebra routines. Spiral completely replaces the human programmer. For a desired function, Spiral generates alternative algorithms, optimizes them, compiles them into programs, and intelligently searches for the best match to the computing platform. The main idea behind Spiral is a mathematical, declarative, domain-specific framework to represent algorithms and the use of rewriting systems to generate and optimize algorithms at a high level of abstraction. Optimization includes parallelization for vector architectures, shared and distributed memory platforms, and even FPGAs. Experimental results show that the code generated by Spiral competes with, and sometimes outperforms, the best available human-written code. Spiral has been used to generate part of Intel's commercial libraries IPP and MKL.
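To give a flavor of "generate alternative algorithms and search": a single rewrite rule such as the Cooley-Tukey factorization expands DFT_n into smaller DFTs, and different factorizations yield the alternative algorithms that a generator can search over. The toy sketch below is my illustration of this idea, not Spiral's actual rule system:

```python
# One-step Cooley-Tukey expansions: DFT_n -> (DFT_k, DFT_m) for each n = k*m.
# Recursively applying such rules spans a space of alternative algorithms,
# which a generator like Spiral can enumerate and search for the best match
# to the platform (toy sketch; the real rule system is far richer).
def expansions(n):
    """All one-step factorizations of DFT_n into a pair of smaller DFTs."""
    return [(k, n // k) for k in range(2, n) if n % k == 0]

print(expansions(8))   # two alternative decompositions: [(2, 4), (4, 2)]
print(expansions(7))   # prime size: no Cooley-Tukey expansion applies
```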

Speaker Biography:

Markus Püschel is a Professor of Computer Science at ETH Zürich, Switzerland. Before, he was a Professor of Electrical and Computer Engineering at Carnegie Mellon University, where he still has an adjunct status. He received his Diploma (M.Sc.) in Mathematics and his Doctorate (Ph.D.) in Computer Science, in 1995 and 1998, respectively, both from the University of Karlsruhe, Germany. From 1998 to 1999 he was a Postdoctoral Researcher in Mathematics and Computer Science at Drexel University. From 2000 to 2010 he was with Carnegie Mellon University, and since 2010 he has been with ETH Zurich. He was an Associate Editor for the IEEE Transactions on Signal Processing and the IEEE Signal Processing Letters, was a Guest Editor of the Proceedings of the IEEE and the Journal of Symbolic Computation, and served on various program committees of conferences in computing, compilers, and programming languages. He is a recipient of the Outstanding Research Award of the College of Engineering at Carnegie Mellon and the Eta Kappa Nu Award for Outstanding Teaching. He also holds the title of Privatdozent at the University of Technology, Vienna, Austria. In 2009 he cofounded Spiralgen Inc. More information is available at www.ece.cmu.edu/~pueschel.

Evaluation metric for infrastructure to access IC internals

Dr. Urban Ingelsson, Embedded Systems Lab, IDA, Linköping University

Date: Monday Nov 22, 2010. Place: Alan Turing Time: 15:15


We use and depend more and more on computer systems. We use mobile phones, desktops and laptops. And we depend on computer systems in cars, airplanes and telecommunication systems. These computer systems are composed of integrated circuits (ICs) containing billions of transistors that are squeezed into dies with sizes of a few square centimetres. Manufacturing of ICs is cumbersome, complicated and far from perfect. Therefore there is a need to access the internals of ICs during manufacturing, for testing and debugging, as well as during operation, for in-field test.

Today, IC design companies implement ad-hoc solutions to provide access to the internals of ICs. The basic approach is to design the IC with built-in instrumentation such as sensors, monitors, and status registers. The interface of the instruments and the methodology for connecting to them are ad-hoc since no standard exists. In fact, without a standard, no IC design tool vendor can develop the tools necessary to automate the process of designing ICs with test features. However, the methodology for connecting the instruments to the IC terminals is undergoing standardisation, with a standard proposal, IEEE P1687, currently under review.

In anticipation of P1687, we contribute to the development of the tools and methodologies required to replace the previous ad-hoc solutions. We have developed a metric to evaluate a P1687 network with respect to the access time to the instruments. We show that, for our metric, the instrument access time cannot be calculated by a closed-form equation; therefore, we present two algorithms corresponding to two types of instrument access schedules. Our analysis shows that alternative P1687 networks for connecting to the same set of instruments can be compared with respect to the overhead that is specific to P1687.
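Why alternative networks trade off differently can be seen in a deliberately simplified model (my assumption for illustration, not the paper's metric): a flat scan chain shifts through every instrument on every access, whereas a segmented P1687-style network pays one extra control bit per segment but can bypass the instruments that are not being accessed:

```python
# Toy model of scan-access overhead (illustrative simplification, not the
# metric from the talk): counts shift bits per access for a flat chain
# versus a segmented network that can close unused segments.
def flat_access_bits(instrument_lengths):
    """Flat chain: every instrument register is always in the scan path."""
    return sum(instrument_lengths)

def segmented_access_bits(instrument_lengths, active):
    """Segmented network: one control bit per segment, plus only the
    registers of the segments that are currently opened."""
    control_bits = len(instrument_lengths)
    return control_bits + sum(
        length for i, length in enumerate(instrument_lengths) if i in active)

lengths = [8, 32, 16, 64]                       # instrument register lengths
print(flat_access_bits(lengths))                # 120 bits on every access
print(segmented_access_bits(lengths, {1}))      # 4 + 32 = 36 bits for one access
```

The crossover depends on how many instruments each schedule touches, which hints at why the access time depends on the schedule rather than on a single closed-form expression.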

Speaker Biography:

Dr. Urban Ingelsson graduated with an M.Sc. degree in Computer Science and Engineering from Linköping University in 2005, a study which involved a year as an exchange student at RWTH Aachen (Germany) and an internship at Philips Research (Eindhoven, the Netherlands). In 2009, he graduated with a Ph.D. in Electronics and Computer Science from the University of Southampton (UK), where he was supervised by Prof. Bashir M. Al-Hashimi. He is currently employed as a post-doctoral researcher in ESLAB, working for Dr. Erik Larsson.

Acumen: A Language for Modeling Cyber-Physical Systems

Prof. Walid Taha, Halmstad University and Rice University

Date: Thursday Sep 30, 2010. Place: Alan Turing Time: 16:15

Abstract: Cyber-physical systems comprise digital components that directly interact with a physical environment. Specifying the behavior desired of such systems requires analytical modeling of physical phenomena.

Similarly, testing them requires simulation of continuous systems. While numerous tools support later stages of developing simulation codes, there is still a large gap between analytical modeling and building running simulators. This gap significantly impedes the ability of scientists and engineers to develop novel cyber-physical systems.

We propose bridging this gap by automating the mapping from analytical models to simulation codes. Focusing on mechanical systems as an important class of physical systems, we study the form of analytical models that arise in this domain, along with the process by which domain experts map them to executable codes. We show that the key steps needed to automate this mapping are 1) a light-weight analysis to partially direct equations, 2) a binding-time analysis, and 3) symbolic differentiation. In addition to producing a prototype modeling environment, we highlight some limitations in the state of the art in tool support of simulation, and suggest ways in which some of these limitations could be overcome.
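Of the three steps above, symbolic differentiation is the easiest to illustrate in isolation. The sketch below differentiates a tiny expression-tree language; it is my minimal illustration of the technique, not Acumen's implementation:

```python
# Minimal symbolic differentiation over expression trees: numbers and
# variable names are leaves; ("+", a, b) and ("*", a, b) are interior nodes.
# (Illustrative sketch of one step in the mapping described above.)
def diff(e, x):
    if isinstance(e, (int, float)):
        return 0                       # constant rule
    if isinstance(e, str):
        return 1 if e == x else 0      # variable rule
    op, a, b = e
    if op == "+":
        return ("+", diff(a, x), diff(b, x))       # sum rule
    if op == "*":
        return ("+", ("*", diff(a, x), b),
                     ("*", a, diff(b, x)))         # product rule
    raise ValueError(f"unknown operator: {op}")

# d/dx (x*x + 3): yields an unsimplified tree equivalent to 2*x.
print(diff(("+", ("*", "x", "x"), 3), "x"))
```

A real modeling environment would pair this with simplification and apply it repeatedly when directing higher-index equations.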

Speaker Biography:

Walid Taha is a professor at Halmstad University and an adjunct professor at Rice University (previously an associate professor at Rice).
Walid's interests span programming language semantics, type systems, compilers, program generation, real-time systems, and physically safe computing. He is the principal investigator on a number of NSF, Texas ATP, and SRC research grants and contracts, including an NSF CAREER Award. He is the principal designer of MetaOCaml, Acumen, and the Verilog Preprocessor system. He founded the ACM Conference on Generative Programming and Component Engineering (GPCE), the IFIP Working Group on Program Generation (WG 2.11), and the Middle Earth Programming Languages Seminar (MEPLS).

Development of Automotive Electronics: Perspective and Challenges

Dr. Luis Alejandro Cortes, Volvo AB

Date: Wednesday Sep 29, 2010. Place: Alan Turing Time: 13:15


Today, a modern vehicle contains a myriad of computer-controlled functions on-board, ranging from traction and steering features (such as fuel injection control and antilock braking) to telematics and comfort features (such as navigation systems and climate control). Dozens of computers (known as Electronic Control Units, ECUs) running millions of lines of software code are interconnected via a number of communication networks.

Since the 1970s the trend has been to enhance or replace mechanical or hydraulic systems by electronic systems. The advances in semiconductor technology have provided the means to fabricate smaller and cheaper electronic devices that perform more complex functions at higher speeds. This fact, together with the huge potential of software technologies, has made it possible to integrate complex functions that assist the driver, improve the vehicle performance and efficiency, and increase the safety as well as the level of comfort.

However, together with the new opportunities offered by integrated electronics in the vehicle, new challenges have come up. This seminar provides an overview of the current situation of the automotive industry and presents a high-level perspective of the main issues in the development of automotive electronics. It also discusses general and specific challenges that go hand in hand with the opportunities provided by vehicle electronics.

Speaker's bio:

Luis Alejandro Cortes currently works as Group Manager of ''Vehicle Electronics'' at Volvo Technology. He has several years of academic experience in applied research as well as industrial experience in the development of automotive electronic systems, most recently in a leading role in the development of a completely new electrical and electronic architecture for the next generation of trucks of the Volvo Group. He holds a Ph.D. degree in Computer Science from Linköping University, where his research focused on real-time embedded systems.

Spring 2010

New Techniques for Functional Test of Systems-on-Chip

Prof. Matteo Sonza Reorda, Politecnico di Torino, Italy

Date: Friday June 11, 2010. Place: Grace Hopper Time: 10:15


End-of-production test of Integrated Circuits is currently performed mainly by resorting to structural test. However, limitations in the defect coverage that can be achieved in this way, combined with constraints coming from the SoC design paradigm, are pushing companies to also adopt functional test as a complementary test phase. This trend raises the issue of devising efficient techniques for generating suitable functional test stimuli for the different modules that are usually found in a SoC. Moreover, reduced Design for Testability structures are sometimes introduced in SoCs to better support functional test. The seminar will overview the state of the art in functional test for SoCs and discuss the latest advancements and current activities in the area.


Matteo Sonza Reorda took his MS degree in Electronic Engineering from Politecnico di Torino (Torino, Italy) in 1986, and the PhD degree in Computer Engineering from the same institution in 1990. Since 1990 he has been with the Department of Computer Engineering and Automation of Politecnico di Torino, where he is currently a Full Professor. His main research interests include testing and fault-tolerant design of electronic systems. He has published more than 250 papers in international journals and conference proceedings. He is a Senior Member of the IEEE. He has been the General Chair (1998) and Program Co-chair (2002, 2003) of the IEEE International On-Line Testing Symposium (IOLTS), the Program Chair of the IEEE Workshop on Design and Diagnostics of Electronic Circuits & Systems (DDECS) in 2006, and the General Chair of the IEEE European Test Symposium (ETS) in 2008. Currently, he is the chair of the European Test Technology Technical Council (eTTTC).

Mobile Agents Technology as an approach to Delay/Disruption Tolerant Networking

Date: June 8, 2010. Place: Alan Turing Time: 16:15

Joan Borrell, Autonomous University of Barcelona


Mobile agents, understood as autonomous elements with negotiation capabilities and the ability to move from one location to another during their execution, can be used to provide a Delay/Disruption Tolerant approach for distributed applications.

As an example, the MAETT (Mobile Agent Electronic Triage Tag) application, developed by our group, will be presented. MAETT considers an emergency scenario with several medical personnel carrying mobile terminals (PDAs) equipped with JADE mobile agent platforms. Medical personnel classify the victims according to a medical triage protocol, trying to accelerate the transport of the most urgent cases to the field hospital or coordination point that serves the emergency. In MAETT, mobile agents act as data mules to transport triage tag data of the victims between agent platforms, without requiring any underlying network infrastructure or any end-to-end connectivity. Routing is opportunistic and based on the estimated return time of each person or vehicle in the emergency zone to the coordination point.
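The opportunistic forwarding decision can be sketched as: hand the triage-data agent to an encountered carrier only if that carrier is expected back at the coordination point sooner than the current one. The field names and numbers below are hypothetical; this is an illustration of the routing idea, not MAETT's code:

```python
# Data-mule forwarding by estimated return time (hypothetical record format:
# each encountered carrier has an "id" and an "est_return" in minutes).
def pick_carrier(encountered, my_return_time):
    """Return the id of the carrier to forward to, or None to keep the data."""
    best = min(encountered, key=lambda c: c["est_return"], default=None)
    if best is not None and best["est_return"] < my_return_time:
        return best["id"]   # someone returns sooner: hand the agent over
    return None             # we remain the best carrier for now

print(pick_carrier(
    [{"id": "medic-2", "est_return": 30}, {"id": "truck-1", "est_return": 12}],
    my_return_time=45))     # -> truck-1
```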

Besides MAETT's initial configuration, our recent activities to improve the application will also be discussed. On the one hand, to make MAETT more dynamic, we are replacing the RFIDs initially associated with the victims with wireless sensors running Agilla mobile agents. On the other hand, to ease the collaboration between the different rescue and medical teams in the emergency scenario, we are designing a fuzzy attribute conversion mechanism to enable interoperable access control in such a multi-domain environment.

Power-Efficient Fault Tolerant Microarchitecture for Chip Multiprocessors

Virendra Singh, IISc Bangalore

Date: June 7, 2010. Place: Alan Turing Time: 15:15


Relentless scaling of silicon fabrication technology coupled with lower design tolerances is making ICs increasingly susceptible to wear-out-related permanent faults as well as transient faults. A well-known technique for tackling both transient and permanent faults is redundant execution, specifically space redundancy, wherein a program is executed redundantly on different processors, pipelines, or functional units and the results are compared to detect faults. In this presentation, we describe a power-efficient architecture for redundant execution on chip multiprocessors (CMPs) which, when coupled with our per-core dynamic voltage and frequency scaling (DVFS) algorithm, significantly reduces the power overhead of redundant execution without sacrificing performance. Using cycle-accurate simulation combined with an architectural power model, we estimate that our architecture reduces dynamic power dissipation in the redundant core by a mean of 76% with an associated mean performance overhead of only 1.2%. We also present an extension to our architecture that enables the use of cores with faulty functional units for redundant execution without a reduction in transient fault coverage. This extension enables the usage of faulty cores, thereby increasing yield and reliability with only a modest power-performance penalty over fault-free execution.
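The intuition behind per-core DVFS savings follows from the standard dynamic power model P_dyn ∝ C·V²·f: scaling the redundant (checker) core's voltage and frequency shrinks its power superlinearly. The scaling factors below are round illustrative numbers, not the paper's measured operating points:

```python
# Back-of-the-envelope dynamic power model, P_dyn ~ C * V^2 * f, commonly
# used to reason about per-core DVFS savings (illustration only; the 76%
# figure in the abstract comes from simulation, not from this formula).
def dynamic_power_ratio(v_scale, f_scale):
    """Dynamic power of a scaled core relative to full voltage/frequency."""
    return v_scale ** 2 * f_scale

# E.g. running the redundant core at 60% voltage and 50% frequency:
ratio = dynamic_power_ratio(v_scale=0.6, f_scale=0.5)
print(f"redundant core draws {ratio:.0%} of full dynamic power")  # 18%
```

Because the leading core resolves branches and warms caches, the checker can often sustain this lower operating point without slowing the pair down, which is the performance argument sketched in the abstract.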


Virendra Singh obtained his Ph.D. in Computer Science from the Nara Institute of Science and Technology (NAIST), Nara, Japan, in 2005. He received his B.E. and M.E. in Electronics and Communication Engineering from Malaviya National Institute of Technology (MNIT), Jaipur, in 1995 and 1997, respectively. He has been a faculty member at the Supercomputer Education and Research Centre (SERC), Indian Institute of Science (IISc), Bangalore, since May 2007. Prior to joining IISc, he served the Central Electronics Engineering Research Institute (CEERI), Pilani (Rajasthan), as a Scientist for 10 years. He also served as a faculty member at the Department of Computer Science, Banasthali University, from June 1996 to March 1997. His research interests are testing and verification of high-performance processors, VLSI testing, formal verification, fault-tolerant computing, high-performance computer architecture, embedded system design, design for reliability, and the complexity of test generation algorithms. He is a member of the IEEE, the ACM, and the VSI, and a life member of the IETE.

Arbutus: Reliable and Scalable Data Collection in Low-Power Sensor Networks

Daniele Puccinelli, University of Applied Sciences of Southern Switzerland

Date: April 16, 2010. Place: Donald Knuth Time: 15:15


In data collection applications of low-end sensor networks, a major challenge is ensuring reliability without a significant goodput degradation. Short hops over high-quality links minimize per-hop transmissions, but long routes may cause congestion and load imbalance. Longer links can be exploited to build shorter routes, but poor links may have a high energy cost, and there exists a complex interplay among routing performance (reliability, goodput, energy efficiency), link estimation, congestion control, and load balancing. We illustrate the design of a novel routing architecture, Arbutus, that leverages this interplay, and we present an extensive experimental evaluation on testbeds of 100-150 Berkeley motes.
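The "short hops over good links" versus "fewer, longer, lossier hops" trade-off is commonly quantified with the expected-transmissions (ETX) metric, where each link costs roughly 1/(p_forward · p_reverse) transmissions. This sketch uses ETX as a generic illustration of the trade-off, not as Arbutus's actual cost function:

```python
# Route cost in expected transmissions (ETX): each link with forward and
# reverse packet reception ratios (p_fwd, p_rev) costs ~1/(p_fwd * p_rev);
# the route cost is the sum over its hops. (Illustration, not Arbutus.)
def route_etx(link_prrs):
    """Total expected transmissions for a route of (p_fwd, p_rev) links."""
    return sum(1.0 / (pf * pr) for pf, pr in link_prrs)

short_hops = route_etx([(0.95, 0.95)] * 3)   # three high-quality short hops
long_hop = route_etx([(0.60, 0.60)])         # one poor but long link
print(round(short_hops, 2), round(long_hop, 2))  # ~3.32 vs ~2.78
```

Here the single lossy link wins on raw ETX, yet it may lose once retransmission energy, congestion, and load imbalance enter the picture, which is exactly the interplay the abstract describes.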

Speaker's profile:

Daniele Puccinelli is a postdoc at the University of Applied Sciences of Southern Switzerland in Lugano. He holds a PhD in Electrical Engineering (2008) from the University of Notre Dame, Indiana, USA.

Multimedia Power Management on a Platter: From Audio, to Video and Games

Date: Wednesday Feb 24, 2010. Place: Alan Turing Time: 10:15

Prof. Samarjit Chakraborty, TU München, Germany


Multimedia applications today constitute a sizeable workload that needs to be supported by a host of mobile devices ranging from cell phones, to PDAs and portable game consoles. Battery life is a major design concern for all of these devices. In this talk I will discuss some of our recent efforts towards developing application-specific power management schemes for a variety of multimedia applications.

Speaker's Bio:

Samarjit Chakraborty is a Professor of Electrical Engineering at the Technical University of Munich, where he heads the Institute for Real-Time Computer Systems. He obtained his Ph.D. in Electrical and Computer Engineering from ETH Zurich in 2003. Prior to joining TU Munich, from 2003 to 2008 he was an Assistant Professor of Computer Science at the National University of Singapore. His research interests are primarily in system-level power/performance analysis of real-time and embedded systems.

GPU architectures and OpenCL: What's important and why

Date: Feb 03 2010 Place: Alan Turing Time: 10:15

Dr. David Black-Schaffer, Uppsala University, Sweden


Today everyone is positioning GPUs for general-purpose computing. They claim that you can get 10-100x speedups over conventional CPUs, and sometimes they're even right. However, to get the most out of current- (and next-) generation GPUs, one needs to understand the architectural differences and how they affect your choice of algorithm. In this talk I will cover GPU architecture in comparison to current CPUs, discuss the implications for getting good performance, and introduce OpenCL as a general-purpose programming language for accessing GPUs and CPUs today.
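One way to see why the architecture dictates the algorithm choice is a roofline-style estimate: attainable throughput is capped by either peak compute or arithmetic intensity times memory bandwidth. The figures below are made-up round numbers for illustration, not vendor specifications:

```python
# Roofline-style estimate: a kernel achieves at most
#   min(peak compute, arithmetic intensity * memory bandwidth).
# (Illustrative numbers only; real GPU/CPU specs vary widely.)
def attainable_gflops(peak_gflops, bw_gb_s, flops_per_byte):
    """Upper bound on sustained GFLOP/s for a kernel of given intensity."""
    return min(peak_gflops, bw_gb_s * flops_per_byte)

# A streaming, memory-bound kernel barely benefits from a 1 TFLOP/s GPU:
print(attainable_gflops(peak_gflops=1000, bw_gb_s=150, flops_per_byte=0.25))
# A compute-dense kernel (e.g. blocked matrix multiply) can approach peak:
print(attainable_gflops(peak_gflops=1000, bw_gb_s=150, flops_per_byte=20))
```

This is why the advertised 10-100x speedups materialize mainly for kernels with enough arithmetic intensity, and why algorithm restructuring (blocking, data reuse) matters more than raw peak numbers.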

Speaker's profile:

David Black-Schaffer received his PhD in Electrical Engineering from Stanford University in 2008, focusing on parallel programming systems for many-core processors. After that he worked at Apple, designing and developing the first implementation of the new OpenCL specification for heterogeneous parallel processing on CPUs and GPUs. Since the fall of 2009 he has been working as a postdoctoral researcher in the Uppsala Architecture Research Team at Uppsala University.

Page responsible: Christoph Kessler
Last updated: 2012-08-17