SaS Seminars 2013

Software and Systems Research Seminar Series


The SaS Seminars are a permanent series of open seminars of the Division of Software and Systems (SaS) at the Department of Computer and Information Science (IDA), Linköping University. The objective of the seminars is to present outstanding research and ideas/problems relevant to SaS's present and future activities. In particular, the seminars cover the SaS research areas: software engineering, programming environments, system software, embedded SW/HW systems, computer systems engineering, real-time systems, parallel and distributed computing, and theoretical computer science. Two kinds of seminars are planned:

  • talks by invited speakers not affiliated with SaS,

  • internal seminars presenting lab research to the whole of SaS.

The speakers are expected to give a broad perspective on the presented research, addressing an audience with a general computer science background but possibly with no specific knowledge of the domain of the presented research. The normal length of a presentation is 60 minutes, including discussion.

The SaS seminars are coordinated by Christoph Kessler.



SaS Seminars (2013)



Locality-aware concurrency - An approach to energy-efficient computing

Prof. Dr. Phuong Ha, Univ. of Tromsø, Norway

Monday, 9 December 2013, 11:00 (sharp), room Alan Turing

Abstract:
Energy efficiency is becoming a major design constraint in computing systems ranging from embedded to high-performance computing (HPC) systems. In order to construct energy-efficient software systems, data structures and algorithms must support not only high parallelism but also data locality. Unlike conventional locality-aware data structures and algorithms, which only consider whether data is on-chip (e.g. in cache) or not (e.g. in DRAM), new energy-efficient data structures and algorithms must consider data locality at a finer granularity: where on the chip the data is. This is because in modern multicore systems the energy difference between accessing data in nearby memories (2pJ) and accessing data across the chip (150pJ) is almost two orders of magnitude, while the energy difference between accessing on-chip data (150pJ) and accessing off-chip data (300pJ) is only two-fold.
In this talk, I will present our initial research results on locality-aware concurrency. I will first introduce a new relaxed cache-oblivious model, which enables developing concurrent algorithms that can exploit fine-grained data locality as required by energy-efficient computing. I will then demonstrate how to use the new model to develop a novel dynamic van Emde Boas data layout that is suitable for locality-aware concurrent data structures. As an example application of the layout, I will show how to use it to devise a search tree (DeltaTree) that supports both high concurrency and fine-grained data locality. An experimental evaluation comparing DeltaTree with AVL and red-black trees shows that DeltaTree achieves the best performance when the update contention is not too high.
I will conclude the talk by highlighting our ongoing research on locality-aware and energy-aware concurrency.
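
As a pointer for readers unfamiliar with the layout, the sketch below (an assumed Python illustration, not the dynamic layout or DeltaTree from the talk) shows the static van Emde Boas layout that the talk's dynamic layout generalizes: a complete binary tree is stored by recursively splitting it at half its height, so every small subtree occupies a contiguous block of memory and a root-to-leaf traversal touches few cache lines regardless of the cache-line size.

    # Illustrative sketch only: the static van Emde Boas (vEB) layout for a
    # complete binary tree (the talk's dynamic layout and DeltaTree are richer).
    # Nodes are identified by their breadth-first index (root = 1); the function
    # returns the order in which node records would be stored in memory.

    def veb_layout(root, height):
        """van Emde Boas order of the complete subtree of `height` levels at `root`."""
        if height == 1:
            return [root]
        top_h = height // 2                # height of the upper recursive subtree
        bot_h = height - top_h             # height of the lower recursive subtrees
        order = veb_layout(root, top_h)    # lay out the top subtree first ...
        # ... then each bottom subtree, rooted at the children of the top subtree's leaves
        top_leaves = [root * (1 << (top_h - 1)) + i for i in range(1 << (top_h - 1))]
        for leaf in top_leaves:
            order += veb_layout(2 * leaf, bot_h)
            order += veb_layout(2 * leaf + 1, bot_h)
        return order

    # Height-4 tree (15 nodes): each 3-node subtree ends up contiguous in memory.
    print(veb_layout(1, 4))   # [1, 2, 3, 4, 8, 9, 5, 10, 11, 6, 12, 13, 7, 14, 15]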

Short Bio:
Phuong Ha is an associate professor at the Department of Computer Science, University of Tromsø, Norway. His research interests include parallel computing and systems, with special emphasis on energy-aware concurrency, parallel algorithms and concurrent data structures. He is a work-package leader in the new EU-funded project Execution Models For Energy-Efficient Computing Systems (EXCESS) and a Management Committee member of EU COST Action Transactional Memories: Foundations, Algorithms, Tools, and Applications (EuroTM). He obtained a PhD degree in Computer Science from Chalmers University of Technology, Sweden in 2006. Currently, he is a visiting research scholar at the Department of Computer Science, Rutgers University.



Resource-Aware Capacity Evaluation for Heterogeneous, Disruption-Tolerant Networks

Dr. Gabriel Sandulescu, NECTEC Thailand

Wednesday, 4 December 2013, 10:15, room Alan Turing

Abstract:
Estimating end-to-end capacity is challenging in disruption-tolerant networks (DTNs) because reliable and timely feedback is usually unavailable. The aim of this talk is to present a resource-aware framework for estimating the capacity between node pairs in networks lacking end-to-end connectivity. The proposed framework builds on information gathered autonomously by nodes, so that the results emerge from actual network properties (mobility, routing, resource distribution). I will start from a strictly periodic scenario (such as an idealized public transportation system) and then extend the model to a random mobility scenario. In periodic scenarios, the achievable capacity can be formulated as a linear programming problem and computed deterministically. In scenarios with random mobility, a probabilistic approach can be used to compute upper and lower bounds on the achievable capacity between pairs of nodes. I will discuss results obtained under various simulation settings (different mobility models, different store-carry-forward protocols, homogeneous and heterogeneous resource distribution).
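
As a toy illustration of the linear-programming formulation mentioned in the abstract (an assumed example, not the framework from the talk), one can maximize the data volume delivered per period over a fixed periodic contact schedule, subject to contact capacities, a relay buffer limit, and the store-carry-forward causality constraint. The sketch below uses SciPy's linprog.

    # Toy sketch (assumed example, not the talk's framework): per-period capacity
    # between a source S and a destination D via one relay R, given a periodic
    # contact schedule. Variables are the MB carried over each contact:
    #   x1: S->R at t=1 (capacity 10 MB), x2: R->D at t=2 (capacity 6 MB),
    #   x3: S->D at t=3 (direct, capacity 4 MB); the relay buffer holds 8 MB.
    from scipy.optimize import linprog

    c = [0.0, -1.0, -1.0]              # maximize delivered data = x2 + x3
    A_ub = [[-1.0, 1.0, 0.0]]          # causality: R can only forward what it received (x2 <= x1)
    b_ub = [0.0]
    bounds = [(0, 8),                  # x1: min(contact capacity 10, relay buffer 8)
              (0, 6),                  # x2: contact capacity
              (0, 4)]                  # x3: contact capacity
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(-res.fun)                    # 10.0 MB per period (6 via the relay + 4 direct)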

Speaker's bio:
Gabriel Sandulescu is currently a postdoctoral researcher with the Networking Lab, NECTEC Thailand. He obtained his PhD in Computer Science from the University of Luxembourg in 2011. His doctoral dissertation focused on resource allocation in delay- and disruption-tolerant networks. Earlier in his career, he worked in industry in the areas of software engineering and project management.



Bulk Synchronous Streaming Model for the MPPA-256 Manycore Processor

Dr. Benoît Dupont de Dinechin, Kalray, France

Thursday, 28 November 2013, 13:00 (sharp), room Alan Turing

Abstract:
The Kalray MPPA-256 is an integrated manycore processor manufactured in 28nm CMOS technology that consumes about 10W for 230GFLOPS at 400MHz. Its 256 data processing cores and 32 system cores are distributed across 16 shared-memory clusters and 4 I/O subsystems, themselves connected by two networks-on-chip (NoCs). Each Kalray core implements a general-purpose Very Long Instruction Word (VLIW) architecture with 32-bit addresses, a 32-bit/64-bit floating-point unit, and a memory management unit.
This talk explains the motivations and directions for the development of a streaming programming model for the MPPA-256 processor. Like the IBM Cell/BE or the Intel SCC, the Kalray MPPA-256 architecture is based on clusters of general-purpose cores that share a local memory, where remote memory accesses require explicit communication. By comparison, GP-GPU architectures allow direct access to the global memory and hide the resulting latency with massive hardware multithreading. The ongoing port of OpenCL to the MPPA-256 processor may only reach limited performance and gives up run-time predictability, as the global memory has to be emulated in software with a Distributed Shared Memory (DSM) technique.
The alternative we propose is to develop a stream-oriented programming model called 'Bulk Synchronous Streaming' (BSS), by adapting the classic Bulk Synchronous Parallel (BSP) model. The BSP model belongs to the family of symmetric parallel programming models for distributed-memory supercomputers, like Cray SHMEM and Co-Array Fortran. The adaptations envisioned for BSS include: maintaining the global data objects in DDR memory, instead of distributing them across the local memories; enabling execution of BSP-like programs with a number of processing images larger than the number of clusters, by streaming their execution onto the available clusters; and extending the precise BSP performance model to the BSS model. The result can be characterized as a generalized vector execution model, since global data updates are not visible until after the superstep synchronizations.
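
To make the BSP discipline that BSS builds on concrete, here is a minimal sketch (an assumed illustration using Python threads, not Kalray's runtime or API) of supersteps in which each image computes locally, posts updates to a neighbour, and only sees remote updates after the barrier that closes the superstep.

    # Minimal BSP sketch (assumed illustration, not Kalray's BSS runtime): each
    # "image" alternates local computation and communication; updates posted
    # during a superstep only become visible after the closing barrier.
    import threading

    P = 4
    barrier = threading.Barrier(P)
    current = [0] * P        # state visible during the current superstep
    pending = [0] * P        # updates posted during the current superstep

    def image(pid, supersteps=3):
        global current, pending
        for _ in range(supersteps):
            value = current[pid] + 1         # local computation on visible state
            pending[(pid + 1) % P] = value   # post an update to the right neighbour
            if barrier.wait() == 0:          # superstep ends; one thread publishes
                current, pending = pending, [0] * P
            barrier.wait()                   # everyone waits for the publication

    threads = [threading.Thread(target=image, args=(p,)) for p in range(P)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(current)                           # [3, 3, 3, 3] after three supersteps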

Speaker's bio:
Benoît Dupont de Dinechin is the CTO of Kalray and one of the main architects of the MPPA MANYCORE. He joined Kalray in 2009 as head of the software development group. Prior to Kalray, he led the development of production compilers and architecture description tools for DSP and VLIW cores at STMicroelectronics. Benoît contributed to the production compiler of the Cray T3E while working at Cray Research between 1995 and 1998. He holds an engineering degree from the Ecole Nationale Supérieure de l'Aéronautique et de l'Espace, and earned a PhD from the University of Paris 6 under the supervision of Paul Feautrier.



Beyond Technical Security: the Socio-Technical Analysis

Dr. Gabriele Lenzini, Univ. of Luxembourg

Thursday, 24 October 2013, 10:15, room Alan Turing

Abstract:
Despite the enormous success of applied cryptography, how to achieve end-to-end security is still an open question. This should not be surprising: a system's security depends not only on the robustness of its technology but also on how it integrates with the environments where it is deployed and with the people it interacts with. Consequently, one of the greatest challenges facing computer security today is to contain attacks that exploit weaknesses in the "social layers". These are the layers that hackers now prefer to target, combining technical skills with social-engineering abilities.
In this way, intruders gain control of systems more easily. Often, and not surprisingly, they take advantage of confusing interfaces, cumbersome security mechanisms, and poorly designed human-computer ceremonies. In this talk I will present some preliminary research on building a framework for studying security socio-technically.
Such a framework consists of a reference model and a toolkit of methodologies, which we applied to study the security of two specific socio-technical systems: the validation of TLS certificates and the selection of WiFi access points.

Speaker's profile:
Dr. Gabriele Lenzini's expertise is in the modelling, analysis and design of secure and trustworthy systems. He holds a PhD in Computer Science (University of Twente, The Netherlands) and two MSc degrees, in Computer Science and in Information Technologies respectively (University of Pisa, Italy).
He has worked at the University of Pisa and at the Italian National Council of Research (CNR) in Italy, and at the University of Twente and the Telematica Institute in the Netherlands. He has participated in the development and execution of numerous national and international projects, most of them with strong industrial participation.
In 2010 he joined the Interdisciplinary Centre for Security, Reliability and Trust (SnT). He is now a member of the Applied Security and Information Assurance (APSIA) research group. He works on electronic voting security, on location and privacy assurance, and on socio-technical security.



Verification and testing of extra-functional properties in software

Dr. Sudipta Chattopadhyay, ESLAB, IDA

Wednesday 9 October 2013, 13:15, room Alan Turing

Abstract:
Over the last few decades, software verification and testing have made significant progress. However, most of these classic techniques target the verification and testing of software functionality. For embedded software, it is particularly important to also validate it against several extra-functional properties, such as timing and energy. Violation of these extra-functional properties might lead to serious consequences, potentially costing human lives. In this talk, I shall describe our past and ongoing efforts to validate the extra-functional properties of software, such as timing and energy. First, I shall talk about a proof architecture to analyze the cache performance of embedded software. This proof architecture systematically combines classical abstract interpretation with program verification (e.g. model checking and symbolic execution) and derives tight bounds on the worst-case execution time (WCET) of embedded software. Secondly, I shall briefly describe our recent work on performance partitioning and performance testing of embedded software. I shall end the talk by mentioning our ongoing work (with my colleagues at the National University of Singapore) on energy testing of smartphone applications, and I shall also discuss some open issues in this area.
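
To hint at what the abstract-interpretation part of such an analysis looks like, here is a much-simplified sketch (an assumed illustration, not the proof architecture from the talk) of the classical "must" cache analysis for one fully associative LRU set: blocks present in the abstract state are guaranteed hits, and the join at a control-flow merge keeps only blocks cached on every path.

    # Much-simplified sketch (assumed illustration, not the talk's proof
    # architecture): the abstract-interpretation "must" cache analysis for one
    # fully associative LRU set. The abstract state maps a memory block to an
    # upper bound on its LRU age; any block in the state is a guaranteed cache
    # hit on every path reaching this program point.
    ASSOC = 4   # number of ways in the cache set

    def access(state, block):
        """Abstract LRU update for an access to `block`."""
        old_age = state.get(block, ASSOC)
        new = {}
        for b, age in state.items():
            if b == block:
                continue
            if age < old_age:
                age += 1                      # blocks younger than `block` age by one
            if age < ASSOC:
                new[b] = age                  # blocks aging beyond the set are dropped
        new[block] = 0                        # accessed block becomes the youngest
        return new

    def join(s1, s2):
        """Control-flow merge: keep blocks present on both paths, with the larger age bound."""
        return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

    then_branch = access(access({}, 'a'), 'b')   # {'a': 1, 'b': 0}
    else_branch = access(access({}, 'a'), 'c')   # {'a': 1, 'c': 0}
    print(join(then_branch, else_branch))        # {'a': 1}: only 'a' is a guaranteed hit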

Speaker's bio:
Sudipta Chattopadhyay is currently a postdoctoral researcher at the Embedded Systems Laboratory (ESLAB) at IDA. He finished his PhD at the National University of Singapore in January 2013. His doctoral dissertation focused on worst-case execution time analysis of embedded software running on multi-core platforms. His research interests are broadly in program analysis, verification and testing, specifically targeting embedded, real-time software and parallel programs.



Rethinking Code Generation in Compilers

Prof. Dr. Christian Schulte, KTH

Thursday 5 September 2013, 14:00, room Alan Turing

Abstract:
In this talk I will show how to use constraint programming as a combinatorial optimization approach for code generation in a compiler back-end. The talk will briefly review code generation and the basic ideas behind constraint programming.
The talk will focus on a new model for global register allocation that combines several advanced aspects: multiple register banks (subsuming spilling to memory), coalescing, and packing. The model is extended to include instruction scheduling and bundling. Solving the model uses a decomposition scheme exploiting the underlying program structure and exhibiting robust behavior for functions with thousands of instructions. Evaluation shows that code quality is on par with LLVM, a state-of-the-art compiler infrastructure.
I will conclude the talk by highlighting ongoing research and projects related to constraint-based code generation.
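
To give a flavour of the constraint framing (a toy sketch with assumed temporaries and registers, not the model from the talk, which also handles register banks, spilling, coalescing, packing and scheduling), register allocation reduces at its core to giving interfering temporaries different registers:

    # Toy sketch (assumed example, not the talk's model): the core constraint of
    # register allocation, namely that temporaries live at the same time must get
    # different registers, solved by naive backtracking search.
    REGS = ['r0', 'r1', 'r2']                 # available registers (assumed)
    TEMPS = ['t1', 't2', 't3', 't4']          # program temporaries (assumed)
    INTERFERE = {('t1', 't2'), ('t1', 't3'),  # pairs that are live at the same time
                 ('t2', 't3'), ('t3', 't4')}

    def consistent(assign, temp, reg):
        """Assigning `reg` to `temp` is allowed if no interfering temporary already uses it."""
        for a, b in INTERFERE:
            other = b if a == temp else a if b == temp else None
            if other is not None and assign.get(other) == reg:
                return False
        return True

    def solve(assign=None, remaining=None):
        """Backtracking search over register assignments for the remaining temporaries."""
        assign = assign if assign is not None else {}
        remaining = remaining if remaining is not None else TEMPS
        if not remaining:
            return assign
        temp, rest = remaining[0], remaining[1:]
        for reg in REGS:
            if consistent(assign, temp, reg):
                result = solve({**assign, temp: reg}, rest)
                if result is not None:
                    return result
        return None                           # no assignment: spilling would be needed

    print(solve())   # {'t1': 'r0', 't2': 'r1', 't3': 'r2', 't4': 'r0'}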

Speaker's profile:
Christian Schulte is a professor of computer science in the unit Software and Computer Systems at the School of Information and Communication Technology, KTH Royal Institute of Technology in Stockholm, Sweden. Christian also works as an expert researcher at the Computer Systems Laboratory of the Swedish Institute of Computer Science (SICS).
Before joining KTH in 2002, he got a diploma in computer science from the University of Karlsruhe, Germany (1992), worked as a researcher and project leader at the German Research Center for Artificial Intelligence (DFKI) (1992-1997) and as a researcher at Saarland University, Germany (1997-2002), from which he also obtained a doctoral degree in engineering (2001). At KTH, he earned a docent degree in computer systems in 2009.
His research interests include constraint programming, programming systems, and distributed systems. His current research focus is on constraint-based compilation and on models, architectures, and implementation techniques for constraint programming systems. He is heading the development of Gecode, one of the most widely used Constraint Programming systems.
More information is available at web.it.kth.se/~cschulte.



3G Long Term Evolution (LTE) - A 4G Technology

Dr. Eva Englund, Ericsson AB

Friday 17 May 2013, 10:15, room Alan Turing

Abstract: The presentation provides an overview of the Long Term Evolution (LTE) standard, also referred to as the 4G mobile communications technology. The presentation highlights the key technology components introduced in LTE as well as the 3GPP evolution path leading to LTE-Advanced and beyond. The presentation will also share some experience from the early field introduction of LTE.

Speaker's profile: Eva defended her doctoral thesis at ISY/Datatransmission and then spent some time doing research at FOI on military packet radio networks. She then moved to Ericsson Research in Linköping, where she has worked for many years as an innovator and project manager. She was involved from the start in developing the foundations of 4G/LTE and later served as project manager for Ericsson Research's largest LTE project. Eva holds many patents and was named Ericsson's Innovator of the Year in 2008. She is now Technical Coordinator for LTE product development and is also an appreciated lecturer. Eva is also a rigorously trained ballet dancer from the Balettakademin.



A Decade of Modelling Parallel Computing Systems

Dr. Sabri Pllana, Linnaeus University Växjö

Thursday 2 May 2013, 13:15, room Alan Turing

Abstract: In mature engineering disciplines such as civil engineering, before an artefact (for instance a bridge) is built, the corresponding model is developed first. We argue that a "model first, then build" practice would also benefit software engineers. Since programming parallel systems is considered significantly more complex than programming sequential systems, the use of models in the context of parallel computing systems is of particular importance. In this talk we will highlight some of the models of parallel computing systems that we have developed over the last ten years. We will first address various modelling aspects in the context of clusters of SMPs, continue with the Grid, and conclude the talk with heterogeneous computing systems.

Speaker's profile: Dr. Sabri Pllana has been an associate professor at the Department of Computer Science at Linnaeus University in Växjö since April 2013. Before that, he worked as a senior research scientist in the Research Group Scientific Computing at the University of Vienna, Austria. His current research interests include intelligent programming environments and performance-oriented software engineering for parallel and distributed systems. He contributed to several EU-funded projects and was coordinator of the recently completed FP7 project Performance Portability and Programmability for Heterogeneous Many-core Architectures (PEPPHER).
He has published over 50 peer-reviewed publications, and has contributed as member or chair to more than 60 program committees.
Sabri Pllana holds a PhD degree (with distinction) in computer science from the Vienna University of Technology. He is a member of the IEEE, the HiPEAC network of excellence, the Networked European Software and Services Initiative (NESSI), the MIR Labs network of excellence, and the Euro-Par Advisory Board.





Previous SaS Seminars



Page responsible: Christoph Kessler
Last updated: 2015-01-15