SaS Seminars
Software and Systems Research Seminar Series
The SaS Seminars are a permanent series of open seminars of the Division of Software and Systems (SaS) at the Department of Computer and Information Science (IDA), Linköping University. The objective of the seminars is to present outstanding research and ideas/problems relevant to SaS's present and future activities. In particular, the seminars cover the SaS research areas: software engineering, programming environments, system software, embedded SW/HW systems, computer systems engineering, real-time systems, parallel and distributed computing, and theoretical computer science. Two kinds of seminars are planned:
talks by invited speakers not affiliated with SaS,
internal seminars presenting lab research to whole SaS.
The speakers are expected to give a broad perspective on the presented research, addressing an audience with a general computer science background but possibly no specific knowledge of the domain of the presented research. The normal length of a presentation is 60 minutes, including discussion.
The SaS seminars are coordinated by Ahmed Rezine.
Recent / Upcoming SaS Seminars (2017)
SiLago: The Next Generation Synchoros VLSI Design Platform
Prof. Ahmed Hemani, Royal Institute of Technology (KTH), Kista, Sweden.
Thursday, December 14th, 2017, 15:15 room Alan Turing.
Abstract:
The VLSI design community faces the challenge of unscalably large engineering and manufacturing costs and a 2-4 orders of magnitude loss in computational efficiency compared to hardwired solutions. As a solution, SiLago raises the abstraction of the physical design platform from present-day Boolean-level standard cells to micro-architectural-level SiLago (Silicon Large Grain Objects) blocks as the atomic physical design building blocks, and introduces a grid-based synchoros VLSI design scheme that composes arbitrary designs by abutting SiLago blocks, eliminating logic and physical synthesis for the end user. The word synchoros is derived from the Greek word for space, choros. Synchoros objects discretize space uniformly with the grid, the way synchronous objects discretize time with clock ticks. The synchoros design style and micro-architectural-level physical design enable the SiLago method to rapidly explore the higher-abstraction design space and generate valid VLSI designs at GDSII level, corresponding to 10-100 million gate complexity, in minutes and with an engineering effort comparable to programming. The SiLago method also holds the promise of eliminating the mask engineering cost.
Bio of speaker:
Ahmed Hemani is Professor in Electronic Systems Design at the School of ICT, KTH, Kista, Sweden. His current research interests are massively parallel architectures and design methods and their applications to scientific computing and brain-inspired autonomous embedded systems. In the past he has contributed to high-level synthesis; his doctoral thesis was the basis for the first high-level synthesis product introduced by Cadence, called Visual Architect. He has also pioneered the Network-on-Chip concept and has contributed to clocking and low-power architectures and design methods. He has worked extensively in industry, including National Semiconductor, ABB, Ericsson, Philips Semiconductors, and Newlogic, and has been part of three start-ups.
Challenges for dependable autonomous and cooperative driving
Prof. Antonio Casimiro, University of Lisboa
Thursday, November 30, 2017, 13:15 room Alan Turing.
Abstract:
Current vehicles are becoming increasingly autonomous and increasingly
safe, making the automated driving vision a not so distant
possibility. However, the known examples of fully autonomous cars are
still very limited, and these examples require very controlled
environments, imply performance restrictions or require the use of
expensive technology. Concerning cooperative driving, examples are even scarcer. In fact, the move to connected vehicles raises several significant challenges to security and safety. This talk addresses some of the challenges on the way to autonomous and cooperative driving, with a particular focus on those that may impair safety. It also provides a perspective on a possible architectural approach to dealing with uncertainty when it comes to supporting cooperation. This approach was developed in the context of the FP7 project KARYON, providing means to handle temporal and value uncertainties in sensor data, communications, and the execution of complex functions. An example application is given to illustrate the approach.
Bio of speaker:
Antonio Casimiro is an Associate Professor at the Department of Informatics of the Faculty of Sciences of the University of Lisboa. He received his PhD degree in Computer Science (Informatics) from the University of Lisboa in 2003. He is also a member of the LaSIGE (Large-Scale Informatics Systems Laboratory) research unit and of the Navigators group, where he leads the Timeliness and Adaptation in Dependable Systems research line.
His main research interests are in the area of dependable adaptive
systems, focusing on architectures and middleware solutions for
distributed embedded real-time applications. He was the coordinator of
the FP7 KARYON project, providing solutions for safe cooperative
applications in the automotive and avionics domains, and of the
CMU-Portugal TRONE project. Other EU projects in which he has been
previously involved include HIDENETS (FP6) and CORTEX (FP5).
In the last few years he has been teaching several courses in the Informatics Engineering degree, including Real-Time and Embedded Systems, Fault-Tolerant Distributed Systems, Parallel Computing, and Computer Architectures. Currently he is responsible for the Master courses on Programming in Distributed Systems and on Cyber-Physical Systems, as well as the undergraduate course on Computer Networks.
Executable UML: A language to define detailed and precise requirements models that can be run, tested and later deployed on diverse platforms without changing the models.
Leon Starr from Model Integration, San Francisco.
The slides are available here
Wednesday, November 15th, 2017, 13:15 room Alan Turing.
Abstract:
MBSE (Model Based Software Engineering) is increasingly accepted as an
essential factor in the design of software for Mission and Safety
Critical Systems. As it is currently practiced, MBSE spans a variety
of modeling languages including Modelica, Dymola, Simulink and some
variations of UML. In fact, UML is merely a set of standard
object-oriented notations and is not itself a cohesive modeling
language. Confusion over this fact has led to a great many project
failures and stigmatized software modeling in general. Leon will be
presenting Executable UML, a true modeling language built upon strong
mathematical foundations including first order predicate logic and set
theory. This language is designed to precisely capture real world
information, policies and constraints, essential synchronization and
computations necessary to satisfy a system's requirements. Leveraging
the power of math rather than ad-hoc object-oriented design assumptions, nothing in the Executable UML models themselves demands
any particular data storage, threading, tasking, computation
sequencing model or coding philosophy (such as object-oriented
vs. functional). Consequently, the resultant models can be executed,
tested and then implemented on highly diverse software and hardware
platforms without necessitating any changes to the models
themselves. Furthermore, Executable UML uses a platform independent
domain partitioning scheme to incorporate multiple modeling languages
and non-modeled elements to define a complete system. SAAB is using
this approach to integrate Executable UML, Simulink and other modeling
languages in the Gripen-E. Thanks to the simplicity of its underlying
mathematical definitions, there is a clear path from models to code,
supported by a variety of existing open source model compilers.
Bio of speaker:
Leon Starr is the lead author of the recently published Models to Code
(with no mysterious gaps) by Springer/Apress 2017. He has been
developing real-time distributed and embedded software with object
oriented, executable models since 1984. His models have been used in
fighter jets, factory material transport control systems, ultrasound
diagnostic and cardiac pacing systems, gas chromatography and
semiconductor wafer inspection systems, video post-production systems
and networked military battle simulators. He has taught numerous
courses on executable system and data modeling to systems engineers
and software developers worldwide through his company Model
Integration, LLC (modelint.com) based in San Francisco, California. He
is the author of the books How to Build Shlaer-Mellor Object Models,
How to Build Class Models, Executable UML: A Case Study and assorted
papers at uml.org and modeling-languages.com. He regularly assists
project teams who model complex requirements and generate code from
those models for challenging hardware and software
platforms. Throughout 2013-2014 he worked with key engineering teams
at SAAB to help develop models for the Gripen-E.
Writing, translating, and implementing stream programs
Dr. Jörn Janneck from Lund University, Sweden.
Monday, October 30th, 2017, 13:15 room Alan Turing.
Abstract:
Stream programs compute by incrementally
transforming streams of input data into streams of output data, and
are a common occurrence in a wide range of application areas,
including signal processing, video and audio coding, cryptography, and
networking.
In this talk I will discuss the work going on in the Embedded Systems
Design group at Lund University that attempts to provide support for
creating and implementing stream programs on today's increasingly
parallel computing platforms, and outline some of the research
challenges we would like to address in the future.
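As a loose illustration of the programming model (not code from the talk, and not the CAL actor language used in the group's tools), a stream program can be sketched as a pipeline of small actors that each consume and produce tokens incrementally. The following toy Python sketch uses generators for that purpose; all names in it are invented for the example.

```python
# Toy sketch: actors as Python generators that incrementally transform an
# input stream into an output stream. Purely illustrative; not related to
# the CAL language or the Lund tool chain discussed in the talk.

def source(values):
    for v in values:              # produce one token at a time
        yield v

def scale(stream, factor):
    for v in stream:              # stateless actor: transform each token
        yield v * factor

def running_sum(stream):
    total = 0
    for v in stream:              # stateful actor: keeps a running total
        total += v
        yield total

if __name__ == "__main__":
    pipeline = running_sum(scale(source(range(5)), 2))
    print(list(pipeline))         # -> [0, 2, 6, 12, 20]
```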
Bio of speaker:
Jorn W. Janneck is a senior lecturer at the computer science department
at Lund University. He received a PhD from ETH Zurich in 2000, was a
postdoctoral scholar at the University of California at Berkeley, and
worked in industrial research from 2003 to 2010, first at Xilinx
Research in San Jose, CA, and then at the United Technologies Research
Center in Berkeley, CA.
He is co-author of the CAL actor language and has been working on tools
and methodology focused on making dataflow a practical programming model
in a wide range of application areas, including image processing, video
coding, networking/packet processing, DSP and wireless baseband
processing. He has made major contributions to the standardization of
RVC-CAL and dataflow by MPEG and ISO. His research is focused on
programming parallel computing machines, including programming
languages, machine models, tools, code generation, profiling, and
architecture.
sVote: a secure remote electronic voting system
Jordi Cucurull from Scytl, a company for secure election management and electronic voting.
Thursday, September 21st, 2017, 13:15-14:15, room John von Neumann.
Abstract:
Remote electronic voting systems enable elections
where voters can vote remotely without geographical constraints using
any Internet connected device. In order to be adopted, these systems
need to provide confidence in their operation to voters and
stakeholders. This is the reason why they have to fulfill a set of
security requirements (e.g. voter authentication, vote secrecy and
integrity, accuracy of election results, verifiability, etc.), which
are focused on 1) ensuring at least the same properties as traditional
voting scenarios and 2) increasing the system verifiability and
auditability.
In this seminar we will introduce the main security requirements
expected from a remote electronic voting system, we will explain the
different types of verifiability (individual and universal
verifiability) and we will give an overview of sVote, a secure remote
electronic voting system implemented by Scytl, which has already been
used in several elections. Both the voter's experience and the
internal voting protocol of sVote will be presented.
Bio of speaker:
Dr. Jordi Cucurull is a researcher at Scytl's Research & Security
department. He contributes to the design of electronic voting systems
and to the analysis of their security. In addition, he is doing
applied research in the areas of electronic voting, trust and security
in the context of several industrial projects. He is also involved in
research projects with academic partners.
Before joining Scytl, Jordi Cucurull was a post-doctoral researcher
at Linköping University in Sweden. His research was devoted to
intrusion detection and mitigation applied to delay tolerant
networks. He was also involved in teaching real-time systems,
operating systems and green computing. Jordi Cucurull has a PhD in
Computer Science from Universitat Autònoma de Barcelona. His thesis
was devoted to the mobility, interoperability, and security of mobile
intelligent agents.
Deep Learning on Big Data Sets in the Cloud with Apache Spark and Google TensorFlow
Patrick Glauner from the University of Luxembourg.
Thursday, August 24th, 2017, 11:00 (sharp!) - 12:00, room John von Neumann. (A tutorial assuming basic deep learning background is planned the same day from 13:00 to 15:00, followed from 15:00 to 17:00 by a talk describing applications of machine learning.)
Abstract:
Deep Learning is a set of cutting-edge machine learning algorithms that are inspired by how the human brain works. It learns feature hierarchies from the data rather than relying on hand-crafted features. It has proven to significantly improve performance in a number of machine learning problems, in particular in computer vision and speech processing. In this tutorial, we will
first provide an introduction to the theoretical foundations of neural
networks and Deep Learning. Second, we will demonstrate how to use
Deep Learning in a cloud using a distributed environment for Big Data
analytics. This combines Apache Spark and TensorFlow, Google's
in-house Deep Learning platform made for Big Data machine learning
applications. Practical demonstrations will include character
recognition and time series forecasting in Big Data sets. Attendees
will be provided with code snippets that they can easily amend in
order to analyze their own data.
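As a hint of what such a snippet might look like, here is a minimal, hypothetical TensorFlow/Keras sketch for handwritten-digit (MNIST) recognition. It is not the tutorial's actual code, omits the Apache Spark side of the setup entirely, and the chosen network shape and hyperparameters are merely illustrative.

```python
# Hypothetical minimal example of character (digit) recognition with
# TensorFlow/Keras; not the tutorial's code, and without the Apache Spark
# integration mentioned in the abstract.
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784-vector
    tf.keras.layers.Dense(128, activation="relu"),    # learned feature layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3, batch_size=128)
print(model.evaluate(x_test, y_test))                 # [test loss, test accuracy]
```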
Bio of speaker:
Patrick Glauner is a PhD student at the University of Luxembourg working on the detection of electricity theft through machine learning. He graduated as valedictorian from Karlsruhe University of Applied Sciences with a BSc in computer science and obtained his MSc in machine learning from Imperial College London. He was a CERN Fellow, worked at SAP and is an alumnus of the German National Academic Foundation (Studienstiftung des deutschen Volkes). He is also an adjunct lecturer of artificial intelligence at Karlsruhe University of Applied Sciences. His current interests include anomaly detection, big data, computer vision, deep learning and time series.
Joint ADIT/SaS seminar:
The Role of Visual Data Analysis for Data-Driven Science
Prof. Dr. Ingrid Hotz, Scientific Visualization group, ITN, LiU
Wednesday, June 21, 2017, room Alan Turing.
Abstract:
Technical advances in computing have enabled a Big Data revolution that also impacts everyday work in scientific applications. Traditional scientific discovery, mostly built on theory and experiments, is more and more complemented by data-driven science. However, while data-centric science opens many unforeseen possibilities, it is also a major bottleneck in today's knowledge discovery process: the increasing size and complexity of the datasets raise many new challenges for data analysis. In this talk I will demonstrate the role of visual data analysis in this context. I will discuss selected visualization examples pointing to the variety of concepts and applications in this field, including interaction and exploration principles, abstraction of data and multi-level representations, and distinguishing typical from outlier behavior.
Bio of speaker:
Ingrid Hotz received her M.S. degree in
theoretical physics from the Ludwig Maximilian University in Munich,
Germany, and the PhD degree from the Computer Science Department at
the University of Kaiserslautern, Germany. During 2003-2006 she
worked as a postdoctoral researcher at the Institute for Data Analysis
and Visualization (IDAV) at the University of California. From
2006-2013 she was the leader of a research group at the Zuse Institute
in Berlin, Germany. From 2013-2015 she was the head of the scientific
visualization group at the German Aerospace Center (DLR). Since 2015 she has been a Professor in Scientific Visualization at Linköping University, in the Scientific Visualization group in Norrköping, and has an affiliation with the Center for Medical Image Science and Visualization (CMIV) in Linköping. The main focus of her research lies in the area of data analysis and scientific visualization, ranging from basic research questions to effective solutions for visualization problems in applications such as flow analysis, engineering and physics, medicine, and mechanical engineering, from small- to large-scale simulations. Her
research builds on ideas and methods originating from different areas
of computer sciences and mathematics, such as computer graphics,
computer vision, dynamical systems, computational geometry, and
combinatorial topology.
Joint ADIT/SaS seminar:
Dynamic Speed-Scaling: Theory and Practice
Prof. Carey Williamson, University of Calgary, Canada
Tuesday, June 13th, 2017, 13:15-14:15, room Alan Turing
Abstract:
This talk provides two different perspectives on dynamic CPU speed scaling systems. Such systems have the ability to auto-scale their service capacity based on demand, which introduces many interesting tradeoffs between response time, fairness, and energy efficiency.
The talk begins by highlighting key results and observations from prior speed scaling research, which straddles both the theory and systems literature. One theme in the talk is the dichotomy between the assumptions, approaches, and results in these two different research communities. Another theme is that modern processors support surprisingly sophisticated speed scaling functionality, which is not yet well-exploited by current operating systems.
The main part of the talk shares several insights from our own work on speed scaling designs, including coupled and decoupled speed-scaling systems. This work includes analytical and simulation modeling, as well as empirical system measurements on a modern Intel i7 processor, which we have used for calibration and validation of our speed scaling simulator.
(This talk represents joint work with Maryam Elahi and Philipp Woelfel)
Bio of speaker:
Carey Williamson is a Professor in the Department of Computer Science at the University of Calgary. His educational background includes a BSc Honours degree in Computer Science from the University of Saskatchewan in 1985, and a PhD in Computer Science from Stanford University in 1991.
Dr. Williamson's research interests include Internet protocols, wireless networks, network traffic measurement, workload characterization, network simulation, and Web server performance. He is a member of ACM, SIGMETRICS, and IFIP Working Group 7.3. He served as SIG Chair for ACM SIGMETRICS from 2007-2011, and as conference chair for ACM SIGMETRICS 2005, WWW 2007, ACM IMC 2014, and IEEE MASCOTS 2017. He is also a founding co-Editor-in-Chief of the new ACM Transactions on Modeling and Performance Evaluation of Computing Systems.
Real-Time Scheduling of Mixed-Criticality Systems: What are the X Factors?
Prof. Risat Pathan, Chalmers University Of Technology, Sweden
Wednesday, April 5th, 2017, 9:00 (sharp)-10:00, room Alan Turing
Abstract:
Mixed-criticality (MC) systems consist of tasks
with different degrees of importance or criticality. Correctly
executing relatively higher-criticality tasks (e.g., meeting their deadlines) is more important than correctly executing any lower-criticality task. Therefore, the scheduling algorithm and its analysis have to consider runtime situations where the correct execution of higher-criticality tasks can be threatened by events that I call the "X" factors of MC systems. An example of such an X factor is "execution overrun", which was pointed out by Steve Vestal at RTSS 2007. The
purpose of my talk is to highlight another X factor: the frequency of
error detection and recovery.
The design and analysis of real-time scheduling algorithms for safety-critical systems is a challenging problem due to the temporal dependencies among different design constraints. This work is based on scheduling sporadic tasks with three interrelated design constraints: (i) meeting the hard deadlines of application tasks, (ii) providing fault tolerance by executing backups, and (iii) respecting the criticality of each task to facilitate the system's certification. First, a new approach to model mixed-criticality systems from the perspective of fault tolerance is proposed. Second, a uniprocessor fixed-priority scheduling algorithm, called fault-tolerant mixed-criticality (FTMC) scheduling, is designed for the proposed model. The FTMC algorithm executes backups to recover from task errors caused by hardware or software faults. Third, a sufficient schedulability test is derived that, when satisfied for a (mixed-criticality) task set, guarantees that all deadlines are met even if backups are executed to recover from errors. Finally, evaluations illustrate the effectiveness of the proposed test.
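To make the flavour of such a test concrete, here is a toy Python sketch of a classical fixed-priority response-time test in which every job is pessimistically assumed to re-execute a backup once after an error. It is only an assumed, generic illustration of an iterative schedulability test with backups, not the FTMC analysis presented in the talk, and the task parameters are made up.

```python
# Toy sketch of a fixed-priority response-time test with one backup charged
# per job. NOT the FTMC test from the talk; a generic illustration only.
import math

# Tasks in decreasing priority order: (WCET C, backup WCET B, period T, deadline D).
# All numbers are invented for the example.
tasks = [
    (1.0, 1.0, 5.0, 5.0),
    (2.0, 1.5, 12.0, 12.0),
    (3.0, 2.0, 30.0, 30.0),
]

def response_time(i):
    """Iterative response-time analysis, charging one backup per job."""
    C, B, T, D = tasks[i]
    r = C + B                                     # own execution plus one backup
    while True:
        interference = sum(math.ceil(r / Tj) * (Cj + Bj)
                           for Cj, Bj, Tj, _ in tasks[:i])
        r_new = C + B + interference
        if r_new == r or r_new > D:               # fixed point reached or deadline missed
            return r_new
        r = r_new

schedulable = all(response_time(i) <= tasks[i][3] for i in range(len(tasks)))
print("schedulable:", schedulable)                # -> schedulable: True
```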
Bio of speaker:
Risat Pathan is an assistant professor in
the Department of Computer Science and Engineering at Chalmers
University of Technology, Sweden. He received the M.S., Lic.-Tech.,
and Ph.D. degrees from Chalmers University of Technology in 2006,
2010, and 2012, respectively. He visited the Real-Time Systems Group
at The University of North Carolina at Chapel Hill, USA during fall
2011. His main research interests are real-time scheduling on uni- and
multi-core processors from efficient resource utilization,
fault-tolerance and mixed-criticality perspectives.
From C++98 towards C++17 and beyond
Prof. Jose-Daniel Garcia-Sanchez, University Carlos III of Madrid, Spain
Thursday, January 26th, 2017, 10:15-11:15, room Alan Turing
Abstract:
C++ now has a long history and is a widely used language in a wide range of application domains (video games, finance, aerospace, embedded systems, and scientific computing, to name only some of them). After a steady period in the last decade, in
recent years we have seen a revitalization with the publication of two
new standards (C++11 and C++14) as well as a number of additional
technical specifications. The new version of the ISO C++ standard is
scheduled to be published during 2017 and there are additional plans
for evolution.
In this talk, I will provide a view on the evolution of the language, trying to highlight the design principles behind C++ evolution. I will illustrate this evolution with features
scheduled for C++17. I will also pay some attention to features
provided in additional technical specifications complementing the main
standard that we envision for the near future. Finally I will try to
outline what could be the future of C++.
Bio of speaker:
J. Daniel Garcia has been an Associate Professor in Computer Architecture at University Carlos III of Madrid in Spain since 2006. He has been serving as head of the Spanish delegation to the ISO C++ standards committee since 2008.
Before joining academia he worked as a software engineer in industrial
projects in different domains including real time control systems,
civil engineering, medical imaging, aerospace engineering, and high
performance scientific computing.
Since moving to the university he has participated in many funded research projects at national and international levels. He was the
coordinator of the EU FP7 REPARA project aiming at refactoring C++
applications for parallel heterogeneous architectures. He also leads
his university participation in the H2020 RePhrase project also
related to better software engineering practices for parallel C++
applications. He has co-authored more than 22 articles in international journals, as well as many others in international conferences.
His research is focused on programming models for application improvement. In particular, his aim is to improve both the performance of applications (faster applications) and their maintainability (easier to modify).
Previous SaS Seminars
Page responsible: Christoph Kessler
Last updated: 2017-12-15