
SaS Seminars

Software and Systems Research Seminar Series


The SaS Seminars are a permanent series of open seminars of the Division of Software and Systems (SaS) at the Department of Computer and Information Science (IDA), Linköping University. The objective of the seminars is to present outstanding research and ideas/problems relevant to the present and future activities of SaS. In particular, the seminars cover the SaS research areas: software engineering, programming environments, system software, embedded SW/HW systems, computer systems engineering, real-time systems, parallel and distributed computing, and theoretical computer science. Two kinds of seminars are planned:

  • talks by invited speakers not affiliated with SaS,

  • internal seminars presenting lab research to the whole of SaS.

The speakers are expected to give a broad perspective on the presented research, addressing an audience with a general computer science background but possibly no specific knowledge of the domain of the presented research. The normal length of a presentation is 60 minutes, including discussion.

The SaS seminars are coordinated by Ahmed Rezine.



Recent / Upcoming SaS Seminars (2017)



Deep Learning on Big Data Sets in the Cloud with Apache Spark and Google TensorFlow

Patrick GLAUNER from the University of Luxembourg.

Thursday, August 24th, 2017, 11:00 (sharp!) - 12:00, room John von Neumann. (A tutorial assuming basic deep learning background is planned the same day at 13:00-15:00, followed at 15:00-17:00 by a talk describing applications of machine learning.)

Abstract:
Deep Learning is a set of cutting-edge machine learning algorithms that are inspired by how the human brain works. It allows models to learn feature hierarchies from the data rather than relying on hand-crafted features. It has proven to significantly improve performance in a number of machine learning problems, in particular in computer vision and speech processing. In this tutorial, we will first provide an introduction to the theoretical foundations of neural networks and Deep Learning. Second, we will demonstrate how to use Deep Learning in a cloud using a distributed environment for Big Data analytics. This combines Apache Spark and TensorFlow, Google's in-house Deep Learning platform made for Big Data machine learning applications. Practical demonstrations will include character recognition and time series forecasting on Big Data sets. Attendees will be provided with code snippets that they can easily amend in order to analyze their own data.
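
As a small taste of the theoretical part, here is a minimal from-scratch sketch of what learning features means: a tiny 2-3-1 multilayer perceptron trained by plain gradient descent on XOR, the classic function that cannot be represented without a hidden layer. The sketch is written in C++ for self-containedness and is independent of the TensorFlow and Spark tooling used in the tutorial; all parameters (hidden width, learning rate, epoch count) are illustrative.

    // Tiny 2-3-1 multilayer perceptron learning XOR by gradient descent.
    // Compile with e.g.: g++ -O2 -std=c++11 xor.cpp
    #include <cmath>
    #include <cstdio>
    #include <random>

    static double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }

    int main() {
        const double in[4][2] = {{0,0},{0,1},{1,0},{1,1}};
        const double target[4] = {0, 1, 1, 0};
        const int H = 3;                 // hidden units
        const double lr = 0.5;           // learning rate

        std::mt19937 rng(1);
        std::uniform_real_distribution<double> init(-1.0, 1.0);
        double wh[H][2], bh[H], wo[H], bo = init(rng);
        for (int j = 0; j < H; ++j) {
            wh[j][0] = init(rng); wh[j][1] = init(rng);
            bh[j] = init(rng);    wo[j] = init(rng);
        }

        for (int epoch = 0; epoch < 20000; ++epoch) {
            for (int n = 0; n < 4; ++n) {
                // Forward pass: hidden activations h[], output y.
                double h[H], net = bo;
                for (int j = 0; j < H; ++j) {
                    h[j] = sigmoid(wh[j][0]*in[n][0] + wh[j][1]*in[n][1] + bh[j]);
                    net += wo[j] * h[j];
                }
                double y = sigmoid(net);

                // Backward pass for the squared error 0.5*(y - t)^2.
                double dy = (y - target[n]) * y * (1.0 - y);
                for (int j = 0; j < H; ++j) {
                    double dh = dy * wo[j] * h[j] * (1.0 - h[j]);
                    wo[j]    -= lr * dy * h[j];
                    wh[j][0] -= lr * dh * in[n][0];
                    wh[j][1] -= lr * dh * in[n][1];
                    bh[j]    -= lr * dh;
                }
                bo -= lr * dy;
            }
        }

        // After training, the outputs should approximate 0, 1, 1, 0.
        for (int n = 0; n < 4; ++n) {
            double net = bo;
            for (int j = 0; j < H; ++j)
                net += wo[j] * sigmoid(wh[j][0]*in[n][0] + wh[j][1]*in[n][1] + bh[j]);
            std::printf("xor(%g,%g) ~ %.3f\n", in[n][0], in[n][1], sigmoid(net));
        }
        return 0;
    }

Frameworks such as TensorFlow derive the backward pass written out by hand above automatically, and Spark is what distributes such computations over a cluster.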

Bio of speaker:
Patrick GLAUNER is a PhD student at the University of Luxembourg working on the detection of electricity theft through machine learning. He graduated as valedictorian from Karlsruhe University of Applied Sciences with a BSc in computer science and obtained his MSc in machine learning from Imperial College London. He was a CERN Fellow, worked at SAP, and is an alumnus of the German National Academic Foundation (Studienstiftung des deutschen Volkes). He is also an adjunct lecturer of artificial intelligence at Karlsruhe University of Applied Sciences. His current interests include anomaly detection, big data, computer vision, deep learning, and time series.



Joint ADIT/SaS seminar:

The Role of Visual Data Analysis for Data-Driven Science

Prof. Dr. Ingrid Hotz, Scientific Visualization group, ITN, LiU

Wednesday, June 21, 2017, room Alan Turing.

Abstract:
Technical advances in computing have enabled a revolution in Big Data that also impacts everyday work in scientific applications. Traditional scientific discovery, mostly built on theory and experiments, is more and more complemented by data-driven science. However, while data-centric science opens many unforeseen possibilities, it is also a major bottleneck in today's knowledge discovery process. The increasing size and complexity of the datasets raise many new challenges for data analysis. In this talk I will demonstrate the role of visual data analysis in this context. I will discuss selected visualization examples pointing at the variety of concepts and applications in this field, including interaction and exploration principles, abstraction of data and multi-level representations, and distinguishing typical from outlier behavior.
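
As one tiny, concrete instance of the last point (distinguishing typical from outlier behavior), the sketch below flags outliers with a robust median/MAD rule. It is an illustration invented for this page, not an example from the talk; in visual data analysis such flags would drive highlighting or filtering in a plot rather than a printout.

    // Flag samples whose distance from the median exceeds k times the
    // median absolute deviation (MAD), a simple robust outlier rule.
    // The data values are made up. Compile with e.g.: g++ -std=c++11 mad.cpp
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    double median(std::vector<double> v) {
        std::sort(v.begin(), v.end());
        size_t n = v.size();
        return n % 2 ? v[n/2] : 0.5 * (v[n/2 - 1] + v[n/2]);
    }

    int main() {
        std::vector<double> data = {4.8, 5.1, 5.0, 4.9, 5.2, 9.7, 5.0, 0.3};
        double m = median(data);
        std::vector<double> dev;
        for (double x : data) dev.push_back(std::fabs(x - m));
        double mad = median(dev);
        const double k = 3.0;                         // typical cutoff factor
        for (double x : data)
            std::printf("%4.1f  %s\n", x,
                        std::fabs(x - m) > k * mad ? "outlier" : "typical");
        return 0;
    }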

Bio of speaker:
Ingrid Hotz received her M.S. degree in theoretical physics from the Ludwig Maximilian University in Munich, Germany, and her PhD from the Computer Science Department at the University of Kaiserslautern, Germany. During 2003-2006 she worked as a postdoctoral researcher at the Institute for Data Analysis and Visualization (IDAV) at the University of California. From 2006 to 2013 she was the leader of a research group at the Zuse Institute in Berlin, Germany. From 2013 to 2015 she was the head of the scientific visualization group at the German Aerospace Center (DLR). Since 2015 she has been a Professor in Scientific Visualization at Linköping University, in the Scientific Visualization group in Norrköping, and has an affiliation with the Center for Medical Image Science and Visualization (CMIV) in Linköping. The main focus of her research lies in the area of data analysis and scientific visualization, ranging from basic research questions to effective solutions to visualization problems in applications including flow analysis, engineering and physics, medical applications, and mechanical engineering, from small- to large-scale simulations. Her research builds on ideas and methods originating from different areas of computer science and mathematics, such as computer graphics, computer vision, dynamical systems, computational geometry, and combinatorial topology.



Joint ADIT/SaS seminar:

Dynamic Speed-Scaling: Theory and Practice

Prof. Carey Williamson, University of Calgary, Canada

Tuesday, June 13th, 2017, 13:15-14:15, room Alan Turing

Abstract:
This talk provides two different perspectives on dynamic CPU speed scaling systems. Such systems have the ability to auto-scale their service capacity based on demand, which introduces many interesting tradeoffs between response time, fairness, and energy efficiency. The talk begins by highlighting key results and observations from prior speed scaling research, which straddles both the theory and systems literature. One theme in the talk is the dichotomy between the assumptions, approaches, and results in these two different research communities. Another theme is that modern processors support surprisingly sophisticated speed scaling functionality, which is not yet well exploited by current operating systems. The main part of the talk shares several insights from our own work on speed scaling designs, including coupled and decoupled speed scaling systems. This work includes analytical and simulation modeling, as well as empirical system measurements on a modern Intel i7 processor, which we have used for calibration and validation of our speed scaling simulator. (This talk represents joint work with Maryam Elahi and Philipp Woelfel.)
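
To give a flavor of the response-time/energy tradeoff mentioned above, the toy simulation below (a sketch under strong simplifying assumptions, not the speed scaling simulator from the abstract) compares a server that runs at fixed speed 1 whenever work is pending against one whose speed equals the number of jobs in the system, a policy family studied in the speed scaling literature. Energy is charged at the commonly assumed rate speed^alpha with alpha = 3; the arrival rate, job sizes, and horizon are invented for illustration.

    // Toy discrete-time simulation of one server under two speed scaling
    // policies: FIXED (speed 1 while busy) and DYNAMIC (speed = number of
    // jobs in the system). Power is modeled as speed^alpha, alpha = 3.
    // Compile with e.g.: g++ -O2 -std=c++11 scaling.cpp
    #include <cmath>
    #include <cstdio>
    #include <deque>
    #include <random>

    struct Result { double mean_response, energy; };

    Result simulate(bool dynamic, unsigned seed) {
        std::mt19937 rng(seed);
        std::exponential_distribution<double> interarrival(0.5); // arrival rate 0.5
        std::exponential_distribution<double> jobsize(1.0);      // mean job size 1
        const double alpha = 3.0, dt = 1e-3, horizon = 20000.0;

        std::deque<double> remaining;   // remaining work per job, FCFS order
        std::deque<double> arrived;     // arrival time of each job in system
        double next_arrival = interarrival(rng);
        double energy = 0.0, resp_sum = 0.0;
        long done = 0;

        for (double t = 0.0; t < horizon; t += dt) {
            while (t >= next_arrival) {                  // admit new jobs
                remaining.push_back(jobsize(rng));
                arrived.push_back(next_arrival);
                next_arrival += interarrival(rng);
            }
            if (!remaining.empty()) {
                double speed = dynamic ? (double)remaining.size() : 1.0;
                energy += std::pow(speed, alpha) * dt;   // integral of s^alpha
                remaining.front() -= speed * dt;         // serve the head job
                if (remaining.front() <= 0.0) {
                    resp_sum += t - arrived.front();
                    remaining.pop_front();
                    arrived.pop_front();
                    ++done;
                }
            }
        }
        return Result{ resp_sum / done, energy };
    }

    int main() {
        Result fix = simulate(false, 7), dyn = simulate(true, 7);
        std::printf("fixed:   mean response %.2f, energy %.0f\n",
                    fix.mean_response, fix.energy);
        std::printf("dynamic: mean response %.2f, energy %.0f\n",
                    dyn.mean_response, dyn.energy);
        return 0;
    }

Under these made-up settings the dynamic policy trades extra energy spent during bursts for shorter response times; the analytical and empirical study of precisely such tradeoffs is the subject of the talk.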

Bio of speaker:
Carey Williamson is a Professor in the Department of Computer Science at the University of Calgary. His educational background includes a BSc Honours degree in Computer Science from the University of Saskatchewan in 1985, and a PhD in Computer Science from Stanford University in 1991. Dr. Williamson's research interests include Internet protocols, wireless networks, network traffic measurement, workload characterization, network simulation, and Web server performance. He is a member of ACM, SIGMETRICS, and IFIP Working Group 7.3. He served as SIG Chair for ACM SIGMETRICS from 2007 to 2011, and as conference chair for ACM SIGMETRICS 2005, WWW 2007, ACM IMC 2014, and IEEE MASCOTS 2017. He is also a founding co-Editor-in-Chief of the new ACM Transactions on Modeling and Performance Evaluation of Computing Systems.



Real-Time Scheduling of Mixed-Criticality Systems: What are the X Factors?

Prof. Risat Pathan, Chalmers University of Technology, Sweden

Wednesday, April 5th, 2017, 9:00 (sharp)-10:00, room Alan Turing

Abstract:
Mixed-criticality (MC) systems consist of tasks with different degrees of importance, or criticality. Correctly executing relatively higher-criticality tasks (e.g., meeting their deadlines) is more important than correctly executing any lower-criticality task. Therefore, the scheduling algorithm and its analysis have to consider runtime situations where the correct execution of higher-criticality tasks can be threatened by events that I call the "X" factors of MC systems. An example of such an X factor is "execution overrun", which was pointed out by Steve Vestal at RTSS 2007. The purpose of my talk is to highlight another X factor: the frequency of error detection and recovery.

The design and analysis of real-time scheduling algorithms for safety-critical systems is a challenging problem due to the temporal dependencies among different design constraints. This work is based on scheduling sporadic tasks with three interrelated design constraints: (i) meeting the hard deadlines of application tasks, (ii) providing fault tolerance by executing backups, and (iii) respecting the criticality of each task to facilitate the system's certification. First, a new approach to model mixed-criticality systems from the perspective of fault tolerance is proposed. Second, a uniprocessor fixed-priority scheduling algorithm, called fault-tolerant mixed-criticality (FTMC) scheduling, is designed for the proposed model. The FTMC algorithm executes backups to recover from task errors caused by hardware or software faults. Third, a sufficient schedulability test is derived that, when satisfied for a (mixed-criticality) task set, guarantees that all deadlines are met even if backups are executed to recover from errors. Finally, evaluations illustrate the effectiveness of the proposed test.
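
The FTMC test itself is the subject of the talk and is not reproduced here. As a sketch of the general shape of such sufficient tests, the program below runs a classic fixed-priority response-time analysis extended with a recovery term, under the simplifying assumption of at most one error (and hence one backup execution) per busy period; the task set at the bottom is invented for illustration.

    // Sufficient fixed-priority schedulability test tolerating one backup
    // re-execution per busy period (NOT the FTMC test from the talk; a
    // classic response-time recurrence plus a recovery term):
    //   R_i = C_i + max{B_k : k has priority >= i}
    //             + sum over higher-priority j of ceil(R_i / T_j) * C_j
    // Compile with e.g.: g++ -O2 -std=c++11 rta.cpp
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Task { double C, T, D, B; };  // WCET, period, deadline, backup WCET

    // Tasks must be indexed in decreasing priority order (0 = highest).
    bool schedulable(const std::vector<Task>& ts) {
        for (size_t i = 0; i < ts.size(); ++i) {
            double recovery = 0.0;       // costliest backup that can delay task i
            for (size_t k = 0; k <= i; ++k)
                recovery = std::max(recovery, ts[k].B);

            double R = ts[i].C + recovery, prev = 0.0;
            while (R != prev && R <= ts[i].D) {          // fixed-point iteration
                prev = R;
                R = ts[i].C + recovery;
                for (size_t j = 0; j < i; ++j)           // higher-priority interference
                    R += std::ceil(prev / ts[j].T) * ts[j].C;
            }
            if (R > ts[i].D) return false;
            std::printf("task %zu: R = %.1f, D = %.1f\n", i, R, ts[i].D);
        }
        return true;
    }

    int main() {
        std::vector<Task> ts = { {1, 5, 5, 1}, {2, 10, 10, 2}, {3, 20, 20, 2} };
        std::printf("%s\n", schedulable(ts) ? "schedulable" : "not schedulable");
        return 0;
    }

A real test such as FTMC additionally has to account for criticality levels and for the frequency of error detection and recovery highlighted in the talk.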

Bio of speaker:
Risat Pathan is an assistant professor in the Department of Computer Science and Engineering at Chalmers University of Technology, Sweden. He received his M.S., Lic.-Tech., and Ph.D. degrees from Chalmers University of Technology in 2006, 2010, and 2012, respectively. He visited the Real-Time Systems Group at The University of North Carolina at Chapel Hill, USA, during fall 2011. His main research interests are real-time scheduling on uni- and multi-core processors from the perspectives of efficient resource utilization, fault tolerance, and mixed criticality.



From C++98 towards C++17 and beyond

Prof. Jose-Daniel Garcia-Sanchez, University Carlos III of Madrid, Spain

Thursday, January 26th, 2017, 10:15-11:15, room Alan Turing

Abstract:
C++ now has a long history, and it is a heavily used language in a wide range of application domains (video games, finance, aerospace, embedded systems, and scientific computing, to name only some of them). After a steady period during the last decade, recent years have seen a revitalization with the publication of two new standards (C++11 and C++14) as well as a number of additional technical specifications. The new version of the ISO C++ standard is scheduled to be published during 2017, and there are further plans for evolution. In this talk, I will provide a view of the evolution of the language, trying to highlight the design principles behind it. I will illustrate this evolution with features scheduled for C++17. I will also pay some attention to features provided in additional technical specifications that complement the main standard and that we envision for the near future. Finally, I will try to outline what the future of C++ could be.
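
For a concrete taste ahead of the talk, the small program below exercises a few features that were on track for C++17 at the time: structured bindings, if statements with initializers, and std::optional. It is an illustrative sample, not taken from the speaker's material, and needs a C++17 compiler (e.g. g++ -std=c++17).

    // A few C++17 features in one small program.
    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    // std::optional<T> (C++17): "a value or nothing" without magic
    // sentinel values such as -1 or null pointers.
    std::optional<int> parse_port(const std::string& s) {
        try { return std::stoi(s); } catch (...) { return std::nullopt; }
    }

    int main() {
        std::map<std::string, int> rooms{{"C++ talk", 1}, {"Spark tutorial", 2}};

        // Structured bindings (C++17): unpack the (iterator, bool) pair
        // returned by insert; the duplicate key leaves the map unchanged.
        auto [pos, inserted] = rooms.insert({"C++ talk", 99});
        std::cout << std::boolalpha << "inserted: " << inserted
                  << ", stored value: " << pos->second << '\n';

        // if with initializer (C++17): the optional is scoped to the branch.
        if (auto port = parse_port("8080"); port.has_value())
            std::cout << "port: " << *port << '\n';

        // Structured bindings also make map iteration read naturally.
        for (const auto& [title, room] : rooms)
            std::cout << title << " -> room " << room << '\n';
        return 0;
    }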

Bio of speaker:
J. Daniel Garcia has been an Associate Professor in Computer Architecture at University Carlos III of Madrid, Spain, since 2006. He has served as head of the Spanish delegation to the ISO C++ standards committee since 2008. Before joining academia, he worked as a software engineer on industrial projects in different domains, including real-time control systems, civil engineering, medical imaging, aerospace engineering, and high-performance scientific computing. Since moving to the university, he has participated in many funded research projects at the national and international levels. He was the coordinator of the EU FP7 REPARA project, aiming at refactoring C++ applications for parallel heterogeneous architectures. He also leads his university's participation in the H2020 RePhrase project, also related to better software engineering practices for parallel C++ applications. He has co-authored more than 22 articles in international journals as well as many others in international conferences. His research is focused on programming models for improving applications, in particular both their performance (faster applications) and their maintainability (applications that are easier to modify).




Previous SaS Seminars



Page responsible: Christoph Kessler
Last updated: 2017-06-29