Software and Systems Research Seminar Series
The SaS Seminars are a permanent series of open seminars of the Division of Software and Systems (SaS) at the Department of Computer and Information Science (IDA), Linköping University. The objective of the seminars is to present outstanding research and ideas/problems relevant to the present and future activities of SaS. In particular, seminars cover the SaS research areas: software engineering, programming environments, system software, embedded SW/HW systems, computer systems engineering, real-time systems, parallel and distributed computing, and theoretical computer science. Two kinds of seminars are planned:
- talks by invited speakers not affiliated with SaS,
- internal seminars presenting lab research to the whole of SaS.
The speakers are expected to give a broad perspective on the presented research, addressing an audience with a general computer science background but possibly no specific knowledge of the domain of the presented research. The normal length of a presentation is 60 minutes, including discussion.
The SaS seminars are coordinated by Ahmed Rezine.
Recent / Upcoming SaS Seminars (2019)
On logic programming and locating errors in programs
Prof. Włodzimierz (Włodek) Drabent, Department of Computer and Information Science, Linköping.
Friday, November 8th, 13:15-14:00, room Alan Turing.
Logic programming, and the programming language Prolog, make it possible to program declaratively. The programmer may reason about her programs in terms of their declarative semantics ("what has to be computed") and abstract from their operational semantics ("what the computations of the programs are"). However, when it comes to debugging, the advantages of declarative programming are lost. Methods for locating errors in programs declaratively are known (called declarative diagnosis, or algorithmic debugging), but no tools are available. We discuss why these methods have not been accepted, and suggest a way of overcoming the main obstacle. I will try to make the presentation accessible also to people not familiar with logic programming, by including a popular introduction to this programming paradigm. Related published papers:
- W. Drabent. "Logic + control: On program construction and verification." Theory and Practice of Logic Programming, 18(1):1-29, 2018.
- W. Drabent. "Correctness and Completeness of Logic Programs." ACM Transactions on Computational Logic. 17(3), 2016.
- W. Drabent. "On definite program answers and least Herbrand models." Theory and Practice of Logic Programming, 16(4):498-508, 2016.
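The idea behind algorithmic debugging mentioned in the abstract can be illustrated with a minimal sketch (plain Python with hypothetical names, not an actual Prolog diagnosis tool): an oracle judges each node of a computation tree against the intended declarative semantics, and a node that is wrong while all of its children are right points at the erroneous program fragment.

```python
# Toy sketch of declarative diagnosis (algorithmic debugging).

class Node:
    def __init__(self, label, result, children=()):
        self.label = label          # e.g. the procedure/clause instance used
        self.result = result        # the answer computed at this node
        self.children = list(children)

def diagnose(node, oracle):
    """Return the topmost node whose result is wrong while all of its
    children are right, or None if the whole tree is judged correct."""
    if oracle(node):                # node agrees with the intended semantics
        return None
    for child in node.children:
        culprit = diagnose(child, oracle)
        if culprit is not None:
            return culprit
    return node                     # wrong node, all children right: the bug

# Toy run: 'sort' returns a wrong answer because 'merge' is buggy.
tree = Node("sort([2,1])", [2, 1],
            [Node("split", ([2], [1])),
             Node("merge([2],[1])", [2, 1])])   # wrong: expected [1, 2]
intended = {"split": True, "merge([2],[1])": False, "sort([2,1])": False}
buggy = diagnose(tree, lambda n: intended.get(n.label, True))
# buggy.label == "merge([2],[1])"
```

In a real declarative debugger the oracle questions are answered by the programmer, who only needs to know the intended declarative semantics, not the operational behaviour.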
Bio of speaker:
Prof. Włodzimierz (Włodek) Drabent is a professor both at the Department of Computer and Information Science in Linköping and at the Institute of Computer Science, Polish Academy of Sciences, in Warsaw. His main interests centre on logic programming: semantics, proving program properties, descriptive types, diagnosing program errors, semantic analysis of programs, and negation. He is also interested in programming paradigms related to logic programming, in the semantics of programming languages, and in proving program correctness.
High-level programming of data-centric reconfigurable dataflow systems
Prof. Dr. Georgi Gaydadjiev, Maxeler Technologies, London.
Thursday, July 4, 9:15, room Ada Lovelace.
Streaming dataflow has been recognised as a promising paradigm for addressing the data-intensive parts of many applications. Silicon manufacturing technologies will keep delivering growing numbers of transistors per unit silicon area for at least a few more generations, and efficiently turning these additional resources into direct performance advantages is a real challenge. The dataflow approach offers a valid solution by implicitly "hardening" all basic operations inside a large computational structure and by tight control and minimisation of data movement at all levels, while at the same time allowing the highest degree of customisation at very fine levels of granularity. Massive, deeply pipelined dataflow accelerator structures with thousands of pipeline stages or more can deliver unprecedented throughput advantages even when operated at frequencies an order of magnitude lower than traditional technology. However, the programmability of such custom computing structures remains challenging. We will present a programming and execution model designed with streaming dataflow execution in mind. The basic assumption is that all operations happen in space on the reconfigurable silicon substrate and are by default performed in parallel. Our approach allows designers to partition, lay out, and optimise their programs at all levels, from high-level algorithmic transformations all the way down to individual customised bit manipulations. In addition, the execution model enforces highly efficient scheduling (better called choreography) of all basic computational and data-movement actions, with the guarantee of no side effects. This approach is facilitated by a set of dedicated design tools and novel design methods. We will demonstrate how scientists and domain experts can program power-efficient, reconfigurable custom computing systems with minimal knowledge of the low-level hardware details. Relevant code examples and results achieved by real systems will support our claims.
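The pipelined streaming idea described above can be sketched in software (plain Python generators, not Maxeler's actual MaxJ tooling): a deep pipeline is a chain of stages, each consuming a stream and emitting a stream, so conceptually all stages operate in parallel on different elements "in flight" rather than materialising intermediate results.

```python
# Toy illustration of a streaming dataflow pipeline with chained generators.

def stage(f, stream):
    """One pipeline stage: apply f to each element as it streams through."""
    for x in stream:
        yield f(x)

def pipeline(source, *fs):
    """Lay out the stages 'in space', one after another, over a source stream."""
    s = iter(source)
    for f in fs:
        s = stage(f, s)
    return s

# A 3-stage pipeline over a stream of inputs 0..4; each element is pushed
# through all stages without buffering the whole stream between them.
out = list(pipeline(range(5),
                    lambda x: x + 1,     # stage 1
                    lambda x: x * x,     # stage 2
                    lambda x: x - 3))    # stage 3
# 0..4 -> +1 -> squared -> -3 gives [-2, 1, 6, 13, 22]
```

In hardware, each such stage would be a physical block of the reconfigurable substrate, and all stages genuinely run concurrently.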
Bio of speaker:
Prof. Dr. Georgi N. Gaydadjiev is director of Maxeler IoT-Labs BV in Delft and VP of Dataflow Software Engineering at Maxeler Ltd in London. He is also a professor at TU Delft and an honorary visiting professor at Imperial College London. Previously he held the Chair in Computer Systems Engineering at Chalmers in Sweden.
Generic Programming and Parallel Patterns
Prof. Jose Daniel Garcia Sanchez, University Carlos III of Madrid, Spain.
Friday, July 5, 9:15, room Ada Lovelace.
Generic programming provides a way to abstract common programming patterns and algorithms, allowing application programmers to provide the domain specific details. For decades, this has been one of the key principles of the Standard Template Library (STL) in C++. The very same principle can be applied to parallel skeletons and patterns, where an additional configuration parameter is the execution model to be used. In this talk, I will consider different aspects of integrating generic programming and parallel programming, also providing an overview of the current status in the latest ISO C++ standard. I will also present an alternative approach based on the GrPPI library.
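The principle described in the abstract can be sketched as follows (a hedged Python illustration of the idea, not the actual C++ GrPPI API): a generic parallel pattern takes the execution model as just another parameter, so the same algorithm can run sequentially or in parallel without changing the application code.

```python
# Sketch: the 'map' parallel pattern, generic in its execution model.

from concurrent.futures import ThreadPoolExecutor

def seq_execution(f, xs):
    """Sequential execution model."""
    return [f(x) for x in xs]

def thread_execution(f, xs, workers=4):
    """Thread-pool execution model."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(f, xs))

def pattern_map(execution, f, xs):
    """Generic map pattern: the execution model is a plain parameter."""
    return execution(f, xs)

data = [1, 2, 3, 4]
seq = pattern_map(seq_execution, lambda x: x * x, data)
par = pattern_map(thread_execution, lambda x: x * x, data)
# both execution models yield [1, 4, 9, 16]
```

In C++ the same role is played by execution policies passed to standard algorithms, or by the execution-model template parameter of GrPPI patterns.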
Prof. Jose Daniel Garcia Sanchez is an Associate Professor in Computer Architecture and Technology at the Department of Computer Science and Engineering of University Carlos III of Madrid. He holds a PhD in Informatics Engineering from University Carlos III of Madrid and a Bachelor's degree in Computer Science from Madrid Technical University. Before joining the university he worked on projects for companies such as Telefonica, British Telecom, FCC, Siemens, and ING Bank. Since 2008 he has been the Spanish representative in committee ISO/IEC JTC1/SC22/WG21, in charge of standardizing the C++ programming language. At the national level, he is the president of subcommittee CTN71/SC22 (programming languages, their environments and system software interfaces) and of CTN1/SC22/GT21 (C++ language). Since 2008 he has actively contributed to the wording of all international standards related to the C++ programming language. He has co-authored more than 70 papers in international journals and conferences. Additionally, he has participated in 20 competitively funded projects and 15 research and technology transfer contracts with companies. His research activity is framed within the Computer Architecture, Communications and Systems research group, where he works in the research line of Programming Models for Application Improvement. His main goal is to make software developers' lives easier by improving the balance between performance and maintainability, with a special focus on multi-core processors and parallel heterogeneous computing systems.
Determining Minimum Hash Width for Hash Chains
Prof. Dr. Jörg Keller, FernUniversität in Hagen, Germany.
Wednesday, May 22, 9:00, room John von Neumann.
Cryptographic hash functions are used in authentication, and their repeated application in hash chains is used in communication protocols. In embedded devices, the width of hash values and the associated effort to evaluate the hash function are crucial; hence the hash values should be as short as possible, but still long enough to guarantee the required level of security. We present a new proof of a known result by Flajolet and Odlyzko (EUROCRYPT 1989), using only elementary combinatorial and probabilistic arguments. Using this result, we derive a bound on the expected number of hash values still reachable after a given number of steps in the hash chain, so that given any two of the three parameters hash-chain length, width of the hash value, and security level, the remaining parameter can be computed. Furthermore, we illustrate how to "refresh" a hash chain to increase the number of reachable hash values if the initial seed is long enough. Based on this, we present a scheme that allows a reduced width of hash values, and thus reduced energy consumption in the device, for a hash chain of similar length and similar security level. We illustrate our findings with experiments.
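The shrinking image discussed in the abstract is easy to observe empirically (a small Python sketch with a hypothetical toy width, not the talk's actual scheme): iterating a hash truncated to w bits behaves like a random mapping, so the number of values still reachable after k chain steps decreases as k grows.

```python
# Empirical sketch: reachable hash values after k steps of a w-bit hash chain.

import hashlib

def h(x, w):
    """SHA-256 truncated to a w-bit hash value."""
    d = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") % (1 << w)

def reachable_after(k, w):
    """Distinct values reachable by k-fold iteration from all 2^w starting points."""
    vals = set(range(1 << w))
    for _ in range(k):
        vals = {h(v, w) for v in vals}
    return len(vals)

w = 10                       # 10-bit toy width; real hash chains are far wider
sizes = [reachable_after(k, w) for k in (0, 1, 2, 4, 8)]
# sizes starts at 2**w = 1024 and is non-increasing in k
```

Because the image of a set under a function can never grow, the sequence is guaranteed non-increasing; the Flajolet-Odlyzko result quantifies how fast it shrinks for a random mapping.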
Bio of speaker:
Jörg Keller is a professor at the Faculty of Mathematics and Computer Science of FernUniversität in Hagen, Germany, where he holds the Chair of Parallelism and VLSI. His research interests include internet security, fault-tolerant computing, VLSI design, and parallel computing.
Model-driven dependability forecasting of software systems
Prof. Simona Bernardi, University of Zaragoza, Spain.
Tuesday, March 5, 11:00, room John von Neumann.
In this talk I introduce the model-driven approach to modelling and analysing the dependability of software systems early in the life-cycle, which considers three types of models:
- 1) software models, used for architecture/design specification and represented with a general-purpose software modelling language, namely the Unified Modeling Language (UML);
- 2) software models with "dependability annotations", obtained from software models by adding information related to dependability properties;
- 3) formal models, used for dependability analysis; such models have the advantage of being supported by existing analysis methods.
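The three-model chain above can be illustrated with a deliberately tiny sketch (hypothetical Python names, not the UML/annotation tooling itself): a software model annotated with dependability figures is transformed into a formal model that an existing analysis method can solve.

```python
# Toy sketch of the model chain: annotated software model -> formal analysis.

# 1-2) Software model with dependability annotations: architecture components,
#      each annotated with its reliability (probability of correct service).
annotated_model = {"web": 0.99, "app": 0.995, "db": 0.999}

# 3) Formal model + analysis: for a series composition (all components needed),
#    system reliability is the product of the component reliabilities.
def series_reliability(model):
    r = 1.0
    for rel in model.values():
        r *= rel
    return r

r_sys = series_reliability(annotated_model)   # ~ 0.984
```

Real model-driven tool chains perform the same kind of transformation, but from UML models with standardised annotation profiles into formal models such as Petri nets, solved by existing analysers.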
Bio of speaker:
Simona Bernardi has been an Assistant Professor in the Department of Computer Science and Systems Engineering at the University of Zaragoza, Spain, since 2017. She received an M.S. degree in Mathematics and a Ph.D. degree in Computer Science, in 1997 and 2003 respectively, both from the University of Torino, Italy. Previously, she held a researcher position at the University of Torino from 2004 to 2010, and a professor position at the Centro Universitario de la Defensa in the General Military Academy of Zaragoza from 2010 to 2017. She has been a visiting researcher at Carleton University (ON, Canada), the University of L'Aquila, and the University "Federico II" of Naples, Italy. Her research interests are in the area of software engineering and process mining, in particular model-driven engineering; verification and validation of performance, dependability, and survivability software requirements; and formal methods for the modelling and analysis of software systems. She is a co-author of the book "Model-Driven Dependability Assessment of Software Systems", published by Springer.
Previous SaS Seminars
Page responsible: Christoph Kessler
Last updated: 2019-11-04