SaS Seminars 2012
Software and Systems Research Seminar Series
Security testing of fast time-to-market code
Dr. Mariano Ceccato, FBK Trento, Italy
Thursday 22 November 2012, 13:15, room Alan Turing
Given the fast time-to-market model of web applications, only a short
time is often devoted to the assessment of code quality. While crashes
and faults are detrimental to the user experience, subtle bugs that
involve security features endanger the security and the
confidentiality of data. Our research objective is to develop novel
approaches for automating the security testing of web applications, to
guarantee high security while preserving a fast development model.
The problem of identifying those input values that expose these vulnerabilities can be formulated as a search problem. Search results represent test cases that software developers can use to understand and fix security problems. A security oracle is required to evaluate whether a security defect has been solved by a valid fix. The oracle is a classifier able to decide whether the application passes the security tests. Diverse approaches will be presented for generating appropriate inputs, i.e. the security test cases, and for implementing the security oracle.
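The search formulation above can be illustrated with a minimal sketch. Everything here is a hypothetical toy (the buggy sanitizer, the fitness function, and the oracle are invented for illustration, not taken from the talk): a hill-climbing search mutates input strings, a fitness function guides it toward attack material, and a security oracle classifies when a vulnerability is exposed.

```python
import random
import string

# Toy "application under test" (hypothetical example): a sanitizer
# that strips single quotes but forgets the SQL comment sequence "--".
def sanitize(user_input):
    return user_input.replace("'", "")

def security_oracle(user_input):
    """The security oracle: a classifier deciding whether an input
    exposes the defect, i.e. whether an SQL comment sequence
    survives sanitization."""
    return "--" in sanitize(user_input)

def fitness(candidate):
    """Search guidance: count how much attack material ('-' chars)
    survives sanitization."""
    return sanitize(candidate).count("-")

def search_security_test(max_iters=2000, seed=1):
    """Hill-climbing search over input strings: mutate the best
    candidate so far, keep strict fitness improvements, and stop as
    soon as the oracle classifies a candidate as insecure. The
    returned string is a concrete security test case."""
    rng = random.Random(seed)
    alphabet = string.ascii_letters + "-'; "
    best = "a"
    for _ in range(max_iters):
        # Insert a random character at a random position.
        pos = rng.randrange(len(best) + 1)
        cand = best[:pos] + rng.choice(alphabet) + best[pos:]
        if security_oracle(cand):
            return cand
        if fitness(cand) > fitness(best):
            best = cand
    return None

test_case = search_security_test()
```

The returned test case is exactly the artifact described in the abstract: a concrete input a developer can replay to understand the defect, with the oracle deciding when a candidate fix is valid.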
Mariano Ceccato is a tenured researcher at FBK (Fondazione Bruno Kessler)
in Trento, Italy. He received his master's degree in Software Engineering
from the University of Padova, Italy, in 2003 and his PhD in Computer
Science from the University of Trento in 2006 under the supervision of
Paolo Tonella, with the thesis "Migrating Object Oriented code to Aspect
Oriented Programming".
His research interests include security testing, migration of legacy systems, and empirical studies. He was program co-chair of the 12th IEEE Working Conference on Source Code Analysis and Manipulation (SCAM 2012), held in Riva del Garda, Italy.
Discovering Parallelism with an Architecture Independent Abstraction
(and Supporting it Architecturally on CMPs)
Dr. Martti Forsell, VTT Oulu, Finland
Thursday 15 November 2012, 15:15, room Alan Turing
The essence of parallel computing is to divide the functionality at hand
into a number of subtasks that can be executed in parallel, and then to
assemble the solution of the original problem from the subtask results.
An efficient architectural realization of this requires streamlined execution
of computational threads, cost-efficient synchronization of subtasks,
and a scalable mechanism for hiding communication latency.
As current chip multiprocessor (CMP) architectures are moving in the
direction of heterogeneous collections of computing engines optimized
more and more for certain application domains, these efficiency
requirements are increasingly missed whenever the application does not
match the architecture. At the same time, parallelism discovery and
efficient mapping (and thus parallel programming in general) are becoming
increasingly challenging, because architectures provide their best
performance (and efficiency) only for a limited set of computational patterns.
In this presentation we show, against common belief, that these problems are architectural rather than related to programming models and tools. For current CMP architectures, our solution is to use an architecture-independent abstraction that lets the programmer focus on the intrinsic parallelism of the computational problem, without the burden of taking architectural optimizations into account, and then to manually refine/transform/optimize the solution for the target architectures. We also outline our REPLICA architecture, which is able to execute the above architecture-independent abstraction natively and is therefore free of these problems.
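An architecture-independent abstraction of the kind referred to here is the PRAM model (which REPLICA builds on, per the speaker's bio below): threads execute in synchronous lockstep over a shared memory, so the programmer reasons only about the intrinsic parallelism, not about races or the memory hierarchy. The following is an illustrative simulation of that model (not REPLICA code), computing all prefix sums in a logarithmic number of synchronous steps (the Hillis-Steele scan):

```python
def pram_prefix_sums(values):
    """Simulate a synchronous PRAM computing all prefix sums in
    O(log n) parallel supersteps. Each 'thread' tid reads from the
    shared array of the previous step and writes to a fresh copy,
    mimicking the lockstep execution the PRAM model guarantees
    (no data races to reason about)."""
    n = len(values)
    shared = list(values)
    step = 1
    while step < n:
        # One synchronous superstep: all n threads run conceptually
        # in parallel, reading old state and writing new state.
        nxt = list(shared)
        for tid in range(n):
            if tid >= step:
                nxt[tid] = shared[tid] + shared[tid - step]
        shared = nxt          # implicit barrier between supersteps
        step *= 2
    return shared
```

The algorithm is expressed purely in terms of the model; mapping it efficiently onto a given CMP (or executing it natively, as REPLICA aims to) is then the architecture's job, not the programmer's.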
Martti Forsell is a Chief Research Scientist of Computer Architecture
and Parallel Computing at VTT, Oulu, Finland, as well as an Adjunct
Professor in the Department of Electrical and Information Engineering
at the University of Oulu. He received M.Sc., Ph.Lic., and Ph.D. degrees
in computer science from the University of Joensuu, Finland in
1991, 1994, and 1997, respectively. Prior to joining VTT, he worked
as a lecturer, researcher, and acting professor in the Department of
Computer Science, University of Joensuu.
Dr. Forsell has a long background in research on parallel and sequential computer architecture and on parallel computing. He is the inventor of the first scalable high-performance CMP architecture equipped with an easy-to-use, general-purpose parallel application development scheme (consisting of a computational model, a programming language, an experimental optimizing compiler, and simulation tools) exploiting the PRAM model. He has also contributed a number of other TLP and ILP architectures, architectural techniques, models of computation, and development methodologies and tools for general-purpose computing.
On the application-specific front, he was the main architect of the Silicon Hive CSP 2500 processor and its programming methodology, aimed at low-power digital front-end radio signal processing. He co-organized the Highly Parallel Processing on a Chip (HPPC) workshop series from 2007 to 2011.
Currently he leads a large VTT-funded project, REPLICA, aiming to remove the performance and programmability limitations of chip multiprocessor architectures with the help of a strong model of computation.
Some new aspects of software modeling
Dr. Pär Emanuelson, Ericsson AB and IDA/PELAB
Thursday 1 November 2012, 15:15, room Alan Turing
Abstract: Since the middle of the nineties I have been working with several aspects of software modeling. I started out using models as a high-level language to generate code from. UML seemed to be a good base for such languages, but its lack of well-defined semantics and its very general usage have made this difficult. It was not until ten years later that we got an action language, and UML still has many "semantic variation points" that have been an obstacle to understanding and to model portability. This talk will focus on two aspects of modeling that I have worked with over the last five years. The first is model-based testing, which means that, instead of defining dozens or hundreds of test cases, you write only one, very general, test case. You can then use this test case to generate as many concrete test cases as you would like and have time to execute, and you can do this any number of times. I will talk about experiences from two very different model-based testing tools: one is input oriented and the other is state-machine oriented. The second area I will talk about is diff/merge for models. In large industrial projects, with hundreds of developers working simultaneously, it cannot be avoided that developers attempt to change the same model unit at the same time. The changes made by several developers then have to be merged, a problem that has been greatly underestimated. The lack of good merge tools is considered one of the greatest threats to the industrial use of modeling.
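The "one general test case" idea of model-based testing can be sketched as follows. The system under test, the state-machine model, and all numbers are hypothetical examples invented for illustration; the point is only the structure: a single parameterized test case, checked against a model, from which arbitrarily many concrete test cases are generated by varying a seed.

```python
import random

# System under test (hypothetical example): a bounded stack.
class BoundedStack:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
    def push(self, x):
        if len(self.items) < self.capacity:
            self.items.append(x)
    def pop(self):
        return self.items.pop() if self.items else None

def general_test_case(seed, n_ops=50, capacity=5):
    """ONE general test case: a random sequence of operations checked
    step by step against a simple state-machine model. From it, any
    number of concrete test cases can be generated by varying seed."""
    rng = random.Random(seed)
    sut = BoundedStack(capacity)
    model = []                       # the model: a plain Python list
    for _ in range(n_ops):
        if rng.random() < 0.5:
            x = rng.randrange(100)
            sut.push(x)
            if len(model) < capacity:
                model.append(x)      # model mirrors the capacity rule
        else:
            got = sut.pop()
            want = model.pop() if model else None
            assert got == want, f"pop mismatch: {got} != {want}"
    assert sut.items == model
    return True

# Generate and run as many concrete test cases as time allows:
results = [general_test_case(seed) for seed in range(100)]
```

Varying `n_ops` and the seed range trades execution time against coverage, exactly the "as many as you would like and have time to execute" property described above.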
Speaker's profile: Pär Emanuelsson holds a PhD in computer science from Linköping University (1980). Pär has worked in industry for the major part of his career, but has in several ways stayed connected to universities and research. Methods for modeling of software started to appear in the nineties, and Pär made one of the first applications in Sweden. He has worked on several aspects of modeling, such as model-based testing, code generation, modeling language design, and model merging. He has also done research on methods for improving software quality, such as static analysis and fault prediction. For the past six months, Pär has shared his working time between Ericsson, as a researcher, and IDA, as an adjunct professor.
Broadcast-free Data Collection for Low-Power Sensor Networks
Dr. Daniele Puccinelli, University of Applied Sciences of Southern Switzerland (SUPSI), Manno, Switzerland
Wednesday 20 June 2012, 11:00 (sharp), room Alan Turing
Asynchronous low-power listening techniques reduce the energy footprint of radio communication by enforcing link layer duty cycling.
At the same time, these techniques make broadcast traffic significantly more expensive than unicast traffic. Because broadcast is a key network primitive and is used widely in various protocols, several techniques have recently been proposed to reduce the amount of broadcast activity by merging broadcasts from different protocols. In this talk we focus on collection protocols and investigate the more extreme approach of eliminating broadcast completely.
To this end, we design, implement, and evaluate a Broadcast-Free Collection Protocol, BFC. Compared to the Collection Tree Protocol, the de facto standard for data collection, BFC achieves double-digit percentage improvements in duty cycle. The specific benefit to an individual node depends on the relative cost of its unicast activity; we show that the nodes that benefit the most are the sink's neighbors, which are crucial for extending network lifetime. Eliminating broadcast also brings several other advantages, including extra flexibility in link-layer calibration and energy savings in the presence of poor connectivity.
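Why broadcast is so expensive under asynchronous low-power listening can be seen with a back-of-the-envelope cost model. The numbers and the factor-of-two figure below are illustrative assumptions, not results from the talk: a unicast sender can stop retransmitting as soon as the intended receiver wakes up and acknowledges, whereas a broadcast has no single acknowledgment that can cut the transmission short, so the sender must keep transmitting for the full wakeup interval to reach every neighbor regardless of its wakeup phase.

```python
def lpl_tx_cost_ms(wakeup_interval_ms, is_broadcast):
    """Toy transmission-cost model for asynchronous low-power
    listening (LPL), assumed values for illustration only:
    - unicast: the sender repeats the packet until the receiver
      wakes and acknowledges, i.e. half the wakeup interval on
      average;
    - broadcast: the sender must transmit for the FULL wakeup
      interval, since no single ack can stop it early."""
    return wakeup_interval_ms if is_broadcast else wakeup_interval_ms / 2

interval = 512  # ms, an assumed LPL wakeup interval
unicast = lpl_tx_cost_ms(interval, is_broadcast=False)
broadcast = lpl_tx_cost_ms(interval, is_broadcast=True)
ratio = broadcast / unicast   # broadcast costs 2x unicast here
```

Under this simplified model every broadcast costs roughly twice an average unicast in radio-on time, which is the asymmetry that motivates eliminating broadcast from the collection protocol altogether.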
Distributed Solutions for Energy-Aware Routing
Dr. Aruna Bianzino
Thursday 24 May 2012, 10:00 (sharp), room Alan Turing
In the context of Green Networking research, different approaches have been proposed in recent years to reduce the gap between the capacity offered by networks and the resources consumed by users. A promising technique in this direction is known as resource consolidation: it consists of concentrating the workload of an infrastructure on a reduced set of devices while switching off the others.
In contrast to previous work, we present novel resource consolidation algorithms that act online in a fully distributed fashion. The presented solutions assume neither knowledge of the current traffic matrix and routing paths nor the presence of a central control unit. Moreover, they do not require explicit synchronization among nodes. These relaxed assumptions widen the applicability of resource consolidation to different network scenarios. Nonetheless, results obtained on realistic case studies show that the proposed algorithms achieve performance comparable to existing centralized solutions, both in terms of energy savings and of QoS.
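The flavor of a distributed consolidation rule can be sketched with a toy that is explicitly NOT the authors' algorithm: each node, using only local information about its neighbors, hands its load to neighbors with spare capacity and switches itself off. Topology, loads, and capacities below are invented for illustration.

```python
# Toy distributed consolidation sketch (not the talk's algorithm):
# a node switches off when its active neighbors' spare capacity can
# absorb its load, using only locally available information.

def consolidate(neighbors, load, capacity):
    """neighbors: node -> adjacent nodes; load/capacity: per node.
    Nodes decide one at a time (simulating asynchronous local
    decisions); returns the set of nodes that stay active."""
    active = set(neighbors)
    current_load = dict(load)
    for node in sorted(neighbors):          # arbitrary decision order
        others = [n for n in neighbors[node] if n in active and n != node]
        spare = sum(capacity[n] - current_load[n] for n in others)
        if others and spare >= current_load[node] and len(active) > 1:
            # Hand the load to neighbors greedily, then switch off.
            remaining = current_load[node]
            for n in others:
                take = min(capacity[n] - current_load[n], remaining)
                current_load[n] += take
                remaining -= take
            current_load[node] = 0
            active.discard(node)
    return active

# Example: a lightly loaded triangle consolidates onto a single node.
nbrs = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
active = consolidate(nbrs, load={"a": 2, "b": 3, "c": 3},
                     capacity={"a": 10, "b": 10, "c": 10})
```

Note how each decision uses only the node's own neighborhood, with no traffic matrix, no central controller, and no global synchronization, which is the property the abstract emphasizes.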
Short bio: Aruna Bianzino is a postdoctoral researcher at Politecnico di Torino, and obtained his PhD in 2012 from Telecom ParisTech (ENST) in the area of Green Networking. http://perso.telecom-paristech.fr/~bianzino/pmwiki/pmwiki.php
Parallel computing applications for bioinformatics and environmental protection
Prof. Dr. Milena Lazarova, Technical University of Sofia, Bulgaria
Monday 14 May 2012, 10:45, room Donald Knuth
The rapid development of high-performance computer systems and the increasing availability of multicore clusters and supercomputers make it possible to solve problems in many fields that require vast amounts of computational power. This talk will present some applications that use parallel computing to speed up optimization problems, computational problems in bioinformatics, and environmental protection applications.
Milena Lazarova is an Associate Professor of Computer Systems and Technologies at the Faculty of Computer Systems and Control, Department "Computer Systems", Technical University of Sofia, Bulgaria. Her research interests are in the fields of parallel architectures and parallel programming, image processing, and pattern recognition.
Architecture and Compiler Techniques to Improve Processor Energy Efficiency
Prof. Dr. David Whalley, Florida State University, USA
Tuesday 8 May 2012, 13:15, room John von Neumann
A new generation of mobile applications requires reduced energy consumption without sacrificing application performance. We propose three separate techniques to address this challenge. The instruction register file (IRF) provides reduced energy consumption and decreased code size, with little effect on execution time, by accessing frequently occurring instructions in registers. The lookahead instruction fetch engine (LIFE) provides lower energy consumption with no execution-time penalty by making guarantees about instruction fetch behavior. The statically pipelined processor reduces energy consumption by providing simpler hardware, which requires the control for each portion of the processor to be explicitly represented in each instruction. We will present an overview of these techniques and evaluate their benefits.
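The intuition behind the IRF can be quantified with a toy model. The placement policy (statically keep the most frequent instructions resident) and the example trace below are illustrative assumptions, not details from the talk: because dynamic instruction streams are dominated by a few hot instructions, even a small register file captures most fetches.

```python
from collections import Counter

def irf_hit_rate(instruction_trace, irf_size=32):
    """Toy model of an instruction register file (IRF): assume the
    compiler statically places the irf_size most frequently executed
    instructions in the IRF, so fetches of those instructions avoid
    the costlier instruction-cache access. Policy and numbers are
    illustrative only."""
    freq = Counter(instruction_trace)
    resident = {insn for insn, _ in freq.most_common(irf_size)}
    hits = sum(1 for insn in instruction_trace if insn in resident)
    return hits / len(instruction_trace)

# A skewed trace: a small hot loop dominates the dynamic count,
# followed by 60 instructions that each execute once.
trace = (["load r1", "add r1,r2", "branch L0"] * 100
         + [f"insn{i}" for i in range(60)])
rate = irf_hit_rate(trace, irf_size=4)   # most fetches hit the IRF
```

Even with only four resident entries, the hot loop alone pushes the hit rate above 80% in this toy trace, which is why a small IRF can pay off in both energy and code size.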
David Whalley received his PhD in CS from the University of Virginia in 1990. He is the E.P. Miles professor in the Computer Science Department at Florida State University and a Distinguished Member of the ACM. His research interests include low-level compiler optimizations, tools for supporting the development and maintenance of compilers, program performance evaluation tools, predicting execution time, computer architecture, and embedded systems. He is currently spending a sabbatical at Chalmers University of Technology with the support of a Fulbright Award. More information about his background and research can be found on his home page, www.cs.fsu.edu/~whalley.
Prof. Dr. Uwe Assmann, TU Dresden, Germany
Wednesday 2 May 2012, 13:15, room Alan Turing, IDA
We are going from fly-by-wire to drive-by-wire to life-by-wire. Many aspects of our life are already controlled by software and electronics, and many more will be in the future. In this talk, we investigate the technical requirements for reliable cyber-physical systems in the future Internet of Things (IoT). We show that CPS must be self-adaptive to changing requirements while nevertheless offering full reliability and safety. This can be mastered with MOO architectures based on multi-objective optimization. We also look at the market mechanisms and software platforms for life-by-wire and the resulting software ecosystems.
Uwe Assmann holds the Chair of Software Engineering at the Technische Universität Dresden. He obtained a PhD in compiler optimization and a habilitation on "invasive software composition" (ISC), a composition technology for code fragments enabling flexible software reuse. ISC unifies generic, connector-, view-, and aspect-based programming for arbitrary program or modeling languages. The technology is demonstrated by the Reuseware environment, a meta-environment for the generation of software tools (http://www.reuseware.org).
Currently, in the Sonderforschungsbereich "Highly-Adaptive Energy-Efficient Computing" (HAEC) at TU Dresden, Prof. Assmann's group applies ISC to energy-aware autotuning (EAT), a technique for dynamically recomposing code to adapt it to the required quality of service, to the context of the system, and to the hardware platform. EAT is based on multi-objective optimization (MOO) and always delivers a system configuration that is optimal with respect to the context parameters. It is a promising technology also for optimizing other qualities of future cyber-physical systems (CPS).
To model or not to model - that is the question
Prof. Dr. Tony Gorschek, Blekinge Institute of Technology
Thursday 12 April 2012, 13:15, room John von Neumann
Since the mid-nineties and the introduction of UML, modeling has held the promise of improving quality, enabling communication, and speeding up development. Since then, UML has become the de facto standard for conceptual analysis in the development of OO systems. Many empirical studies, investigations, and experiments have been conducted and reported over the years, covering how modeling is used and testing new concepts; however, very few studies, if any, have posed and answered the central question of whether UML is used at all. During 2009-2010 the largest empirical study of its kind, yielding over 4800 respondents, was conducted to investigate OO concepts, including the use of modeling.
Dr. Tony Gorschek is a Professor of Software Engineering at Blekinge Institute of Technology (BTH). He has over ten years of industrial experience as a senior executive consultant and engineer, and also as chief architect and product manager. Currently he manages his own industry consultancy company, works as a CTO, and serves on several boards of companies developing cutting-edge technology and products. His research interests include requirements engineering, technology and product management, process assessment and improvement, quality assurance, and practical innovation. Dr. Gorschek bases his research on challenges identified in industry, then develops solutions in collaboration with industry practitioners, and ultimately validates and tests the solutions in a real industrial setting. For more information/publications/contact: www.gorschek.com
Communication challenges in atmospheric sensing with unmanned aircraft
Prof. Dr. Brown, University of Colorado at Boulder, USA
Tuesday 14 February 2012, 15:15, room John von Neumann
This talk describes the networking and communication challenges of airborne sensors in small (10 kg) unmanned aircraft. In particular, we discuss the roles of flight dynamics, interference, limited spectrum, and flight safety requirements. Mobility is a key feature of these networks and can be exploited to improve network performance. We also describe techniques for using so-called cognitive radios to provide sufficient spectrum for unmanned aircraft operations. Experiments with atmospheric sensing in a variety of environments will be described, including a recent "tornado chasing" campaign to understand the origins of tornadoes.
Professor Brown investigates how to use adaptation in complex communication systems. His group has developed new protocols for wireless ad hoc networks and he now heads a project to test these protocols in a network of small unmanned airplanes at University of Colorado at Boulder. He has also studied cellular system design and quality of service in packet networks. Before joining the University of Colorado in 1995, he developed algorithms and architectures for hardware neural networks at Bell Communications Research and the Jet Propulsion Laboratory. He is the recipient of the NSF CAREER Award and the Colorado Junior Faculty Development Award.
After the SaS seminar there will be a short break and then another presentation (RTSLAB-seminar) by the same speaker:
Random cellular deployments for analysis of multi-tier mobile radio network performance
This talk addresses the carrier-to-interference ratio (CIR) and carrier-to-interference-plus-noise ratio (CINR) performance at a mobile station (MS) operating within multiple tiers of co-channel wireless networks. In each tier the base station distribution is given by a homogeneous Poisson point process. We present: (1) semi-analytical expressions for the tail probabilities of CIR and CINR; (2) a closed-form expression for the tail probability of CIR in the range [1, infinity); (3) a closed-form expression for the tail probability of an approximation to CINR in the entire range [0, infinity); (4) a lookup-table-based approach for obtaining the tail probability of CINR; and (5) a study of the effect of shadow fading and of ideal sectorized antennas on the CIR and CINR. Based on these results, it is shown that, in a practical mobile radio system, the installation of additional wireless networks (microcells, picocells, and femtocells) on top of the already existing macrocell network will always improve the CINR performance at the MS.
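The quantity studied here can also be estimated numerically, which is a useful sanity check against closed-form results. The sketch below is a single-tier Monte Carlo estimate of the CIR tail probability under deliberately simplified assumptions of mine (path loss only, no fading, no sectorization, illustrative density and path-loss exponent), not the talk's model.

```python
import math
import random

def poisson(rng, lam):
    """Draw a Poisson variate by Knuth's multiplication method
    (adequate for moderate lam)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def cir_tail_probability(threshold, density=3e-6, radius=2000.0,
                         alpha=4.0, trials=2000, seed=7):
    """Monte Carlo estimate of P(CIR > threshold) at a mobile station
    at the origin: base stations form a homogeneous Poisson point
    process of the given density (per m^2) in a disk of the given
    radius; the MS is served by the strongest (nearest) station and
    all others interfere; power-law path loss r^-alpha, no fading.
    All parameter values here are illustrative."""
    rng = random.Random(seed)
    lam = density * math.pi * radius ** 2   # mean number of stations
    exceed = 0
    for _ in range(trials):
        n = max(1, poisson(rng, lam))       # at least one serving BS
        # Conditioned on the count, PPP points are i.i.d. uniform in
        # the disk, so distances follow r = R * sqrt(U).
        dists = [radius * math.sqrt(rng.random()) for _ in range(n)]
        powers = sorted((d ** -alpha for d in dists), reverse=True)
        carrier, interference = powers[0], sum(powers[1:])
        if interference == 0 or carrier > threshold * interference:
            exceed += 1
    return exceed / trials

p = cir_tail_probability(threshold=1.0)
```

By construction the estimate is a probability and is monotonically decreasing in the threshold, the two properties any closed-form tail expression must also satisfy.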
Research in Partial Evaluation
Prof. Dr. Anders Haraldsson , IDA, Linköpings universitet
Friday 3 February 2012, 13:15, room Alan Turing
I will soon retire, so it is time for me to once again give a seminar on my own and others' work, where I will discuss:
The concept of Partial Evaluation (PE): mixed computation, specialization of programs, and optimization of programs. The relation between interpretation and compilation, and self-applicable partial evaluators - Futamura's projections.
The history of PE, especially our own work, which started in 1970 in Uppsala and ended around 1982 here in Linköping. It resulted in three PhD theses: my own in 1977, Pär Emanuelson's in 1980, and Jan Komorowski's in 1981 (partial deduction), and also formed a part of Ulf Nilsson's PhD in 1991.
The motivation for why we used it, and the implementation of a partial evaluator, REDFUN (REDuce FUNarg, or in more modern terminology: reduce closures).
Continued work, especially at DIKU, Copenhagen, by Neil Jones and his colleagues, including the self-applicable partial evaluator.
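The specialization idea above can be shown with the textbook example: specializing a general power function with respect to a known (static) exponent, leaving a residual program over the unknown (dynamic) base. The tiny specializer below is an illustrative sketch, not REDFUN; it unfolds the loop at specialization time, so the residual code contains only multiplications.

```python
def power(x, n):
    """General program: x**n by repeated squaring."""
    result = 1
    while n > 0:
        if n % 2 == 1:
            result *= x
        x *= x
        n //= 2
    return result

def specialize_power(n):
    """A tiny partial evaluator for `power` with static input n:
    every loop test and parity test is decided at specialization
    time, leaving straight-line residual Python code over the
    dynamic input x. Returns the residual source and the compiled
    residual function (dead squarings are left in, as unfolding
    would before dead-code elimination)."""
    body, terms, var = [], [], "x"
    while n > 0:
        if n % 2 == 1:
            terms.append(var)
        new = var + "2"
        body.append(f"    {new} = {var} * {var}")
        var = new
        n //= 2
    expr = " * ".join(terms) if terms else "1"
    src = "def power_n(x):\n" + "\n".join(body) + f"\n    return {expr}\n"
    env = {}
    exec(src, env)
    return src, env["power_n"]

src, power5 = specialize_power(5)   # residual: x * (x*x)*(x*x)
```

The residual `power_n` contains no loop and no test on n: all static computation has been done once, at specialization time, which is exactly the compilation-from-interpretation effect the Futamura projections generalize.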
The seminar will be a discussion with you all, and I would like to see connections from this work to what is done today, especially optimization in compilers.
Anders Haraldsson is a professor at the Department of Computer and Information Science, Linköping University. Short CV: Datalogilaboratoriet, Uppsala University, 1969-1975; MAI and IDA since 1976; PhD in 1977; member of the AIICS division, and founder of PELAB in the early 80s; director of undergraduate studies during the 80s; head of department during the 90s; head of the program board for all computer and media curricula at LiTH during the 00s.
To Harness The Long Tail Online, Location Does Matter As Does Time
Prof. Dr. Chetan Kumar, California State University San Marcos, USA
Wednesday 25 January 2012, 15:15, room Alan Turing
Abstract: There has been tremendous growth in the amount and range of information available on the Internet. Users' requests for online information can be captured by a long-tail model: a few popular websites enjoy a large number of visits, while the majority of the rest are requested less frequently. In this study we investigate this phenomenon using real-world data from ten proxy servers in four US time zones. We demonstrate that both users' physical location and time of access affect the heterogeneity of website requests. The effect may partially be explained by differences in demographic characteristics across locations and by diverse user browsing behavior on weekdays and weekends. These results can be used to design better online ad pricing strategies, affiliate advertising models, and Internet caching algorithms that are sensitive to differences in user location and time of access. This may be of interest to online service providers such as Google and Facebook, given the increased interest in localized and customized online delivery.
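The long-tail request pattern described above is commonly modeled with a Zipf-like popularity distribution. The sketch below is illustrative only (the exponent s = 1.0 and the site/request counts are assumptions, not fitted to the study's proxy data): it draws synthetic requests and shows that a handful of top-ranked sites capture a disproportionate share, with the rest forming the long tail relevant to caching.

```python
import bisect
import itertools
import random

def zipf_requests(n_sites, n_requests, s=1.0, seed=42):
    """Draw website requests (as 0-based popularity ranks) from a
    Zipf distribution: the site ranked k is requested with
    probability proportional to 1/k**s. The exponent s = 1.0 is an
    illustrative assumption."""
    rng = random.Random(seed)
    weights = [1.0 / (k ** s) for k in range(1, n_sites + 1)]
    cum = list(itertools.accumulate(weights))
    total = cum[-1]
    # Inverse-CDF sampling via binary search on cumulative weights.
    return [bisect.bisect_left(cum, rng.random() * total)
            for _ in range(n_requests)]

reqs = zipf_requests(n_sites=1000, n_requests=20000)
top10_share = sum(1 for r in reqs if r < 10) / len(reqs)
# The 10 most popular of 1000 sites capture a large share of all
# requests; the remaining ~990 sites form the long tail.
```

A cache sized for just the head of such a distribution already serves a large fraction of traffic, which is why location- and time-sensitive popularity shifts matter for caching algorithms.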
Speaker's bio: Chetan Kumar is an Associate Professor of Information Systems and Operations Management at California State University San Marcos. He received his PhD from Purdue University. His research interests include managing computer networks, electronic commerce, web analytics, and green IT. He has published articles in refereed journals such as Decision Support Systems and Electronic Commerce Research and Applications, and in books such as Encyclopedia of E-Business Development and Management in the Global Economy. He has presented his research at conferences such as the Institute for Operations Research and the Management Sciences Annual Meeting, the Workshop on E-Business, the Workshop on Information Systems and Economics, and the International Conference on Information Systems Doctoral Consortium. Prior to his PhD, he worked for Reebok International, where he managed the supply chain for the Asia and Africa regions. Earlier he received an MBA from the Indian Institute of Management Ahmedabad and a BS in Computer Science from Bharathidasan University, Trichy, India.
Page responsible: Christoph Kessler
Last updated: 2013-03-13