In less than a decade, concurrency and the imperative to parallelize have invaded almost every computer application domain. In fact, most commodity computers nowadays contain a multicore processor, and mainstream graphics processing units already contain hundreds of cores. One of the main reasons for this trend is that processors' clock frequencies have reached their inevitable plateau. Future computer platforms will contain more and more cores, requiring more and more concurrent or parallel code to be produced. Harnessing the potential concurrent software can offer on modern platforms is both an opportunity and a complex challenge. Already, it is not rare to see programmers spend more time finding and fixing bugs in existing sequential code than they do writing new code. It is estimated that more than half of the software development costs of a typical project are spent on identifying and fixing defects. Software errors that remain undiscovered can result in huge material losses or even human casualties. According to a study by the U.S. National Institute of Standards and Technology, the cost of software errors to the American economy alone exceeds 59 billion dollars a year, or about 0.6% of the US GDP. The ever-increasing dependency on complex computer systems, even safety-critical ones, and the inherent impossibility for testing to cover all scenarios have pushed big names like Microsoft and NASA to investigate and gradually adopt formal methods and automatic verification techniques like software model checking.
Software model checking
Software model checking is recognized as a breakthrough in software verification and has resulted, for example, in tools shipped with the Windows Driver Development Kit. The original idea of model checking (whose authors received the 2007 Turing Award) is to build a model of the system to be verified and to analyze it automatically in order to check whether all behaviors respect a property, typically expressed in temporal logic. As a result, the approach states whether the model satisfies the property and, perhaps more importantly, exhibits a counterexample in case it does not. This counterexample can be of tremendous help to the human designer or programmer in tracking down the error and fixing it. Applying the idea directly to modern computer systems is intractable due to the huge, practically infinite, number of possible states and behaviors the underlying system might exhibit. This is known as the state explosion problem, and several techniques, such as partial order reduction, symbolic representations, bounded model checking, abstraction, interpolation, and SAT and SMT solvers, have been developed to combat it. As a result, software model checking combines the original automatic model checking framework with decision procedures, symbolic representations, and static analysis as formalized by abstract interpretation. Software model checking is now a well-established verification approach in certain areas like communication protocols or driver software.
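As a toy illustration of the explicit-state idea described above (a sketch, not the algorithm of any particular tool; the `model_check` helper, the transition system, and the safety property are hypothetical examples), a breadth-first search over reachable states either certifies a safety property on the model or returns a counterexample trace:

```python
from collections import deque

def model_check(initial, transitions, safe):
    """Explore all reachable states breadth-first; return None if every
    reachable state satisfies `safe`, else a counterexample trace."""
    parent = {initial: None}          # reached state -> predecessor
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not safe(state):
            # Reconstruct the path from the initial state to the bad state.
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for succ in transitions(state):
            if succ not in parent:    # `parent` is where state explosion bites
                parent[succ] = state
                queue.append(succ)
    return None                       # the property holds on the model

# Hypothetical model: a counter that may step by +1 or +2; safety: value != 4.
trace = model_check(0,
                    lambda s: [s + 1, s + 2] if s < 10 else [],
                    lambda s: s != 4)
print(trace)  # → [0, 2, 4]
```

The returned trace is exactly the kind of counterexample that helps a programmer replay and fix the error; real tools add the symbolic and abstraction machinery mentioned above to keep the `parent` set tractable.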
Modern software commonly exhibits features like concurrency, distribution, unbounded data domains, dynamic data structures and recursion. These result in state spaces of arbitrary sizes and introduce subtle behaviors whose correctness is notoriously difficult to check. In this context, size refers to the number of threads and variable values, the depth of the stack, or the amount of allocated heap. These difficulties are exacerbated by the unavoidable rise of multicore computers and by the imperative need to parallelize both new and legacy software. The challenge is to get the most out of the potential multicore computers offer, without being overwhelmed by the complexity of the resulting software bugs. It is indeed notoriously arduous to write correct concurrent programs because of the high intricacy of their behaviors. In addition to the bugs commonly found in sequential code, those specific to concurrent code essentially arise from unexpected interleavings between the actions of the concurrent threads. Building techniques that help the programmer check the behavior of concurrent programs, track bugs in order to fix them, and ultimately prove their absence is a challenging yet crucial task. Without such help, the nightmare of manually ensuring the correctness of concurrent programs will only worsen.
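To make the interleaving problem concrete, the following sketch (the `interleavings` and `run` helpers are hypothetical names, and the model is a toy, not a real scheduler) exhaustively enumerates the schedules of two threads that each perform a non-atomic increment of a shared variable, and shows that a "lost update" is among the possible outcomes:

```python
def interleavings(a, b):
    """All ways to interleave instruction sequences a and b while
    preserving each thread's own program order."""
    if not a:
        yield b
    elif not b:
        yield a
    else:
        for rest in interleavings(a[1:], b):
            yield [a[0]] + rest
        for rest in interleavings(a, b[1:]):
            yield [b[0]] + rest

def run(schedule):
    """Execute one interleaving of two threads that each perform the
    non-atomic increment  r = x; x = r + 1  on shared variable x."""
    x = 0
    regs = {1: 0, 2: 0}
    for thread, op in schedule:
        if op == "load":
            regs[thread] = x           # read shared x into a local register
        else:                          # "store"
            x = regs[thread] + 1       # write back the stale register + 1
    return x

thread = lambda tid: [(tid, "load"), (tid, "store")]
finals = {run(s) for s in interleavings(thread(1), thread(2))}
print(finals)  # → {1, 2}: some schedules lose an update and end with x == 1
```

With only two two-instruction threads there are already six schedules, and the buggy outcome 1 appears only in some of them, which is precisely why such bugs slip past testing and motivate exhaustive techniques like model checking.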
Software verification is at a turning point, and promising results are currently being obtained by several academic and industrial actors. This project will contribute to tackling the challenge of helping developers track errors and, ultimately, prove the correctness of intricate and complex concurrent code. This will require deriving new techniques and combining existing ones in order to come up with concrete solutions and tools that can be of use to industrial partners. In this respect, application-driven research is of prime importance: a wealth of applicable techniques already exists, and new techniques still need to be derived to solve concrete problems.
Ultimately, solving these intricate problems will cut development costs and improve product quality. This project will pursue several lines of work. One line is to speed up verification of certain systems by using clever parallel verification algorithms. Another line targets bugs originating at various levels of multicore software, for example: i) bugs due to the memory models adopted by compilers and hardware, ii) bugs in concurrency libraries implementing synchronization operations, and iii) bugs in concurrent applications using high-level synchronization primitives.
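As an illustration of the first kind of bug, the classic store-buffering litmus test can be simulated with a toy model (the `sb_outcomes` helper and its encoding are assumptions made for illustration, not a faithful hardware model): under sequential consistency the two threads can never both read 0, while a TSO-like store buffer, as found on common multicore hardware, makes that surprising outcome possible:

```python
from itertools import product

def sb_outcomes(tso):
    """Possible (r0, r1) results of the store-buffering litmus test:
         Thread 0: x = 1; r0 = y        Thread 1: y = 1; r1 = x
    With tso=False, stores hit memory immediately (sequential consistency).
    With tso=True, each store may linger in its thread's store buffer until
    after that thread's load; since only the registers are observed, such a
    delayed store is modeled by simply not writing memory before the load."""
    outcomes = set()
    # Interleave the 2+2 events of the threads in all 6 program-order-
    # preserving ways; an order is a length-4 sequence of thread ids.
    for order in [o for o in product(range(2), repeat=4) if o.count(0) == 2]:
        for flush_late in product([False, True] if tso else [False], repeat=2):
            mem = {"x": 0, "y": 0}
            regs = {}
            step = {0: 0, 1: 0}
            for t in order:
                var, other = ("x", "y") if t == 0 else ("y", "x")
                if step[t] == 0:              # the thread's store
                    if not flush_late[t]:
                        mem[var] = 1
                else:                         # the thread's load
                    regs[t] = mem[other]
                step[t] += 1
            outcomes.add((regs[0], regs[1]))
    return outcomes

print(sorted(sb_outcomes(tso=False)))  # → [(0, 1), (1, 0), (1, 1)]
print(sorted(sb_outcomes(tso=True)))   # → [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The extra outcome (0, 0) under the relaxed model is invisible to any reasoning based on interleavings alone, which is why memory-model-aware verification is a line of work in its own right.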
National and International Cooperation: