Theoretical Computer Science Lab

The Theoretical Computer Science Laboratory (TCSLAB) conducts fundamental research on algorithms and computational complexity, with a current focus on computational problems encountered in artificial intelligence. Research topics of interest include the following:
  • Fine-grained complexity. Classical complexity theory aims at distinguishing between computational problems that have relatively efficient solutions and those that are intractable; polynomial-time solvable problems versus NP-hard problems is the archetypal delineation. Fine-grained complexity aims to refine this distinction into a quantitative understanding of the exact time required to solve problems. This amounts to proving both upper bounds (i.e., inventing faster algorithms) and lower bounds (i.e., proving the impossibility of algorithms with given time bounds under reasonable complexity-theoretic assumptions). A typical example is to distinguish between NP-complete problems where exhaustive search is the best possible algorithm and those that admit improved exponential-time algorithms (see the 3-SAT sketch after this list).
    We mainly study the fine-grained complexity of (1) infinite-domain CSPs with immediate applications in spatio-temporal reasoning and numeric computation, and (2) finite-domain CSPs and SAT problems defined via algebraic invariants.
  • Algebraic methods. The so-called algebraic approach is a highly influential method for analyzing the computational complexity of problems, based on the idea of describing closure properties of relations via algebraic invariants (see the polymorphism check after this list). It is the main method behind the recent proof of the CSP dichotomy conjecture: every finite-domain CSP is either polynomial-time solvable or NP-complete. The method is mainly useful for studying the classical complexity of computational problems, and it has proven difficult to use for analyzing fine-grained complexity. We study ways of generalizing it to fine-grained complexity, for instance by utilizing partial polymorphisms. Our work includes methods both for constructing algorithms and for proving lower bounds.
  • Parameterized complexity. Parameterized complexity is a measure of problem complexity that takes one or more input parameters into account besides the input size. The theory is motivated, for example, by the observation that there exist relevant problems that require exponential runtime when complexity is measured in terms of the input size alone, but that are computable in f(k)*poly(n) time, where n is the input size and k is some problem-specific parameter (see the Vertex Cover sketch after this list). Such problems can often be considered tractable despite their traditional classification as intractable. The theory provides a systematic framework for a refined analysis of complex computational problems, and it is one of the main tools for efficiently solving problems that exhibit "hidden structure". We currently focus on the parameterized complexity of CSPs and on kernelization methods. Kernelization is one of the most important techniques within this area; it can be viewed as a preprocessing step that reduces the data size with performance guarantees.
  • Inexact methods. It is well known that algorithms guaranteed to compute an exact answer within a given time bound may be much slower than algorithms that work under slightly relaxed conditions. Examples include randomized algorithms, where the time bound may be inexact, and approximation algorithms, where the answer may be inexact (see the 2-approximation sketch below). The difference compared to heuristic methods is that we insist that our algorithms must always have provable performance guarantees. Our research has focused on optimization problems that appear in automated planning and constraint satisfaction. In particular, we are interested in connections between inexact methods and parameterized complexity.
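
To make the contrast in the fine-grained complexity item concrete, here is a minimal Python sketch of Schöning's randomized local search for 3-SAT, a standard example of an NP-complete problem with an improved exponential-time algorithm: one restart succeeds with probability roughly (3/4)^n, so on the order of (4/3)^n ≈ 1.334^n restarts beat the 2^n of exhaustive search. The clause encoding and the fixed restart budget are illustrative choices, not part of the lab's own work.

```python
import random

def schoening_3sat(clauses, n, restarts=1000):
    """Schoening-style randomized local search for 3-SAT.

    clauses:  list of clauses; each clause is a list of non-zero ints
              (DIMACS-style: literal v is true iff variable |v| has
              truth value (v > 0)).
    n:        number of variables, numbered 1..n.
    Returns a satisfying assignment (dict) or None if none was found.
    """
    for _ in range(restarts):
        # Start each restart from a uniformly random assignment.
        assign = {v: random.choice([True, False]) for v in range(1, n + 1)}
        # Walk for 3n steps, as in the standard (3/4)^n success analysis.
        for _ in range(3 * n):
            unsat = [c for c in clauses
                     if not any((lit > 0) == assign[abs(lit)] for lit in c)]
            if not unsat:
                return assign
            # Flip a random variable of a random unsatisfied clause.
            lit = random.choice(random.choice(unsat))
            assign[abs(lit)] = not assign[abs(lit)]
    return None

# (x1 or x2 or not x3) and (not x1 or x2 or x3) and (x1 or not x2 or x3)
print(schoening_3sat([[1, 2, -3], [-1, 2, 3], [1, -2, 3]], n=3))
```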
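
The closure idea behind the algebraic approach can also be stated directly in code. The sketch below, a hypothetical illustration rather than lab code, checks whether an operation is a polymorphism of a relation, i.e., whether applying the operation coordinate-wise to tuples of the relation always yields another tuple of the relation; the Boolean majority operation and the order relation x <= y are textbook choices.

```python
from itertools import product

def is_polymorphism(f, k, relation):
    """Check whether the k-ary operation f preserves the relation.

    relation: a set of equal-length tuples over a finite domain.
    f preserves the relation if applying f coordinate-wise to every
    choice of k tuples from the relation yields a tuple that is
    again in the relation.
    """
    rel = set(relation)
    for rows in product(rel, repeat=k):     # every choice of k tuples
        image = tuple(f(*column) for column in zip(*rows))
        if image not in rel:
            return False
    return True

def majority(a, b, c):
    # Boolean majority: the value that occurs at least twice.
    return a if a in (b, c) else b

# The Boolean relation "x <= y" is preserved by majority but not by
# addition modulo 2.
leq = {(0, 0), (0, 1), (1, 1)}
print(is_polymorphism(majority, 3, leq))                  # True
print(is_polymorphism(lambda a, b: (a + b) % 2, 2, leq))  # False
```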
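
The f(k)*poly(n) running-time shape from the parameterized complexity item shows up in the classic bounded search tree for Vertex Cover, sketched below: some endpoint of any uncovered edge must go into the cover, so we branch on both endpoints to depth at most k. (In the same spirit, a simple kernelization first puts every vertex of degree greater than k into the cover, shrinking the instance before the search begins.) The example graph is only illustrative.

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists.

    Bounded search tree: pick an uncovered edge (u, v); any cover
    must contain u or v, so try both. Depth <= k with two branches
    per level gives O(2^k * m) time for m edges, i.e. the
    f(k) * poly(n) shape of a fixed-parameter tractable algorithm.
    """
    if not edges:        # every edge is covered
        return set()
    if k == 0:           # edges remain but the budget is spent
        return None
    u, v = edges[0]
    for picked in (u, v):
        rest = [e for e in edges if picked not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {picked}
    return None

# A 5-cycle needs 3 vertices to cover all its edges.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(vertex_cover(cycle, 2))   # None
print(vertex_cover(cycle, 3))   # {0, 1, 3}
```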
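
Finally, the inexact methods item can be illustrated with the textbook maximal-matching 2-approximation for minimum Vertex Cover: it may return up to twice as many vertices as an optimal cover, but, unlike a plain heuristic, this factor is a provable guarantee. The sketch below is again a standard textbook algorithm, not a description of the lab's own methods.

```python
def approx_vertex_cover(edges):
    """Maximal-matching 2-approximation for minimum vertex cover.

    Scan the edges; whenever an edge is still uncovered, add BOTH of
    its endpoints. The chosen edges form a matching, and any optimal
    cover contains at least one endpoint of each matched edge, so the
    result is at most twice the optimum.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(approx_vertex_cover(cycle))   # {0, 1, 2, 3}: size 4, optimum is 3
```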