
Research

My research builds bridges between formal language theory and computational linguistics. I study formalisms, develop algorithms, and apply them to practical problems in natural language processing.

See my list of publications

Dependency Parsing

Much of my research has been motivated by applications in the automatic syntactic analysis of natural language text, a task also known as parsing. The syntactic structure of a sentence is crucial for obtaining its meaning or translation, and automated techniques that give us access to this structure can help us manage the vast quantities of information that are available today on our personal computers, in large databases, and on the Internet. One way to represent this structure is by means of dependency trees, a certain type of directed graph with labeled arcs between individual words. Dependency representations have a long-standing tradition in descriptive linguistics and have proven useful for a wide range of natural language processing applications, including question answering, language understanding, and machine translation. The task of automatically assigning dependency trees to natural language text is called dependency parsing.
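
To make this representation concrete, a dependency tree is often encoded by giving, for every word, the position of its head together with an arc label (CoNLL-style). The following Python sketch shows such an encoding for a short example sentence; the words, labels, and variable names are purely illustrative.

    # A dependency tree for "Economic news had little effect", encoded as
    # one (head position, arc label) pair per word.  Position 0 stands for
    # an artificial root node; the labels are illustrative.
    words = ["Economic", "news", "had", "little", "effect"]
    tree = [
        (2, "amod"),   # Economic <- news
        (3, "nsubj"),  # news     <- had
        (0, "root"),   # had      <- ROOT
        (5, "amod"),   # little   <- effect
        (3, "obj"),    # effect   <- had
    ]

    # Print every labeled arc of the tree.
    for dep, (head, label) in enumerate(tree, start=1):
        head_word = "ROOT" if head == 0 else words[head - 1]
        print(f"{label}({head_word}, {words[dep - 1]})")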

Mildly Context-Sensitive Dependency Grammar

I have made central contributions to the formal foundations of dependency-based syntactic description. Most of the work on dependency syntax has been confined to dependency trees that satisfy a graph-theoretic constraint known as projectivity. Roughly speaking, a projective dependency tree is one that can be drawn without crossing arcs. While projectivity facilitates efficient computations, it does so at the cost of accuracy, as it rules out many patterns that occur in actual data. On the other hand, parsing with completely unrestricted dependency trees easily becomes intractable. I have developed a powerful grammar formalism for non-projective dependency syntax and shown that, in this formalism, there is a tight coupling between different degrees and types of non-projectivity on the one hand, and language-theoretic and computational properties on the other. In particular, I have shown that ‘mildly’ non-projective dependency trees can be described by mildly context-sensitive grammar formalisms in the sense of Joshi (1985), and hence can be parsed in polynomial time. As it turns out, these mildly non-projective dependency trees suffice to cover virtually all of the linguistic data.
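
For illustration, projectivity can be checked directly on the head-index encoding sketched above: a tree is projective exactly when the yield of every subtree forms a gap-free interval of word positions. The following naive Python check is only meant to make the definition concrete; it is not one of the algorithms developed in this work.

    def is_projective(heads):
        # heads[i] is the head of token i + 1; tokens are numbered 1..n,
        # and 0 denotes an artificial root node.
        n = len(heads)
        children = {i: [] for i in range(n + 1)}
        for dep, head in enumerate(heads, start=1):
            children[head].append(dep)

        def subtree_yield(node):
            result = {node}
            for child in children[node]:
                result |= subtree_yield(child)
            return result

        # Projective iff every subtree covers a contiguous interval.
        for node in range(1, n + 1):
            y = subtree_yield(node)
            if max(y) - min(y) + 1 != len(y):
                return False
        return True

    print(is_projective([2, 0, 2]))     # True: all arcs are nested
    print(is_projective([3, 4, 0, 3]))  # False: the arc from token 4 to token 2
                                        # crosses the root arc into token 3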

Other Mildly Context-Sensitive Grammar Formalisms

Apart from laying the groundwork for mildly context-sensitive dependency grammar, I have also made important contributions to research on other mildly context-sensitive formalisms, in particular tree-adjoining grammar (TAG), combinatory categorial grammar (CCG), and linear context-free rewriting systems (LCFRSs). A celebrated result about TAG and CCG, due to Vijay-Shanker and Weir (1994), is that they are weakly equivalent, in the sense that they generate the same string languages. Together with my coauthors, I have shown that the picture changes dramatically when the two formalisms are compared with respect to the structural descriptions that they can provide. In more recent work, we have shown that even the weak equivalence does not actually hold for the version of CCG that is used today. In another result, on TAG, we have refuted the long-standing claim that all grammars in this class can be lexicalized (Schabes, 1990). My work on LCFRSs has focused on the design of efficient parsing algorithms.

Data-Driven Dependency Parsing

State-of-the-art dependency parsers are induced from large collections of linguistic data. They can assign syntactic analyses to unseen text with great accuracy and efficiency, and are available for a wide range of languages; but they are typically restricted to projective dependency trees. I have contributed to extending the paradigm of data-driven dependency parsing to mildly non-projective dependency trees. I have developed parsing algorithms and learning methods for these trees under two paradigms: exact inference, where parsing is modelled as the search for the highest-scoring analysis among all possible candidate analyses; and approximate inference, where parsing is modelled as a chain of interdependent classification tasks. These two paradigms make different tradeoffs between accuracy and efficiency, and are suitable for different applications.
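
As a toy illustration of the difference between the two paradigms, the following Python sketch contrasts exact inference by brute-force enumeration of all candidate trees under an arc-factored scoring function with a purely local approximation that chooses each word's head independently. It is meant only to convey the tradeoff; the actual algorithms rely on dynamic programming, spanning-tree techniques, and transition-based classification.

    from itertools import product

    def is_tree(heads):
        # Every token must reach the artificial root 0 without entering a cycle.
        for dep in range(1, len(heads) + 1):
            seen, node = set(), dep
            while node != 0:
                if node in seen:
                    return False
                seen.add(node)
                node = heads[node - 1]
        return True

    def exact_parse(score, n):
        # Exact inference: search all candidate trees for the highest-scoring one.
        candidates = (h for h in product(range(n + 1), repeat=n) if is_tree(h))
        return max(candidates,
                   key=lambda h: sum(score(h[d - 1], d) for d in range(1, n + 1)))

    def greedy_parse(score, n):
        # Approximate inference reduced to local decisions: pick each token's
        # head by its best arc score.  Much faster, but the result is not
        # even guaranteed to be a tree.
        return tuple(max((h for h in range(n + 1) if h != d),
                         key=lambda h: score(h, d))
                     for d in range(1, n + 1))

Exhaustive enumeration is exponential in the sentence length; practical exact parsers instead exploit the structure of the search space, for example through dynamic programming over projective or mildly non-projective trees.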

Constraint-Based Dependency Parsing

I was one of the first researchers to apply constraint programming to dependency parsing. Constraint programming is a technique for modelling and solving combinatorial optimization problems, and dependency parsing can be understood as a combinatorial optimization problem where the search space consists of all possible dependency trees for the input sentence, and the goal is to find a solution that is maximally likely, given some corpus of previously observed examples. I developed basic frameworks and search techniques for constraint-based dependency parsing, demonstrated how this approach can be used to integrate various types of linguistic information in a highly modular way, and contributed to the implementation of constraint-based parsing in practical systems.
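
As a rough sketch of this view of parsing, every word can be associated with a variable that ranges over its candidate heads; declarative constraints prune the variable domains, and search over the remaining combinations returns the best well-formed tree. The constraint used below is hypothetical and only meant to illustrate the modularity; this is not one of the frameworks developed in this work.

    from itertools import product

    def constraint_parse(tags, arc_score):
        # One variable per token: its head position (0 = artificial root).
        n = len(tags)
        domains = {dep: set(range(n + 1)) - {dep} for dep in range(1, n + 1)}

        # A declarative, modular constraint (purely illustrative):
        # a determiner must attach to a word further to the right.
        for dep in range(1, n + 1):
            if tags[dep - 1] == "DET":
                domains[dep] &= set(range(dep + 1, n + 1))

        def is_tree(heads):
            # Every token must reach the root without entering a cycle.
            for dep in range(1, n + 1):
                seen, node = set(), dep
                while node != 0:
                    if node in seen:
                        return False
                    seen.add(node)
                    node = heads[node - 1]
            return True

        # Search the pruned space for the highest-scoring assignment;
        # returns None if the constraints cannot be satisfied.
        candidates = (h for h in product(*(sorted(domains[d]) for d in range(1, n + 1)))
                      if is_tree(h))
        return max(candidates, default=None,
                   key=lambda h: sum(arc_score(h[d - 1], d) for d in range(1, n + 1)))

    # Example: "the cat sleeps", with toy scores that prefer short arcs.
    print(constraint_parse(["DET", "NOUN", "VERB"],
                           lambda head, dep: -abs(head - dep)))  # -> (2, 0, 2)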

Bibliography

  • Aravind K. Joshi. Tree Adjoining Grammars: How Much Context-Sensitivity Is Required to Provide Reasonable Structural Descriptions? In Natural Language Parsing, pages 206–250. Cambridge University Press, 1985.

  • Yves Schabes. Mathematical and Computational Aspects of Lexicalized Grammars. Ph.D. thesis, University of Pennsylvania, Philadelphia, USA, 1990.

  • K. Vijay-Shanker and David J. Weir. The Equivalence of Four Extensions of Context-Free Grammars. Mathematical Systems Theory, 27(6):511–546, 1994.


Page responsible: Marco Kuhlmann
Last updated: 2014-11-26