Multimodal Interaction for Information Services

Introduction

Given the ever-increasing information load on the Internet and elsewhere, it is of utmost importance to promote efficient means and techniques for pinpointing specific and relevant information. Recent research within the fields of Information Extraction and Open Domain Question Answering shows that the key to successful information processing lies not only in efficient retrieval methods, but also in methods for understanding the precise meaning of information requests. Natural language (spoken or written) provides an intuitive way of stating such requests. For example, if a user wants to know about the relative size of two countries, there are a number of ways that this request can be expressed in natural language, such as "Which country is the largest: Bahamas or the Dominican Republic?" or "Is Bahamas larger than the Dominican Republic?". The important observation, however, is that natural language questions contain specific leads to what the search engine should look for, leads that cannot be captured by state-of-the-art search engines. It has also been observed that information requests are not always successfully completed in a single question. Instead, the user and the system have to take part in a task-oriented dialogue where information is added by the user or the system in order to arrive at a satisfactory formulation of the information problem. To continue the example above, a straightforward follow-up question could be "Do you mean area or population?"
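As a concrete illustration of the clarification behaviour described above, the following is a minimal sketch of how a system could handle the comparative question about the two countries. All names in the sketch (the COUNTRY_FACTS table and the answer_largest function) are hypothetical and chosen for illustration only; the figures are rough approximations, not project data.

    # Hypothetical sketch: an underspecified comparative ("largest") triggers a
    # clarification question before the structured information base is consulted.
    COUNTRY_FACTS = {
        "Bahamas":            {"area_km2": 13_880, "population": 400_000},
        "Dominican Republic": {"area_km2": 48_670, "population": 10_800_000},
    }

    def answer_largest(countries, attribute=None):
        """Answer the question, or ask a clarification if the attribute is unknown."""
        if attribute is None:
            # "Largest" could refer to area or population, so ask rather than guess.
            return "Do you mean area or population?"
        winner = max(countries, key=lambda c: COUNTRY_FACTS[c][attribute])
        return f"{winner} is the largest by {attribute}."

    # First turn: the request is underspecified, so a follow-up question is returned.
    print(answer_largest(["Bahamas", "Dominican Republic"]))
    # Second turn: the user's reply ("population") resolves the ambiguity.
    print(answer_largest(["Bahamas", "Dominican Republic"], attribute="population"))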

In order to allow for such interaction, a number of research issues must be addressed. In the proposed project we will utilise and combine knowledge and methods from two areas of language technology:

  • Multimodal interaction. The user and the system can utilise various modalities to present information: not only natural language (spoken or written) but also graphics, images, videos, and tables. In this project we focus on multimodal dialogue where natural language plays a major role. The use of dialogue allows the information need to be formulated in a fragmented fashion; the system can collect further information from the user and ask for clarifications before searching the information base.
  • Information processing of documents. Dialogue systems need structured information in order to support advanced information retrieval and problem solving. Such structured information bases are often hand-crafted, but for the vast amount of public information available on the Internet it is not feasible to create structured information sources manually. Instead, we must be able to find the relevant information stored in unstructured formats and convert it, in a systematic way, into a suitable form. The problem is not only to bring structure to the information; a prerequisite is to locate the relevant information in the first place, which, precisely because the information is unstructured, is a complex task in itself (a minimal extraction sketch follows this list).
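
As an illustration of the second point, the sketch below shows, under the assumption of a simple regular-expression approach (the pattern and the record format are hypothetical and far simpler than what the project would actually require), how a fact stated in running text can be converted into a structured record that a dialogue system can query.

    import re

    # Hypothetical sketch: converting an unstructured statement into a structured
    # record. The pattern is illustrative; robust extraction needs full language
    # technology rather than a single regular expression.
    PATTERN = re.compile(
        r"(?P<country>[A-Z][\w ]+?) has a population of (?P<population>[\d,]+)"
    )

    def extract_population_facts(text):
        """Locate population statements in free text and return structured records."""
        records = []
        for match in PATTERN.finditer(text):
            records.append({
                "country": match.group("country").strip(),
                "population": int(match.group("population").replace(",", "")),
            })
        return records

    document = "The Dominican Republic has a population of 10,800,000 people."
    print(extract_population_facts(document))
    # [{'country': 'The Dominican Republic', 'population': 10800000}]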

