Multimodal Interaction for Information Services

Research goals and expected results

Much information, especially information available on the Internet, is not in structured form. Furthermore, even with powerful techniques for extracting information, it is still very hard for a user to formulate a query that reflects the information need. Requests often require collecting further information from the user, and possibly also from other information systems, before they are defined precisely enough for the search engine.

The overall research goal of this proposal is to incorporate document and text processing techniques into multimodal dialogue systems. The main result is thus a natural language information system in which multimodal interaction allows intuitive and efficient formulation of complex requests for information that can be extracted from unstructured, distributed information sources.

Such an integrated multimodal dialogue system requires a variety of shared knowledge sources; especially important is a common ontology that can be utilised by both the dialogue system and the information extraction components. The development of a general ontology is far beyond the research goals of this project. However, we expect to develop ontologies that are useful for various types of applications and domains. Furthermore, one research issue that will be investigated concerns maintaining ontologies in a dynamic environment.
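
To make the role of a shared ontology concrete, a minimal sketch in Python is given below. It is purely illustrative and not the project's actual design: a toy ontology is consulted both by a dialogue manager, which decides whether a request is precise enough to be passed on, and by an information extraction component, which tags extracted items with the same concepts. All names (Ontology, refine_request, classify) are hypothetical.

    # Hypothetical sketch: one ontology shared by dialogue management
    # and information extraction, so both talk about the same concepts.

    class Ontology:
        """A toy ontology: concept names mapped to their parent concepts."""

        def __init__(self):
            self.parents = {}  # concept -> parent concept (or None)

        def add_concept(self, concept, parent=None):
            self.parents[concept] = parent

        def is_a(self, concept, ancestor):
            """True if 'concept' equals 'ancestor' or is a descendant of it."""
            while concept is not None:
                if concept == ancestor:
                    return True
                concept = self.parents.get(concept)
            return False

    # Both components share the same ontology instance.
    ontology = Ontology()
    ontology.add_concept("event")
    ontology.add_concept("concert", parent="event")

    # Dialogue manager: only passes on requests about known concepts;
    # otherwise the system would ask the user a follow-up question.
    def refine_request(concept):
        if ontology.is_a(concept, "event"):
            return {"type": concept}   # precise enough for extraction
        return None                    # needs further clarification

    # Information extraction: tags extracted items with ontology concepts.
    def classify(text):
        return "concert" if "concert" in text.lower() else None

    request = refine_request("concert")
    item = classify("Jazz concert on Saturday")
    print(request, ontology.is_a(item, "event"))  # {'type': 'concert'} True
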

Another important goal is to understand multimodal human-computer interaction, especially issues related to control and co-operation. The expected results are, on the one hand, knowledge about multimodal interaction and, on the other, principles for the design of multimodal dialogue systems.

