Multimodal Dialogue Systems for Industrial Applications
The project aims to investigate dialogue models for multi-modal
interaction and to realize prototype implementations of various types of
such systems. Models of, and knowledge about, multi-modal interaction are
of vital importance for any industrial company developing future
interactive systems.
The focus of our research is on the development of:
- Guidelines for multi-modal interaction. With the increasing
availability of information on the Internet, accessed for instance via
cellular phones and laptop computers as a complement to 'normal'
computer interaction, we need knowledge of how to design for
interaction using various modalities and interaction media. As we have
already established techniques for conducting such experiments, we can
extend our work to new applications and evaluate the results in order
to develop guidelines for multi-modal interaction for various classes
of applications and modalities. We will especially investigate how
various types of spoken feedback influence the user experience in a
multimodal dialogue system.
- Investigations into the use of novel modalities in multimodal
interaction. We will especially investigate the use of eye gaze as an
input modality.
- Efficient models for interaction and dialogue control. The
interaction will differ depending on the sophistication of the
application, the complexity of the task(s), and the available
combination of modalities. Our interaction models must account for
this, which also involves dialogue models for interaction control and
contextual interpretation for various interaction situations (a
minimal controller sketch follows after this list). Development of
interaction models will be based on investigations into the properties
of users and applications and their implications for knowledge
representation models.
- Models for interpretation and coordination of multi-modal
interaction. It is of vital importance to be able to coordinate the
various input and output modalities that a user can utilise. Open
problems include how to synchronize spoken input with gestures such as
pointing at a map (see the fusion sketch after this list). One
challenge is how contradictory information is to be interpreted, e.g.
if a user says one thing but points at something else. This is not
limited to input coordination; we also need knowledge of how to
combine various output modalities. With the current work on synthetic
faces carried out by a group initially at Telia Research but now at
NLPLAB, we have the ability to integrate a talking face into our
system. This can be used for feedback during input and to enhance the
user's understanding of spoken output.
- Models for domain reasoning. Intelligent dialogue systems must be
able to respond properly to a variety of requests involving knowledge
of the dialogue, the task at hand, and the domain. This requires
powerful tools for information extraction, the ability to map the
extracted information to a representation of the domain or a more
general ontology, and the means to utilise this in the interaction
(see the ontology-mapping sketch after this list).
- Means for industrial use of language technology, especially the
development of multimodal dialogue systems. This includes tools and
frameworks that can be used in industry as well as in research.
- Design and development of adaptive multimodal dialogue systems.
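
As a minimal sketch of the interaction-control idea above, the
following Python fragment shows a dialogue controller that adapts its
interpretation strategy to the modalities currently available. All
names (DialogueController, Modality, and so on) are hypothetical
illustrations, not part of any existing system of ours.

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Modality(Enum):
        SPEECH = auto()
        POINTING = auto()
        GAZE = auto()

    @dataclass
    class DialogueState:
        task: str
        available: set                    # modalities the device offers
        history: list = field(default_factory=list)
        pending_referent: str = None      # last entity under discussion

    class DialogueController:
        def interpret(self, state, user_input):
            """Resolve what the user refers to, given the configuration."""
            state.history.append(user_input)
            # With pointing available, a deictic phrase ("this one") is
            # resolved against the pointed-at object ...
            if Modality.POINTING in state.available and "point" in user_input:
                state.pending_referent = user_input["point"]
                return ("refer", user_input["point"])
            # ... otherwise fall back on the dialogue context.
            if state.pending_referent is not None:
                return ("refer", state.pending_referent)
            return ("clarify", "which object do you mean?")

A speech-only configuration would thus lead the controller to ask a
clarification question, where a speech-plus-pointing configuration
resolves the referent directly.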
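
The synchronization and conflict problems in the coordination item can
likewise be made concrete. The sketch below pairs a deictic speech
segment with the temporally closest pointing gesture inside a tolerance
window; the window size, event format, and conflict test are
assumptions for illustration, not results of the project.

    from dataclasses import dataclass

    @dataclass
    class InputEvent:
        modality: str     # "speech" or "gesture"
        content: str      # recognized phrase, or id of pointed-at object
        t_start: float    # seconds from the start of the turn
        t_end: float

    FUSION_WINDOW = 1.5   # assumed tolerance between speech and gesture

    def fuse(speech, gestures):
        """Attach the nearest-in-time gesture to a deictic speech event."""
        near = [g for g in gestures
                if abs(g.t_start - speech.t_start) <= FUSION_WINDOW]
        if not near:
            return {"referent": None, "note": "no gesture in window"}
        best = min(near, key=lambda g: abs(g.t_start - speech.t_start))
        # Contradiction: the spoken description and the pointed-at object
        # disagree, e.g. the user says "valve" but points at a pump.
        if speech.content not in best.content:
            return {"referent": best.content, "note": "conflict: confirm"}
        return {"referent": best.content, "note": "fused"}

For instance, fuse(InputEvent("speech", "valve", 2.0, 2.6),
[InputEvent("gesture", "pump-7", 2.3, 2.4)]) flags a conflict, which the
dialogue manager can turn into a confirmation question.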
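
Finally, the domain-reasoning item assumes that extracted phrases can
be anchored in a domain representation. The toy ontology below, with
invented concepts and trigger words, illustrates one way to map a
phrase to a concept and to generalise along is-a links; it is a sketch
of the idea only.

    # Hypothetical mini-ontology: concept -> (parent concept, trigger words).
    ONTOLOGY = {
        "device": (None,     []),
        "pump":   ("device", ["pump"]),
        "valve":  ("device", ["valve"]),
    }

    def map_to_concept(phrase):
        """Map an extracted phrase onto a domain concept, if any."""
        words = phrase.lower().split()
        for concept, (_parent, triggers) in ONTOLOGY.items():
            if any(t in words for t in triggers):
                return concept
        return None

    def is_a(concept, ancestor):
        """Follow parent links, so that facts about pumps can answer
        questions about devices in general."""
        while concept is not None:
            if concept == ancestor:
                return True
            concept = ONTOLOGY[concept][0]
        return False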