Karagiannidis/Stephanidis:
The authors are well aware of the work of the Decision Theory
& Adaptive Systems Group of Microsoft Research, which also
employs decision-theoretic techniques for managing the complexity
of information displayed to people responsible for time-critical
decisions (e.g. [9, 10]). It should be noted,
however, that this paper does not intend to propose a decision-theoretic
framework for run-time adaptation, as this has already been presented
in other publications (see appendix). The paper rather aims to
focus on the impact of this framework on run-time adaptation (more
on this matter later on).
The authors have not had the benefit of reading [Brown et al],
as it is not yet available, but from the comments made, as well
as from other related publications, it is understood that Mr.
Brown also appreciates the merits and utility of similar decision-making
techniques.
Brown:
The authors take a human-computer interaction approach to run-time
adaptation. Researchers in artificial intelligence have also looked
at adaptable and adaptive user interfaces. If I had to choose
one word each to characterize the AI and HCI communities' research
into interface agents, the words would be "delegation"
and "customization", respectively. The AI community
as a whole has concerned itself with what an interface agent can
do *for* the user, whereas the HCI community has concerned itself
with what the user can do *with* the agent. The strength of AI
research lies in its years of experience in knowledge representation,
reasoning, and machine learning. The strength of HCI research
in interface agents is its attentiveness to the user, focusing
on what a user needs to perform his / her tasks, and how best
to represent information to the user. While there is no clear
delineation between the two groups with regards to their research
in interface agents, both approach the research field of interface
agents differently. I am curious about why the authors chose this
approach and comparisons to other techniques / approaches.
Karagiannidis/Stephanidis: It is correct that the authors take the "HCI view" for intelligent interfaces, i.e. they are concerned with what the user can do with an intelligent interface (as well as how). The motivation of this work stems from the vision of user interfaces for all ([19, 26]), i.e. interfaces that can be adapted to individual user attributes, thus facilitating universal access and high quality interaction for the broadest possible user population [18,25]. The authors consider that intelligent user interfaces have a catalytic role to play in this direction in the foreseeable future. In this context, our approach does not presuppose any particular interaction style, like direct manipulation, agent-based, kiosk-oriented, or cartoon-based interaction. Instead, it establishes a generic theoretical framework for a decision making mechanism that drives run-time adaptations. In that sense, we did not consider it necessary to analyse the different perspectives, such as the "HCI view" or the "AI view" (as these are defined by Mr. Brown), in a particular interaction domain (e.g. agent-based interaction), since that domain would be clearly "orthogonal" to our work.
Brown:
I am concerned about the lack of details concerning the assessment
phase / module. Although the authors state that the assessment
of the raw data (i.e., sensors in the domain, user psychology,
etc.) is not a concern within the context of this paper, determining,
in general, which sensors (i.e., in the paper's nomenclature,
which adaptivity determinants -- as processed via the assessment)
map via a rule and / or goal to which affectors (as defined
by your adaptivity rules) is very hard. See Leonard Foner's (MIT
researcher in agents) Focus of Attention thesis. This assessment
is very important in the system (see Figure 1 and the discussion
that follows). Do the authors address it anywhere within their
approach? If so, references should be given.
Karagiannidis/Stephanidis:
The authors consider that the two main phases / processes in intelligent
user interfaces are:
(i) run-time assessment of user-computer interaction, where "high-level"
interaction situations (e.g. user is disoriented, user is unable
to navigate, user is unable to successfully complete a specific
interaction task) are detected from "low-level" monitoring
information (e.g. user has provided invalid input, user continuously
invokes a dialogue and subsequently "cancels" it) [6];
and
(ii) design of run-time adaptation, where, based on the results
of the assessment process, specific adaptation decisions are made.
In the opinion of the authors, the above phases are interrelated
to a certain extent: there is not a one-to-one correspondence
between the assessment information and the respective adaptation
decisions. That is, the same assessment information may initiate
a specific adaptation in one system, while it may initiate another
adaptation (or no adaptation at all) in a different system, depending
on the design decisions made for the system.
While this paper addresses only the latter phase (i.e. the design
of run-time adaptation), the authors have also addressed the assessment
phase, and, in particular, they have proposed a queuing modelling
framework for assessing, at run-time, the load posed to the user's
sensory channels in the context of intelligent multimedia user
interfaces [16].
It should also be mentioned that, at the current stage of their
work, the authors are not concerned with adaptation rules
(Mr. Brown probably refers to previous stages of this work, published
in [12]), but rather with decision making models, which, based
on the relationships between adaptation constituents and design
goals, facilitate the selection of specific adaptation constituents.
The design goals that are taken into account in this decision
making process depend on:
(i) the design decisions that have been made for a specific application,
and determine which adaptation constituents can contribute to
the satisfaction of specific design goals; and
(ii) the interaction situations (which are directly related to
the design goals) that are detected by the assessment process
at run-time.
In this sense, only the design goals that are considered critical
for a specific adaptation decision, and that have not been met
(as this is detected by the assessment process) participate in
each decision making process. Referring to the example of Mr.
Brown, we could say that the "sensors" provide the assessment
information, either directly, or after processing. The decision
making models include in the decision making process only the
information from those "sensors" that: (i) are considered
relevant to the specific decision situation; and (ii) have actually
provided some input.
For example, in the decision making process presented in section
3.3.1 of the paper, it is shown that the decision concerning the
selection of a style for the "Open Location" task is
based on the interaction situations: "high error rate",
"disorientation", "user idle", and "inability
to navigate". However, when the decision making process is
initiated, the "disorientation" interaction situation
is not taken into account, as there is no evidence (from
the respective "sensor") on whether it holds, or not.
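The filtering described above can be sketched as follows; the function, the names of the interaction situations' encoding, and the dictionary representation are illustrative assumptions, not the DMM's actual implementation:

```python
# Illustrative sketch (hypothetical names and encoding): the DMM admits
# into a decision only those interaction situations ("sensors") that are
# relevant to the decision at hand AND have actually provided input.

def select_relevant_situations(relevant, evidence):
    """Keep situations that are relevant to this decision and for
    which the assessment process has reported evidence."""
    return {s: evidence[s] for s in relevant if s in evidence}

# Situations relevant to selecting a style for the "Open Location" task:
relevant = ["high error rate", "disorientation", "user idle",
            "inability to navigate"]

# "disorientation" is absent: its "sensor" has provided no input,
# so it is excluded from this decision cycle.
evidence = {"high error rate": True,
            "user idle": False,
            "inability to navigate": True}

active = select_relevant_situations(relevant, evidence)
```

Under this sketch, "disorientation" simply never enters the decision making process, matching the behaviour described for the "Open Location" example.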
Brown:
In a related, earlier paper, Karagiannidis, Koumpis and Stephanidis
[5] present the same approach as is presented
in this paper. That is, an approach to determine, within the domain
of intelligent multimedia presentation systems (IMMPS), what,
when, why, and how, to adapt the system's presentation. This paper
does not differ significantly from that work.
Karagiannidis/Stephanidis: Paper [12] has presented some earlier ideas on intelligent user interfaces, which differ significantly from the decision-theoretic approach that forms the basis of this paper. In particular, in [12], the authors have focused on the diversity of adaptation constituents, determinants, goals and rules, in implemented and available systems, and on the need for a more methodological approach to their use. On the other hand, the work reported in the paper currently under discussion simply outlines the employment of decision making models for the design of run-time adaptation (as necessary background information), and proceeds to focus on the fact that, following this approach, run-time adaptation is an iterative decision making process with feedback.
Brown: Have the author's looked at other paradigms? For example, agent-based environments.
Karagiannidis/Stephanidis: The authors have indeed looked at other paradigms concerning the design of adaptation in intelligent user interfaces. However, such "related work" has been thoroughly addressed in other publications by the authors (see appendix) and has therefore not been included in this paper, so as to avoid unnecessary repetition. Moreover, this paper does not intend to propose the decision-theoretic framework for run-time adaptation, as this has already been presented in other publications (see appendix), but rather to address its impact on run-time adaptation. To this extent, the paper is not concerned with architectural, or other, issues (e.g. whether the interface is agent-based, or not).
Brown:
The authors' work has several weaknesses. They have no way
of determining whether their method of adaptation, the "how",
is feasible within their approach, nor its impact. Is it the authors'
approach to rely on the application to determine whether the
adaptation method is feasible? And if so, does this ignore the
aforementioned assessment phase or at least delegate it to another
part of the system? This places an unnecessary burden on the application
designer to account for this. Furthermore, it makes integration
of their approach into legacy software nearly impossible. From
a computational stand-point, certain adaptations could be abandoned
given the evidence that the approach would not be feasible.
Karagiannidis/Stephanidis:
These comments are not very clear to the authors. If they are
understood correctly, then Mr. Brown probably refers to the "availability"
of, or "feasibility" of applying, a specific adaptation
suggested by the Decision Making Module (DMM). This problem would
appear if the input provided to the DMM did not include
the relationships and dependencies between adaptation constituents.
However, in the current implementation of the DMM, this problem
does not appear.
More specifically, as discussed in section 2.3 of the paper,
the DMM has been based on the AVANTI system [1], which aims to
address the interaction requirements of disabled users using Web-based
multimedia telecommunications applications and services
[4, 5, 6, 7, 8]. In the context of the AVANTI system, syntactic adaptation
refers to the selection of instantiation styles for specific interaction
tasks, i.e. the adaptation constituents are the styles defined
for each task. These styles have been defined following the Unified
User Interface Design Methodology (U2ID) [25]; thus,
the task decomposition that is provided as input to the DMM includes
the relationships and dependencies between different styles (as
shown in Figure 2 of the paper, which presents the task decomposition
of the "open location" task). Therefore, when the DMM
selects a specific style, it can determine the effects of that
selection on the availability of other styles in the same (sub-)task
decomposition. For example, as shown in Figure 2 of the paper,
if the DMM selects the DOL style, it can then determine that the
DOLG style can also be instantiated, while the IOL and IOLG styles
cannot (within the current "decision cycle").
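A minimal sketch of this constraint propagation, assuming a simple mutual-exclusion encoding of the style relationships (the actual AVANTI task decomposition in Figure 2 is richer than this):

```python
# Sketch of style availability within one (sub-)task decomposition.
# Style names (DOL, DOLG, IOL, IOLG) follow Figure 2 of the paper;
# the "excludes" relation below is an assumed, simplified encoding.

EXCLUDES = {
    "DOL": {"IOL", "IOLG"},  # selecting DOL rules out the IOL styles
    "IOL": {"DOL", "DOLG"},  # and vice versa
}

def available_styles(all_styles, selected):
    """Styles that may still be instantiated in the current
    "decision cycle" once `selected` has been chosen."""
    blocked = EXCLUDES.get(selected, set())
    return {s for s in all_styles if s not in blocked}

styles = {"DOL", "DOLG", "IOL", "IOLG"}
remaining = available_styles(styles, "DOL")  # DOLG remains; IOL/IOLG do not
```

Because the dependencies are part of the DMM's input, this determination requires no call back to the application.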
As mentioned in the paper, the decision making process is
initiated when new interaction situations arrive from the assessment
process; in other words, when the assessment process detects that
one, or more, of the design goals are not met (this is why
adaptation is initiated). The adaptation decisions are
made on the basis of the "appropriateness" of each adaptation
constituent, as this is determined from the utility- or preference-based
models. In particular, the DMM performs an evaluation of the appropriateness
of each adaptation constituent, and if it is found that the currently
selected constituent (e.g. due to previous decisions) is not the
most appropriate one, then (this is when an adaptation
decision is taken) it suggests a new adaptation constituent
(this is how adaptation decisions are made) to be instantiated.
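The "when" and "how" of this evaluation can be sketched as below; the weighted-sum scoring, the constituent names, and all numbers are assumptions for illustration, not the DMM's actual utility model:

```python
# Sketch of the DMM's decision cycle (assumed weighted-sum utility):
# re-score every candidate constituent against the unmet design goals,
# and suggest a new constituent only if the currently selected one is
# no longer the most appropriate.

def appropriateness(constituent, goal_weights, contribution):
    """Weighted contribution of a constituent to the unmet design goals."""
    return sum(w * contribution[constituent].get(goal, 0.0)
               for goal, w in goal_weights.items())

def decide(current, candidates, goal_weights, contribution):
    score = lambda c: appropriateness(c, goal_weights, contribution)
    best = max(candidates, key=score)
    if score(best) > score(current):
        return best      # "when": an adaptation decision is taken
    return current       # current selection is still the most appropriate

# Hypothetical constituents and their contributions to two design goals:
goal_weights = {"minimise errors": 0.7, "speed up interaction": 0.3}
contribution = {
    "guided dialogue": {"minimise errors": 0.9, "speed up interaction": 0.2},
    "direct entry":    {"minimise errors": 0.3, "speed up interaction": 0.9},
}
suggested = decide("direct entry", list(contribution), goal_weights, contribution)
```

With these assumed numbers, the heavier weight on "minimise errors" makes the guided constituent the more appropriate one, so a new adaptation is suggested; if the current constituent already scored highest, no adaptation would be initiated.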
Concerning the implementation of the DMM, and the requirements
for its employment in different applications (including "legacy
software"), the adopted approach presumes that the user interface
comprises the following components:
(i) an Assessment Module (AM), e.g. a user model server [17],
which detects and communicates to the Decision Making Module (DMM)
interaction situations that have been defined for the particular
application;
(ii) a Monitoring Mechanism [3], which sends information to the
AM, as well as
(iii) an Adaptation Mechanism [24], which is responsible for realising
/ implementing the adaptations that are suggested by the DMM (see
Figure 1 of the paper).
These assumptions, in the authors' opinion, do not restrict the
scope of the adopted approach, since they can be considered as
necessary (not in terms of individual software modules, but, rather,
in terms of interrelated processes) for any user interface
which exhibits intelligent behaviour (agent-based or not).
In this sense, the adopted approach can be used in the context
of any application which includes the above software modules /
processes.
Finally, the re-usability of the DMM in another application (which
includes a monitoring mechanism, an assessment module, and an
adaptation mechanism) requires that:
(a) the new application utilises the same situation notification
protocol currently used in DMM (i.e. the AVANTI situation notification
protocol [3]), or that the message interpreter of the DMM is modified
according to the new protocol;
(b) the new application utilises the adaptation notification protocol
currently used in DMM (i.e. the AVANTI function calls [3]), or
that the message composer of the DMM is modified according to
the new protocol;
(c) the relations between adaptation constituents and design goals
are represented in a way similar to the one currently used in
DMM (i.e. the task decomposition of the AVANTI system [2]), or
that the adaptation constituents and design goals interpreter
of the DMM is modified accordingly.
The authors consider that the above requirements are not at all
restrictive, and that the DMM can be easily re-used in different
applications, or application domains.
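Requirement (a), for instance, amounts to swapping a message interpreter in front of an unchanged DMM; a sketch under invented message formats (the actual AVANTI situation notification protocol [3] is not reproduced here):

```python
import json

# Sketch of requirement (a): the DMM is re-used unchanged; only the
# interpreter that parses the application's situation notification
# protocol is swapped. Both message formats below are invented.

class AvantiInterpreter:
    # assumed AVANTI-style text message: "SITUATION:<name>=<0|1>"
    def parse(self, raw):
        name, value = raw.split(":", 1)[1].split("=")
        return name, value == "1"

class NewProtocolInterpreter:
    # a hypothetical JSON protocol used by some other application
    def parse(self, raw):
        msg = json.loads(raw)
        return msg["situation"], bool(msg["holds"])

def notify_dmm(interpreter, raw):
    """The DMM receives the same (situation, holds) pair either way."""
    return interpreter.parse(raw)
```

Requirements (b) and (c) would be handled analogously, by swapping the message composer and the constituents/goals interpreter respectively.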
In conclusion, the authors do not agree with Mr. Brown's comment:
"The authors' work has several weaknesses".
Brown:
It is not apparent if their approach is extensible beyond IMMPS.
All goals within a system deal with adaptation of the presentation.
These are not necessarily explicit user goals. Therefore, the
assistance offered may not help the user achieve a goal they are
pursuing directly, but may indirectly help them by presenting
the information in the "best" (as determined by the
designer) way.
Karagiannidis/Stephanidis:
This is not true. The adaptation framework we propose is based
on the selection of appropriate dialogue artefacts, among alternatives,
to comprise, at run-time, the adapted interface. Such artefacts
may concern both presentation and interaction syntax; the unified
design method as well as our decision making framework are not
restrictive in this respect. Hence, our approach is not limited
to adaptation of presentation. More specifically, we assume that
the designer will design appropriate alternative sub-dialogues
providing such goal-oriented help, while we are concerned with
the decision making process for intelligent selection and activation,
at run-time, of those sub-dialogues, when needed. Therefore, explicit
user goals are not required to be addressed at the decision making
phase; instead, they are taken into account during the design
and implementation of the specific sub-dialogues.
If the users' goals were to be taken explicitly into account in
the decision making process, then one (or possibly many) new interaction
situation(s) would need to be defined, e.g. "minimise errors",
"speed up interaction", as well as their (absolute or
relative) relation to the adaptation constituents. Then, the DMM
would take this information into account in the decision making
process. It should be clear from the above that the adopted approach
is "extensible", in the sense defined by Mr.
Brown.
Brown:
The paper is lacking in two key areas: related work and results.
Concerning the former, the authors fail to distinguish their work
from the work of other researchers. For example, Szekely provides
an overview of the past 10 years of model-based interface development
[szekely]. In the research field of agent-based interfaces (i.e.,
"personal assistants", interface agents, etc.), many
researchers have investigated the use of the agent paradigm to
modify the user interface.
Karagiannidis/Stephanidis: This paper does not intend to deal with the development of intelligent user interfaces in general, but only with the way that adaptation decisions are made. In this respect, the authors do not differentiate existing systems according to their architecture (i.e. whether they are agent-based or not, or whether they follow the model-based user interface development paradigm or not), but only according to their "adaptation logic". To this end, the authors briefly describe the current "state-of-the-art" regarding the encapsulation of adaptation logic in existing systems, which is mainly realised through a set of pre-defined adaptation rules. More detailed information on this matter (including a number of example adaptation rules used in existing systems) has been included in [15].
Brown:
Concerning the latter, for the number of papers the authors
have published on this architecture, I have yet to see results
of their system. For that matter, whether a real application exists
is doubtful. How well does the architecture work? Is it extensible
to other domains? What are the "lessons learned"? The
authors state "this paper has focused on the impact of this
framework to the success of run-time adaptation." I do not
see where and how the authors are measuring the impact. It is
not obvious from their presentation.
Karagiannidis/Stephanidis:
Mr. Brown has shown some interest in our work since June 1997,
and he has had the benefit of receiving copies of previous papers
([11, 12]), at his request. Additionally, he has had the benefit
of receiving answers to specific questions that he has raised
in private e-mail communications, regarding the authors' work.
The authors are, therefore, somewhat surprised to see the comments
above.
Regarding the availability of a real application, the AVANTI Web
browser ([27]) is the most representative example of an interface
embodying run-time adaptation capabilities, based on work by the
authors. In fact, the AVANTI Web browser is based on a modular
architecture that allows for experimentation with different approaches
in deciding upon, and performing adaptations at the user interface
level. Currently, two different decision-making components have
been implemented and integrated in the AVANTI Web browser. The
first one employs a rule-based approach in performing adaptations
(this work has been published in several conference proceedings,
e.g. [27, 28], and demonstrated at the HCI
International '97 Conference, San Francisco, California, USA, 23-29
August 1997, and at the Telematics Applications Conference, Barcelona,
Spain, 4-7 February 1998); the second one uses the Decision Making
Mechanism (DMM) described in this paper (an extensive paper on
the implementation details of the DMM and the results of its employment
in the context of the AVANTI Web-browser is under preparation).
The paper under discussion addresses the impact of the decision-theoretic
framework on run-time adaptation. More specifically, this paper
argues that the decision-theoretic framework is radically different
from "rule-based" adaptation. In particular, the rule-based
approach constitutes a "static" approach, in the sense
that adaptations are pre-determined by the rules (although,
in some cases, these can be modified by the user interface designer,
but not at run-time). Thus, the adaptation decisions cannot
be modified at run-time, even if there is evidence (derived through
the assessment process) that their application does not
have the desired effect. The decision-theoretic framework, on
the other hand, enables adaptation decisions to be modified "dynamically",
based on the assessment information that is continuously provided
by the assessment process. The incorporation of this information
in the decision making process can radically modify the adaptation
decisions, since it can adjust the importance (i.e. weight) assigned
to each design goal, i.e. it can automatically modify the "adaptation
strategy". Thus, the (absolute or preference) relations between
adaptation constituents and design goals that are defined by the
user interface designer are automatically "revisited"
at run-time. Given the above, if we define an "optimal"
adaptation constituent as the one that satisfies all associated
design goals, it is argued that the adaptation process will automatically
"converge" towards "optimal" constituents
over time.
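The run-time revision of the "adaptation strategy" can be sketched with an assumed additive update rule; the paper does not prescribe a specific formula, so the step size and renormalisation here are illustrative choices:

```python
# Sketch (assumed update rule): when the assessment process reports that
# a design goal is still not met after an adaptation, its weight is
# increased, so subsequent decision cycles favour constituents that
# satisfy it -- the "adaptation strategy" is revised at run-time.

def revise_weights(weights, unmet_goals, step=0.1):
    revised = dict(weights)
    for goal in unmet_goals:
        revised[goal] += step
    total = sum(revised.values())
    return {g: w / total for g, w in revised.items()}  # renormalise

weights = {"minimise errors": 0.5, "speed up interaction": 0.5}
weights = revise_weights(weights, ["minimise errors"])
# "minimise errors" now carries more weight in the next decision cycle
```

Iterating this feedback loop is what drives the claimed "convergence" towards constituents that satisfy all associated design goals.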
Brown:
I feel the authors need to address these deficiencies before
this paper should be considered for publication.
Karagiannidis/Stephanidis: The authors wish to thank Mr. Brown for his comments at this "discussion phase", which will be undoubtedly taken into account in revising the manuscript before the paper is re-submitted for formal peer review.
References
Karagiannidis C., Stephanidis C., "Preference-Based Decision
Making for Run-Time Adaptation in Intelligent User Interfaces",
Submitted, 1998.
This paper presents preference-based decision making models for
run-time adaptation, and outlines the implementation of a decision
making module that employs the proposed approach (this paper is
not yet available electronically, as it is still under review).
[11] Karagiannidis C., Koumpis A., Stephanidis C., "Adaptation
in IMMPS as a Decision Making Process", Computer Standards
and Interfaces, Special Issue on Intelligent Multimedia Presentation
Systems, 18(6-7), December 1997.
This paper resulted from the Workshop "Towards a Standard
Reference Model for Intelligent Presentation Systems" (see
below), and outlines the employment of (utility-based) decision
making techniques in the context of the standard reference model
for intelligent multimedia presentation systems.
[14] Karagiannidis C., Koumpis A., Stephanidis C., "Modelling
Decisions in Intelligent User Interfaces", International
Journal of Intelligent Systems, 12(10), October 1997, pp.
753-762.
This paper introduces the theoretic background of the (utility-based)
decision-theoretic framework, and defines properties of, and relationships
between, adaptation-design strategies, based on this framework.
[20] Stephanidis C., Karagiannidis C., Koumpis A., "Decision
Making in Intelligent User Interfaces", ACM 1997 International
Conference on Intelligent User Interfaces, Orlando, USA, 6-9
January 1997, pp. 195-202.
This paper focuses on the need for a decision-theoretic framework
for run-time adaptation in intelligent user interfaces.
[12] Karagiannidis C., Koumpis A., Stephanidis C., "Deciding
'What', 'When', 'Why', and 'How' to Adapt in Intelligent Multimedia
Presentation Systems", 12th European Conference
on Artificial Intelligence, Workshop "Towards a Standard
Reference Model for Intelligent Presentation Systems", Budapest,
Hungary, 13 August 1996, 4 pages.
This paper discusses the diversification of adaptation constituents,
determinants, goals and rules, and focuses on the need for a methodological
approach in the context of intelligent multimedia presentation
systems.
[21] Stephanidis C., Karagiannidis C., Koumpis A., "Integrating
Media and Modalities in the User-Machine Interface", 1st
International Conference on Applied Ergonomics, Istanbul,
Turkey, 21-24 May 1996, pp. 256-261.
This paper outlines the employment of data from the literature
for the integration of media and modalities in multimedia user
interfaces, through a decision-theoretic approach.
[15] Karagiannidis C., Koumpis A., Stephanidis C., "Supporting
Adaptivity in Intelligent User Interfaces: the case of Media and
Modalities Allocation", 1st ERCIM Workshop
on "User Interfaces for All: Current Efforts and Future Trends",
Heraklion, Greece, 30-31 October 1995, 14 pages.
This paper reviews implemented intelligent user interfaces, and
focuses on the diversification of adaptation constituents, determinants,
goals and rules.
[13] Karagiannidis C., Koumpis A., Stephanidis C., "Media/Modalities
Allocation in Intelligent Multimedia User Interfaces: Towards
a Theory of Media and Modalities", 1st International
Workshop on Intelligence and Multimodality in Multimedia Interfaces:
Research and Applications, Edinburgh, UK, 13-14 July 1995,
12 pages.
This paper introduces the notions of adaptation constituents, determinants, goals and rules, and their use for the media / modalities allocation problem in intelligent multimedia user interfaces.