What Is Cognitive Systems Engineering?
(The following characterisation of CSE reflects the writings of David Woods and myself.)
In the second half of the 20th century, one of
the major challenges to human factors engineering and industrial psychology was
been to understand the complexities of human-machine systems. These systems have
become indispensable in the fabric of modern society as the technical processes
that sustain production, services, and communication continue to grow in
complexity and interdependency. Although technological developments, especially
within information technology, have made it possible to build powerful,
efficient, and highly reliable machines (aircraft, trains, power plants,
factories, hospitals, etc.), the human operator remains an essential element.
The conditions under which the human operator is needed may have changed, but the need
for humans in complex systems has not been significantly reduced. While humans
may be required to do very little when everything goes as planned, the need to
act is often extreme in critical situations. Furthermore, in many daily
activities, such as buying a train ticket or withdrawing cash, human-human
interactions have been replaced by human-machine interactions.
In order to understand the complexities of human-machine
systems, it is necessary to have an appropriate basis, or conceptual foundation,
for description and analysis. In this respect the study of human-machine systems
is no different from any other scientific discipline. Each requires a set of
concepts and a corresponding set of methods. The concepts are the basic
hypotheses and assumptions about the domain, which in this case comprise humans
working with technology. The concepts help identify what the important phenomena
are and how they can be understood, and include the hypotheses and theories that
are part and parcel of the scientific discipline. The concepts are the basis for
the distinctions and analyses that can be made, and provide the
"intellectual glue" that keeps everything together. The methods refer
to the consistent and systematic ways in which the concepts can be applied, for
instance, in the form of a classification system. The application can have a
practical or utilitarian purpose such as in design, or a more scientific
purpose, such as improving the understanding of the set of causes that have led
to a specific consequence. The methods are intrinsically linked to the data,
which constitute the empirical basis for the field and thereby provide the
justification for the concepts.
The Broken Circle
The classical view of human-machine systems depicts a human
and a machine that are linked by inputs and outputs (Figure 1). The control
input to the machine determines whether the machine changes state or remains in
the same state. As a result of this, some output is produced, for instance a set
of measurements that indicate the state of the machine and the value of specific
process parameters. The measurements, or the output from the machine, become
the input to the human operator. According to the classical view, this input is
"processed" by the operator, and results in a response or output,
which becomes the control input to the machine. While the engineering sciences,
such as control theory, have focused on describing how the machine works, the
behavioural sciences have been more concerned with describing how the operator
works, i.e., what goes on between receiving the input and producing the output.
Figure 1: The classical view of human-machine systems.
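To make the circle concrete, the loop can be sketched in a few lines of code. The following Python fragment is purely illustrative: the thermostat-like process, the gain of 0.5, and all names are assumptions introduced for the example, not part of the classical model itself.

def machine_step(state, control_input):
    # The machine changes state (or remains in the same state)
    # depending on the control input it receives.
    return state + control_input

def measure(state):
    # The machine's output: a measurement that indicates its state.
    return state

def operator_respond(measurement, setpoint):
    # The operator "processes" the input and produces a response,
    # which becomes the next control input to the machine.
    return 0.5 * (setpoint - measurement)

state, setpoint = 0.0, 10.0
for _ in range(20):  # the closed circle: machine -> operator -> machine
    control_input = operator_respond(measure(state), setpoint)
    state = machine_step(state, control_input)
print(round(state, 2))  # approaches the setpoint of 10.0

The point of the sketch is only that human and machine form one closed loop: neither half of the cycle can be understood in isolation from the other.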
When the challenge to describe the human operator was fully
accepted by the behavioural sciences in the late 1960s, the focus changed from
the human-machine system as a whole to the human operator as a separate system
(Figure 2). In this way the circle or coupling between human and machine was
broken. The link to the process was maintained in the sense that there was both
an input to and an output from the operator, but for all intents and purposes
these were considered less important than the processes that were assumed to
take place within the operator’s mind. Typically, only the operator part of
the human-machine system was developed in any detail - or considered at all. The
original cyclical model that described the coupling between humans and machines
was transformed into a sequential or linear information processing model that
mainly tried to account for the details of the human’s responding to the
input.
Figure 2: The broken circle with the human operator as a separate system.

The conventional approach to the study of human-machine
interaction is based on the notion of the human as an information processing
system, either in the weak sense as an analogy or in the strong, metaphysical
sense. This approach puts the focus on the interface or the interaction, i.e.,
on that which lies between the human and the machine, and studies
the human operator as an information processing system interacting with a
process or an artefact. This view is characteristic of, for instance, human
factors engineering and ergonomics, human-computer interaction, cognitive
science and some versions of cognitive engineering. Although it has been very
successful as a basis for models, theories, and experiments, there is a growing
consensus that it includes some fundamental limitations. Foremost among them is
that it unavoidably separates or differentiates between human and machine,
instead of seeing them together or as a whole. Yet understanding how a
human-machine system works requires the ability to describe the system as a
whole, hence to see it as more than a set of interacting parts. Since the
concepts and methods of the classical view have proven insufficient for this, an
alternative is required.
Cognitive Systems
Cognitive Systems Engineering (CSE) was formulated in the
beginning of the 1980s (Hollnagel & Woods, 1983) to provide a consistent
conceptual and methodological basis for research on human-machine systems, with
design and evaluation as the two major activities. In CSE the focus is not on
human cognition as an internal function or as a mental process, but rather on
human activity or "cognition at work", i.e., on how cognition is
necessary to effectively accomplish the tasks by which specific objectives
related to either work or non-work activities can be achieved. Cognition is
necessary to cope with the dilemmas, double binds, and trade-offs that arise
from multiple and possibly inconsistent goals, organisational pressures, and
clumsy technology. Rather than being isolated in the mind of a thoughtful
individual, cognition at work typically involves several people distributed in
space or time, which makes co-operation and co-ordination at least as important
as human information processing. The interacting people are embedded in larger
groups, professions, organisations, and institutions, which together define the
conditions for work – the constraints and demands as well as the resources.
Humans at work do not passively accept the technological artefacts nor the
general conditions of their work, but actively and continuously adapt their
tools and activities to respond to irregularities, disturbances, and to meet new
demands. Cognition is part of an interconnected stream of activity that ebbs and
flows, where extended periods of lower activity are interspersed with busy,
high-tempo operations in which correct and timely responses may be critical.
CSE proposes that composite operational systems can be
looked at as single cognitive systems. Structurally they may comprise the
individual people, the organisation (both formal and informal), the high-level
technology artefacts (AI, automation, intelligent tutoring systems,
computer-based visualisation) and the low-level technology artefacts (displays,
alarms, procedures, paper notes, training programs) that are intended to support
human practitioners. But functionally they can be seen as a single system. This
is reflected by the main issues of CSE:
Coping with complexity
The complexity is due to the multiple sources of
information and control and to the possibly conflicting goals that characterise
the working situation. People usually try to cope with complexity by reducing
it, for instance by structuring the information at a higher level of abstraction
with less resolution and making the required decisions at that level. The
development of information technology has made it possible to provide
computerised tools which to some extent accommodate the operators' needs, and
thereby help them cope. This kind of support clearly involves a replication
of parts of human cognition, hence the use of an artificial cognitive system.
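As a hedged illustration of such complexity reduction, the following Python sketch collapses a set of raw process parameters into a single low-resolution status on which a decision can be made. The parameter names and thresholds are invented for the example and carry no significance of their own.

def abstract_status(readings, limits):
    # Summarise many detailed measurements as one higher-level,
    # lower-resolution state: decisions are then made at this level.
    exceeded = [name for name, value in readings.items() if value > limits[name]]
    if not exceeded:
        return "NORMAL"
    return "DISTURBED" if len(exceeded) < 3 else "CRITICAL"

readings = {"pressure": 1.2, "temperature": 340.0, "flow": 12.5}
limits = {"pressure": 2.0, "temperature": 400.0, "flow": 15.0}
print(abstract_status(readings, limits))  # -> NORMAL

The trade-off shown here is exactly the one operators face: the abstraction makes the situation manageable, but at the cost of resolution.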
The use of artefacts
Tools are artefacts that are used with a specific purpose
to achieve a specific goal. Tools have traditionally been used to amplify human
capabilities - in terms of physical performance (reach, force, speed, and
precision), and in terms of perception and discrimination. More recently, tools
have been introduced which are aimed at amplifying cognition. Although some
cognitive tools have existed for ages, the use of computers has made it possible
to design tools for more sophisticated functions, for instance decision making
and planning.
Joint cognitive systems
CSE recognises that technological systems have gradually
become "cognitive", in the sense that they are goal-driven and make
use of cause-based (feedforward) regulation. Technological systems can thus be
seen as artificial cognitive systems that interact with natural cognitive
systems (i.e., humans). It is therefore appropriate to develop a view of joint
cognitive systems, i.e., of co-operating systems which are described using a
common set of terms – neither as machines nor as humans, but as cognitive
systems.
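The distinction between error-based and cause-based regulation can be sketched as follows. Both controllers, the gain, and the disturbance model are illustrative assumptions, not definitions taken from CSE.

def feedback_control(measurement, setpoint):
    # Error-based (feedback) regulation: react to an observed
    # deviation after it has occurred.
    return 0.8 * (setpoint - measurement)

def feedforward_control(predicted_disturbance):
    # Cause-based (feedforward) regulation: use a model of what is
    # about to happen and act before any deviation can be measured.
    return -predicted_disturbance

def combined_control(measurement, setpoint, predicted_disturbance):
    # A goal-driven artefact may combine both: anticipate what it
    # can, correct what it must.
    return feedback_control(measurement, setpoint) + feedforward_control(predicted_disturbance)

It is the feedforward, model-based component that makes it natural to describe such artefacts in the same terms as their human counterparts.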
Any discussion of CSE must obviously refer to a definition
of what a cognitive system is. To avoid the thorny issues of what cognition is,
and whether it can exist in non-human systems, a cognitive system can be defined
as a system that is able to control its behaviour using information about
itself and the situation, where the information can be prior information
(knowledge, competence), situation specific information (feedback, indicators,
measurements) and constructs (hypotheses, assumptions). The control can be
complete or partial and will in the main depend on the ratio between expected
and unexpected information. More formally, a cognitive
system can be defined as a system that can modify its pattern of behaviour on
the basis of past experience in order to achieve specific anti-entropic ends.
Consequences of CSE
The notion of a joint cognitive system cannot easily be
accommodated within the decomposed human-machine paradigm, and CSE can be seen
as an independent alternative, with the associated model being the Contextual
Control Model. Current methods mainly support a decomposition of
a system into its parts (and in some cases also the reverse process of
aggregation), but in a manner that implies partial independence between the
parts. Some attempts have been made to develop methods that focus on the
interaction and dependencies between sub-systems rather than on the
component elements. An example of this is multi-level flow modelling (MFM)
which supports the goals-means analysis principle (Lind & Larsen, 1995). The
overall framework for analysis must, however, be extended to recognise the
dependency between data and interpretation, to account for the specific role of
cognition (be it natural or artificial), and to highlight the consequences for
design - supported by specific guidelines and design rules whenever possible.
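For illustration only, a goals-means structure of the kind MFM supports might be represented as below. The nested plant example is an assumption and is not taken from Lind & Larsen (1995).

from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    means: list = field(default_factory=list)   # functions that achieve the goal

@dataclass
class Function:
    description: str
    subgoals: list = field(default_factory=list)  # conditions the function requires

cooling = Goal(
    "Keep core temperature within limits",
    means=[Function("Circulate coolant",
                    subgoals=[Goal("Maintain pump power supply")])],
)

The value of such a representation is that it captures dependencies between sub-systems, rather than merely listing the parts.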
In accordance with the intentions of CSE, the data must be found in situations that are representative of the real world (Hutchins, 1995). Any kind of systematic study carries with it some assumptions about what is being observed. These assumptions are relatively easy to understand when cognitive systems are studied under controlled conditions, which partly explains the preponderance of such studies. Yet it is a fundamental tenet of CSE that human action is always constrained by the context; studying cognition in the "wild" therefore does not release us from the obligation of understanding the assumptions that are made, even though they may be less easy to detect.
See also the item on Cognitive Task Design.
Literature
Hollnagel, E. & Woods, D. D. (1983). Cognitive systems engineering: New wine in new bottles. International Journal of Man-Machine Studies, 18, 583-600.
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.
Lind, M. & Larsen, M. N. (1995). Planning and the intentionality of dynamic environments. In J.-M. Hoc, P. C. Cacciabue & E. Hollnagel (Eds.), Expertise and technology: Cognition and human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
© Erik Hollnagel, 2005