The Substance Of Cognitive Modelling

Introduction

 

The interest in the modelling of cognition goes beyond the common use of models as a tool of science, and can probably best be understood as a backlash against classical behaviourism. The modern study of cognition is intrinsically linked to the invention of the digital computer and the emergence of scientific disciplines such as cybernetics and information theory, and in large part derives its legitimacy from them. Within this tradition it has long been the accepted norm that the modelling of human performance requires the modelling of the inner, mental processes and knowledge of the user, in the shape of the ubiquitous user model. The merits of this view notwithstanding, during the 1990s it was increasingly questioned whether cognition should be described primarily as a mental process, and in particular whether it can be considered a context-free mental process. The aim of this note is to investigate this view, as well as its more recent alternative, and to propose a basis for resolving the substance issue, i.e., the question of what the substance of a model of cognition should be.

 

Cognition Without Context

 

The notion that cognition could be modelled as specific processes or functions without raising the spectre of mentalism goes back to the time when computers first started to become commonly known. In a seminal paper, Edwin Boring (1946) described a five-step programme for how the functions of the human mind could be accounted for in a seemingly objective manner. Firstly, the functional capacities of the human should be analysed, e.g., by listing the essential functions. Secondly, the functional capacities should be translated into properties of the organism by providing a description of the input-output characteristics (essentially a black-box approach). Thirdly, these functions should be reformulated as properties of a hypothetical artefact, which in modern terms means expressing them as operational descriptions for a computer system, e.g., in the form of flow charts or programs. The fourth step was to design and construct actual artefacts, which is equivalent to programming the functions in detail and running a simulation. The fifth and final step was to explain the workings of the artefact by known physical principles. This would serve finally to weed out any mentalistic terms and capacities.
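To make the second, third and fourth steps concrete, the sketch below (in Python) shows what such a translation might look like for a deliberately toy "function of the mind", namely recognising familiar names. The chosen function, the stored set of names and all identifiers are hypothetical illustrations introduced here; they are not taken from Boring (1946).

    # Steps 2-3: restate the capacity "recognition" as an input-output
    # (black-box) property of a hypothetical artefact.
    KNOWN_NAMES = {"Ashby", "Boring", "Neisser"}   # assumed prior knowledge

    def recognise(stimulus: str) -> bool:
        # A stimulus goes in; a familiar/unfamiliar judgement comes out.
        return stimulus in KNOWN_NAMES

    # Step 4: "construct" the artefact and run it, i.e., simulate its behaviour.
    def simulate(stimuli):
        return {s: recognise(s) for s in stimuli}

    print(simulate(["Ashby", "Simon", "Neisser"]))
    # -> {'Ashby': True, 'Simon': False, 'Neisser': True}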

 

Boring believed that the ultimate explanation should be given in terms of psychophysiology, so that "an image is nothing other than a neural event, and object constancy is obviously just something that happens in the brain". Although written more than 50 years ago, the approach is in all essentials identical to the principles of information processing psychology as it became popular in the mid 1970s. The essence of this view was that cognition should be studied and understood as an inner (mental) process rather than as action, i.e., as process genotypes rather than performance phenotypes. More particularly, cognition was explained in terms of more fundamental processes, hence in essence treated as an epiphenomenon of human information processing. This idea of context-free cognition was promoted by people such as Herbert A. Simon, who argued very convincingly for the notion of a set of elementary information processes in the mind. One consequence of this assumption was that the complexity of human behaviour was due to the complexity of the environment, rather than the complexity of human cognition (Simon, 1972). This made it legitimate to attempt to model human cognition independently of the context, which effectively was reduced to a set of inputs. In the same manner, actions were reduced to a set of outputs, and the inputs and outputs together represented the direct interaction with the context.
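The overall structure of such a context-free model can likewise be indicated by a small sketch. The stages below (perceive, compare, decide) and their behaviour are hypothetical illustrations of the input-process-output scheme, not Simon's actual elementary information processes.

    # The context is reduced to a set of input symbols, cognition to a fixed
    # chain of elementary processes, and action to an output. All names and
    # stages are illustrative assumptions.
    def perceive(inputs):
        return [s.lower() for s in inputs]

    def compare(symbols, target="alarm"):
        return [s == target for s in symbols]

    def decide(matches):
        return "respond" if any(matches) else "ignore"

    def context_free_cognition(inputs):
        return decide(compare(perceive(inputs)))

    print(context_free_cognition(["status", "ALARM", "status"]))   # -> respond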

 

As the main interest of human information processing was to model cognition per se, the chosen approach corresponded well to the purpose. Cognitive ergonomics and cognitive systems engineering, on the other hand, are rather more interested in developing better ways of analysing and predicting human performance. This purpose cannot be achieved by models of the information processing type, and therefore requires an alternative approach.

The Language Of Cognition

 

The language of cognition, as the term is used here, does not refer to descriptions of how cognition is assumed to take place in the mind or the brain, i.e., the neurophysiological or computational processes to which cognition allegedly can be reduced. It refers rather to the terminology that is used to describe cognition and its role in human behaviour. The language of cognition is important because the terms and concepts that are used determine both which phenomena come into focus and what counts as an acceptable explanation. This is illustrated by the classical work in the study of cognition (e.g., Newell & Simon, 1972), which used astute introspection in well-controlled conditions to try to understand what went on in people's heads, predicated on the notion of information processes. While many of the features of cognition that have been found in this way are undeniably correct on the phenomenological level, researchers were not satisfied with that but wanted to unravel the putative underlying causes (genotypes). This required descriptions that usually imply ambiguous, incorrect, or unverifiable assumptions about the nature of cognition. This can be illustrated by a small example.

 

In the daily use of the language of cognition these assumptions are easily forgotten, and the reality of the underlying mechanisms or concepts, such as human information processing, is taken for granted and hence rarely questioned. Consider, for instance, the fact that it is sometimes difficult to recognise people that one knows. In one situation a face may be recognised as familiar, yet it may be impossible to recall the context. In another, a person may be fully recognised, yet the name cannot be recalled. On the other hand, it rarely happens that someone is recognised as familiar and the name can be recalled, but nothing else about the person comes to mind.

 

The language of cognition determines how this phenomenon is explained. In one case it can be described as a failure to retrieve the name of the person from long-term memory, and the explanation is that the brain stores information about people's names separately from all other information about them. This explanation implies that there are a number of separate memories, and that the recognition of a person goes through a number of steps, starting by seeing the face and ending by retrieving the name. In another case the phenomenon can be described as the difficulty of remembering people's names. The explanation in this case is that names are hard to remember because they are generally meaningless. This explanation does not require an elaborate theory about human information processing, nor a mental model, but simply states a fact (although it also requires an independent definition of what "meaningless" is).

 

Cognition In Context

 

Since the late 1980s the scientific disciplines that study cognition - predominantly cognitive science, cognitive psychology, and cognitive engineering - have increasingly emphasised the relation between context and cognition. This has been expressed in a number of books, such as Hutchins (1995) and Klein et al. (1993). The essence of this "new look", which has been referred to by terms such as "situated cognition", "natural cognition" and "cognition in the wild", is:

  1. that cognition is not confined to a single individual, but is rather distributed across multiple natural and artificial cognitive systems;

  2. that cognitive activity is not confined to a short moment as a response to an event, but is rather a part of a stream of activity;

  3. that sets of active cognitive systems are embedded in a social environment or context which constrains their activities and provides resources;

  4. that the level of activity is not constant but has transitions and evolutions; and

  5. that almost all activity is aided by something or someone beyond the unit of the individual cognitive agent, i.e., by a tool.

Many people have seen this development as a significant step forward, although the enthusiasm has not been the same on both sides of the Atlantic. Yet while it is praiseworthy, indeed, that the study of cognition at long last acknowledges that cognition and context are inseparable, it should not be forgotten that "situated cognition" is far from being something new. As long ago as 1976, Ulrich Neisser (1976, p. 8) wrote that:

 

"(w)e may have been lavishing too much effort on hypothetical models of the mind and not enough on analyzing the environment that the mind has been shaped to meet."

and a few years later Donald Broadbent (1980, p. 117) echoed the same view when writing that:

 

"... one should (not) start with a model of man and then investigate those areas in which the model predicts particular results. I believe one should start from practical problems, which at any one time will point us towards some part of human life."

After many years of gradually moving down a cul-de-sac of human information processing, it is easy to see the shortcomings of this approach to the study of cognition. It is less easy to see what the problems are in the alternative, since its power to solve, or rather dissolve, many of the difficult problems is deceptive. Although it was a mistake to assume that cognition could be studied without considering the context, it is equally a mistake to assume that there is a difference between cognition in context, i.e., in natural situations whatever they may be, and context-free cognition. The methods of investigation may be widely different in the two cases, but this does not warrant the assumption that the object of study, cognition, is different as well.

 

The hypothetico-deductive approach preferred by academic psychology emphasises the importance of controlled conditions, where independent variables can be varied to observe the effect on pre-defined dependent variables. This classical approach is often contrasted to the so-called naturalistic studies, which put the emphasis on less controlled but (supposedly) more realistic studies in the field or in near-natural working environments. It is assumed that the naturalistic type of study is inherently more valid, and that the controlled experiments run the risk of introducing artefacts and of studying the artefacts rather than the "real" phenomena.

 

Many of the claims for naturalistic studies are, however, inflated in a misguided, but understandable, attempt to juxtapose one paradigm (the "new") against the other (the "old") and to support the conclusion that the "new" is better than the "old". Quite apart from the fact that the "new" paradigm is not new at all (e.g., Brunswik, 1956), the juxtaposition disregards the essential reality that all human performance is constrained, regardless of whether it takes place under controlled or naturalistic conditions. Given any set of conditions, whatever they may be, some activities are more likely than others, and indeed some may be impossible under the given circumstances. In the study of fire fighters during actual fires, the goals (e.g., to put out the fire as quickly as possible) and the constraints (resources, working conditions, command and control paths, roles and responsibilities, experience, etc.) will to a large extent determine what the fire fighters are likely to do, and how they will respond to challenges and events. In the study of fire fighters using, e.g., a forest-fire simulation game, there will be other goals and constraints, hence a different performance. The difference between the controlled and the naturalistic situations is not the existence or the reality of the constraints, but rather the degree to which the constraints are pre-defined and controllable. In both cases the performance will be representative of the situation, but the two situations may not be representative of each other.

 

The hypothetico-deductive approach requires the conditions of a controlled experiment in order to succeed, and the ceteris paribus principle reigns supreme, although it is well known that this is a strong assumption that is rarely fulfilled. The degree of control is often less than assumed, which may lead to problems in data analysis and interpretation. The important point is, however, to realise that all human performance is constrained by the conditions under which it takes place, and that this principle holds for "natural" performance as well as for controlled experiments. For the naturalistic situation it is therefore important to find the constraints by prior analysis. If that is done, then we have achieved a degree of understanding of the situation that is similar to our understanding of the controlled experiment, although we may not be able to establish the same degree of absolute control over specific conditions (e.g., how an event begins and develops). Conversely, there is nothing that prevents a "naturalistic" approach to controlled studies, as long as the term is understood to mean only that the constraints are revealed by analysing the situation rather than by specifying it in minute detail. Possibly the only things that cannot be achieved in a controlled setting are the long-term effects and developments that are found in real life.

Mental Models And The Law Of Requisite Variety

One answer to the substance issue is provided by the Law of Requisite Variety, which was formulated in cybernetics in the 1940s and 1950s (Ashby, 1956). This "law" is concerned with the problem of regulation or control and expresses the principle that the variety of a controller should match the variety of the system to be controlled. The latter is usually described in terms of a process plus a source of disturbance. The Law of Requisite Variety states that the variety of the outcomes (of a system) can only be decreased by increasing the variety in the controller of that system. Effective control is therefore not possible if the controller has less variety than the system.
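The law can be stated compactly as an inequality. The rendering below is a common textbook sketch rather than Ashby's own notation; it assumes that variety is measured as the number of distinguishable states, that the outcome is jointly determined by the disturbance and the controller's response, and it writes V_D, V_R and V_O for the variety of the disturbances, of the controller (regulator) and of the outcomes respectively:

    V_O \ge \frac{V_D}{V_R} \qquad \text{or equivalently} \qquad \log V_O \ge \log V_D - \log V_R

In words, each unit of variety that the controller can deploy can at best absorb one unit of variety in the disturbances; whatever remains shows up as variety in the outcomes, which is why a controller with less variety than the system cannot keep the outcomes within a narrow target set.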

 

It is consistent with the interests of cognitive ergonomics and cognitive systems engineering that the study of cognition should focus on problems which are representative of human performance, i.e., which constitute the core of the observed variety. The implication is that the regularity of the environment gives rise to a set of representative ways of functioning, and that it is these that should be investigated, rather than performances that are derived only from theoretical predictions or from impoverished experimental conditions. With regard to the modelling of cognition the substance should thus be the variety of human performance as it can be ascertained from experience and empirical studies, but, emphatically, not from theoretical studies. The requirement on the model is therefore that its variety is sufficient to match the observed variety of human performance. The model must, in essence, be able to predict the actual performance of the user in response to a given set of events under given conditions.

 

The difference between the observed variety and the theoretically possible variety is essential. The theoretically possible variety is an artefact, which mirrors those assumptions about human behaviour and human cognition that are inherent in the theory. The theoretically possible variety may therefore include types of performance that will not occur in practice, either because the theory is inadequate or because of the influence of the working conditions. (It follows that if the working conditions are very restricted, then only a very simple model is needed. This principle has been demonstrated by innumerable experimental studies!) If research is rooted in a very detailed theory or model, we may at best achieve no more than reinforcing our belief in the theory. The model may fail to match the observed variety and may very likely also be more complex than strictly needed, i.e., the model is an artefact rather than a veridical model in the sense of being based on observable phenomena.

 

The requirements for the modelling of human cognition should be derived from the observed variety, since there is clearly no reason to have more variety in the model than is needed to account for the observed variety. The decision about how much is needed can therefore be based on the simple principle that if the predicted performance matches the actual performance sufficiently well, then the model has sufficient variety. This furthermore removes the vexing problem of model validation. The catch is, of course, in the meaning of "sufficiently", which, in any given context and for any given purpose, must be replaced by a well-defined expression. Yet even though this problem may sometimes be difficult to solve, the answer to the substance issue should definitely be found by asking what cognition does, rather than what cognition is.
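As a concrete, if deliberately simplified, illustration of this principle, the sketch below (in Python) compares a model's predicted performance with the observed performance, both represented as sequences of discrete actions. The representation, the 90 per cent threshold and the function names are hypothetical choices introduced here for illustration; in practice "sufficiently well" must be given a well-defined meaning for the context and purpose at hand.

    # A minimal sketch of the matching principle: the model has sufficient
    # variety if its predictions agree with observed performance often enough.
    # The action labels and the threshold are illustrative assumptions.
    def sufficient_variety(predicted, observed, threshold=0.9):
        if len(predicted) != len(observed):
            raise ValueError("sequences must describe the same set of events")
        matches = sum(p == o for p, o in zip(predicted, observed))
        return matches / len(observed) >= threshold

    # Predicted vs. observed operator actions for five hypothetical events.
    print(sufficient_variety(["detect", "diagnose", "act", "verify", "act"],
                             ["detect", "diagnose", "act", "act", "act"]))
    # -> False, since only 4 of 5 actions (80%) are predicted correctly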

 

Literature

Ashby, W. R. (1956). An introduction to cybernetics. London: Methuen & Co.

Boring, E. G. (1946). Mind and mechanism. The American Journal of Psychology, 59(2), 173-192.

Broadbent, D. E. (1980). The minimization of models. In A. J. Chapman & D. M. Jones (Eds.), Models of man. Leicester: The British Psychological Society.

Brunswik, E. (1956). Perception and the representative design of psychological experiments. Berkeley: University of California Press.

Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.

Klein, G. A., Orasanu, J., Calderwood, R. & Zsambok, C. E. (1993). Decision making in action: Models and methods. Norwood, NJ: Ablex.

Neisser, U. (1976). Cognition and reality. San Francisco: W. H. Freeman.

Newell, A. & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Simon, H. A. (1972). The sciences of the artificial. Cambridge, MA: MIT Press.

 

© Erik Hollnagel, 2005

 
