Definitions and Interpretations: Comments on the Symposium on Connectionist Models and Psychology
Ellen Watson
Department of Philosophy
University of Queensland
At a number of points during the seminar, discussion became paralysed
over matters of vocabulary. By the end of the day, anyone who still
dares to use words such as 'frame', 'schema', 'psychological theory',
'functional architecture', 'account for', 'metaphor', 'rule',
'subsymbol', or even 'model' had to do so with apologies or caveats.
Because we kept coming back to these questions, and because they have the
potential to be so paralysing if left unanswered, permit me to draw the
following conclusion. Whatever neural networks have to contribute to
psychology, and whatever psychologists are able to contribute to the
legitimacy of neural networks or computational modelling in general,
philosophers have a number of things that they can contribute to both.
Philosophers do not (usually) perform experiments or run simulations,
but we do specialise in defining terms, uncovering assumptions,
specifying the problem to be tested, and helping to determine whether
that problem has been tested after all. The seminar focussed on the
possibilities for interdisciplinary cooperation between computer
scientists and psychologists; I would propose that philosophers also
have a useful role they might play in this program of cooperative
research.
Metaphor
Mike Johnson argued that new metaphors are needed, but who is it that
needs them? He suggested that psychology needs a new machine metaphor,
because these have been shown to drive psychological theory throughout
its history (even back to Aristotle and Descartes). But with neural
networks the technology (and attendant mathematics) has outstripped our
ability to understand it. We need metaphors in order to understand the
neural networks themselves, with their vectors, tensor products, hidden
units and "subsymbols".
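Since part of the paralysis is that the vocabulary itself is unfamiliar, a
minimal sketch may help fix the referents. The following Python fragment is
my own illustration, with arbitrary layer sizes and random weights (nothing
here comes from the seminar); it shows concretely what 'vectors' and
'hidden units' are in such a model, and the hidden activation pattern it
prints is the sort of thing talk of "subsymbols" gestures at.

    import numpy as np

    # Illustrative only: a tiny feed-forward network with one hidden layer.
    rng = np.random.default_rng(0)
    n_input, n_hidden, n_output = 4, 3, 2
    W1 = rng.normal(size=(n_hidden, n_input))   # input-to-hidden weights
    W2 = rng.normal(size=(n_output, n_hidden))  # hidden-to-output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([1.0, 0.0, 1.0, 1.0])  # an input vector
    h = sigmoid(W1 @ x)                 # hidden-unit activations
    y = sigmoid(W2 @ h)                 # the network's output vector
    print("hidden units:", h)
    print("output:", y)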
Sally Andrews suggested that metaphors are necessary and invaluable
because in trying to link brain and behaviour we are trying to link two
incommensurable levels of analysis. Peter Slezak questioned why
psychologists hedge their theories more than other scientists by
calling them metaphors, and why they don't adopt principles of
scientific realism and put their theories forward as purported, if
imperfect, descriptions of reality.
To navigate the waters between Andrews and Slezak, for and against
metaphors, we need to sort out who is to use the metaphors, for what,
where they appear in the theory, at what level, and what truth-bearing
descriptions could potentially replace them -- are they unavoidable and
the limit of psychology, or are they holding a place for more detailed
and well-worked-out theories of the human mind? Philosophy of science could
help us sort out all of these questions, because it investigates the
nature of theories, their relation to evidence and their relation to
the phenomena to be explained. Andrews' justification of metaphor
suggests that neuronal and behavioral levels of explanation really are
incommensurable; Slezak's comments suggest that psychologists treat
their theories differently from the way other sciences treat theirs. Do
psychologists want to continue making these assumptions? Making the
assumptions more explicit might make the choice more clear.
What is a psychological theory?
Philosophy of science might be able to help here, too, since as I
mentioned above, some of the central questions of philosophy of science
are 'What is a theory?' and, 'What is the relationship between theory
and evidence?' However, here we start travelling in a circle (or maybe
pulling ourselves up by our bootstraps) because Paul Churchland's
recent book contains an argument based on connectionist assumptions
about the nature of mental states (Churchland, 1989). If we think of
theories in the context of philosophy of mind, and if we subscribe to
Churchland's form of connectionism, then theories turn out to be
collections of vectors in n-dimensional weight space (sound
familiar?). At this point, then, we need to turn back to the
philosophy of science and to epistemology to see if the circle in which
we are caught is vicious or virtuous. What happens when your philosophy
of mind grounds the science on which you have based your philosophy of
mind? This is another question with which philosophers have grappled.
As Cyril Latimer pointed out, philosophers have been asking questions
about the nature of objects and their properties and our perception and
representation of these properties since philosophy began. Although
philosophers are far from coming up with the last word on the subject, we
have made some mistakes that contemporary cognitive scientists shouldn't
have to make again, such as mistaking a prototypical representation for
a specific image (as Berkeley did), or forgetting to specify
how a cognitive machine can automatically extract features out
of a holistically observed scene (as Locke did long before schema
theorists).
What is a rule?
Peter Slezak brought up the question of whether a model has a rule, and
made an analogy to the work of Chomsky and his claim that human beings
have rules of grammar. When I learned my Chomsky in a philosophy of
language seminar, the instructor was careful to point out that there
are two ways that something can "have" a rule. One is to have the rule
actually inscribed inside; in this case, the system looks up the rule
and applies it. John Searle attributed this picture of rule-following
to strong AI in his Chinese room paper (Searle, 1980), and rightfully
criticised it for invoking irreducible homuncularism. The other way a
system can "have" a rule is to obey it, i.e. to have its behaviour align
with it, without the rule being inscribed anywhere inside the system and
without the system having to look it up. In this sense, planets obey
"rules" that describe their orbits (also known as "laws"), without (as far
as we know) representing those rules to themselves. You can't have a
debate about the nature of rules in a particular system without
delineating which sense of "have" you have in mind.
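The distinction is easy to exhibit. Here is a toy contrast in Python (the
pluralisation rule and the examples are invented for illustration): the
first system stores a rule and consults it; the second merely behaves in
accordance with the same rule without representing it anywhere.

    # Sense 1: the rule is inscribed and looked up before being applied.
    RULES = {"plural": lambda w: w + "es" if w.endswith("s") else w + "s"}

    def pluralise_by_lookup(word):
        rule = RULES["plural"]      # the system consults a stored rule
        return rule(word)

    # Sense 2: behaviour merely conforms to the rule. This table of
    # memorised cases contains no statement of the rule anywhere, yet
    # its input-output behaviour aligns with it, much as a planet's
    # motion aligns with laws it does not represent.
    MEMORISED = {"cat": "cats", "dog": "dogs", "bus": "buses"}

    def pluralise_by_rote(word):
        return MEMORISED[word]      # no rule represented, only cases

    for w in ("cat", "dog", "bus"):
        assert pluralise_by_lookup(w) == pluralise_by_rote(w)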
Subsymbol
The term 'subsymbol' is similarly ambiguous. Since the publication of
Paul Smolensky's article, "On the proper treatment of connectionism,"
critics have been trying to get Smolensky himself to be clear about
what he means. Are subsymbols still symbols, or not? In my opinion, the
whole excitement about neural networks is that they are able to process
information without internal programming that one can read off in
something like English propositions, and therefore show that people
like Jerry Fodor are wrong to argue that there must necessarily be a
language of thought. However, one must be cautious -- this way could
lie behaviourism (Max Coltheart was dangerously close to being called a
behaviourist when Sally Andrews accused him of having said that whether a
system has a rule depends on its performance in pronouncing
non-words). If neural nets are to serve the purpose of suggesting
formal structures for carrying out tasks, which several people claimed
for them, including Max Coltheart and George Oliphant, then we need
some language in which to read off the properties of hidden units. This
seems to me the central issue concerning neural nets with respect to
explanatory applications in cognitive psychology, and (therefore?) the
most difficult to resolve. It is here that discussions of the
interface between neural net research and cognitive psychology suddenly
make contact with the entire philosophical tradition that investigates
the nature of meaning and representation itself.
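The problem shows up even in a toy case. In the sketch below (again my own
illustration, with random weights standing in for a trained network), the
hidden units take perfectly definite activation values, yet nothing in the
net itself says what, if anything, those values mean; any reading has to
be imposed from outside, for instance by comparing activation patterns
across inputs.

    import numpy as np

    # Illustrative: random weights stand in for a trained network.
    rng = np.random.default_rng(2)
    W = rng.normal(size=(3, 4))

    def hidden(x):
        return 1.0 / (1.0 + np.exp(-(W @ x)))  # hidden-unit activations

    inputs = {
        "pattern A": np.array([1.0, 0.0, 1.0, 0.0]),
        "pattern B": np.array([0.0, 1.0, 0.0, 1.0]),
    }
    for name, x in inputs.items():
        # Definite numbers; but in what language do we read them off?
        print(name, np.round(hidden(x), 2))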
Conclusions
The above topics represent some of the areas in which philosophers
might contribute some of their experience and expertise to discussions
in connectionism and psychology. However, the benefits of
collaboration would definitely flow both ways. As Cyril Latimer said,
modelling and real applications enforce theoretical rigour, and this
would be true for philosophers as well as scientists. Philosophers can
learn a lot from psychologists and AI researchers who try to build
models and put some of the theories into practice. These notes are an
appeal for continuing the dialogue and the cooperative enterprise.
References
Churchland, P. M. (1989) On the nature of explanation: A PDP approach. In
    A Neurocomputational Perspective: The Nature of Mind and the
    Structure of Science. Cambridge, MA: MIT Press.
Churchland, P. M. (1989) On the nature of theories: A neurocomputational
    perspective. In A Neurocomputational Perspective: The Nature of Mind
    and the Structure of Science. Cambridge, MA: MIT Press.
Fodor, J. A. (1975) The Language of Thought. Cambridge, MA: Harvard
    University Press.
Searle, J. R. (1980) Minds, brains, and programs. The Behavioral and
    Brain Sciences, 3, 417-457.
Smolensky, P. (1988) On the proper treatment of connectionism. The
    Behavioral and Brain Sciences, 11, 1-74.