Latimer's discussion serves as a most valuable provocation precisely by prompting the first two commentators to take up certain of these issues as they arise within their own specific research areas. Max Coltheart and Sally Andrews present complementary perspectives on the problem of reading aloud - the problem of how orthographic representations (writing) are transformed into phonological representations (speech). Aside from its intrinsic interest, this problem provides an ideal vehicle for discussing the very general issues arising throughout cognitive science. The dispute between the classical 'Dual Route' and connectionist 'Single Route' architectures for explaining reading aloud encapsulates many of the vexing questions that Latimer raises and that plague cognitive science generally.
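To fix ideas, a minimal sketch of how the two routes are usually conceived - a whole-word lexical lookup operating alongside grapheme-phoneme correspondence rules - might look as follows. The lexicon entries, rules and informal respellings are invented for illustration and are not those of any model discussed in these papers.

# A minimal 'dual route' sketch for reading aloud: a lexical route that
# looks up stored whole-word pronunciations, and a non-lexical route that
# applies grapheme-phoneme correspondence rules. All entries are invented
# toy examples using informal respellings, not the rules of any actual model.

LEXICON = {
    "yacht": "yot",   # an exception word the rules below would misread
    "have": "hav",
}

GPC_RULES = [          # ordered grapheme -> phoneme correspondences
    ("ave", "ayv"),    # as in 'gave', 'save' - hence the clash with 'have'
    ("ch", "ch"),
    ("g", "g"),
    ("h", "h"),
    ("s", "s"),
    ("a", "a"),
    ("v", "v"),
    ("e", ""),         # word-final 'e' treated as silent in this toy
]

def nonlexical_route(word):
    """Assemble a pronunciation by applying the rules left to right."""
    phonemes, i = [], 0
    while i < len(word):
        for grapheme, phoneme in GPC_RULES:
            if word.startswith(grapheme, i):
                phonemes.append(phoneme)
                i += len(grapheme)
                break
        else:
            i += 1     # skip letters this toy rule set does not cover
    return "".join(phonemes)

def read_aloud(word):
    """Use the lexical route where a stored entry exists, else the rules."""
    return LEXICON.get(word, nonlexical_route(word))

print(read_aloud("gave"))   # 'gayv' via the non-lexical route
print(read_aloud("have"))   # 'hav' via the lexical route ('hayv' by rule alone)

The point of the sketch is only that the two routes are architecturally distinct components, which is what gives the question "how many routes?" its apparent sense.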
In order to assist the reader, it may be useful to bring into clear focus a few of the central issues raised by Latimer's paper and addressed by the respondents. Perhaps the most important and, at the same time, most troublesome of these is the crucial distinction Latimer draws between "theory-relevant" and "theory-irrelevant" components of simulations. Aside from the way in which this question arises within any particular computer model, it is perhaps also the central issue in the rivalry between connectionist and classical or symbolic approaches to AI. Indeed, these are essentially the same problem, for if we were clear about which aspects of the simulations in either case were theory-relevant, we could settle the dispute between the approaches as well. For example, the issue between Coltheart's 'dual route' model and the 'single route' model of Seidenberg and McClelland is not merely one of comparing them on brute empirical performance, but of deciding what the relevant criteria for comparison are. A possibility not considered in the following papers is that, when Latimer's distinction is carefully attended to, the dispute between seemingly rival accounts may come to appear quite different. That is, when compared on the appropriate criteria of theory-relevance in each case, the two accounts may be seen to be complementary rather than competitive. In this sense, what appears to be a substantive theoretical dispute actually turns on meta-theoretical or "philosophical" issues about the status and import of theoretical constructs.
Accordingly, it is telling, though not surprising, that "philosophical" questions are prominent in Latimer's paper as well as in the responses. There is some reason to think that the persistent intrusion of questions about realism, instrumentalism and the ontological status of theoretical constructs is a symptom of a certain malaise at the heart of cognitive science. George Oliphant is somewhat more blunt: his discussion declines to acknowledge any value in connectionist approaches. But leaving such a grim diagnosis aside, I would note only that the persistence of philosophical problems in the midst of substantive, technical discussions suggests a characteristic conceptual confusion rather than straightforwardly "empirical" problems.
One symptom of this malaise is the seemingly simple question posed by Andrews: 'What is a Rule?' Of course, agonizing over this question has been perhaps the principal methodological consequence of connectionist developments, and it is the specific form taken in this context by Latimer's distinction above - though Latimer formulates the same issue in terms of the status of schemata. It is salutary, however, to notice that this question produces a certain sense of deja vu: it is exactly the question that was intensely debated for some twenty years in connection with Chomsky's generative grammars and the competence-performance distinction: What does it mean to ascribe the abstract formalisms of a grammar literally to a person? When is it appropriate to describe a system as literally following a rule, as distinct from merely behaving according to the rule? This is not the place to pursue the question in detail, but it is no accident that Andrews and other participants steadfastly refuse to construe their theories as literal descriptions, preferring instead to regard them as metaphors. Latimer, too, warns against the sin of "reification", though it is one which psychologists appear to worry about more than their colleagues in other disciplines. Chomsky insisted that, notwithstanding their abstractness, the rules of a grammar may be literally ascribed to systems, though of course they may be embodied or implemented in as-yet-unknown mechanisms. In other words, the attribution of "psychological reality" need not be reserved exclusively for theories of causal processes or mechanisms, but may be literally accorded to highly abstract "functionalist" specifications of computational formalisms. Though he expressed some reservations in discussion, Coltheart comes closest to such a realism about rules in his articulation of this functionalist approach, by comparison with the instrumentalism or 'fictionalism' of Andrews' talk of metaphor.
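A toy contrast may help to fix what is at stake in the question of rule-following versus mere conformity to a rule. The functions and items below are invented for illustration only: two systems can be behaviourally identical over a set of inputs while only one of them contains anything that could literally be called a represented rule.

# Two toy systems with identical behaviour over these items: one explicitly
# represents and applies a rule, the other merely behaves in accordance
# with it. The examples are invented for illustration only.

def pluralise_by_rule(noun):
    """The rule 'append -s' is explicitly represented and applied."""
    suffix = "s"
    return noun + suffix

MEMORISED_PLURALS = {"cat": "cats", "dog": "dogs", "book": "books"}

def pluralise_by_lookup(noun):
    """No rule is represented anywhere; the stored pairs just happen to
    conform to the regular pattern."""
    return MEMORISED_PLURALS[noun]

for noun in ("cat", "dog", "book"):
    assert pluralise_by_rule(noun) == pluralise_by_lookup(noun)

Over the memorised items, nothing in the observed behaviour distinguishes the two, which is one way of seeing why the question "does the system literally follow the rule?" cannot be settled by performance alone.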
Since there was no consensus among the participants on these questions, it is perhaps helpful here to highlight the issues and their relevance to the specific theoretical problems discussed. The question of rules and their "psychological reality" is closely related to Latimer's question about 'levels of analysis'. In the present discussions, the question of levels is at the heart of the issue about the relation between 'dual route' and 'single route' models. Coltheart emphasizes a crucial distinction between "functional architecture" and network architecture when considering connectionist models. This is precisely a distinction between levels of analysis in Latimer's sense. Its importance for theories of reading aloud in particular is that the distinction may serve to dissolve rather than solve the problem, in Wittgenstein's sense: as noted earlier, the appearance of rivalry may be an artifact of a misconception about Latimer's questions concerning levels of analysis, theory-relevance and psychological realism. For example, to describe the Seidenberg and McClelland model as a "Single Route" alternative to Coltheart's scheme is to assume that the two are being compared on the same criteria of theory-relevance. That is, the dispute is predicated on the tacit assumption that the models are comparable in a way which makes sense of counting the number of routes in each case. However, the so-called "single" route of Seidenberg and McClelland is actually the entire distributed, parallel network, for which the very idea of counting "routes" makes no clear or direct sense. Though Coltheart does not express the point in this way, it follows from his own account of the functional level of analysis that the connectionist "alternative" may be better seen as an implementation-level theory rather than the rival theory Coltheart himself takes it to be. On this conception, which is the familiar one proposed by Fodor and Pylyshyn, both Coltheart's dual route model and the Seidenberg and McClelland PDP model may be regarded as literal, and compatible, theories at different levels of analysis. It is worth noting that this analysis explains an otherwise troublesome problem mentioned by Andrews - namely, that it has proved impossible to distinguish empirically between models that use explicit rules and those that do not. This is just what would be expected if the models are not competing alternatives but complementary 'levels of analysis'.
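By way of contrast with the dual-route sketch above, a generic feedforward network of the kind at issue - emphatically not the actual Seidenberg and McClelland model, merely an illustrative stand-in with arbitrary sizes and random weights - shows why counting "routes" loses its grip: every word, regular or exceptional, is processed by the same single set of weights.

import numpy as np

# Generic feedforward sketch mapping an orthographic input pattern to a
# phonological output pattern. Layer sizes and weights are arbitrary;
# this is an illustrative stand-in, not the Seidenberg-McClelland model.
rng = np.random.default_rng(0)
N_ORTH, N_HIDDEN, N_PHON = 20, 10, 15

W1 = rng.normal(size=(N_ORTH, N_HIDDEN))    # orthography -> hidden units
W2 = rng.normal(size=(N_HIDDEN, N_PHON))    # hidden units -> phonology

def pronounce(orthographic_pattern):
    """Regular words, exception words and nonwords all pass through the
    same weights; there is no separate pathway to count as a 'route'."""
    hidden = np.tanh(orthographic_pattern @ W1)
    return np.tanh(hidden @ W2)

# A word is just a pattern of activation over the orthographic units.
some_word = rng.integers(0, 2, size=N_ORTH).astype(float)
print(pronounce(some_word))

Whatever lexical or rule-like regularities such a network captures are distributed across its weights rather than localized in distinct components, which is why the comparison with a dual-route functional architecture arguably belongs at a different level of analysis.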
Some unclarity on this question was introduced into the debate by McClelland and Rumelhart, who gave inconsistent accounts of the relationship between classical and PDP approaches. In describing their approach as concerned with the "microstructure of cognition", they used the analogy of the relation of quantum physics to chemistry, taken to parallel the relation of the PDP level to the symbolic level. This is a relation of inter-theoretical reduction, a case in which the lower level implements the regularities of the higher one. However, confusingly, McClelland and Rumelhart also use the analogy of quantum physics and Newtonian mechanics to illustrate the same relation between connectionism and classical symbolic approaches. But this case is quite different from the first, because quantum physics replaced Newtonian physics, much as the kinetic theory of heat replaced the caloric theory. Again, these questions cannot be pursued here, but they serve to indicate the way in which the specific problems raised in the following papers bear on the most central debates in cognitive science.
The same anxiety about the status of theoretical models can be seen in Latimer's response to his students' question about the significance of the units in a connectionist system. It is perhaps worth signalling in advance that Latimer's discussion appears to collapse two issues which ought to be kept distinct - namely, the question of the precise theoretical import of certain postulates and the question of the status of postulates per se in science. Thus, Latimer raises the general instrumentalist-realist debate in the philosophy of science here, whereas the puzzle over the precise meaning of the nodes in a connectionist model is a detailed theoretical question specific to this domain. Instrumentalism is the view that theoretical entities are merely useful calculating devices. This is, once again, the idea that our models are not to be taken literally but only as 'convenient fictions' or metaphors. However, neither instrumentalism nor realism can answer the students' question concerning the best way to interpret the units in a neural net.
On the other hand, as I have already noted, the instrumentalism issue raised by Latimer is central to other aspects of the discussions here. If it is not sufficiently evident from the written papers, the deeply enigmatic nature of rules was openly confessed by participants in the verbal presentations. Some admitted, only half in jest, to not knowing what a rule is - despite working with rules and postulating them in the course of actual theorising. The seeming paradox here is only the familiar point made within cognitive science itself - namely, that one can act expertly in some domain without understanding how one does it - knowing 'how' without knowing 'that'. I hasten to add that this is not a criticism of practitioners, since efficiency in any domain of expertise often requires precisely such inarticulate, unreflective behavior - at least until the smooth performance is interrupted by a problem. Then, in the practice of science, just as in the control of action, the emergence of an 'anomaly' is precisely when self-conscious attention to fundamental principles is called for. In such cases, a clear understanding of seemingly extraneous meta-theoretical or "philosophical" problems, such as the nature of rules, may be a prerequisite for scientific progress. However, the candid admission of puzzlement by the scientists about rules is no grounds for smugness among philosophers: Kripke's recent account of Wittgenstein on rules has revealed that philosophers don't know what they are talking about either.