
(2004) Synthese 142 (2).

Nonmonotonic inferences and neural networks

Reinhard Blutner

pp. 143-174

There is a gap between two different modes of computation: the symbolic mode and the subsymbolic (neuron-like) mode. The aim of this paper is to overcome this gap by viewing symbolism as a high-level description of the properties of (a class of) neural networks. Combining methods of algebraic semantics and nonmonotonic logic, the possibility of integrating both modes of viewing cognition is demonstrated. The main results are (a) that certain activities of connectionist networks can be interpreted as nonmonotonic inferences, and (b) that there is a strict correspondence between the coding of knowledge in Hopfield networks and the knowledge representation in weight-annotated Poole systems. These results show the usefulness of nonmonotonic logic as a descriptive and analytic tool for analyzing emergent properties of connectionist networks. Assuming an exponential development of the weight function, the present account relates to optimality theory – a general framework that aims to integrate insights from symbolism and connectionism. The paper concludes with some speculations about extending the present ideas.
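
Result (a) rests on the standard picture of Hopfield retrieval as energy minimisation: a partial or noisy input is supplied, asynchronous updates lower the network energy, and the stable state reached can be read as a defeasible conclusion drawn from that input. Below is a minimal, self-contained sketch of that dynamics, assuming Hebbian storage and bipolar (+1/-1) units; the function names and the toy pattern are illustrative and are not taken from the paper, which works instead with weight-annotated Poole systems.

import numpy as np

# Illustrative Hopfield sketch. Asynchronous sign updates descend the energy
# E(s) = -1/2 * s^T W s, so the network settles into a stable state that can
# be interpreted as the "preferred" completion of a partial input.

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W / patterns.shape[0]

def recall(W, state, max_sweeps=100):
    """Asynchronous updates until a full sweep changes nothing."""
    s = state.copy()
    for _ in range(max_sweeps):
        prev = s.copy()
        for i in np.random.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
        if np.array_equal(s, prev):
            break
    return s

# Store one pattern and complete a corrupted cue: the stable state reached
# plays the role of the defeasible conclusion drawn from the partial evidence.
stored = np.array([[1, -1, 1, -1, 1, -1]])
W = train_hopfield(stored)
cue = np.array([1, -1, 1, 1, 1, 1])   # noisy / partial input
print(recall(W, cue))                  # converges to [ 1 -1  1 -1  1 -1]

The sketch only illustrates the retrieval dynamics; the paper's contribution is the logical reading of such stable states and the correspondence between the network's weights and weight-annotated default rules.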

Publication details

DOI: 10.1007/s11229-004-1929-y

Full citation:

Blutner, R. (2004). Nonmonotonic inferences and neural networks. Synthese 142 (2), pp. 143-174.
