Phenomenology and artificial intelligence
Husserl Learns Chinese
For over a decade, John Searle's ingenious argument against the possibility of artificial intelligence has held a prominent place in contemporary philosophy. This is not just because of its striking central example and the apparent simplicity of its argument. As its appearance in Scientific American testifies, it is also due to its importance to the wider scientific community. If Searle is right, artificial intelligence in the strict sense, the sense that would claim that mind can be instantiated through a formal program of symbol manipulation, is basically wrong. No set of formal conditions can provide us with the characteristic feature of mind, which is the intentionality of its mental contents. Formally regarded, such intentionality is an irreducible primitive. It cannot be analyzed into non-intentional (purely syntactic, symbolic) components. This paper will argue that this objection is based on a misunderstanding. Intentionality is not simply something given which is incapable of further analysis. It only appears so when we mistakenly abstract it from time. When we regard its temporal structure, it shows itself as a rule-governed, synthetic process, one capable of being instantiated both by machines and by men.
Full citation [Harvard style]:
Mensch, J. (1991). Phenomenology and artificial intelligence: Husserl learns Chinese. Husserl Studies 8 (2), pp. 107-127.