[Next] [Up] [Previous]
Next: The Intentional Stance Up: AI and the Philosophy Previous: Consciousness and the Puzzle

Syntax, Semantics, and Intentionality

We shall return to the general issue of whether computers can experience pains and other experiential states towards the end of the chapter. For now, we direct our attention to the claim that, even if computers could not literally be conscious, they might still be able to have genuine mental states insofar as their computational processes mirror the cognitive processes in our minds. The next part of our discussion therefore leaves consciousness entirely out of the picture, and concentrates on the nature of cognition, which is the prime territory of AI.

When we think, we think about things. For instance, Jane's belief that Tom is sprinkling salt on his French fries is a belief about something. Jane's thought -- in this case a belief -- has, as its content, the proposition

Tom is sprinkling salt on his French fries.

Similarly, if Jane wants to buy a pair of green leg-warmers later today, she has a thought -- this time a wish, rather than a belief -- whose content is

Jane buys a pair of green leg-warmers later today.

If Tom also wants Jane to buy green leg-warmers today, then he too has a wish, which has the same content as Jane's wish. Different thought, same content.

Many philosophers have thought that possessing a content in this way is a defining feature of mental states, at least of those mental states which we have been calling `cognitive processes', if not of conscious experiences as well. The term intentionality has often been used to refer to this characteristic, and some philosophers have seen it as providing a key to the essence of `mind'. The word `intentionality' is supposed to indicate the way in which thoughts are directed at objects or circumstances outside themselves. That is, `intentionality' connotes `aboutness' (see Searle, 1983).

The notion of intentionality is also used in order to explain the meaningfulness of language: the difference between meaningless utterances and meaningful communication. When we talk or write things down, our utterances have certain formal properties -- phonological and syntactical properties, which can be discussed completely in isolation from their meaning. For example, when Jane speaks or writes the sentence ``Tom is sprinkling salt on his French fries,'' we can talk of the fact that the sentence has eight words, that the sentence contains an embedded adverbial phrase, and so on. These are observations about the symbols in themselves, and do not make any reference to their meaning, to the fact that the sentence is about certain objects and events in the world, namely, Tom, salt, and French fries. To talk about the sentence's meaning is to talk on the level of semantics, and it is relatively easy to characterize meaning in terms of intentional content. So the notion of intentionality can be used in order to explain both inner mental states and external spoken or written utterances.
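The distinction drawn here between formal (syntactic) properties and meaning (semantics) can be illustrated with a small sketch. The code below is purely illustrative and not drawn from the text: the syntactic facts are computable without any interpretation of the symbols, whereas the semantic reading is here only gestured at by a hand-built mapping from the sentence to the objects and events it is about.

```python
# Illustrative sketch of the syntax/semantics distinction (hypothetical code).
sentence = "Tom is sprinkling salt on his French fries"

# Syntactic (formal) properties: facts about the symbols themselves,
# computable with no reference to what they mean.
tokens = sentence.split()
word_count = len(tokens)                  # the sentence has eight words
starts_with_capital = tokens[0][0].isupper()

# A semantic reading: an interpretation mapping the symbols onto
# objects and events in the world (labels stand in for the things).
interpretation = {
    "agent": "Tom",                       # the person the name refers to
    "action": "sprinkling",
    "theme": "salt",
    "location": "his French fries",
}

print(word_count, starts_with_capital)
```

The first half of the sketch never consults the dictionary at the bottom; that separation is exactly what allows syntax to be discussed in complete isolation from meaning.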

People who doubt that computers can tell us much of interest about the nature of the mind have frequently appealed to the intentionality of human mental activity and of meaningful communication as a way of denying mentality in machines. On the face of it, it may seem obvious to people who have read through this book that computers can have states which possess intentionality. The symbol-structures that a computer operates with are surely not necessarily mere collections of uninterpreted tokens, but will often be understandable in terms of meanings, or contents. The structures will refer to objects or circumstances outside themselves. We have seen, for example, how programs can be written to engage in natural language dialogues which do not merely operate on the level of syntax, but also on the level of semantics. Also we have observed that machines running AI (and other) programs operate with many internal representations which can only be understood in an `intentionalistic' way: they construct plans about the manipulation of objects, perform searches, compare, choose among alternatives, apply rules, and so on. All these operations are characterizable in terms of their reference to various subject-matters which are distinct from the symbolic structures which constitute the operations themselves.
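The point that the same program operations admit both a formal and an `intentionalistic' description can be made concrete with a toy sketch. The planner below is hypothetical and not drawn from any program discussed in the book: viewed one way, it merely rewrites tuples of strings according to rules; viewed another, it constructs a plan for getting block A onto block B.

```python
# Toy planner (hypothetical sketch). Formally: breadth-first rewriting of
# tuples of strings. Intentionally: a search for a plan to stack block A on B.

def successors(state):
    """Each 'action' is, formally, just a transformation of the state tuple."""
    holding, a_on, b_clear = state
    moves = []
    if holding is None and a_on == "table":
        moves.append(("pickup A", ("A", None, b_clear)))
    if holding == "A" and b_clear:
        moves.append(("put A on B", (None, "B", False)))
    return moves

def plan(start, goal_test):
    """Breadth-first search over symbol structures."""
    frontier = [(start, [])]
    seen = {start}
    while frontier:
        state, actions = frontier.pop(0)
        if goal_test(state):
            return actions
        for name, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None

# State: (what the hand holds, what A is on, whether B is clear).
result = plan((None, "table", True), lambda s: s[1] == "B")
print(result)
```

Both descriptions are true of the same process: the formal one mentions only tuples and strings, while the intentional one mentions blocks, goals, and plans. Whether the latter description picks out genuine intentionality is precisely what is at issue in the dispute that follows.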

In response to this, philosophers who are sceptical about machine mentality retort that such programs do not have genuine semantics or intentionality, but only pseudo-intentionality. Consider, first of all, the sentences of a book or of a letter. If we came across the sentence about Tom and the French fries in a letter written by Jane, we would say that the words in the letter `had meaning', but by this we would really mean that Jane was using the words to express her meaning -- or alternatively, that we derived a meaning from the words on reading them. The piece of paper and the marks on it would not possess meaning or intentionality in their own right, or at least they would do so only in a derivative sense. They would simply be a vehicle to express the intentional contents of Jane's thoughts. Surely, the argument goes, a computer is just a more complicated kind of device for transmitting symbols whose meanings originate in their human users, rather than in the device itself. However complex we might make the computer and its software, and however lifelike its external behaviour, it will always be in essence simply a device for manipulating and transmitting strings of symbols which are, from the computer's point of view, merely formal patterns of tokens, to which meaning is given by us, the human users. This, in outline, is the position of those who are against attributing genuine intentionality to computers. Naturally enough, people have been quick to come forward in defence of the machines.



Cogsweb Project: luisgh@cogs.susx.ac.uk