In a paper called ``Intentional Systems'' (Dennett, 1978a), Daniel Dennett suggested the following line of thought. Consider a computer running a chess-playing program. We might consider this machine from a number of different points of view. We might adopt the `design stance': this will be our point of view if what we are interested in is primarily the construction of the program, how it is implemented in the hardware, and so on. Then there is the `physical stance': this will be our perspective if we are interested in the chemical or electronic properties of the semiconductor devices in the machine's circuit board, and so on. But apart from these perspectives there is what Dennett calls the `intentional stance'. This is the point of view you would adopt if you were actually playing chess with the machine: in this case you would consider its goals, strategies, the beliefs that it might have about your strategy, and so on.
When we adopt the intentional stance, we are treating the machine as if it had desires, beliefs, purposes, representations, etc., that is, intentional states. But this, says Dennett, is not just a luxury. It is a necessary condition of our being able to use the machine for its intended purpose -- namely, to play chess (or whatever). If we chose to limit ourselves to the design stance or, worse, to the physical stance, we would find it difficult or even impossible to play a good game with the machine. We need the intentional stance in order to make proper explanations and predictions of the machine's actions.
But if the intentional stance is necessary in one sense, in another sense it is, or need only be, a product of our own predictive purposes. When we reflect, ``It's threatening my knight, so it obviously wants to trap me into exposing my queen,'' we do not have to be attributing literal beliefs and desires to the machine. The question of whether there really are such entities inside the computer will not in the slightest affect our game.
So the attribution of intentionality to the chess-playing machine is merely the product of the adoption of a certain sort of stance to the machine, a stance which is appropriate because of its predictive and explanatory value, and therefore to that extent objectively justified, but which need have no deeper `metaphysical' basis. But, insists Dennett, the same may just as well be true of intentionality in human beings. We need to make intentional characterizations of one another (and of ourselves), in order to make sense of each other's actions. So the adoption of the `intentional stance' is, for this reason, unavoidable in humans just as it is in the case of the chess machine. But do we therefore need to conclude that there must be metaphysically real intentional entities in our minds? Surely the cases are comparable. In each case intentionality appears to function as an indispensable descriptive and explanatory framework, but in each case, too, we have no especial need, once we have realized the nature of this explanatory framework, to wonder whether or not there is any inherent reality to the phenomena referred to in the framework.
Dennett's view is an example of what has been called the eliminativist approach to mental states. Eliminativists tend to be rather sceptical about the traditional vocabulary of mental states, such as `beliefs', `wishes', `intentions', `meanings', and so on, believing that such concepts belong to an outmoded, pre-scientific `folk psychology'. (See chapters 2 and 3 of Churchland, 1984.) Dennett believes that philosophers' notions of intentionality (among other commonsense notions of the mind) are infected by the mythologies of our folk psychology and that, as a result of findings in AI, neurophysiology, and the cognitive sciences in general, they will be replaced by other, more adequate notions in a fully scientific account of human nature and of the mind. This is not to say, of course, that there is any harm in using such notions on a day-to-day basis: obviously they have, as we have seen, an indispensable heuristic role to play. As long as we continue to recognize that our intentional ascriptions have only this heuristic role to play, nothing will be lost by supposing that there is no such thing -- in scientific or metaphysical fact -- as real intentionality, whether in machines or in humans.