
Machine Models of Mind

To set the scene, I would like you to try a simple experiment.


Make up and write down four lines of rhyming verse; it doesn't matter how bad they are, it's the thought that counts. While you are devising the verse, try and describe to yourself the processes that are going on in your mind. How do you search your memory to find a rhyming word? Do you come up with more than one candidate for a word or phrase? How do you decide between them? Try it now, before you read on.

The questions above vary in how difficult they are to answer. You may have produced some answer to question two. For example, in thinking up the following lines,

Now I want you all to try
And write some rhyming poetry

I first came up with the word `eye' to rhyme with `try', but rejected it for the word `poetry'. You may be able to give a broad answer to question three (I could not think of a suitable line ending in `eye', and `poetry' seemed a more comic rhyme for `try'). But you will not be able to describe the process by which you searched your memory and generated a novel series of words. To give another example, try and think up rhymes for the word `orange'. Candidate words will begin to pop into your conscious attention (`carriage', `forage', `lozenge', etc.), but you will have no idea of how they were placed there, in that particular order.
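By way of contrast, a machine's search procedure can be made completely explicit. The sketch below is not a claim about how human memory works: it is a hypothetical rhyme-finder, written in Python, that simply matches word endings against a small made-up word list. The word list and the suffix-matching rule are assumptions for illustration only.

    # A hypothetical, fully explicit 'rhyme search': candidates are found by
    # matching the ending of the target word against a small word list.
    # Both the word list and the matching rule are illustrative assumptions.

    WORD_LIST = ["try", "fly", "eye", "poetry", "carry", "reply", "sky"]

    def rhyme_candidates(word, suffix_length=2):
        """Return words from WORD_LIST sharing a final letter sequence with word."""
        suffix = word[-suffix_length:]
        return [w for w in WORD_LIST if w != word and w.endswith(suffix)]

    print(rhyme_candidates("try"))   # ['poetry', 'carry']

The procedure is crude (matching spelling is not the same as matching sound, so `eye' is missed as a rhyme for `try'), but every step of it can be inspected, which is exactly what introspection fails to provide.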

All this suggests that we are consciously aware only of the products of our minds, not the processes. This is so much an accepted part of being human that, until recently, it has coloured our entire understanding of the mind. Language, for instance, has traditionally been studied in terms of its product: the style and structure of written texts; the vocabulary and intonation of speech. In language education, the emphasis has always been on teaching grammar (how words are organized into regular patterns), rather than helping young writers to manage the difficult process of creating text.

If introspection (looking into one's own mind) reveals little about the process of thinking, then how can we find out about it? One approach (that of behavioural psychologists) is to say that looking for mental states and processes is both unreliable and a waste of time. Instead we should study observable behaviour and look for consistent links between stimulus (such as setting a subtraction sum) and response (the numbers a child writes on the page). The early successes of behaviourism, particularly in the study of animal activity, led behavioural psychologists to propose a general theory of human functioning in terms of Stimulus-Response (S-R) links. All observable behaviour is classified as stimulus (input) or response (output), and the job of the psychologist is to infer lawful relationships between observed stimuli and observed responses.

Unfortunately, the method that had proved so successful in describing animal behaviour gave a far from adequate account of human activity. In general, the connection between stimulus and response is complex. If I were to provide you with a stimulus by saying ``What is two plus two?'' then your response would be fairly predictable (not entirely so: you might give some deliberately silly response, but then the behaviourists did not have much to say about human perversity). But when I ask you, ``Write down four lines of rhyming verse,'' then your response is far from predictable. Predictability is an important test of the success of a psychological theory; if we can say, in a given situation, what a person will do next, then that is a good indication that the theory is accurate. (You may say, after reading this book, that neither is cognitive science much good at predicting a person's response to such a question. This is true, but what cognitive science can offer is a theory about the general class of responses, and the method by which a typical response might be generated.) Behaviourists have attempted to bridge the gulf between stimulus and response by proposing chains of little internal S-R links, but these begin to look suspiciously like the mental states they were trying to avoid.

Another possibility is to study the physical characteristics of the brain, using instruments such as the electroencephalograph, which records patterns of electrical activity in the brain. While it is possible to say that one pattern indicates that a person is sleeping, that a different one shows the person to be awake but relaxed, and that yet another indicates a burst of intense mental activity, the electrical patterns give no indication of the content of that mental activity. Deducing the content of mental processes from the physiology of the brain is a bit like trying to find out what programme is on TV by measuring changes in the electric current through the transistors in a TV set. A study of the brain can give valuable information about mental functions and disorders, but it is not an open route to understanding the process of thinking.

Faced with the urge to make sense of a complex system, with sparse data and no obvious underlying rules, scientists have traditionally built models that mimic the observable parts of the system. Thus, in Renaissance times, astronomers built beautiful and intricate instruments -- orreries, planispheres, armillary spheres -- to model the whirl of heavenly bodies. The earliest ones were certainly inaccurate due to their builders' hazy understanding of planetary motion but, unlike the planets themselves, they were available for experiment; they could be systematically altered, then tested for accuracy by comparing their motions against observations of the planets themselves. Of course, Kepler and then Newton later came up with universal principles of planetary motion, mathematical abstractions that demoted the mechanical models to toys and teaching aids, but in the study of the mind we are still at the level of Renaissance astronomers. Psychologists have, at various times, put forward universal `principles of behaviour', such as Thorndike's Law of Effect, but these have usually been hedged with qualifications, and subsequently shown to be far from universal. Of more interest to us are the attempts to formulate `principles of reasoning', such as those of George Boole (see section 1.3).

Designing models of the mind is nothing new: people have long attempted to describe mental states and processes in terms of current technology. Medieval scientists saw the mind as a miniature plumbing system, with reservoirs of imagination, reason, and memory stored in the brain, topped up by supplies from the sense organs and ready to flow through `nerve fibres' to the muscles. In the late nineteenth century the favoured model was a telephone exchange, with `wires' connecting the `telephone exchange' in the brain to `subscribers' at the nerve ends.

In the 1940s computers became the vogue technology and, sure enough, people began to propose the computer as a model of the mind. The newspapers of the time were full of articles about the `superhuman brain' and `electronic genius'. So, is the computer yet one more metaphor for the mind, to be supplanted when the next piece of technology comes along? To answer this, we first need to distinguish between computers and computation.

The computer is the conglomerate of printed circuits, wires, magnetic tape drives, floppy disk units, and so on that carries out the work. Although present-day computers vary enormously in size and cost, they are almost all of the same basic design, or architecture (called the von Neumann architecture, after the Hungarian-American mathematician John von Neumann, who first proposed it). Each machine has a single Central Processing Unit (CPU) that performs the computation. The CPU has access to main memory, a series of data cells (you might imagine them as a long line of boxes, each containing a single simple piece of information) that are used for two quite distinct tasks. One part of main memory holds the data to be operated on: initial data (if any), intermediate values, and the final results, ready to be output. A separate part of the main memory holds the computer's program, in the form of a string of coded instructions to carry out operations on the data. A typical instruction, decoded into English, might be `load the data in cell 1000 into a cell (called a register) in the CPU'. The typical computer also has backup memory, in the form of disks or tapes, to supplement main memory, and devices for interacting with the outside world, such as a keyboard and a Visual Display Unit, or VDU (see figure 1.1).

[Figure 1.1: The design of a conventional computer.]
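
To make this arrangement concrete, here is a minimal sketch, in Python, of a machine in the von Neumann style: a single memory holds both the program (as coded instructions) and the data it operates on, and a processor repeatedly fetches and executes those instructions. The instruction set (LOAD, ADD, STORE, PRINT, HALT) and the memory layout are simplifications invented for this illustration; a real CPU works on binary-coded instructions and has many more operations.

    # A minimal sketch of a von Neumann-style machine. One memory holds both
    # the program (cells 0-4) and the data (cells 10-12). The instruction set
    # and layout are invented for illustration only.

    memory = [None] * 16

    # The program: each instruction is an (operation, address) pair. A real
    # machine would encode these as binary numbers, just like the data.
    memory[0] = ("LOAD", 10)     # copy the contents of cell 10 into the register
    memory[1] = ("ADD", 11)      # add the contents of cell 11 to the register
    memory[2] = ("STORE", 12)    # copy the register back into cell 12
    memory[3] = ("PRINT", 12)    # output the contents of cell 12
    memory[4] = ("HALT", None)   # stop

    # The data, held in a separate part of the same memory.
    memory[10] = 18505
    memory[11] = 1000

    register = 0            # a single register inside the 'CPU'
    program_counter = 0     # address of the next instruction

    while True:
        operation, address = memory[program_counter]   # fetch an instruction
        program_counter += 1
        if operation == "LOAD":                        # decode and execute it
            register = memory[address]
        elif operation == "ADD":
            register += memory[address]
        elif operation == "STORE":
            memory[address] = register
        elif operation == "PRINT":
            print(memory[address])                     # prints 19505
        elif operation == "HALT":
            break

The point to notice is the one made above: the instructions are themselves just items stored in memory, alongside the data they operate on.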

Despite the differences in appearance, these machines all carry out the same basic function, that of computation, which can be defined as performing operations on symbolic structures according to stored instructions. Notice that in the previous paragraphs I have been careful to talk about the computer operating on `data' rather than `numbers'. This is because a number is only one kind of symbolic structure. There are many others -- words, diagrams, musical notation, chemical formulae and so on -- and the computer is capable of manipulating all of these; in fact, at the most basic level, that of the electronic circuit, the computer makes no distinction between them.

Imagine a series of boxes (representing the computer's main memory). Each box can be either empty or full, so a line of them can be arranged in many different combinations: the longer the line, the more possible arrangements. (Computers actually use electrical voltages to represent the contents of a `memory cell', one voltage being equivalent to `full box' and another being equivalent to `empty box'.)

One combination of boxes might be

    [ ] [X] [ ] [ ] [X] [ ] [ ] [ ] [ ] [X] [ ] [ ] [X] [ ] [ ] [X]

where [X] stands for a full box and [ ] stands for an empty box.

By themselves the boxes represent nothing; they are just `a line of boxes'. But let us construct a coding scheme, by labelling each box with a number according to its position:

    [ ]    [X]    [ ]   [ ]   [X]   [ ]   [ ]  [ ]  [ ]  [X]  [ ]  [ ]  [X]  [ ]  [ ]  [X]
    32768  16384  8192  4096  2048  1024  512  256  128  64   32   16   8    4    2    1

Using this coding scheme, let a full box represent the number below it (an empty box is ignored) and then add up the numbers to get a single result. You may well recognize the coding scheme as corresponding to binary numbers; the advantage of this scheme is that different combinations of `full' and `empty' boxes can represent every number between 0 and 65535. Thus, the boxes above represent 16384+2048+64+8+1 = 18505.
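
The interpretation can be spelled out in a few lines of Python, purely as a check on the arithmetic; the string of 0s and 1s below is simply the line of full and empty boxes written out, with 1 for a full box and 0 for an empty one.

    # The line of boxes written as a string: '1' for a full box, '0' for empty.
    boxes = "0100100001001001"

    # Give each box its positional value (32768, 16384, ..., 2, 1) and add up
    # the values of the full boxes.
    value = 0
    for position, box in enumerate(boxes):
        weight = 2 ** (len(boxes) - 1 - position)
        if box == "1":
            value += weight

    print(value)           # 18505, i.e. 16384 + 2048 + 64 + 8 + 1
    print(int(boxes, 2))   # Python's built-in binary conversion gives the same answer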

This is by no means the only way of interpreting the line of boxes. Another method could be to divide the boxes into sets of eight and add up the numbers for each set of eight:

    [ ] [X] [ ] [ ] [X] [ ] [ ] [ ]       [ ] [X] [ ] [ ] [X] [ ] [ ] [X]
    128 64  32  16  8   4   2   1         128 64  32  16  8   4   2   1

The first set of eight adds up to 64 + 8 = 72, and the second to 64 + 8 + 1 = 73.

Then, by letting 65 stand for the letter A, 66 stand for the letter B, 67 stand for the letter C, and so on, we have the word HI. This letter coding scheme may seem a bit bizarre, but it is the one, called ASCII (American Standard Code for Information Interchange), that is actually used by many computers to represent letters. A still different code, in which combinations of boxes stand for musical notes, would represent a snatch of music.
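
Again, the same boxes can be decoded in a couple of lines of Python, just to confirm the reading; the bit string is the same line of boxes as before.

    # The same line of boxes, now read as two sets of eight.
    boxes = "0100100001001001"

    codes = [int(boxes[:8], 2), int(boxes[8:], 2)]
    print(codes)                                  # [72, 73]

    # Under the ASCII scheme, 65 stands for A, 66 for B, and so on,
    # so 72 stands for H and 73 for I.
    print("".join(chr(code) for code in codes))   # HI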

There are two important points to note. First, by using an appropriate coding scheme, the computer can be made to represent any symbolic structure. Second, the coding scheme is arbitrary, in the sense that it was devised by humans and is not a characteristic of the computer itself; but so long as the scheme is consistent, and the computer performs operations that are appropriate to the scheme, the computer can manipulate the boxes (memory cells) as if they were numbers, words, or music. For example, treating the boxes above as the number 18505, the computer can be instructed to add this number to another one, stored in another part of its memory, and print out the result. The computer could carry out the same addition operation on the `boxes as letters', but in this case the result would not make sense, as `addition' is not an appropriate operation for letters. Thus the computer performs operations on symbolic structures, and the operations it carries out are determined by a set of instructions (which are themselves symbolic structures stored in memory).
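
The same point can be made in Python: the `letter addition' below is deliberately meaningless, and is shown only to illustrate what happens when an operation does not match the coding scheme.

    boxes = "0100100001001001"

    # Treated as a single 16-bit number, addition makes perfect sense.
    as_number = int(boxes, 2)
    print(as_number + 1000)      # 19505

    # Treated as two letter codes, 'adding' them is not an appropriate
    # operation: 72 + 73 = 145, which does not stand for any letter
    # in the scheme described above.
    h_code, i_code = int(boxes[:8], 2), int(boxes[8:], 2)
    print(h_code + i_code)       # 145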

What makes the computer important as a `mind modeller' is the assumption that ``mental processes may be thought of, at some level, as a kind of computation.'' (Charniak and McDermott, 1985, p. 6)

This is not to say that our brains store information in the same way as computers: the coding scheme is entirely different and, as yet, we have no idea what that scheme is. But that does not matter; what is important is that, at the right level of description, that of symbolic structures, the computer can operate in such a way as to model mental states and operations on them.

What makes a computer superior to all previous models of the mind is that the model can actually be built, and the processes run. Nobody ever seriously suggested building a plumbing system to perform the same functions as the human mind, but computer programs that carry out tasks normally associated with minds -- such as holding conversations, translating text from one language into another, diagnosing illnesses, solving puzzles, or proving mathematical theorems -- have already been constructed. The great advantage of a working model is that it can be tested, by setting it well-chosen tasks to perform and seeing if it operates in the same way as a human mind. Thus, a medical diagnosis program might be asked to describe its line of reasoning, to see if it corresponds to that of a human doctor.


