
The Mind as Machine

Earlier I referred to the assumption that the human mind acts like a computer. It is only an assumption, since we have no proof that this is the full story. We do know that the mind can perform symbolic operations, such as adding numbers, comparing words, or transposing music, but it may be that there are other things happening in our minds that are either completely non-symbolic (experiencing emotions, for instance) or are below the level of conventional symbol processing (such as seeing and distinguishing objects). Investigating the limitations of the computational model of mind is a fascinating new area of philosophy. If it is the case that the mind is purely a symbol manipulator (and, as I have said, this is an open question), then, some philosophers have suggested, an appropriately programmed computer may not just be able to model the mind, but may actually have a mind.

At first sight this seems absurd -- after all, humans are made of flesh and blood and computers of metal and silicon -- but again we need to distinguish between computers and computation. Nobody is suggesting that we look like computers, or act like any existing computer, but rather that thinking consists (partly or wholly) of symbol manipulation, and manipulating symbols is exactly what computers do. Now one symbol does not make a thought, and there are plenty of symbol manipulators that by no stretch of the imagination can be called minds: adding machines and electric typewriters, for instance. What makes a computer different is its ability to act autonomously, guided by its internal stored program. The issue is this: can a program be built of sufficient elegance and complexity that the computer running it can be said to have a mind?

Look at the following lines: 

Why does my waiting child like to talk?
Why does my girl wish to dream of my song?
You are like a song.
By herself my waiting girl dreams.

They do not rhyme, but then neither do many of the poems of Dylan Thomas and e. e. cummings. If I told you that I wrote the lines, then you might comment that I was `wistful' when I wrote them, or say that they expressed `loneliness'.

In fact the lines were generated by a computer program (one called GRAM3, running on a DEC VAX computer). Applying terms like `wistful' to a computer program is, to say the least, strange, yet what is it that separates computer from human? If we can program a computer to write poems (albeit rather poor ones), then where do we draw the line?
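
GRAM3 itself is not reproduced here, but the general technique is simple enough to sketch. A program of this kind holds a small grammar and vocabulary and composes each line by rewriting symbols at random until only plain words remain. The following Python fragment is a minimal sketch of that idea (the grammar and word list are my own, chosen to echo the lines above; GRAM3's actual rules may well have differed):

    import random

    # A sketch of grammar-driven verse generation. Each rule rewrites a
    # symbol into one of several sequences, chosen at random; rewriting
    # starts from LINE and stops when only plain words remain.
    GRAMMAR = {
        "LINE": [["Why does", "NP", "VP", "?"],
                 ["By herself", "NP", "dreams", "."],
                 ["You are like a", "NOUN", "."]],
        "NP":   [["my", "ADJ", "NOUN"], ["my", "NOUN"]],
        "VP":   [["like to", "VERB"], ["wish to", "VERB", "of my song"]],
        "ADJ":  [["waiting"], ["lonely"]],
        "NOUN": [["child"], ["girl"], ["song"]],
        "VERB": [["talk"], ["dream"]],
    }

    def expand(symbol):
        """Recursively rewrite a grammar symbol into a list of words."""
        if symbol not in GRAMMAR:            # a terminal: an actual word
            return [symbol]
        production = random.choice(GRAMMAR[symbol])
        return [word for part in production for word in expand(part)]

    for _ in range(4):
        line = " ".join(expand("LINE"))
        print(line.replace(" ?", "?").replace(" .", "."))

Each run prints four fresh lines; any `wistfulness' is supplied entirely by the reader.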

Philosophers have long wished to discover just which qualities distinguish human beings. Thus, the French philosopher René Descartes (1596-1650) believed that humans were guided by an immaterial mind while the rest of nature (including all animals) was driven only by the laws of physics. (Lumping dogs and monkeys in with clocks and windmills as mindless objects conveniently allowed Descartes to ignore any considerations of care or sympathy towards them, and he carried out some, to us, horrific experiments on live animals.)

One of the pastimes of the period was the building of `automata', clockwork dolls that looked and moved like people or animals. In a fascinating section of his book Discourse on Method, Descartes suggests that if it were possible to design an automaton which had the organs and outward shape of a monkey or ``other animal that lacks reason,'' then we should have no means of telling it from a real animal. But a machine to imitate humans would be far easier to detect because humans have two special characteristics that distinguish them from automata (and from animals).

First, according to Descartes, a machine ``could never use words, or put together other signs, as we do in order to declare our thoughts to others.'' Granted, one could build a machine that utters words, e.g., ``if you touch it in one spot it asks you what you want of it, if you touch it in another it cries out you are hurting it, and so on,'' but it could not give an ``appropriately meaningful answer to what is said in its presence, as the dullest of men can do.''

Second, the automaton would lack general reasoning abilities: ``even though such machines could do some things as well, or better, than humans, they would inevitably fail in others, which would reveal that they were acting not through understanding but only from the disposition of their organs.'' In other words, whereas a machine has a collection of parts to respond to particular situations (a chiming clock, for example, is set off by the position of its gear wheels), human reason is ``a universal instrument which can be used in all kinds of situations'' (Descartes, 1642).

In thus proposing the differences between people and machines Descartes did not appeal to our intuition, nor did he suggest that ethereal qualities like a `soul' or `emotion' set us apart from machines. Instead he indicated two testable human characteristics: the meaningful use of language and general reasoning abilities. As you will see later, language and reasoning are central themes of present-day research in artificial intelligence.

It is hardly surprising that Descartes considered there were fundamental differences between people and machines, since the only machines around at the time were either substitutes for human muscle, like the windmill, or highly specialized recording and tabulating machines, like clocks, or cunningly designed dolls that merely simulated the outward appearance and movements of humans.

As it happens, at almost the same time that Descartes wrote his Discourse on Method another French philosopher, Blaise Pascal (1623-1662), was designing a mechanical calculator that could perform addition and subtraction, and in the 1670s Gottfried Leibniz (1646-1716) built one that could multiply and divide. Although these were early examples of symbol manipulation machines, they were still specialized devices, dedicated to carrying out a narrow range of arithmetic tasks. The first general purpose programmable symbol manipulator (and as such a candidate for `mind model') came 200 years later.

Charles Babbage (1791-1871), an eccentric British mathematician, planned to build two different machines. The first, which he called the `Difference Engine', was for calculating mathematical tables. It was an elegant and complex device but, like Pascal's calculator, it was devoted to a single task. The second, his `Analytical Engine', was a general purpose calculator, and was the product of a magnificent combination of mathematical insight and mechanical skills. It had many of the features of a modern computer, with a Central Processing Unit (which he called the `Mill'), a data memory, and a controlling unit, and, unlike previous calculators, it could be programmed to perform different sequences of operations. The programs were encoded as holes punched on cards that were fed into the machine. The entire contraption would have been the size of a car; unfortunately it was never built, not because the design was faulty, but because nineteenth-century engineering was not up to the precision needed for the hundreds of gears and cogs.

Although the Analytical Engine was intended as a numeric calculator, Babbage and his friends were sufficiently astute to realize that similar machines could be devised and programmed to operate on other kinds of symbolic data. They also speculated on whether such machines might be called intelligent. A colleague of Babbage, Ada Lovelace, in a written commentary on a set of Babbage's lecture notes, wrote, ``The Analytical Engine has no pretensions to originate anything. It can only do whatever we know how to order it to perform'' (Bowden, 1953).

This is reminiscent of Descartes' argument that machines cannot reason (since reasoning involves the creation of new ideas), and the notion that computers cannot be creative persists to the present day. Certainly a computer is under the direct control of its program, but this need not be a restriction, for the simple reason that we can program the computer to be creative.

One of the great intellectual achievements of the late nineteenth and early twentieth centuries was the invention of a `calculus of reasoning'. It began in 1854 with George Boole's Investigation of the Laws of Thought. Boole tried to set down precise logical definitions of words like `and' and `or', and the rules whereby they can be used to build complex statements out of simple ones, like

It is hot today and it will rain or it will get hotter.
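
Boole's connectives survive directly in modern programming languages as the operators and, or, and not. A few lines of Python (the truth values here are assumed purely for illustration) show how the truth of such a compound statement is computed mechanically from the truth of its parts, and why the grouping, which the English sentence leaves ambiguous, must be made explicit:

    # Assumed truth values for the three simple statements.
    hot_today   = True
    will_rain   = False
    gets_hotter = True

    # Boole's rules compute the truth of the compound statement mechanically.
    # The two possible groupings of the English sentence can disagree:
    print(hot_today and (will_rain or gets_hotter))   # True
    print((hot_today and will_rain) or gets_hotter)   # True here, but not in general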

Then, in the late nineteenth century, Gottlob Frege developed a formal method of representing more of the internal structure of sentences and set out formal rules of inference for deriving new statements from old. These ideas were developed in the early twentieth century by Bertrand Russell and A. N. Whitehead, among others, into what is now known as predicate logic. The rules of predicate logic specify ways of checking whether an inference is valid merely by analysing the structures of symbols. Thus, a machine that attached no meaning to the symbols could be programmed to apply these rules and check, for example, that

All As are Bs

does not validly entail

All Bs are As

but does validly entail

If no Cs are Bs then no Cs are As.
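
The point can be demonstrated today in a few lines of Python (my illustration, not any historical program; strictly it checks validity by exhaustively testing interpretations over a small universe rather than by applying syntactic rules of inference, but for simple statements of this kind a small universe suffices). The program treats A, B, and C as bare symbols, interprets them as subsets of a tiny universe, and searches for a counterexample: an interpretation in which the premise holds but the conclusion fails.

    from itertools import combinations

    def subsets(universe):
        """Every subset of the universe, as frozensets."""
        return [frozenset(c) for r in range(len(universe) + 1)
                for c in combinations(universe, r)]

    def all_are(xs, ys):          # "All Xs are Ys"
        return xs <= ys

    def no_are(xs, ys):           # "No Xs are Ys"
        return not (xs & ys)

    def entails(premise, conclusion, universe=(0, 1, 2)):
        """True if the conclusion holds in every interpretation of A, B, C
        (as subsets of the universe) in which the premise holds."""
        sets = subsets(universe)
        return all(conclusion(a, b, c)
                   for a in sets for b in sets for c in sets
                   if premise(a, b, c))

    premise = lambda a, b, c: all_are(a, b)        # All As are Bs

    # All Bs are As? A counterexample exists (A empty, B not), so: False
    print(entails(premise, lambda a, b, c: all_are(b, a)))

    # If no Cs are Bs then no Cs are As? No counterexample exists, so: True
    print(entails(premise,
                  lambda a, b, c: no_are(c, a) if no_are(c, b) else True))

Nothing in the program knows what A, B, or C stand for; the first entailment is rejected and the second accepted purely on the pattern of the symbols.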

No such machine had then been built, but during the 1930s mathematicians began to consider `what would happen if' the rules of the predicate calculus were mechanized. The next step was to show that besides checking the validity of existing inferences, machines could also generate new valid inferences, thereby deriving new theorems from some set of axioms. If a machine could derive interesting ones that had not previously been discovered by people, that would be a form of creativity. This mechanization of reasoning also inspired the hope that yet more complex systems of rules would enable a machine to invent new concepts and new axioms, instead of simply deriving theorems from axioms given to it by its designers. This would overcome Descartes' objection that reasoning must be the preserve of humans and Lady Lovelace's objection that symbol-manipulating machines cannot be creative. (It cannot be claimed that these more ambitious goals have been achieved already, though work in artificial intelligence seems to be steadily moving toward them, in ways that will be illustrated in this book.)

These, and other more subtle arguments against `machine intelligence', were discussed by the next great name in computing, Alan Turing. Turing was another British mathematician; although he was not quite as eccentric as Babbage, his life was just as eventful. After setting out the theoretical foundations of computing in the 1930s he worked during World War II at Bletchley Park, where a group of academics had been assembled by the British government to try to crack the coded messages broadcast by the German armed forces. To help them in this task, they built what was arguably the world's first electronic computer. The machine, called Colossus, was built two years before ENIAC, the first US computer, but it was cloaked in military secrecy and, being designed for code breaking, did not have a general purpose architecture. Alan Turing also worked on the world's first commercially available electronic computer, the Ferranti Mark I.

In 1950 Turing published a celebrated paper entitled ``Computing Machinery and Intelligence'' in which he addressed the question ``Can machines think?'' (The paper has been reprinted many times and can be found in, for example, Hofstadter and Dennett, 1981.) The paper is entertaining and nontechnical. It begins with what he calls the `Imitation Game' (later to be known as the Turing Test), played between three people, a man (A), a woman (B), and an interrogator (C) (who may be of either sex). The people are in separate rooms and the interrogator cannot see A or B. The only method of communication is by teleprinter link, and the interrogator does not know which line goes to A and which to B. The object of the game is for the man (person A) to fool the interrogator into believing he is a woman (person B). Thus, the interrogator might type a message down the line saying ``Name me three types of knitting stitch,'' and, if the line were to the man, he would answer saying something like: ``Plain, purl, and basket.''

Now for the real purpose of the Imitation Game: imagine it played as before, with one teleprinter line running from the interrogator to a human (whose sex no longer matters), but the other to a computer. The object is now to program the computer so that it can imitate the human. The question and answer session, Turing suggested, might go something like this:

Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.

Notice that the respondent gives a wrong answer to the addition sum in the dialogue above (34957 plus 70764 is 105721, not 105621); imitating a person involves mimicking human errors and lapses. If, after a reasonable number of questions, the interrogator cannot tell which line is connected to the human and which to the computer, then the computer might be said to think.

In the second part of the paper, Turing raises, and dismisses, some of the reasons (such as the argument that computers cannot be creative) why it might not be feasible to program a computer to pass his test.

By the design of the experiment, Turing followed Descartes in implying that intelligence is the ability to reason and to communicate by language. More recent discussions of machine intelligence have tended to take such feats for granted (even though passing the Turing Test is well beyond the capabilities of any existing computer program) and instead have concentrated on whether other human qualities like consciousness and emotion can be ascribed to a (suitably programmed) computer.


