
Two Models of AI

There is a superficial view of artificial intelligence which puts the accent on performance, on getting computers to display outward behaviour that is as human-like as possible. Programs like ELIZA encourage this view (though not necessarily as a result of the conscious intention of their creators), as does Alan Turing's key paper ``Computing Machinery and Intelligence'' (1950), mentioned in the introductory chapter, in which he discussed an (as yet) imaginary program able to engage in conversation about any topic so convincingly that people would be genuinely and consistently unable to tell they were talking to a machine. The Turing Test has played an extremely important part in the history of AI, and many people working in the field have ultimately defined what they do in terms of producing a program which will qualify for the title `genuinely intelligent' by virtue of passing the Turing Test.

We can contrast this `performance model' of AI, as it might be called, with another model, which is concerned not so much with mimicking the outward behavioural displays of intelligence as with reproducing the inner processes of intelligence: schemes of representation, inference mechanisms, search, problem solving, learning, and so on. We shall call this latter view the internal representation model of AI.

You could think of a performance approach as concentrating on the inputs and outputs of a system, and an internal representation approach to AI as being concerned with what goes on inside the `black box' of the system. Most of the crucial problems in AI do not relate to the performances which a given AI system may eventually deliver, but rather to the details of its internal organization.

A good illustration of internal representation in AI is a famous early program called SHRDLU, written by Terry Winograd (Winograd, 1972). This program is quoted in many popular accounts of AI, no doubt because, like ELIZA, it gives a very convincing conversational performance. Figures 3.4-3.6 show the original SHRDLU in operation. Figure 3.4 shows the initial state of the blocks. Figure 3.5 shows a section of dialogue between a human user and SHRDLU (SHRDLU's contribution is in uppercase). Figure 3.6 shows the blocks after the dialogue.


In order to carry out the command `pick up the big red block' SHRDLU needs to represent many different kinds of knowledge. For example, it needs declarative knowledge about its objects and the relationships between them (it needs to know that `the big red block is on the table' and `the big green block is on the big red block'). Before reading on, try to write down in general terms what other kinds of knowledge are needed for it to respond correctly to the command.

Figure 3.4: The initial state of the SHRDLU example. (Adapted from T. Winograd (1972), Understanding Natural Language. New York: Academic Press, p. 8. Reprinted by permission.)

Figure 3.5: A section of dialogue with SHRDLU.

Figure 3.6: The final state of the SHRDLU example. (Adapted from Winograd (1972), Understanding Natural Language, p. 12. Reprinted by permission.)

Unlike ELIZA's, SHRDLU's conversations are highly domain-specific. More to the point, however, the key achievements of SHRDLU are almost all `on the inside'.
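
To give a concrete feel for what `on the inside' means, the sketch below shows, in Python rather than the Micro-Planner and Lisp in which SHRDLU was actually implemented, one way of storing declarative facts about the blocks alongside a simple procedure for picking one up. The block names, the dictionary layout and the functions are illustrative assumptions for this chapter, not Winograd's code.

    # An illustrative sketch, not Winograd's code: declarative blocks-world
    # facts stored as a simple table, plus one piece of procedural knowledge.

    # Declarative knowledge: what each object is resting on (names are made up).
    on = {
        "big_red_block": "table",
        "big_green_block": "big_red_block",
        "small_pyramid": "table",
    }

    def clear_top(block, trace):
        # To clear a block, move anything resting on it onto the table,
        # clearing that obstructing object first if necessary.
        for obj, support in list(on.items()):
            if support == block:
                clear_top(obj, trace)
                on[obj] = "table"
                trace.append(f"put {obj} on the table")

    def pick_up(block, trace):
        # Procedural knowledge: a block must be clear before it can be grasped.
        clear_top(block, trace)
        trace.append(f"grasped {block}")

    trace = []
    pick_up("big_red_block", trace)
    print("\n".join(trace))
    # put big_green_block on the table
    # grasped big_red_block

Even this toy version needs both declarative knowledge (the table of what rests on what) and procedural knowledge (what to do when the target block is not clear). SHRDLU combined these, together with grammatical and dialogue knowledge, on a far larger scale, and it is that internal organization, rather than the polished conversational output, which makes it interesting.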


