We said at the end of chapter 4 that one factor which makes state space descriptions of problems inappropriate for subject-centred accounts of human problem solving is that, although we may be able to describe the state space of a problem independently of a human subject's attempts to solve it, the description alone cannot give us much of an indication of the subject's personal problem solving strategies. There is no way of predicting in advance the exact steps by which, for a novel task, a subject will proceed tentatively and thoughtfully towards a solution. Consider the puzzle shown in figure 7.2.
[IMAGE ]
Figure 7.2: By moving up, down, sideways, or diagonally through the squares,
order the letters to spell out a nine letter word.
The state space for the problem is extremely large, corresponding to the set of all permissible tracings through the squares, but it is extremely unlikely that a human being would attempt to solve the problem by laboriously trying each possible path in turn until a word emerges. Rather, the person will draw on knowledge of English spellings, fragments of words, and common prefixes and suffixes in order to try out various hypotheses. For instance, sequences of three consecutive consonants (unless the first is an S, which is not the case with the present puzzle) and sequences of vowels, such as -IOE-, -II-, and -EOI-, are atypical if not impossible in English; on the other hand, -ER and -TION are common suffixes and RE-, EN-, and NON- are common prefixes. Fragments like -TIND- and -DRIN- are neither inherently meaningful nor familiar sequences in the vocabulary of English, while -NINE and -REND- are. You should be able to solve the puzzle in a few minutes following this kind of reasoning, while a serial testing of all possible tracings would take very much longer. In the former case, however, you will not know exactly what your personal search tree looks like until, by working through the puzzle, you plot it out for yourself.
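To get a feel for how large the brute-force state space is, the sketch below counts every permissible tracing through a 3 x 3 grid of squares, where a move may go up, down, sideways, or diagonally, and a complete tracing visits all nine squares once. (The letters of the figure 7.2 puzzle are not reproduced here, so the sketch counts paths only; it makes no attempt at the word itself.)

```python
from itertools import product

# The nine squares of a 3x3 grid, as (row, column) pairs.
cells = list(product(range(3), range(3)))

def neighbours(cell):
    # All squares reachable in one move: up, down, sideways, or diagonally.
    r, c = cell
    return [(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < 3 and 0 <= c + dc < 3]

def count_tracings(path):
    # Extend the partial tracing in every legal way; a tracing is
    # complete when all nine squares have been visited.
    if len(path) == 9:
        return 1
    return sum(count_tracings(path + [n])
               for n in neighbours(path[-1]) if n not in path)

# Every candidate a serial, letter-blind solver would have to check.
total = sum(count_tracings([start]) for start in cells)
print(total)
```

Each of these tracings is a candidate the exhaustive solver must test against the dictionary, which is why knowledge of English spelling patterns prunes the search so dramatically.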
If you knew beforehand what your search tree for this kind of task looked like, you would, of course, be able to specify the kinds of information that constitute the knowledge states in the tree, together with the set of operations that allow you to move from one state to the next; and you could view your behaviour quite simply as a search within this space. Since these facts are unknown, how might you go about discovering them? Having discovered them, how might you go about representing them in such a way that a computer program might simulate human problem-solving activity?
Protocol analysis was developed by Allen Newell and Herbert Simon as a method of studying subjects' mental processes in the performance of tasks. It consists of having subjects say what is going on in their minds as they go about solving a problem. Their `thinking aloud' is tape-recorded and later closely analysed by the researcher. The analysis is a lengthy and complex process, since a protocol can run into hundreds of utterances. The researcher will break down the subject's protocol into what are taken to be the smallest, atomic, units of thought: discrete mental operations that cannot be further analysed (or need not be for the purpose of understanding the subject's approach to the task). We shall from now on refer to these as operators. The application of an operator moves the problem solver from one state of knowledge to the next. Taken together, the states and the operators applied to them constitute a problem space, and it is within this, rather than within a pre-given state space, that the search for a solution to a task or problem takes place. The problem space can be represented schematically by a problem behaviour graph (figure 7.3), a step-by-step reconstruction of the search, in which each node in the graph is a knowledge state (rather than, for example, a board position) and each directed link a mental operation (rather than a pre-defined legal move). At any moment the subject is viewed as being at some node in the problem behaviour graph, each node representing what the problem solver knows at that point, and each arrow representing a mental process that will put the problem solver in a new state of knowledge.

Suppose that the problem is the one in figure 7.2. State1 will then correspond to the initial knowledge state of the subject faced with the puzzle; OP1 might be to look for a possible subpart of the word, leading to, say, the suffix -ER. The knowledge state has now changed, to state2.
The subject may next look (OP2) for some sequence having a plausible syllable structure or meaning that will extend the word back from the suffix: say, -INNER (state3). It may be that the subject reasons into a `blind alley' in which no further progress can be made (as is the case in the present puzzle); the subject may then give up on that line, and return to an earlier state of knowledge (state2) as a starting point for a new line of reasoning (state4), finding now (OP4) another possible extension back, -DINER (state5). This too is a blind alley; the correct solution is given at the end of the chapter.
[IMAGE ]
Figure 7.3: A problem behaviour graph.
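The walkthrough above can be sketched as a small data structure: knowledge states as nodes and operator applications as directed links. The state contents below paraphrase the worked example; the label OP3 for the return to an earlier state is our own naming, since the text does not number that step.

```python
# A minimal sketch of the problem behaviour graph of figure 7.3.
states = {
    "state1": "initial knowledge: the puzzle as presented",
    "state2": "a plausible suffix found: -ER",
    "state3": "extension back tried: -INNER (blind alley)",
    "state4": "back at -ER, starting a new line of reasoning",
    "state5": "extension back tried: -DINER (blind alley)",
}

# Each link is (from-state, operator applied, to-state).
links = [
    ("state1", "OP1: look for a possible subpart of the word", "state2"),
    ("state2", "OP2: extend the word back from the suffix", "state3"),
    ("state2", "OP3: return to this state for a new line of reasoning", "state4"),
    ("state4", "OP4: find another possible extension back", "state5"),
]

# The backtrack shows up as two branches leaving state2.
branches = [op for a, op, b in links if a == "state2"]
print(branches)
```

Note that, following the problem behaviour graph convention, returning to an earlier state of knowledge creates a new node (state4) rather than reusing state2, so the graph records the order of the search as well as its structure.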
It is unlikely that a subject uses a unique operator for each new problem state. We can assume that people do not possess an indefinitely large number of distinct mental operations for finding their way through a lifetime of everyday situations. The researcher will therefore look next for recurrences of similar operations (in this example, ones like `find a plausible suffix that has not already been tried') and assume these to be repeated uses of the same operator. Eventually, the researcher should be able to abstract from the subject's protocol a small number of distinct operators, some used more than once, that are sufficient to account for progress from the statement of the initial problem or task to its solution.
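This abstraction step can itself be illustrated in miniature. The coded protocol steps below are invented for the purpose; the point is only that, once the specific fragment each step mentions is stripped away, a handful of reusable operators remains.

```python
import re

# Hypothetical coded steps from a protocol for the figure 7.2 puzzle.
steps = [
    "find a plausible suffix: -ER",
    "extend back from the suffix: -INNER",
    "abandon the current line and return to an earlier state",
    "extend back from the suffix: -DINER",
    "find a plausible suffix: -EN",
]

def operator_of(step):
    # Strip the step's specific fragment, leaving the operator schema.
    return re.sub(r":\s*-\w+$", ": <fragment>", step)

# Five coded steps collapse to three distinct operators.
operators = {operator_of(s) for s in steps}
print(sorted(operators))
```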
A good verbal protocol will allow a researcher to abstract a theory of how a subject went about solving the problem; the theory may then be viewed as a model of the problem solver's activities in tackling a task, and can be translated into a computer program. In this sense, Newell and Simon claim, ``the theory performs the task it explains'':
A good information processing theory of a good human chess player can play good chess; a good theory of how humans create novels will create novels; a good theory of how children read will likewise read and understand. (Newell and Simon, 1972, pp. 10-11)
We shall have more to say about protocol analysis, and about the use of production rules for the modelling of human problem solving, in the next chapter. For the present we shall take up Newell and Simon's claim above by sketching out the architecture of a production system, pointing out along the way some of the important features of its design, and illustrating, through a series of examples of production rules, how a theory about rule-based reasoning can be translated into programs. Finally, we shall describe how you might implement your own production system as part of the Automated Tourist Guide.
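As a foretaste of that architecture, here is a minimal sketch of a production system: a working memory of facts and a set of condition-action rules, run by a recognize-act cycle. The rules are invented for illustration (a toy fragment of the spelling knowledge used on the figure 7.2 puzzle) and are not the rules of any system described in this book.

```python
# Working memory: the system's current facts.
memory = {"letters include E and R", "goal: find a suffix"}

# Each rule is (name, conditions that must all hold, facts its action adds).
rules = [
    ("suggest-ER",
     {"letters include E and R", "goal: find a suffix"},
     {"candidate suffix: -ER"}),
    ("extend-back",
     {"candidate suffix: -ER"},
     {"goal: extend the word back from -ER"}),
]

def run(memory, rules):
    # Recognize-act cycle: fire any rule whose conditions are satisfied,
    # until no rule can add anything new. Refusing to re-fire a rule
    # whose additions are already in memory keeps the loop terminating.
    fired = True
    while fired:
        fired = False
        for name, conditions, additions in rules:
            if conditions <= memory and not additions <= memory:
                memory |= additions
                fired = True
    return memory

final = run(set(memory), rules)
print(sorted(final))
```

Even this toy version exhibits the key design feature: the rules never call one another directly; they communicate only through working memory, so new knowledge can be added as new rules without rewriting the old ones.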