There is a long history in artificial intelligence of attempts to write programs to play games such as chess. Chess playing programs are now better than most human chess players, though still not so good as the very best human players. Games programs work using a form of search adapted to take account of the human adversary, who wants to beat the program and can change the state of the game at each play.
Take the game of chess. The state-space consists here of all the possible board positions that could arise during play. There is an enormous number of them, and we could never write down the whole of the state-space. Nonetheless, given a particular board position, the program has to decide which of the board positions it can produce on its next move will give it the greatest advantage over its opponent. It does so by searching through them, investigating the possible ways the game could develop from each position. Once again, a tree will develop, with the course of many possible games shown down its various branches. Once again, the program is trying out many ``What if?'' questions, in this case of the form ``What if I move to X?'' and ``What if my opponent then moves to Y?''
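To make the growth of such a tree concrete, here is a small sketch in Python; it is an illustration, not the code of any real chess program. The function successors is our own assumption, standing for whatever lists the states reachable in one move; the three-way branching and the move labels are purely illustrative.

\begin{verbatim}
# A sketch of growing a game tree, assuming a successors() function
# that lists the states reachable from a state in one move. Here a
# state is just the tuple of moves made so far, with three possible
# moves at every turn -- a toy stand-in for a real chess position.

def successors(state):
    return [state + (move,) for move in ("a", "b", "c")]

def grow_tree(state, depth):
    # Each level answers one more 'What if?' question: 'What if I
    # move to X?', then 'What if my opponent moves to Y?', and so on.
    if depth == 0:
        return (state, [])
    return (state, [grow_tree(s, depth - 1) for s in successors(state)])

tree = grow_tree((), 2)   # two plies: our move, then the opponent's reply
\end{verbatim}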
The state-space for chess is so vast that no program could ever complete the tree, following every possible game through to its conclusion. Instead, a program has to stop after looking ahead for only a few moves; it must then assess the resulting board states using criteria such as the number of pieces taken and the number of squares threatened by the various pieces. The choice of these criteria is a matter of the greatest difficulty in designing a good chess playing program, and involves the application of a great deal of human intelligence. The criteria are combined into a number which reflects the overall assessment of the state of the board; this is called a static evaluation function. It is `static' because it depends not on further looking ahead, but only on a calculation based on the board state itself. (A state of play where the computer had won would be given a very high static evaluation, one where it had lost a very low one.) The tree is thus cut down to a manageable size. At the end of each branch is a board state whose degree of advantage to the program has been assessed using static evaluation.
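As an illustration of what a static evaluation function might look like, here is a toy version in Python. The board representation (a dictionary from squares to piece letters, upper case for the program's own pieces) and the piece values are assumptions made for the example; a real program would combine many more criteria, such as the squares threatened by each piece.

\begin{verbatim}
# A toy static evaluation function: material count only, with the
# conventional piece weights. The huge value for the king makes a
# won position score very high and a lost one very low.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 1000}

def static_evaluation(board):
    # Score from the program's point of view, with no looking ahead:
    # plus for the program's material, minus for the opponent's.
    score = 0
    for piece in board.values():
        value = PIECE_VALUES[piece.lower()]
        score += value if piece.isupper() else -value
    return score

board = {"e1": "K", "e8": "k", "d1": "Q", "a7": "p"}
print(static_evaluation(board))   # kings cancel; 9 - 1 = 8 in our favour
\end{verbatim}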
Once the program has built this tree for the current state of play, how does it use the tree to decide what move to make, when it cannot know what the opponent will do? The program has to find choices in the tree that lead to a board with the highest possible static value, but only alternate choices are under its control. For its opponent's moves, the best the program can do is to put itself in the opponent's shoes and decide what move it would make in their place. We will shortly give an example of how this works in practice.
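The role-swapping procedure just described is usually known as minimax, and its shape can be sketched in a few lines of Python. The successors and static_evaluation functions below are toy stand-ins, included only to make the fragment self-contained; any game-specific versions could be substituted.

\begin{verbatim}
# Minimax: on the program's turns take the highest-valued choice; on
# the opponent's turns swap roles and assume the opponent picks the
# move that is worst for the program.

def successors(state):
    return [] if len(state) >= 2 else [state + (m,) for m in (1, 2, 3)]

def static_evaluation(state):
    return sum(state)   # toy score; a real one would assess the board

def minimax(state, depth, programs_turn):
    children = successors(state)
    if depth == 0 or not children:
        return static_evaluation(state)
    values = [minimax(child, depth - 1, not programs_turn)
              for child in children]
    return max(values) if programs_turn else min(values)

def best_move(state, depth=2):
    # The program's actual choice: the move whose worst-case outcome,
    # assuming the opponent replies as well as possible, is best.
    return max(successors(state),
               key=lambda child: minimax(child, depth - 1, False))

print(best_move(()))   # -> (3,): the move with the best guaranteed value
\end{verbatim}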
Of course the opponent will not always do as predicted. After all, they will probably have a different strategy from the program's, and so sometimes the method of swapping roles will yield the wrong guess as to what the opponent will do. However, if the opponent's strategy is worse than the program's, the program will still win; and if the opponent's strategy is better than the program's, then the program can hardly use it to make predictions! This uncertainty means that part of the search tree has to be reconstructed at each play in the game, as it takes unforeseen twists and turns; this is a crucial difference between planning a route in the fixed layout of the Underground and playing against adversaries who will do their best to destroy your plans at each move in the game.
There is another important distinction to make. The Underground route finder was guaranteed to find the shortest route (granted its assumptions about travel times). In a very simple game such as noughts and crosses (tic-tac-toe in the United States) a program can map out the whole of the game tree and hence ensure that it never makes a losing move (though it may still be forced to a draw). In chess, because exhaustive search is impossible, rules for evaluating game states have to be introduced. Such rules cannot guarantee that the board position they rate most highly really is the best, and so the search is no longer certain to find the best solution. Chess programs can be beaten. The rules used to assess board positions are therefore heuristics, in that they indicate how to proceed without guaranteeing the best solution (see chapter 1). Search strategies that use heuristics go under the name of heuristic search.
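Noughts and crosses is small enough to show exhaustive search actually working. The sketch below maps out the whole game tree from any position; a player who always follows best_move can never be beaten, though a perfect opponent will still force a draw. The board representation (a nine-character string of X, O, and blanks) is our own choice for the example.

\begin{verbatim}
# Exhaustive minimax for noughts and crosses: the whole game tree is
# searched, so no static evaluation function is needed -- only the
# true outcomes at the ends of the branches.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def value(board, player):
    # +1 if X can force a win from here, -1 if O can, 0 if best play
    # on both sides leads to a draw. player is the side to move.
    won = winner(board)
    if won:
        return 1 if won == "X" else -1
    moves = [i for i, square in enumerate(board) if square == " "]
    if not moves:
        return 0                      # board full: a draw
    other = "O" if player == "X" else "X"
    values = [value(board[:i] + player + board[i + 1:], other)
              for i in moves]
    return max(values) if player == "X" else min(values)

def best_move(board, player):
    # The move with the best guaranteed outcome for the side to move.
    moves = [i for i, square in enumerate(board) if square == " "]
    other = "O" if player == "X" else "X"
    key = lambda i: value(board[:i] + player + board[i + 1:], other)
    return max(moves, key=key) if player == "X" else min(moves, key=key)

print(best_move(" " * 9, "X"))   # prints 0: every opening draws under
                                 # perfect play, so square 0 is as good
                                 # as any
\end{verbatim}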
If you want to beat a chess playing program, then you need to understand its weaknesses. First, most chess programs call on a stored set of conventional opening moves, along with the best responses. So start the game in an unconventional way, such as by moving the knight. From then on you will be competing against a search program. Now, whenever you go on the attack, you are putting pieces at risk. Against a human opponent you can generally assess that risk by looking ahead and anticipating possible responses. But assessing possibilities is just what a search program is best at doing, and by systematically evaluating all the states two or three moves ahead, it may well find an excellent response that you have overlooked. So do not give it that chance; play a conservative, defensive game, so that the program cannot easily choose between responses, and wait for it to make an obvious mistake that you can exploit.