We have shown how a computer program can solve a problem by means of a search strategy that tries out different possibilities until the right one is found. Provided that we can represent each state of the search, and the links between adjacent states, we can draw out a state-space and define an algorithm to search it.

We gave examples of three different search methods: depth first search, which explores to the end of each branch of the search tree in turn, backtracking when a branch reaches a dead-end; breadth first search, in which the tree is searched one layer at a time; and branch and bound search, in which information about the distance from the start to a node is used to determine the next branch to explore.

We also showed how search can be used in game playing, using the minimax method to examine a tree of game states. Heuristics can produce a static evaluation of each board state, and this can be used to predict an opponent's move. The assumption behind minimax is that the opponent will always make what the program judges to be the best move; the program then chooses a move that will minimize the 'damage' caused by the opponent.

We gave some examples of search, each of which could be generalized: the route finder to working out complex plans of action, the jigsaw puzzle to allocating resources subject to constraints, and the game player to evolving plans to deal with changing situations. Search is a general-purpose method, suited to the class of problems that can be solved by mapping out the state-space and applying a quick and simple evaluation function to the legal moves from the current state.
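The contrast between depth first and breadth first search comes down to the order in which unexplored branches are taken: a stack gives depth first behaviour (follow one branch to its end, then backtrack), while a queue gives breadth first behaviour (finish one layer before starting the next). The following is a minimal sketch of both strategies; the function names and the example state-space are illustrative, not taken from the text.

```python
from collections import deque

def depth_first(start, goal, neighbours):
    """Follow each branch to its end, backtracking at dead-ends (stack-based)."""
    stack, seen = [[start]], {start}
    while stack:
        path = stack.pop()          # take the most recently found branch
        if path[-1] == goal:
            return path
        for n in neighbours(path[-1]):
            if n not in seen:
                seen.add(n)
                stack.append(path + [n])
    return None                     # dead-end everywhere: no solution

def breadth_first(start, goal, neighbours):
    """Search the state-space one layer at a time (queue-based)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()      # take the oldest branch: finish this layer first
        if path[-1] == goal:
            return path
        for n in neighbours(path[-1]):
            if n not in seen:
                seen.add(n)
                queue.append(path + [n])
    return None

# A small illustrative state-space: A links to B and C, both of which link to D.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```

With this graph, `breadth_first('A', 'D', graph.__getitem__)` finds a shortest path, while `depth_first` returns whichever branch it happens to exhaust first.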
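The minimax method described above alternates between two views of the game tree: on the program's turns it picks the move with the highest static evaluation it can reach, and on the opponent's turns it assumes the move the program judges best for the opponent, i.e. the one with the lowest evaluation. A minimal sketch follows; the game tree and evaluation function are hypothetical stand-ins for a real game's move generator and heuristic.

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Return (value, move): assume the opponent always plays what we judge
    to be their best reply, and choose the move that minimizes the damage."""
    successors = moves(state)
    if depth == 0 or not successors:
        return evaluate(state), None    # leaf: apply the static evaluation
    best_move = None
    if maximizing:
        best = float('-inf')
        for move, nxt in successors:
            value, _ = minimax(nxt, depth - 1, False, moves, evaluate)
            if value > best:
                best, best_move = value, move
    else:
        best = float('inf')
        for move, nxt in successors:
            value, _ = minimax(nxt, depth - 1, True, moves, evaluate)
            if value < best:
                best, best_move = value, move
    return best, best_move

# Hypothetical two-ply game tree and static evaluations of its leaf states.
tree = {'root': [('l', 'L'), ('r', 'R')],
        'L': [('a', 'La'), ('b', 'Lb')],
        'R': [('a', 'Ra'), ('b', 'Rb')]}
values = {'La': 3, 'Lb': 5, 'Ra': 2, 'Rb': 9}
```

In this tree the right branch holds the single best leaf (9), but minimax rejects it: the opponent would reply with the move worth only 2, so the program prefers the left branch, whose worst case is 3.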