Human experts do not simply apply their knowledge to problems; they can also explain exactly why they have made a decision or reached a particular conclusion. This facility is also built into the expert systems that simulate human expert performance: they can be interrogated at any moment and asked, for example, to display the rule they have just used or to account for their reasons for using that rule. That a system is able to explain its reasoning is in itself no guarantee that the human user will understand the explanation: if the advice is to be of use, the system must be able to justify its reasoning in a cognitively plausible manner, working through a problem in much the same way as a human expert would. Production systems, and the more sophisticated expert systems, can be made to reason either forwards, from initial evidence towards a conclusion, or backwards, from a hypothesis towards the evidence that would support it, or by a combination of the two. One significant factor determining whether a system uses forward or backward reasoning is the method used by the human expert.
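To make the difference concrete, here is a minimal sketch in Python (the rule base and the fact names are invented purely for illustration, and the code reflects no actual expert system's machinery) which drives the same small set of rules forwards from given facts and backwards from a hypothesis.

    # Illustrative only: a tiny rule base as (premises, conclusion) pairs.
    RULES = [
        ({"fever", "stiff_neck"}, "suspect_meningitis"),
        ({"suspect_meningitis", "cloudy_csf"}, "bacterial_meningitis"),
    ]

    def forward_chain(facts):
        # Keep applying any rule whose premises are satisfied
        # until no new conclusions emerge.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    def backward_chain(goal, facts):
        # Start from the hypothesis and recursively seek rules,
        # and ultimately facts, that would establish it.
        if goal in facts:
            return True
        for premises, conclusion in RULES:
            if conclusion == goal and all(backward_chain(p, facts) for p in premises):
                return True
        return False

    print(forward_chain({"fever", "stiff_neck", "cloudy_csf"}))
    print(backward_chain("bacterial_meningitis", {"fever", "stiff_neck", "cloudy_csf"}))

Forward chaining works outward from whatever data it has until nothing new can be concluded; backward chaining begins with the conclusion it would like to establish and works back towards the evidence that would justify it.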
If all the necessary data are either given in advance or can be gathered, and if it is possible to state precisely what is to be done in any particular set of circumstances, then it is more natural and likely that a human being -- and hence any machine modelling human performance -- will work forward from the data towards a solution. One expert system that reasons forward in this way is R1, which configures DEC VAX computer systems: it reasons forward until a single good configuration is found (others may be possible, but they are not considered once one good solution has been found). Medical diagnosis, on the other hand, normally proceeds abductively, from a patient's symptoms back to the possible causes of the illness, and hence to an appropriate course of treatment. This was a crucial factor in the design of, for example, MYCIN, a system that diagnoses blood infections. MYCIN is a moderately large expert system, having around 450 rules, of which the following is typical. (The rule is shown in its English form, which MYCIN uses to generate explanations for the user; for reasoning, the system calls on rules coded in an extension of the LISP programming language.)
IF:    1) THE STAIN OF THE ORGANISM IS GRAMNEG AND
       2) THE MORPHOLOGY OF THE ORGANISM IS ROD AND
       3) THE AEROBICITY OF THE ORGANISM IS AEROBIC
THEN:  THERE IS STRONGLY SUGGESTIVE EVIDENCE (0.8) THAT
       THE CLASS OF THE ORGANISM IS ENTEROBACTERIACEAE
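One way of picturing how such a rule might sit inside a program is as a data structure pairing premises with a conclusion and a certainty factor. The Python sketch below is only an expository encoding under that assumption; MYCIN itself, as noted above, stores its rules as LISP expressions, and this is not its internal representation.

    # An assumed, expository encoding of the rule above -- not MYCIN's own.
    rule_class_enterobacteriaceae = {
        "premises": [
            ("stain", "organism", "gramneg"),
            ("morphology", "organism", "rod"),
            ("aerobicity", "organism", "aerobic"),
        ],
        "conclusion": ("class", "organism", "enterobacteriaceae"),
        "certainty": 0.8,   # strongly suggestive, not conclusive
    }

    def applies(rule, findings):
        # A rule fires only if every premise is among the known findings.
        return all(premise in findings for premise in rule["premises"])

    findings = {
        ("stain", "organism", "gramneg"),
        ("morphology", "organism", "rod"),
        ("aerobicity", "organism", "aerobic"),
    }
    if applies(rule_class_enterobacteriaceae, findings):
        attr, obj, value = rule_class_enterobacteriaceae["conclusion"]
        print(f"{attr} of {obj} is {value} "
              f"(certainty {rule_class_enterobacteriaceae['certainty']})")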
You will notice that in the `actions' part of the MYCIN rule, the conclusion is not stated with absolute certainty: it is simply a hypothesis or estimate supported by the premises stated in the conditions. The parenthesized number represents the degree of certainty of the inference made by the rule: if evidence is found that satisfies the premises, then there is a likelihood of 0.8 that the hypothesis is correct (a likelihood of 1.0 being absolute certainty). Being able to use statistical and probabilistic reasoning in this manner is important in real-world situations where, for example, there is some element of randomness in the situation itself, or where we do not have access to sufficient data to know with any real certainty that our conclusions are correct. Medical diagnosis is a clear instance of such a class of problems: a complex domain in which medical knowledge is incomplete, and in which the diagnostician may not have all the data needed. Often, inexact reasoning is necessary if the doctor is to make any diagnosis at all.

Various methods exist for estimating the certainty of conclusions: Bayes' theorem for calculating probabilities (used by PROSPECTOR), Zadeh's fuzzy logic, and Shortliffe's scheme based on measures of `belief' and `disbelief', which is used in MYCIN. Values attached to rules can be passed on to further rules, and combined in various ways with other values, so as to produce final values for conclusions.

For a highly readable overview of MYCIN and other expert systems, you might like to look at Feigenbaum and McCorduck (1984). Rich (1983) and Charniak and McDermott (1985) give more technical accounts of statistical and probabilistic reasoning.
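Returning to the way such values are combined: the Python sketch below implements one commonly cited form of the MYCIN-style scheme, in which a rule's certainty factor is scaled by the certainty of the evidence satisfying its premises, and two positive factors supporting the same hypothesis reinforce one another without the combined value ever exceeding 1.0. The second rule and the particular numbers are invented for the example.

    # A sketch of one commonly cited form of MYCIN-style certainty-factor
    # handling (an illustration, not MYCIN's actual code).

    def propagate(rule_cf, evidence_cf):
        # The certainty of a conclusion is the rule's own factor scaled by
        # the certainty of the evidence that satisfied its premises.
        return rule_cf * max(evidence_cf, 0.0)

    def combine(cf1, cf2):
        # Two positive certainty factors for the same hypothesis reinforce
        # each other, but the combined value never exceeds 1.0.
        return cf1 + cf2 * (1.0 - cf1)

    # Two rules concluding ENTEROBACTERIACEAE fire on fully certain evidence:
    cf_a = propagate(0.8, 1.0)   # the rule shown earlier
    cf_b = propagate(0.5, 1.0)   # a second, hypothetical rule
    print(combine(cf_a, cf_b))   # 0.9 -- stronger than either rule alone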