SYNOPSIS OF GENERAL AUDIENCE DISCUSSION

Concerns of several varieties were expressed about the knowledge engineering aspects of expert systems. Members of the audience with direct experience in developing expert systems gave these remarks special cogency. Expert systems seem to work better where good, extensive formulations of the knowledge base already exist. Attempting to develop a new knowledge base as part of an expert system effort often fails. The domains of expert systems are often exceedingly narrow, limited even to the particularity of the individual case. Given the dependence of the knowledge in expert systems upon the informants, there exists a real danger of poor systems if the human experts are full of erroneous and imperfect knowledge. There is no easy way to root out such bad knowledge. On this last point it was noted that the learning apprentice systems discussed in Mitchell's paper provide some protection. The human experts give advice for the systems to construct explanations of the prior experience, and what the systems learn permanently is only what these explanations support. Thus the explanations operate as a filter on incorrect or incomplete knowledge from the human experts.

Concern was expressed about when one could put trust in expert systems and what was required to validate them. This was seen as a major issue, especially as the communication from the system tended towards a clipped "Yes sir, will do". It was pointed out that the issue was of exactly the same complexity with humans and with machines, in terms of the need to accumulate broad-band experience with the system or human on which to finally build up a sense of trust. Trust and validation are related to robustness in the sense used in Newell's discussion. It was pointed out that one path is to endow such machines with reasoning for validation at the moment of decision or action, when the context is available. This at least provides the right type of guarantee, namely that the system will consider some relevant issues before it acts. To make such an approach work requires providing additional global context to the machines, so the information is available on which to make appropriate checks.

Finally, there was a discussion to clarify the immediate-knowledge vs. search diagram that Newell used to describe the nature of expert systems. One can move along an isobar, trading off less immediate-knowledge for more search (moving down and to the right) or,
vice versa, more immediate-knowledge for less search (moving up and to the left). Or one can move toward systems of increased power (moving up across the isobars) by pumping in sufficient additional knowledge and/or search in some combination. The actual shape of the equal-performance isobars depends on the task domain being covered. They can behave like hyperbolic asymptotes, where further tradeoff is always possible at the cost of more and more knowledge (say) to reduce search by less and less. But task domains can also be absolutely finite, such that systems with zero search are possible, with all correct responses simply known. For these, there comes a point when all relevant knowledge is available, and no further addition of knowledge increases performance.
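The isobar tradeoff can be made concrete with a small sketch (not from the chapter; the data and function names here are hypothetical). A solver stores answers for some fraction of the queries it will face (immediate knowledge) and falls back to linear search for the rest. Every mix answers all queries correctly, so the mixes lie on one equal-performance isobar; they differ only in how much is stored versus how much is searched:

```python
# Toy illustration of Newell's immediate-knowledge vs. search tradeoff.
# A solver answers "at which index does this value sit?" either from a
# precomputed table (immediate knowledge) or by linear search.

def make_solver(data, known_fraction):
    """Build a solver that stores answers for a fraction of queries and
    searches for the rest. Returns (solve_function, table_size)."""
    items = list(data)
    cutoff = int(len(items) * known_fraction)
    table = {v: i for i, v in enumerate(items[:cutoff])}  # stored knowledge

    def solve(value):
        steps = 0
        if value in table:                 # immediate knowledge: no search
            return table[value], steps
        for i, v in enumerate(items):      # fall back to search
            steps += 1
            if v == value:
                return i, steps
        raise ValueError(value)

    return solve, len(table)

data = [f"case-{i}" for i in range(1000)]
for frac in (0.0, 0.5, 1.0):               # three points on one isobar
    solve, stored = make_solver(data, frac)
    total_steps = sum(solve(v)[1] for v in data)
    print(f"stored={stored:4d}  total search steps={total_steps}")
```

As the stored fraction rises the total search effort falls while the answers stay identical, which is the "moving up and to the left along an isobar" of the diagram; a finite task domain corresponds to the fully tabled case, where search drops to zero.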