From rutgers!sun-barr!cs.utexas.edu!uunet!mcvax!ukc!icdoc!syma!aarons Mon Aug  7 12:31:15 EDT 1989
Article 4702 of comp.ai:
Path: sunybcs!rutgers!sun-barr!cs.utexas.edu!uunet!mcvax!ukc!icdoc!syma!aarons
>From: aarons@syma.sussex.ac.uk (Aaron Sloman)
Newsgroups: comp.ai
Subject: Re: Is there a definition of AI?
Keywords: defining AI
Date: 6 Aug 89 17:02:11 GMT
References: <KIM.89Aug4221740@watsup.waterloo.edu>
Organization: School of Cognitive & Computing Sciences, Sussex Univ. UK
Lines: 199

kim@watsup.waterloo.edu (T. Kim Nguyen) writes:

> Date: 5 Aug 89 02:17:40 GMT
> Organization: PAMI Group, U. of Waterloo, Ontario
>
> Anyone seen any mind-blowing (I mean, *GOOD*) definitions of AI?  All
> the books seem to gloss over it...
> --
> Kim Nguyen 					kim@watsup.waterloo.edu
> Systems Design Engineering  --  University of Waterloo, Ontario, Canada

Most people who attempt to define AI give limited definitions based
on ignorance of the breadth of the field. E.g. people who know
nothing about work on computer vision, speech, or robotics often
define AI as if it were all about expert systems. (I even once
saw an attempt to define it in terms of the use of LISP!).

What follows is a discussion of the problem that I previously posted
in 1985 (I've made a few minor changes this time)!

-- Some inadequate definitions of AI ------------------------------

Marvin Minsky once defined Artificial Intelligence as '... the
science of making machines do things that would require intelligence
if done by men'.

I don't know if he still likes this definition, but it is often
quoted with approval. A slightly different definition, similar in
spirit but allowing for shifting standards, is given in the textbook
on AI by Elaine Rich (McGraw-Hill 1983):
    '.. the study of how to make computers do things at which, at
    the moment, people are better.'

There are several problems with these definitions.

 (a) They suggest that AI is primarily a branch of engineering
concerned with making machines do things (though Minsky's use of the
word 'science' hints at a study of general principles).

 (b) Perhaps the main objection is their concern with WHAT is done
rather than HOW it is done. There are lots of things computers do
that would require intelligence if done by people but which have
nothing to do with AI, because there are unintelligent ways of
getting them done if you have enough speed. E.g. calculators can do
complex sums which would require intelligence if done by people.
Even simple sums done by a very young child would be regarded as an
indication of high intelligence, though not if done by a simple
mechanical calculator. Was building calculators to go faster or be
more accurate than people once AI? For Rich, does it matter in what
way people are currently better?

 (c) Much AI (e.g. work reported at IJCAI) is concerned with
studying general principles in a way that is neutral as to whether
it is used for making new machines or explaining how existing
systems (e.g. people or squirrels) work. For instance, John McCarthy
is said to have coined the term 'Artificial Intelligence' but it is
clear that his work is of this more general kind, as is much of the
work by Minsky and many others in the field. Many of those who use
computers in AI do so merely in order to test, refine, or
demonstrate their theories about how people do something, or, more
profoundly, because only with the aid of computational concepts can
we hope to express theories with rich enough explanatory power.
(Which does not mean that present-day computational concepts are
sufficient.)

For these reasons, the 'Artificial' part of the name is a misnomer,
and 'Cognitive Science' or 'Computational Cognitive Science' or
'Epistemics' might have been better names. But it is too late to
change the name now, despite the British Alvey Programme's silly use
of "IKBS" (Intelligent Knowledge Based Systems) instead of "AI"


-- Towards a better definition of AI ------------------------------

Winston, in the second edition of his book on AI (Addison Wesley,
1984) defines AI as 'the study of ideas that enable computers to be
intelligent', but quickly moves on to identify two different goals:

    'to make computers more useful'
    'to understand the principles that make intelligence
        possible'.

His second goal captures the spirit of my complaint about the other
definitions. (I made similar points in my book 'The Computer
Revolution in Philosophy' (Harvester Press and Humanities Press,
1978; now out of print)).

All this assumes that we know what intelligence is: and indeed we
can recognise instances even when we cannot define it, as with many
other general concepts, like 'cause', 'mind', 'beauty', 'funniness'.
Can we hope to have a study of general principles concerning X
without a reasonably clear definition of X?

Since almost any behaviour can be the product of either an
intelligent system (e.g. using false or incomplete beliefs or
bizarre motives), or an unintelligent system (e.g. an enormously
fast computer using an enormously large look-up table) it is
important to define intelligence in terms of HOW the behaviour is
produced.
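The point can be made concrete with a small Python sketch (illustrative only; the functions and the range 0..999 are invented for the example): two mechanisms with identical behaviour over a fixed range of inputs, only one of which works by applying a general rule.

```python
# Illustrative only: the same behaviour, produced in two different ways.

# Unintelligent mechanism: an exhaustive look-up table, fixed in advance.
TABLE = {n: 2 * n for n in range(1000)}

def table_double(n):
    return TABLE[n]          # pure retrieval; no rule is applied

# The relevant contrast: a mechanism that applies a general rule,
# so it copes with inputs it has never met.
def rule_double(n):
    return n + n

# Behaviourally indistinguishable on the table's domain...
assert all(table_double(n) == rule_double(n) for n in range(1000))

# ...but only the rule generalises; the table cannot extend itself
# (cf. NOTE.2 below: give it more memory and it still has no way to
# fill in the new entries).
assert rule_double(10**6) == 2 * 10**6
```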

-- Towards a definition of Intelligence ---------------------------

Intelligent systems are those which:

 (A) are capable of using structured symbols (e.g. sentences or
states of a network; i.e. not just quantitative measures, like
temperature or concentration of blood sugar) in a variety of roles
including the representation of facts (beliefs), instructions
(motives, desires, intentions, goals), plans, strategies, selection
principles, etc.

NOTE.1. - The set of structures should not be pre-defined: the
system should have the "generative" capability to produce new
structures as required. The set of uses to which they can be put
should also be open ended.
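What "generative" means here can be sketched in Python (the toy grammar is invented for illustration): a finite set of rules whose set of producible structures is open-ended, because one rule can re-invoke itself.

```python
import itertools

# A tiny recursive grammar: finitely many rules, an unbounded set of
# producible structures (the NP rule is self-embedding).
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the cat"], ["the dog"], ["NP", "that chased", "NP"]],
    "VP": [["slept"], ["saw", "NP"]],
}

def generate(symbol, depth):
    """Yield the strings derivable from `symbol` in at most `depth` expansions."""
    if symbol not in GRAMMAR:        # terminal: yield it as-is
        yield symbol
        return
    if depth == 0:
        return
    for production in GRAMMAR[symbol]:
        parts = [list(generate(s, depth - 1)) for s in production]
        for combo in itertools.product(*parts):
            yield " ".join(combo)

# The set of producible structures keeps growing as the depth bound is
# raised: there is no fixed, pre-defined repertoire.
assert len(set(generate("S", 3))) < len(set(generate("S", 5)))
```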

 (B) are capable of being productively lazy (i.e. able to use the
information expressed in the symbols in order to achieve goals with
minimal effort).

Although it may not be obvious, various kinds of learning capabilities
can be derived from (B), which is why I have not included learning as
an explicit part of the definition, as some people would.

There are many aspects of (A) and (B) which need to be enlarged and
clarified, including the notion of 'effort' and how different sorts
can be minimised, relative to the system's current capabilities. For
instance, there are situations in which the intelligent (productively
lazy) thing to do is develop an unintelligent but fast and reliable
way to do something which has to be done often. (E.g. learning
multiplication tables.)
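The multiplication-table example has a direct computational analogue in memoisation: spend the effort once, then retrieve the result cheaply thereafter. A hedged Python sketch (the slow "first principles" method is deliberately artificial):

```python
from functools import lru_cache

# Productive laziness, roughly: a frequently needed result is computed
# the hard way exactly once, then looked up.
calls = {"n": 0}

@lru_cache(maxsize=None)
def times(a, b):
    calls["n"] += 1                    # count genuinely effortful computations
    return sum(a for _ in range(b))    # slow, first-principles multiplication

for _ in range(100):
    times(7, 8)                        # asked often, computed only once

assert times(7, 8) == 56
assert calls["n"] == 1                 # effort was minimised by caching
```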

NOTE.2 on above "NOTE.1". I think it is important for intelligence
as we conceive it that the mechanisms used should not have any
theoretical upper bound to the complexity of the structures with
which they can cope, though they may have practical (contingent)
limits such as memory limits and addressing limits. (The notion
of "generative power", i.e. which of a mechanism's limits are
theoretically inherent in its design and which are practical or
contingent on the implementation, requires further discussion. One
test is whether the mechanism could easily make use of more memory
if it were provided. A table-lookup mechanism would not be able to
extend the table if given more space.)

NOTE.3. No definition of intelligence should be regarded as final.
As in all science it is to be expected that further investigation
will lead to revision of the basic concepts used to define the
field.

Starting from a suitable (provisional) notion of what an intelligent
system is, I would then define AI as the study of principles
relevant to explaining or designing actual and possible intelligent
systems, including the investigation of both general design
requirements and particular implementation tradeoffs.

The reference to 'actual' systems includes the study of human and
animal intelligence and its underlying principles, and the reference
to 'possible' systems covers principles of engineering design for
new intelligent systems, as well as possible organisms that might
develop one day.

NOTE.4: this definition subsumes connectionist (PDP) approaches to
the study of intelligence. There is no real conflict between
connectionism and AI as conceived of by their broad minded
practitioners.

The study of ranges of design possibilities (what the limits and
tradeoffs are, how different possibilities are related, how they can
be generated, etc.) is a part of any theoretical understanding, and
good AI MUST be theoretically based. There is lots of bad AI -- what
John McCarthy once referred to as the 'look Ma, no hands' variety.

The definition of intelligence could be tied more closely to human
and animal intelligence by requiring the ability to cope with
multiple motives in real time, with resource constraints, in an
environment which is partly friendly partly unfriendly. But probably
(B) can be interpreted as including all this as a special case!

More generally, it is necessary to say something about the nature of
the goals and the structure of the environment in which they are to
be achieved.

But I have gone on long enough.

Conclusion: any short and simple definition of AI is likely to be
    shallow, one-sided, or just wrong as a description of the range
    of existing AI work.

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QN, England
    INTERNET: aarons%uk.ac.sussex.cogs@nsfnet-relay.ac.uk
              aarons%uk.ac.sussex.cogs%nsfnet-relay.ac.uk@relay.cs.net
    JANET     aarons@cogs.sussex.ac.uk
    BITNET:   aarons%uk.ac.sussex.cogs@uk.ac
        or    aarons%uk.ac.sussex.cogs%ukacrl.bitnet@cunyvm.cuny.edu

    UUCP:     ...mcvax!ukc!cogs!aarons
            or aarons@cogs.uucp


From cs!rutgers!iuvax!cica!ctrsol!IDA.ORG!rwex Fri Aug 18 09:56:13 EDT 1989
Article 4733 of comp.ai:
Path: cs!rutgers!iuvax!cica!ctrsol!IDA.ORG!rwex
>From: rwex@IDA.ORG (Richard Wexelblat)
Newsgroups: comp.ai
Subject: Re: Is there a definition of AI?
Keywords: defining AI
Date: 11 Aug 89 11:36:20 GMT
References: <KIM.89Aug4221740@watsup.waterloo.edu> <1213@syma.sussex.ac.uk>
Reply-To: rwex@csed-42.UUCP (Richard Wexelblat)
Organization: IDA, Alexandria, VA
Lines: 27

In article <1213@syma.sussex.ac.uk> aarons@syma.sussex.ac.uk (Aaron Sloman) writes:
>kim@watsup.waterloo.edu (T. Kim Nguyen) writes:
>> Anyone seen any mind-blowing (I mean, *GOOD*) definitions of AI?  All
>> the books seem to gloss over it...
>Most people who attempt to define AI give limited definitions based
>on ignorance of the breadth of the field. E.g. people who know
>nothing about work on computer vision, speech, or robotics often
>define AI as if it were all about expert systems. (I even once
>saw an attempt to define it in terms of the use of LISP!).

A semi-jocular definition I have often quoted (sorry, I don't know the
source, I first saw it in net.jokes) is:

	AI is making computers work like they do in the movies.

Clearly, this is circular and less than helpful operationally.  But it's
a good way to set the scene, especially with layfolks.

A problem with the breadth of AI is that as soon as anything begins to
be successful, it's not considered AI anymore--as if the opprobrium of
being associated with the AI community were something to get away from
as soon as possible.  Ask someone in NatLang or Robot Vision if they're
doing AI.
-- 
--Dick Wexelblat  |I must create a System or be enslav'd by another Man's; |
  (rwex@ida.org)  |I will not Reason and Compare: my business is to Create.|
  703  824  5511  |   -Blake,  Jerusalem                                   |


From rutgers!cs.utexas.edu!csd4.milw.wisc.edu!bionet!agate!shelby!lindy!news Fri Aug 18 09:56:30 EDT 1989
Article 4736 of comp.ai:
Path: sunybcs!rutgers!cs.utexas.edu!csd4.milw.wisc.edu!bionet!agate!shelby!lindy!news
>From: GA.CJJ@forsythe.stanford.edu (Clifford Johnson)
Newsgroups: comp.ai
Subject: Re: Is there a definition of AI?
Date: 11 Aug 89 17:18:04 GMT
Sender: news@lindy.Stanford.EDU (News Service)
Distribution: usa
Lines: 51

Here's a footnote I wrote describing "AI" in a document re
nuclear "launch on warning" that only mentioned the term in
passing.  I'd be interested in criticism.  It does seem a rather
arbitrary term to me.

  Coined by John McCarthy at Dartmouth in the 1950s, the phrase
  "Artificial Intelligence" is longhand for computers.  Today's
  machines think.  For centuries, classical logicians have
  pragmatically defined thought as the processing of raw
  perceptions, comprising the trinity of: categorization of
  perceptions (Apprehension); comparison of categories of
  perceptions (Judgment); and the drawing of inferences from
  connected comparisons (Reason).  AI signifies the performance
  of these definite functions by computers.  AI is also a
  buzz-term that salesmen have applied to virtually all 1980's
  software, but which to data processing professionals especially
  connotes software built from large lists of axiomatic "IF x
  THEN y" rules of inference.  (Of course, all programs have some
  such rules, and, viewed at the machine level, are logically
  indistinguishable.) The idiom artificial intelligence is
  curiously convoluted, being applied more often where the coded
  rules are rough and heuristic (i.e. guesses) rather than
  precise and analytic (i.e. scientific).  The silly innuendo is
  that AI codifies intuitive expertise.  Contrariwise, most AI
  techniques amount to little more than brute trial-and-error
  facilitated by rule-of-thumb short-cuts.  An analogy is jig-saw
  reconstruction, which proceeds by first separating pieces with
  corners and edges, and then crudely trying to find adjacent
  pairs by exhaustive color and shape matching trials.  This
  analogy should be extended by adding distortion to all pieces
  of the jig-saw, so that no fit is perfect, and by repainting
  some, removing others, and adding a few irrelevant pieces.  A
  most likely, or least unlikely, fit is sought.  Neural nets are
  computers programmed with an algorithm for tailoring their
  rules of thumb, based on statistical inference from a large
  number of sample observations for which the correct solution is
  known.  In effect, neural nets induce recurrent patterns from
  input observations.  They are limited in the patterns that they
  recognize, and are stumped by change.  Their programmed rules
  of thumb are no more profound, though more complicated, than
  raw "IF... THEN" constructs.  Neural nets derive
  their conditional branchings from underlying rules of
  statistical inference, and cannot extrapolate beyond the
  fixations of their induction algorithm.  Like regular AI
  applications, they must select an optimal hypothesis from a
  simple, predefined set.  Thus, all AI applications are largely
  probabilistic, as exemplified by medical diagnosis and missile
  attack warning.  In medical diagnosis, failure to use and heed
  a computer can be grounds for malpractice, yet software bugs
  have gruesome consequences.  Likewise, missile attack warning
  deters, yet puts us all at risk.
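The "IF x THEN y" style of system the footnote describes can be sketched in a few lines of Python (the rules and facts below are invented examples, not drawn from any real expert system):

```python
# A minimal forward-chaining engine over "IF x THEN y" rules: each rule
# pairs a set of required facts with a conclusion.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new conclusion can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_rash"}, rules)
assert "recommend_isolation" in derived   # the conclusion chains through two rules
```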


From rutgers!tut.cis.ohio-state.edu!ucbvax!decwrl!nsc!voder!berlioz!andrew Fri Aug 18 09:56:49 EDT 1989
Article 4740 of comp.ai:
Path: sunybcs!rutgers!tut.cis.ohio-state.edu!ucbvax!decwrl!nsc!voder!berlioz!andrew
>From: andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head )
Newsgroups: comp.ai
Subject: Re: Is there a definition of AI?
Summary: arrant rubbish
Date: 12 Aug 89 05:03:39 GMT
References: <4298@lindy.Stanford.EDU>
Distribution: usa
Organization: National Semiconductor, Santa Clara
Lines: 12

In article <4298@lindy.Stanford.EDU>, GA.CJJ@forsythe.stanford.edu (Clifford Johnson) writes:
>   [Neural nets] are limited in the patterns that they
>   recognize, and are stumped by change.  

		* flame bit set *
Go read about Adaptive Resonance Theory (ART) before making sweeping
and false generalisations of this nature!
-- 
...........................................................................
Andrew Palfreyman	There's a good time coming, be it ever so far away,
andrew@berlioz.nsc.com	That's what I says to myself, says I, 
time sucks					   jolly good luck, hooray!


From rutgers!tut.cis.ohio-state.edu!ucbvax!agate!shelby!lindy!news Fri Aug 18 09:57:17 EDT 1989
Article 4742 of comp.ai:
Path: sunybcs!rutgers!tut.cis.ohio-state.edu!ucbvax!agate!shelby!lindy!news
>From: GA.CJJ@forsythe.stanford.edu (Clifford Johnson)
Newsgroups: comp.ai
Subject: Re: Is there a definition of AI?
Date: 12 Aug 89 18:37:38 GMT
Sender: news@lindy.Stanford.EDU (News Service)
Distribution: usa
Lines: 12

In <615@berlioz.nsc.com>, Lord Snooty writes:
>In <4298@lindy.Stanford.EDU>, Clifford Johnson writes:
>>   [Neural nets] are limited in the patterns that they
>>   recognize, and are stumped by change.

>Go read about Adaptive Resonance Theory (ART) before making sweeping
>and false generalisations of this nature!

I would have thought stochastic convergence theory more relevant
than resonance theory.

What exactly is your point, and what, specifically, should I read?


From rutgers!usc!apple!voder!berlioz!andrew Fri Aug 18 09:57:31 EDT 1989
Article 4745 of comp.ai:
Path: sunybcs!rutgers!usc!apple!voder!berlioz!andrew
>From: andrew@berlioz (Lord Snooty @ The Giant Poisoned Electric Head )
Newsgroups: comp.ai
Subject: Re: Is there a definition of AI?
Summary: reference citation
Date: 12 Aug 89 20:39:19 GMT
References: <4318@lindy.Stanford.EDU>
Distribution: usa
Organization: National Semiconductor, Santa Clara
Lines: 26

In article <4318@lindy.Stanford.EDU>, GA.CJJ@forsythe.stanford.edu (Clifford Johnson) writes:
> >In <4298@lindy.Stanford.EDU>, Clifford Johnson writes:
> >>   [Neural nets] are limited in the patterns that they recognize,
> >>   and are stumped by change.
> 					*flame bit set*
> >Go read about Adaptive Resonance Theory (ART) before making sweeping
> >and false generalisations of this nature!
> 
> I would have thought stochastic convergence theory more relevant
> than resonance theory.
> What exactly is your point, and what, specifically, should I read?

I refer to "stumped by change", which admittedly is rather
inexact in itself. I am not familiar with "stochastic convergence",
although perhaps there is another name for it?

A characteristic of ART nets is that they are capable of dealing with
realtime input and performing dynamic characterisations.

A good start would be "Neural Networks & Natural Intelligence" by
Stephen Grossberg (ed), 1988, MIT Press.  Enjoy.
-- 
...........................................................................
Andrew Palfreyman	There's a good time coming, be it ever so far away,
andrew@berlioz.nsc.com	That's what I says to myself, says I, 
time sucks					   jolly good luck, hooray!

From ub!zaphod.mps.ohio-state.edu!samsung!cs.utexas.edu!helios!wfsc4!hmueller Thu Aug 30 12:44:08 EDT 1990
Article 7622 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!samsung!cs.utexas.edu!helios!wfsc4!hmueller
>From: hmueller@wfsc4.tamu.edu (Hal Mueller)
Newsgroups: comp.ai
Subject: Re: What actually is AI?
Message-ID: <7838@helios.TAMU.EDU>
Date: 30 Aug 90 16:29:37 GMT
References: <90241.112651F0O@psuvm.psu.edu> <1990Aug29.183823.25108@msuinfo.cl.msu.edu> <34175@eerie.acsu.Buffalo.EDU> <25392@boulder.Colorado.EDU> <38294@siemens.siemens.com>
Sender: usenet@helios.TAMU.EDU
Organization: Dept. of Wildlife and Fisheries Sciences, Texas A&M University
Lines: 26

In article <38294@siemens.siemens.com> wood@jfred.siemens.edu (Jim Wood) writes:
>    Artificial Intelligence is a computer science and engineering
>    discipline which attempts to model human reasoning methods
>    computationally.

I've spent the last year working with a group that tries to build
models of ANIMAL reasoning methods; we use the same techniques
that you'd apply to any other AI problem.

Everything Jim said in his posting is true in this domain as well.
Shifting from human to animal reasoning doesn't make the problem
any easier.  In fact it's rather annoying to be unable to use 
introspection as a development aid:  I can watch myself solve a
problem and try to build into a program the techniques I see myself
using, but you can't ask an elk or a mountain lion what's going through
its brain.  All we can do is watch the behavior of our models and 
compare it to experimentally observed behavior, using the experience
of ethologists to guide us.

Watching elk in the mountains is much more pleasant than watching
a gripper arm pick up blocks, however.

--
Hal Mueller            			Surf Hormuz.
hmueller@cs.tamu.edu          
n270ca@tamunix.Bitnet


From ub!zaphod.mps.ohio-state.edu!usc!rutgers!rochester!heron.cs.rochester.edu!yamauchi Tue Sep  4 12:53:42 EDT 1990
Article 7623 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!usc!rutgers!rochester!heron.cs.rochester.edu!yamauchi
>From: yamauchi@heron.cs.rochester.edu (Brian Yamauchi)
Newsgroups: comp.ai
Subject: Re: What actually is AI?
Message-ID: <1990Aug30.175352.2710@cs.rochester.edu>
Date: 30 Aug 90 17:53:52 GMT
References: <90241.112651F0O@psuvm.psu.edu> <1990Aug29.183823.25108@msuinfo.cl.msu.edu> <34175@eerie.acsu.Buffalo.EDU> <25392@boulder.Colorado.EDU> <38294@siemens.siemens.com>
Sender: news@cs.rochester.edu (Usenet news)
Reply-To: yamauchi@heron.cs.rochester.edu (Brian Yamauchi)
Organization: University of Rochester Computer Science Department
Lines: 30

In article <38294@siemens.siemens.com>, wood@jfred.siemens.edu (Jim
Wood) writes:
> After being in the field for seven years, this is MY informal
> definition of Artificial Intelligence:
> 
>     Artificial Intelligence is a computer science and engineering
>     discipline which attempts to model human reasoning methods
>     computationally.

Actually, this sounds more like the (usual) definition of Cognitive
Science (since the emphasis is on modeling human reasoning).

No doubt if you query a dozen AI researchers, you will receive a dozen
different definitions, but my definition would be:

	Artificial Intelligence is the study of how to build intelligent
	systems.

The term "intelligent" is both fuzzy and open to debate.  The usual
definition involves symbolic reasoning, but, in my opinion, a better
definition would be the ability to generate complex, goal-oriented
behavior in a rich, dynamic environment (and perhaps also the ability to
learn from experience and extend system abilities based on this
learning).  But I'm a robotics researcher, so naturally I'm biased :-).

_______________________________________________________________________________

Brian Yamauchi				University of Rochester
yamauchi@cs.rochester.edu		Computer Science Department
_______________________________________________________________________________


From ub!zaphod.mps.ohio-state.edu!rpi!dali.cs.montana.edu!milton!forbis Tue Sep  4 12:54:34 EDT 1990
Article 7626 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!rpi!dali.cs.montana.edu!milton!forbis
>From: forbis@milton.u.washington.edu (Gary Forbis)
Newsgroups: comp.ai
Subject: Re: TM's (Was: Re: Searle and Radical Translation)
Message-ID: <6889@milton.u.washington.edu>
Date: 30 Aug 90 19:58:06 GMT
References: <628@ntpdvp1.UUCP>
Organization: University of Washington, Seattle
Lines: 43

I've been following this line for some time.  Ken Presting made me think
of an important difference between formal TMs and the day-to-day computation
machines actually do.

In article <628@ntpdvp1.UUCP> kenp@ntpdvp1.UUCP (Ken Presting) writes:
>> kohout@cme.nist.gov (Robert Kohout) writes:
>... The output of real computers is
>dependent on the past sequence of inputs, and this is exactly the
>phenomenon which concerns me. ...
>One reason that change in output over time is important is simply, learning.
>I do not see any hope of defining "learning" in terms of machines which
>always produce the same output from a given input.
>
>>If you are saying that a real machine can accept its inputs in little
>>chunks, while a TM requires its input up front I maintain that this adds
>>nothing to the computing ability of the machine. Obviously, one could take
>>the entire input over the life of a real machine and encode it in some
>>fashion that could suffice to be the single, "initial" input of a TM.

(I am sorry if any feel I have condensed too much.  I am trying to keep
this article short, and pnews requires an equal or greater amount of new
text when compared to old text.  This lengthens what would otherwise be a
short reply to the context setting quoted material.)

There is more to real machines than accepting input and producing output.
In many cases there is a causal link between previous output and subsequent
input.  This is an additional reason that no real machine is equivalent to
a single TM whose input stream is predetermined.  If "the entire input over
the life of a real machine" were encoded "in some fashion that could suffice 
to be the single, 'initial' input of a TM" it would not represent the causal
link and as such would require some oracle to be defined.

An example.

A normal online application session involves separate create, inquiry, update,
and delete functions.  Unless the input oracle knows the results of the create
prior to actually doing it, it cannot encode input for the update, which relies
upon the output of the inquiry.  Now I could chop the input into little chunks
for each function, but then I would have to carry some information as input to
subsequent calls that is not normally considered part of the input stream (the
part Ken is calling remembered).
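The causal dependence can be shown in a few lines of Python (an illustrative sketch; the class and names are mine, not from the thread): the input to the later calls contains a value that only exists as the output of the earlier call.

```python
import random

# A stateful "online application": create returns a fresh key, and every
# later call must supply that key as part of its input.
class SessionMachine:
    def __init__(self):
        self.records = {}

    def create(self, value):
        key = random.randrange(10**9)   # not knowable before the call is made
        self.records[key] = value
        return key

    def inquire(self, key):
        return self.records[key]

    def update(self, key, value):
        self.records[key] = value

m = SessionMachine()
key = m.create("draft")    # the update input below depends on this OUTPUT...
m.update(key, "final")     # ...so it could not have been encoded up front
assert m.inquire(key) == "final"
```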

--gary forbis@milton.u.washington.edu


From ub!zaphod.mps.ohio-state.edu!uwm.edu!psuvax1!psuvm!f0o Tue Sep  4 12:56:06 EDT 1990
Article 7630 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!uwm.edu!psuvax1!psuvm!f0o
>From: F0O@psuvm.psu.edu
Newsgroups: comp.ai
Subject: Re: What actually is AI?
Message-ID: <90243.142616F0O@psuvm.psu.edu>
Date: 31 Aug 90 18:26:16 GMT
References: <90241.112651F0O@psuvm.psu.edu>
 <1990Aug29.183823.25108@msuinfo.cl.msu.edu> <34175@eerie.acsu.Buffalo.EDU>
 <6287@jhunix.HCF.JHU.EDU>
Organization: Penn State University
Lines: 12


     In following the threads of my original posting, it seems that there
is not one definition of what AI is.  However, my original question
was: what is it that makes one program an AI one, and another one non-AI?
Again, I imagine there is not one magical answer to that, but for instance,
I'm finishing up a prolog program that plays unbeatable tictactoe.  Of
course, this is a very simple game, but would it be considered an AI program?
If not, how about a checkers or chess program?  And if they would be AI
programs, what would make them AI, but tictactoe not-AI?
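For concreteness, the pre-analysed, exhaustive character of such a program is easy to exhibit (a Python sketch, not Tim's Prolog; illustrative only): it is unbeatable purely because tic-tac-toe is small enough to search completely, with no heuristics at all.

```python
from functools import lru_cache

# Exhaustive minimax over the full tic-tac-toe game tree.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Return (score, best move) for `player`; +1 means X wins, -1 means O wins."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                          # draw
    best = None
    for m in moves:
        score, _ = minimax(board[:m] + player + board[m+1:],
                           "O" if player == "X" else "X")
        # X maximises the score, O minimises it.
        if best is None or (player == "X") == (score > best[0]):
            best = (score, m)
    return best

# The exhaustive analysis: perfect play from the empty board is a draw.
assert minimax(" " * 9, "X")[0] == 0
```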


                                                        [Tim]


From ub!zaphod.mps.ohio-state.edu!wuarchive!cs.utexas.edu!uunet!mcsun!unido!uklirb!powers Thu Sep  6 12:45:53 EDT 1990
Article 7642 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!wuarchive!cs.utexas.edu!uunet!mcsun!unido!uklirb!powers
>From: powers@uklirb.informatik.uni-kl.de (David Powers AG Siekmann)
Newsgroups: comp.ai
Subject: Re: What actually is AI?
Message-ID: <6560@uklirb.informatik.uni-kl.de>
Date: 3 Sep 90 12:02:27 GMT
References: <90241.112651F0O@psuvm.psu.edu> <90243.142616F0O@psuvm.psu.edu>
Organization: University of Kaiserslautern, W-Germany
Lines: 94

F0O@psuvm.psu.edu writes:


>     In following the threads of my original posting, it seems that there
>is not one definition of what AI is.  However, what my original question
>was is, what is it that makes one program an AI one, and another one non-AI?
>Again, I imagine there is not one magical answer to that, but for instance,
>I'm finishing up a prolog program that plays unbeatable tictactoe.  Of
>course, this is a very simple game, but would it be considered an AI program?
>If not, how about a checkers or chess program?  And it they would be AI
>programs, what would make them AI, but tictactoe not-AI?


We have now seen two definitions; I prefer to characterize them so:

the engineering perspective: 

	to build systems to do the things we can't build systems to do
	because they require intelligence

the psychological perspective:

	to build systems to do the things we ourselves can do to help
	us to understand our intelligence

The former was the original aim, the latter came from psychology
and is represented in Margaret Boden's book: AI and Natural Man.
But it is also a natural extension of our familiar introspection.
This has now been distinguished with its own name: Cognitive Science.

Note that a corollary of the first definition is that once we can
build something, then the task no longer lies within artificial
intelligence.  AI has lost several subfields on this basis, from
pattern recognition to chess playing programs to expert systems.

I would say the real AI definition is this:

the heuristic perspective:

	to build systems relying on heuristics (rules of thumb)
	rather than pure algorithms

This excludes noughts and crosses (tic-tac-toe) and chess if the
program is dumb and exhaustive (chess) or pre-analyzed and
exhaustive (ttt).  Unfortunately it could also include expert
systems, which I see as a spin-off of AI technology and by no means
mainstream, but expert systems capture conscious knowledge or at
least high level knowledge.  The capture of the knowledge is
straightforward and intrinsically no different from the
introspection involved in writing any program - we think "How
would I do it by hand?"  Of course knowledge engineering techniques
can be applied to any domain, even those hard to introspect, by
using the techniques with the experts in the field - e.g. on linguists,
for natural language.  But this won't in general reveal how we are
actually using language.
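The heuristic-versus-pure-algorithm contrast can be made concrete (an invented Python example): both searchers below find a route across an empty grid, but the rule of thumb (straight-line distance to the goal) examines far fewer states than blind breadth-first search.

```python
import heapq
from collections import deque

SIZE = 20
GOAL = (SIZE - 1, SIZE - 1)

def neighbours(pos):
    x, y = pos
    return [(x+dx, y+dy) for dx, dy in ((1,0),(-1,0),(0,1),(0,-1))
            if 0 <= x+dx < SIZE and 0 <= y+dy < SIZE]

def exhaustive(start):
    """Breadth-first: a pure algorithm, guaranteed but blind."""
    seen, queue = {start}, deque([start])
    while queue:
        pos = queue.popleft()
        if pos == GOAL:
            return len(seen)             # states examined
        for n in neighbours(pos):
            if n not in seen:
                seen.add(n)
                queue.append(n)

def heuristic(start):
    """Greedy best-first on Manhattan distance: a rule of thumb."""
    def h(p):
        return abs(GOAL[0] - p[0]) + abs(GOAL[1] - p[1])
    seen, frontier = {start}, [(h(start), start)]
    while frontier:
        _, pos = heapq.heappop(frontier)
        if pos == GOAL:
            return len(seen)
        for n in neighbours(pos):
            if n not in seen:
                seen.add(n)
                heapq.heappush(frontier, (h(n), n))

# The rule of thumb does far less work on this (obstacle-free) grid.
assert heuristic((0, 0)) < exhaustive((0, 0))
```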

This brings us back to the cognitive science definition.

The definition which guides my own work is:

	to build systems which are capable of modifying their
	behaviour dynamically by learning

This takes the responsibility of acquiring and inputting the
heuristics or knowledge from the programmer or knowledge engineer
and gives it to the program itself.  Machine Learning is a subfield of
AI, but somehow central to its future.  Expert Systems are also
really only still AI in so far as we use AI (=heuristic+learning)
techniques in the acquisition of the knowledge base.  But there is
also a lot of work to be done in establishing the foundations
within which learning is possible.
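A minimal concrete instance of such a system (an invented sketch, not from the post): a perceptron that modifies its behaviour from labelled examples, rather than having the rule keyed in by a programmer or knowledge engineer.

```python
# Perceptron learning: behaviour is adjusted dynamically from examples.
def train(samples, epochs=20, rate=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            error = target - predicted
            w[0] += rate * error * x1     # nudge the weights toward the target
            w[1] += rate * error * x2
            b += rate * error
    return w, b

# Learn logical AND from examples; no "IF both THEN 1" rule was ever written.
AND = [((0,0),0), ((0,1),0), ((1,0),0), ((1,1),1)]
w, b = train(AND)
assert all((1 if w[0]*x1 + w[1]*x2 + b > 0 else 0) == t
           for (x1, x2), t in AND)
```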

Another definition of AI is:

	Anything written in LISP or PROLOG.  

This definition (or either half thereof) is believed by some.  It
is not so silly as it sounds.  E.g., PROLOG does have something of
the property of automatically finding a way of satisfying
specifications, and logic and induction and theorem proving
technology are the underpinnings of machine learning research.
This technology can now be guided by heuristics, and these
heuristics can be learned.  It's only beginning, but it's exciting!
And, of course, you can still misuse any language!

I hope this has stirred the pot a bit.

David
------------------------------------------------------------------------
David Powers		 +49-631-205-3449 (Uni);  +49-631-205-3200 (Fax)
FB Informatik		powers@informatik.uni-kl.de; +49-631-13786 (Prv)
Univ Kaiserslautern	 * COMPULOG - Language and Logic
6750 KAISERSLAUTERN	 * MARPIA   - Parallel Logic Programming
WEST GERMANY		 * STANLIE  - Natural Language Learning


From ub!zaphod.mps.ohio-state.edu!usc!samsung!munnari.oz.au!metro!grivel!gara!pnettlet Thu Sep  6 12:47:56 EDT 1990
Article 7649 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!usc!samsung!munnari.oz.au!metro!grivel!gara!pnettlet
>From: pnettlet@gara.une.oz.au (Philip Nettleton)
Newsgroups: comp.ai
Subject: What AI is exactly.
Summary: Let's look at what AI really is, not just some airy-fairy notions.
Message-ID: <3543@gara.une.oz.au>
Date: 6 Sep 90 02:43:59 GMT
References: <34175@eerie.acsu.Buffalo.EDU> <25392@boulder.Colorado.EDU> <3797@se-sd.SanDiego.NCR.COM>
Organization: University of New England, Armidale, Australia
Lines: 116

In article <3797@se-sd.SanDiego.NCR.COM>, jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:
> In article <38294@siemens.siemens.com> wood@jfred.siemens.edu (Jim Wood) writes:
> >    Artificial Intelligence is a computer science and engineering
> >    discipline which attempts to model human reasoning methods
> >    computationally.
> >
> 
> I think this is a pretty good definition, taken from the engineers point
> of view.  A psychologist might take a different view of the definition/
> purpose of AI.
> 
> One thing I'd include is that it's a cognitive psychological as well as
> computer science and engineering discipline.  You have to know something
> about how people think in order to model human reasoning methods.

I think it is a terribly poor definition, actually, for the following reasons:

a)	Human Intelligence is NOT the only form of intelligence. This is an
	extremely one-eyed viewpoint. Dolphins are extremely intelligent, and
	the only reason we cannot communicate with them to date is the
	extreme differences in our vocal ranges and auditory senses. There
	is also a huge cultural gap. What concerns do dolphins have? What form
	does their communication take? We need to know these BEFORE we can
	even look at syntax and semantics. Hence their intelligence is very
	alien to ours.

b)	People tend to assume that a machine cannot be intelligent. Human
	Intelligence is well documented, much research has been done into
	Animal Intelligence, but what of Machine Intelligence? Is there a
	specific type of intelligence that a machine can have? Is there any
	need to base this intelligence on Human or Animal Intelligence?

Saying that AI is modelling "Human Intelligence" is totally inadequate. It
may not even be possible because we have such a limited understanding of the
processes involved. Artificial Intelligence means:

	An intelligent system designed by mankind to run on a man-made
	artifact, i.e., a computer. The term Machine Intelligence is more
	succinct because it identifies the type of intelligence created.

Please no arguments about:

	What is intelligence?

This has been discussed ad nauseam, and obviously, we don't know. However,
the system must exhibit intelligent behaviour. With regard to intelligent human
behaviour, we can test this with the Turing Test. As for intelligent animal
behaviour, there is no appropriate test. And what is intelligent behaviour
for a machine? It could be quite alien in appearance from the other two.

Let us produce a general requirement for intelligent behaviour:

a)	The system MUST be able to learn. This implies that the system MUST have
	a memory for learning to be maintained. Also learning comes in a
	number of varieties:

	i)	It MUST be able to learn from its own experiences. These can
		be broken down into further criteria:

		1)	Learning through trial and error.
		2)	Learning through observation.
		3)	Learning through active deduction (see reasoning).

	ii)	It SHOULD be able to learn by instruction, but this is not
		necessary. At the very least the system MUST have preprogrammed
		instincts. This is a boot strap for the developing intelligence.
		Without a starting point, the system cannot progress.

b)	The system MUST be autonomous. This can be dissected as:

	i)	The system MUST be able to affect its environment based on
		its own independent conclusions.

	ii)	The system MUST be its own master and therefore doesn't
		require operator intervention.

	iii)	The system MUST be motivated. It must have needs and
		requirements that can be satisfied by its own actions.

c)	The system MUST be able to reason. That is to say, it must use some
	form of deductive reasoning, based on known facts and capable of
	producing insights (deductions) which later become known facts.

d)	The system MUST be self aware. This is related to autonomy, reasoning
	and learning, but also embodies the need for external senses. Without
	external senses there is no way of appreciating the difference between
	"me" and "outside of me". Sensations of pain and pleasure can
	provide motivation.

It is clear that a human easily satisfies these requirements and so is
an intelligent system. A cat also satisfies these requirements. So we now have
a common basis for known intelligent behaviour. An intelligent machine would
need to satisfy these requirements to be classed as an intelligent system.
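Requirement (a)(i)(1), learning through trial and error, can be made
concrete with a few lines of Python (an illustrative toy of my own, not
anything from this thread; the two-action world and reward values are
invented for the example):

```python
import random

# Toy trial-and-error learner. The system tries actions, remembers how
# well each one worked (requirement (a): it MUST have a memory), and
# comes to prefer the action its own experience rewarded.

actions = ["left", "right"]
reward = {"left": 0.0, "right": 1.0}    # the environment: only "right" pays

memory = {a: 0.0 for a in actions}      # remembered value of each action

def update(a):
    # Nudge the remembered value toward the reward actually received.
    memory[a] += 0.1 * (reward[a] - memory[a])

for a in actions:                       # first, try everything once
    update(a)

random.seed(0)
for trial in range(200):
    if random.random() < 0.1:           # occasional fresh trial (and error)
        a = random.choice(actions)
    else:                               # otherwise exploit what was learned
        a = max(memory, key=memory.get)
    update(a)

print(max(memory, key=memory.get))      # prints "right": the lesson stuck
```

Nothing here is "intelligent" on its own, of course; the point is only
that even this minimal loop already needs the memory, the trials, and
the errors that requirement (a) demands.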

One last point of clarification:

	The ENVIRONMENT in which the intelligent system operates need not
	be the physical environment of the world around us. It could be a
	computer environment.

I invite responses from those who would like to clarify any points made here
or those who would like to extend or advance further points into a
constructive debate. But please, if you are hung up on the divinity of the human
race or you want to bring the Searle debate into this, do us all a favour and
refrain.

		With Regards,

				Philip Nettleton,
				Tutor in Computer Science,
				Department of Maths, Stats, and Computing,
				The University of New England,
				Armidale,
				New South Wales,
				2351,
				AUSTRALIA.



From ub!zaphod.mps.ohio-state.edu!rpi!uupsi!njin!princeton!siemens!jfred!wood Fri Sep 21 12:15:41 EDT 1990
Article 7654 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!rpi!uupsi!njin!princeton!siemens!jfred!wood
>From: wood@jfred.siemens.edu (Jim Wood)
Newsgroups: comp.ai
Subject: Re: What AI is exactly.
Message-ID: <38801@siemens.siemens.com>
Date: 6 Sep 90 14:43:37 GMT
References: <34175@eerie.acsu.Buffalo.EDU> <25392@boulder.Colorado.EDU> <3797@se-sd.SanDiego.NCR.COM> <3543@gara.une.oz.au>
Sender: news@siemens.siemens.com
Lines: 70

I originally wrote:

>>    Artificial Intelligence is a computer science and engineering
>>    discipline which attempts to model human reasoning methods
>>    computationally.

and pnettlet@gara.une.oz.au (Philip Nettleton) writes [and I edit]:

>I think it is a terribly poor definition, actually, for the following
>reasons:

>a)	Human intelligence is NOT the only form of intelligence.  This is an
>	extremely one-eyed viewpoint.  Dolphins are extremely intelligent, and
>	the only reason we cannot communicate with them to date is because of
>	the extreme differences in our vocal ranges and auditory senses.
>	There is also a huge cultural gap.  What concerns do dolphins have?
>	What form does their communication take?  We need to know these
>	BEFORE we can even look at syntax and semantics.  Hence their
>	intelligence is very alien to ours.

Agreed with (a), but I do not recall having implied human intelligence is
the only form of intelligence.  However, it is certainly the most
interesting to artificial intelligence scientists and engineers.  From the
practical perspective, it is the only type of intelligence which interests
industry, from which the purse flows.

My definition involves a model of human REASONING methods.  The strongest
areas of artificial intelligence, in my opinion, are expert systems (modeling
the knowledge of an expert), natural language systems (modeling languages
and how humans process them), robotics (modeling human sensory and motor
functions), and neural networks (modeling the cognitive processes of the
human brain).  Each of these involves human reasoning.

>b)	People tend to assume that a machine cannot be intelligent.  Human
>	intelligence is well documented, and much research has been done into
>	animal intelligence, but what of machine intelligence?  Is there a
>	specific type of intelligence that a machine can have?  Is there any
>	need to base this intelligence on human or animal intelligence?

Your reference to machine intelligence is a good one, but it is a mistake
to overshadow human intelligence with it in defining artificial intelligence.
A machine is no more than an extension of human computability.  There is
nothing which a machine does which is not a direct product of the exercise
of human intelligence.  Consequently, machine intelligence is a subset of
human intelligence.

>Saying that AI is modeling "human intelligence" is totally inadequate.  It
>may not even be possible because we have such a limited understanding of
>the processes involved.

I did not say AI models human intelligence.  I was very specific to say that
it models human reasoning methods.  I also believe our knowledge of human
reasoning is limited, but that does not stop AI scientists and engineers
from developing theories and applications.

>Artificial Intelligence means:
>	An intelligent system designed by mankind to run on a man-made
>	artifact, for example, a computer. The term Machine Intelligence
>	is more succinct because it identifies the type of intelligence
>	created.

Artificial intelligence is not a system, any more than computer science is
a system.  Intelligent systems are the product of artificial intelligence
METHODOLOGIES.  For example, an expert system is not "artificial
intelligence", rather it is the result of applying artificial intelligence
methodologies.
--
Jim Wood [wood@cadillac.siemens.com]
Siemens Corporate Research, 755 College Road East, Princeton, NJ  08540
(609) 734-3643


From ub!zaphod.mps.ohio-state.edu!sdd.hp.com!ucsd!sdcc6!odin!demers Fri Sep 21 12:16:03 EDT 1990
Article 7655 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!sdd.hp.com!ucsd!sdcc6!odin!demers
>From: demers@odin.ucsd.edu (David E Demers)
Newsgroups: comp.ai
Subject: Re: What actually is AI?
Message-ID: <12563@sdcc6.ucsd.edu>
Date: 6 Sep 90 19:20:26 GMT
References: <90241.112651F0O@psuvm.psu.edu> <90243.142616F0O@psuvm.psu.edu> <6560@uklirb.informatik.uni-kl.de>
Sender: news@sdcc6.ucsd.edu
Organization: CSE Dept., U. C. San Diego
Lines: 53
Nntp-Posting-Host: odin.ucsd.edu

In article <6560@uklirb.informatik.uni-kl.de> powers@uklirb.informatik.uni-kl.de (David Powers AG Siekmann) writes:
>F0O@psuvm.psu.edu writes:


>>     In following the threads of my original posting, it seems that there
>>is not one definition of what AI is.  However, what my original question
>>was is, what is it that makes one program an AI one, and another one non-AI?
>>Again, I imagine there is not one magical answer to that, but for instance,
>>I'm finishing up a prolog program that plays unbeatable tictactoe.  Of
>>course, this is a very simple game, but would it be considered an AI program?
>>If not, how about a checkers or chess program?  And it they would be AI
>>programs, what would make them AI, but tictactoe not-AI?


>We have now seen 2 definitions, I prefer to characterize them so:

>the engineering perspective: 

>	to build systems to do the things we can't build systems to do
>	because they require intelligence

>the psychological perspective:

>	to build systems to do the things we ourselves can do to help
>	us to understand our intelligence
[...]
>I would say the real ai definition is this:

>the heuristic perspective:

>	to build systems relying on heuristics (rules of thumb)
>	rather than pure algorithms
[...]
>The definition which guides my own work is:

>	to build systems which are capable of modifying their
>	behaviour dynamically by learning

[...]

>Another definition of AI is:
>
>	Anything written in LISP or PROLOG.  
>I hope this has stirred the pot a bit.


I'm still looking for the originator of the definition:

"AI is the art of making computers act like the
ones in the movies"


Dave


From ub!zaphod.mps.ohio-state.edu!samsung!munnari.oz.au!metro!grivel!gara!pnettlet Fri Sep 21 12:17:48 EDT 1990
Article 7661 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!samsung!munnari.oz.au!metro!grivel!gara!pnettlet
>From: pnettlet@gara.une.oz.au (Philip Nettleton)
Newsgroups: comp.ai
Subject: What AI is exactly - A follow up.
Keywords: intelligence
Message-ID: <3569@gara.une.oz.au>
Date: 7 Sep 90 04:04:43 GMT
Organization: University of New England, Armidale, Australia
Lines: 72

lynch@aristotle.ils.nwu.edu (Richard Lynch) writes:
> 
> "No man is an island unto himself."
> 
> Certainly an intelligent machine should be able to handle many things for
> itself, but clearly at some point it must be capable of depending on others,
> dealing and negotiating with others.
> 
> "TANSTAAFL" -> "There Ain't No Such Thing As A Free Lunch", Cheers!

Agreed -> see new definition.

forbis@milton.u.washington.edu (Gary Forbis) writes:
>
> Self awareness does not exist in very young children yet their intelligence
> seems apparent to me. Defining the limits of "me" is one of the first tasks
> an intelligence has to solve; these limits are fuzzy.

Agreed -> see new definition.

Let us produce a slightly more refined general requirement for intelligent
behaviour:

a)	The system MUST be able to learn. This implies that the system MUST have
	a memory for learning to be maintained. Also learning comes in a
	number of varieties:

	i)	It MUST be able to learn from its own experiences. These can
		be broken down into further criteria:

		1)	Learning through trial and error.
		2)	Learning through observation.
		3)	Learning through active deduction (see reasoning).

	ii)	It SHOULD be able to learn by instruction, but this is not
		necessary. At the very least the system MUST have preprogrammed
		instincts. This is a boot strap for the developing intelligence.
		Without a starting point, the system cannot progress.

b)	The system MUST be autonomous. That is to say, it MUST be able to
	do things by itself (though it may choose to accept aid). This can
	be dissected as:

	i)	The system MUST be able to affect its environment based on
		its own independent conclusions.

	ii)	The system MUST be its own master and therefore doesn't
		require operator intervention.

	iii)	The system MUST be motivated. It must have needs and
		requirements that can be satisfied by its own actions.

c)	The system MUST be able to reason. That is to say, it must use some
	form of deductive reasoning, based on known facts and capable of
	producing insights (deductions) which later become known facts.

d)	The system MUST be able to develop self awareness. This is related
	to autonomy, reasoning and learning, but also embodies the need for
	external senses. Without external senses there is no way of
	appreciating the difference between "me" and "outside of me".
	Sensations of pain and pleasure can provide motivation.

		With Regards,

				Philip Nettleton,
				Tutor in Computer Science,
				Department of Maths, Stats, and Computing,
				The University of New England,
				Armidale,
				New South Wales,
				2351,
				AUSTRALIA.


From ub!acsu.buffalo.edu Fri Sep 21 12:20:25 EDT 1990
Article 7675 of comp.ai:
Path: ub!acsu.buffalo.edu
>From: dmark@acsu.buffalo.edu (David Mark)
Newsgroups: comp.ai
Subject: Re: What AI is exactly.
Message-ID: <35282@eerie.acsu.Buffalo.EDU>
Date: 8 Sep 90 16:13:33 GMT
References: <3797@se-sd.SanDiego.NCR.COM> <3543@gara.une.oz.au> <3815@se-sd.SanDiego.NCR.COM>
Sender: news@acsu.Buffalo.EDU
Organization: SUNY Buffalo
Lines: 54
Nntp-Posting-Host: autarch.acsu.buffalo.edu

In article <3815@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin, Cognitologist domesticus) writes:

  [90 lines deleted]
>
>Umm, a cat can't reason, or learn in any human sense.  ...
                                      ^^^
Hope you are not offended, Jim, but I think this claim is just plain silly.
Cats, and other mammals, and birds, and indeed even many invertebrates,
DO learn things!  I remember an article in SCIENCE a few years back that
showed that the time required for a butterfly to insert its proboscis into
the nectaries of a flower decreases with number of trials.  That
is "learning", isn't it?   And it is A type of learning that humans
undoubtedly exhibit.  Thus the "any" in the above quote seems inappropriate.
(Anyone test a human on time needed to, say, thread a needle?)  Yet I don't
think I would want to claim that butterflies are "intelligent" in a realistic
sense.  

But, by my everyday definition of "intelligence", cats and crows and many
other birds and mammals certainly have it.  Their "intelligence" does not
seem to be as elaborate or as developed as ours.  But they do "learn", and 
"remember" (experiments with food caching and re-finding in birds; I
can find references if you want), and "solve problems" (parrot pulling string
"foot over beak" to raise food to its perch), and even "form generalizations".
For the latter, I was told of an apartment-raised cat whose owner moved to
a house with a front door and a back door.  Initially, the cat would "ask" to 
go out one of the doors, and if it was raining, it would retreat and then "ask"
at the other door.  But within a few days, the cat, when seeing rain at one
door, would NOT attempt the other.  It seems obvious that the cat
had "generalized" that rain out one door meant rain out the other, 
or had "learned" that the two doors connect to the "same real world."
And as for communication, many animal species have fairly elaborate
vocal and behavioral methods for "communicating".  And the experiments with
signing apes, even if interpreted rather enthusiastically by the authors,
seem to indicate abilities at fairly complex communication for these
creatures.

It seems to me that human "intelligence" differs from the "intelligence"
of other vertebrates in degree rather than kind.  (I agree that the
degree is VERY large in most cases.)  Is there any "EVIDENCE"
that humans have "kinds" of "intelligence" that no other species
exhibits even to a primitive degree?  (By the usual standards of science,
I would guess that solid "evidence" either way would be pretty hard to 
come by.)

And finally, is the domain or goal of "Artificial Intelligence" really
"Artificial HUMAN Intelligence" ?  Or do folks mostly want to claim that
"Artificial Human Intelligence" is redundant, that "intelligence" is
a strictly-human trait?  And if so, is it strictly-human BY DEFINITION? 
And if so, what do we want to call the collective set of cognitive
abilities to "learn", "communicate", "solve problems", etc., that many
"higher" vertebrates seem to possess?

David Mark
dmark@acsu.buffalo.edu


From ub!acsu.buffalo.edu Fri Sep 21 12:28:02 EDT 1990
Article 7729 of comp.ai:
Path: ub!acsu.buffalo.edu
>From: dmark@acsu.buffalo.edu (David Mark)
Newsgroups: comp.ai
Subject: Re: What AI is exactly.
Message-ID: <36268@eerie.acsu.Buffalo.EDU>
Date: 14 Sep 90 22:35:31 GMT
References: <3815@se-sd.SanDiego.NCR.COM> <35282@eerie.acsu.Buffalo.EDU> <3851@se-sd.SanDiego.NCR.COM>
Sender: news@acsu.Buffalo.EDU
Organization: SUNY Buffalo
Lines: 27
Nntp-Posting-Host: autarch.acsu.buffalo.edu

In article <3851@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin
, Cognitologist domesticus) writes:
>
>Perhaps the crux of this problem is the definition of "learning" as
>a purely behavioural one.  IMO, learning is more than just displaying
>certain behaviour.
>
>>Thus the "any" in the above quote seems inappropriate.
>
>Agreed, if you look merely at the behavioural aspects of learning.  Otherwise,
>maybe there's little similarities between the exhibited behaviour in humans
>and cats.

Jim, it is difficult to discuss issues such as these if people are
using the key terms to mean sharply different things.  Would you please
provide us with the definition of "learning" that you are using,
either by making up your own or by quoting some source?  I presume that
we are not disagreeing much about the facts of animal behavior and
human behavior, but are disagreeing about what definitions of "intelligence"
and "learn" are appropriate.  And since "intelligence" is such a slippery
one, let's start with "learn" or "learning".  In particular, could you detail
what the "non-behavioral" aspects of learning are?

David Mark
dmark@acsu.buffalo.edu




From ub!zaphod.mps.ohio-state.edu!swrinde!ucsd!ogicse!plains!person Fri Sep 21 12:29:17 EDT 1990
Article 7739 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!swrinde!ucsd!ogicse!plains!person
>From: person@plains.NoDak.edu (Brett G. Person)
Newsgroups: comp.ai
Subject: Re: What actually is AI?
Message-ID: <5901@plains.NoDak.edu>
Date: 15 Sep 90 23:39:21 GMT
References: <90241.112651F0O@psuvm.psu.edu> <90243.142616F0O@psuvm.psu.edu> <6560@uklirb.informatik.uni-kl.de>
Organization: North Dakota State University, Fargo
Lines: 10


I had an instructor tell me once that AI was anything that hadn't already
been done in AI.

He said that once an AI application had been written, no one considered it
to be AI anymore.
-- 
Brett G. Person
North Dakota State University
uunet!plains!person | person@plains.bitnet | person@plains.nodak.edu


From ub!acsu.buffalo.edu Fri Sep 21 12:31:52 EDT 1990
Article 7744 of comp.ai:
Path: ub!acsu.buffalo.edu
>From: pmm@acsu.buffalo.edu (patrick m mullhaupt)
Newsgroups: comp.ai
Subject: Re: What AI is exactly.
Message-ID: <36424@eerie.acsu.Buffalo.EDU>
Date: 17 Sep 90 04:15:27 GMT
References: <25392@boulder.Colorado.EDU> <3797@se-sd.SanDiego.NCR.COM> <3543@gara.une.oz.au>
Sender: news@acsu.Buffalo.EDU
Organization: SUNY Buffalo
Lines: 76
Nntp-Posting-Host: autarch.acsu.buffalo.edu

>a) The system MUST be able to learn.
>b)	The system MUST be autonomous.
>c)	The system MUST be able to reason.
>d)	The system MUST be self aware.
>It is clear to see that a human easily satisfies these requirements and so is
>an intelligent system. A cat also satisfies these requirements. So we now have
>a common basis for known intelligent behaviour. An intelligent machine would
>need to satisfy these requirements to be classed as an intelligent system.
>
>		With Regards,
>
>				Philip Nettleton,
>				AUSTRALIA.






	I don't have any problems with these constraints.  I do have a
question though.

	Would a group of individuals, say the congress of the USA,
qualify as an "intelligent system"? :-)  More generally, do you allow
collective intelligences?  I would guess that you might not, but your
definition seems to allow it.

	G'day,
		Patrick Mullhaupt


From ub!zaphod.mps.ohio-state.edu!sdd.hp.com!decwrl!hayes.fai.alaska.edu!accuvax.nwu.edu!mmdf Fri Sep 21 12:32:05 EDT 1990
Article 7746 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!sdd.hp.com!decwrl!hayes.fai.alaska.edu!accuvax.nwu.edu!mmdf
>From: lynch@aristotle.ils.nwu.edu (Richard Lynch)
Newsgroups: comp.ai
Subject: Re: What is AI Exactly?
Message-ID: <12236@accuvax.nwu.edu>
Date: 17 Sep 90 15:04:26 GMT
Sender: mmdf@accuvax.nwu.edu
Lines: 15

Patrick Mullhaupt asks whether the US Congress would meet the following
requirements and thus be classified as "intelligent":
a) Able to learn
b) Autonomous
c) Able to reason
d) Self-aware


No, Patrick, the US Congress clearly does NOT satisfy c) Able to reason.

On a more serious note, one could inject e) Be a single organism, but that
would rule out all those sci-fi aliens with only a "collective intelligence",
and I don't think we want to do that.

"TANSTAAFL" Rich lynch@aristotle.ils.nwu.edu


From ub!zaphod.mps.ohio-state.edu!samsung!munnari.oz.au!metro!grivel!gara!pnettlet Fri Sep 21 12:32:38 EDT 1990
Article 7749 of comp.ai:
Path: ub!zaphod.mps.ohio-state.edu!samsung!munnari.oz.au!metro!grivel!gara!pnettlet
>From: pnettlet@gara.une.oz.au (Philip Nettleton)
Newsgroups: comp.ai
Subject: What AI is Exactly - Another Update.
Keywords: intelligence
Message-ID: <3734@gara.une.oz.au>
Date: 17 Sep 90 22:27:47 GMT
Organization: University of New England, Armidale, Australia
Lines: 97

Some new people have recently entered this debate so I thought it was
time to repost the definition of an "Intelligent System" that we have
developed so far. Pinning this debate back to its origins, we would
be interested in hearing from anyone with a CONSTRUCTIVE criticism of
any part of the definition, or any additions they feel are necessary.
Remember, the underlying assumption is that being human is not a
necessary condition for being intelligent; this point has been flogged
to death in recent postings.

Let us produce a slightly more refined set of "general requirements" for
the behaviour of an "intelligent system".

----------------------------------------------------------------------
			DEFINITION:
	GENERAL REQUIREMENTS OF AN INTELLIGENT SYSTEM.

a)	The system MUST be able to learn. This implies that the
	system MUST have a memory for learning to be maintained.
	Also learning comes in a number of varieties:

	i)	It MUST be able to learn from its own experiences.
		These can be broken down into further criteria:

		1)	Learning through trial and error.
		2)	Learning through observation.
		3)	Learning through active deduction (see
			reasoning).

	ii)	It SHOULD be able to learn by instruction, but this
		is not necessary. At the very least the system MUST
		have preprogrammed instincts. This is a boot strap
		for the developing intelligence.  Without a starting
		point, the system cannot progress.

b)	The system MUST be autonomous. That is to say, it MUST be
	able to do things by itself (though it may choose to accept
	aid).  This can be dissected as:

	i)	The system MUST be able to affect its environment
		based on its own independent conclusions.

	ii)	The system MUST be its own master and therefore
		doesn't require operator intervention.

	iii)	The system MUST be motivated. It must have needs and
		requirements that can be satisfied by its own
		actions.

c)	The system MUST be able to reason. That is to say, it must
	use some form of deductive reasoning, based on known facts
	and capable of producing insights (deductions) which later
	become known facts.

d)	The system MUST be able to develop self awareness. This is
	related to autonomy, reasoning and learning, but also
	embodies the need for external senses. Without external
	senses there is no way of appreciating the difference between
	"me" and "outside of me". Sensations of pain and
	pleasure can provide motivation.
----------------------------------------------------------------------
			DEFINITION OF TERMS.

1)	A "system" CAN be comprised of multiple subsystems, each one
	of these could be a system in its own right (systems theory).

2)	The "environment" in which the system exists MUST be external
	to the system, but that is as far as the definition of the
	environment goes (it could be computer generated).

3)	The terms "learning", "reasoning" and "autonomy" are
	BEHAVIOURAL characteristics, further supported by our
	understanding (to date) of how they MIGHT work.

4)	The term "self awareness" is based on learning, reasoning
	and autonomy, and is the state where the system is aware
	(has knowledge) of its own existence as separate from its
	environment.

5)	"Intelligence" is a BEHAVIOURAL phenomenon displayed by
	intelligent systems.
----------------------------------------------------------------------
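Requirement (c), deduction from known facts producing insights which
then themselves become known facts, reads very much like forward
chaining, and can be sketched in a few lines of Python (a toy of my
own, not from the thread; the facts and rules are invented):

```python
# Toy forward chainer for requirement (c): deductions join the set of
# known facts and can then feed further deductions.

known = {("rained",), ("wet", "grass")}
rules = [
    # (premises, conclusion): IF all premises are known THEN conclude.
    ([("rained",)], ("clouds",)),
    ([("clouds",), ("wet", "grass")], ("recent-storm",)),
]

changed = True
while changed:                          # chain until no new insights appear
    changed = False
    for premises, conclusion in rules:
        if conclusion not in known and all(p in known for p in premises):
            known.add(conclusion)       # the insight becomes a known fact
            changed = True

print(("recent-storm",) in known)       # prints True, via the "clouds" insight
```

Note that the second rule only fires because the first rule's deduction
has already been promoted to a known fact, which is exactly the loop
the requirement describes.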

NOTE:	If you step OUTSIDE the boundaries of the "definition of
	terms", your comments will simply be ignored, but feel free to
	add definitions or modify them if it will help clarify the
	"general requirements for an intelligent system".

		With Regards,

				Philip Nettleton,
				Tutor in Computer Science,
				Department of Maths, Stats, and Computing,
				The University of New England,
				Armidale,
				New South Wales,
				2351,
				AUSTRALIA.