PAPERS ADDED IN THE PERIOD 2000-2002 (APPROXIMATELY)
See also
PAPERS 2000 -- 2002 CONTENTS LIST
RETURN TO MAIN COGAFF INDEX FILE
SLIDE PRESENTATIONS ON THE COGAFF TOPICS can be found at http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
Closely related publications are available at the web site of Matthias Scheutz
This file is
http://www.cs.bham.ac.uk/research/projects/cogaff/00-02.html
Maintained by Aaron Sloman.
It contains an index to files in the Cognition and Affect
Project's FTP/Web directory produced or published in the period
2000-2002. Some of the papers published in this period were produced
earlier and are included in one of the lists for an earlier period
http://www.cs.bham.ac.uk/research/cogaff/0-INDEX.html#contents
A list of PhD and MPhil theses was added in June 2003
Last updated: 7 Jul 2012 (earlier updates: 2 May 2010; 11 Oct 2010; 30 Oct 2010; 13 Nov 2010)
PDF versions of postscript files can be provided on request. Email A.Sloman@cs.bham.ac.uk requesting conversion.
JUMP TO DETAILED LIST (After Contents)
Title: A Framework for Comparing Agent Architectures
Author: Aaron Sloman and Matthias Scheutz
(Relocated to another file)
Title: How to derive "better" from "is" (1969)
Author: Aaron Sloman
Title: An Anytime Planning Agent For Computer Game Worlds
Author: Nick Hawes
Title: Anytime Planning For Agent Behaviour
Author: Nick Hawes
Title: More things than are dreamt of in your biology:
Information processing in biologically-inspired robots.
Author: Aaron Sloman and Ron Chrisley
Title: Virtual Machines and Consciousness
Author: Aaron Sloman and Ron Chrisley
Title: Must Intelligent Systems Be Scruffy?
Author: Aaron Sloman
Title: Reflective Architectures for Damage Tolerant Autonomous Systems.
Author: Catriona Kennedy and Aaron Sloman
Title: Autonomous Recovery from Hostile Code Insertion using Distributed Reflection
Author: Catriona Kennedy and Aaron Sloman
Title: Closed Reflective Networks: a Conceptual Framework for Intrusion-Resistant Autonomous Systems
Author: Catriona Kennedy and Aaron Sloman
Title: The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind
(1978 book, now relocated)
Author: Aaron Sloman
Title: The Irrelevance of Turing Machines to AI
In Computationalism: New Directions ed. Scheutz
Author: Aaron Sloman
Title: Evolvable Biologically Plausible Visual Architectures
Author: Aaron Sloman
Title: Reducing Indifference: Steps towards Autonomous Agents with Human Concerns
Author: Catriona Kennedy
Title: Beyond Shallow Models of Emotion
Author: Aaron Sloman
Title: Varieties of Affect and the CogAff Architecture Schema
Author: Aaron Sloman
Title: Affective vs. Deliberative Agent Control
Author: Matthias Scheutz and Brian Logan
Title: Experiencing Computation: A tribute to Max Clowes
(Moved to new location 26 Feb 2016)
Author: Aaron Sloman
Title: Did Searle attack strong strong or weak strong AI?
Author: Aaron Sloman (Moved to another file 22 May 2015)
Title: DRAFT: A Framework for Comparing Agent Architectures
(Now superseded by
UKCI'02 paper)
Authors: Aaron Sloman and Matthias Scheutz
Title: Affect and Agent Control: Experiments with Simple Affective States
Authors: Matthias Scheutz and Aaron Sloman
Title: The primacy of non-communicative language
Author: Aaron Sloman
Title: AI as a method? Commentary on Green on AI-Cognitive-Science
Author: Matthias Scheutz
Title: What are virtual machines? Are they real?
Author: Aaron Sloman
Title: Real-Time Goal-Orientated Behaviour for Computer Game Agents
Author: Nick Hawes
Title: Emotional States and Realistic Agent Behaviour
Authors: Matthias Scheutz, Aaron Sloman and Brian Logan
Title: Interacting Trajectories in Design Space and Niche Space:
A philosopher speculates about evolution
Author: Aaron Sloman
Title: Are Turing machines relevant to AI? (Superseded)
Author: Aaron Sloman
Title: How many separately evolved emotional beasties live within us?
Author: Aaron Sloman
Title: Code and Documentation for PhD Thesis:
Author: Steve Allen
Title: Diagrams in the Mind?
Author: Aaron Sloman
Title: Architecture-Based Conceptions of Mind (Final version)
Author: Aaron Sloman
Title: Models of models of mind
Author: Aaron Sloman
Title: Evolvable architectures for human-like minds
Authors: Aaron Sloman and Brian Logan
Filename: sloman-scheutz-ukci02.pdf
Filename: sloman-scheutz-ukci02.ps
Title: A Framework for Comparing Agent Architectures
Revised version of the 2001 paper below with the same title.
Author: Aaron Sloman and Matthias Scheutz
Originally Published in: Proceedings UKCI'02, UK Workshop on Computational Intelligence, September 2002, Birmingham, UK.
Abstract:
Research on algorithms and representations once dominated AI. Recently
the importance of architectures has been acknowledged, but researchers
have different objectives, presuppositions and conceptual frameworks,
and this can lead to confused terminology, argumentation at cross
purposes, re-invention of wheels and fragmentation of the research. We
propose a methodological framework: develop a general representation of
a wide class of architectures within which different architectures can
be compared and contrasted. This should facilitate communication and
integration across sub-fields of and approaches to AI, as well as
providing a framework for evaluating alternative architectures. As a
first-draft example we present the CogAff architecture schema, and show
how it provides a draft framework. But there is much still to be done.
Keywords: AI architectures, autonomous agents, cognitive
modelling, philosophical foundations, software agents, dimensions of
variation.
(Relocated to another file)
Title: How to derive "better" from "is" (1969)
http://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html#better
http://www.research.ibm.com/journal/sj41-3.html
Title: An architecture of diversity for commonsense reasoning.
Authors:
John McCarthy,
Marvin Minsky,
Aaron Sloman,
Leiguang Gong,
Tessa Lau,
Leora Morgenstern,
Erik T. Mueller,
Doug Riecken,
Moninder Singh,
and
Push Singh.
Published as: 'An architecture of diversity for commonsense reasoning', in IBM Systems Journal, 41(3), pp. 530-539. (2002).
Abstract:
Although computers excel at certain bounded tasks that are difficult for
humans, such as solving integrals, they have difficulty performing
commonsense tasks that are easy for humans, such as understanding
stories. In this Technical Forum contribution, we discuss commonsense
reasoning and what makes it difficult for computers. We contend that
commonsense reasoning is too hard a problem to solve using any single
artificial intelligence technique. We propose a multilevel architecture
consisting of diverse reasoning and representation techniques that
collaborate and reflect in order to allow the best techniques to be used
for the many situations that arise in commonsense reasoning. We present
story understanding (specifically, understanding and answering questions
about progressively harder children's texts) as a task for evaluating and
scaling up a commonsense reasoning system.
(Report of a workshop held at the IBM Thomas J. Watson Research Center
in March 2002).
Filename: nick.hawes.cg02.pdf
Filename: nick.hawes.cg02.ps
Title: An Anytime Planning Agent For Computer Game Worlds
Author: Nick Hawes
In Proceedings, Workshop on Agents in Computer Games at The 3rd International Conference on Computers and Games (CG'02), July 27th 2002. Pages 1 -- 14.
Abstract:
Computer game worlds are dynamic and operate in real-time.
Any agent in such a world must utilize techniques that can deal with these
environmental factors. Additionally, to advance past the current
state-of-the-art, computer game agents must display intelligent
goal-orientated behaviour. Traditional planners, whilst fulfilling the need to
generate intelligent, goal-orientated behaviour, fail dramatically when placed
under the demands of a computer game environment. This paper introduces
A-UMCP, an anytime hierarchical task network planner, as a feasible approach
to planning in a computer game environment. It is a planner that can produce
intelligent agent behaviour whilst being flexible with regard to the time used
to produce plans.
Filename: nick.hawes.plansig01.pdf
Filename: nick.hawes.plansig01.ps
Title: Anytime Planning For Agent Behaviour
In Proceedings, PLANSIG 2001, 13-14 December 2001. Pages 157 -- 166.
Abstract:
For an agent to act successfully in a complex and dynamic environment (such as
a computer game) it must have a method of generating future behaviour that
meets the demands of its environment. One such method is anytime planning. This
paper discusses the problems and benefits associated with making a planning
system work under the anytime paradigm, and introduces Anytime-UMCP (A-UMCP),
an anytime version of the UMCP hierarchical task network (HTN) planner. It also
covers the necessary abilities an agent must have in order to execute plans
produced by an anytime hierarchical task network planner.
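The anytime idea described in the abstract, keep a usable plan at all times and refine it while time remains, can be sketched in a few lines. This is an illustrative toy, not A-UMCP itself; the function names and the trivial refinement step are invented for the example.

```python
import time

def anytime_plan(initial_plan, refine, quality, deadline):
    """Interruptible planning loop: keep the best plan found so far and
    refine it until the deadline, so a usable (if suboptimal) plan is
    available whenever the agent is interrupted."""
    best = initial_plan
    while time.monotonic() < deadline:
        candidate = refine(best)
        if candidate is None:                # no further refinement possible
            break
        if quality(candidate) > quality(best):
            best = candidate
    return best

# Toy usage: each "refinement" appends one more step toward a 5-step goal.
goal_len = 5
plan = anytime_plan(
    [],
    refine=lambda p: p + ["step"] if len(p) < goal_len else None,
    quality=len,
    deadline=time.monotonic() + 0.05,
)
print(len(plan))  # → 5
```

The key property is that interrupting the loop at any point still yields the best plan found so far, which is what a real-time game agent needs.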
Filename: sloman-chrisley-rs.ps
Filename: sloman-chrisley-rs.pdf
Title: More things than are dreamt of in your biology:
Information processing in biologically-inspired robots.
Revised version of paper presented at: International Workshop Biologically-Inspired Robotics: The Legacy of W.Grey Walter, 14-16 August 2002, Bristol, UK http://www.ecs.soton.ac.uk/~rid/wgw02/home.html
Abstract:
This paper is concerned with some methodological and philosophical problems
related both to the long-term objective of building human-like robots (like
those "in the movies") and short- and medium-term objectives of building robots
with capabilities of more or less intelligent animals. In particular, we claim
that organisms are information-processing machines, and thus
information-processing concepts will be essential for designing
biologically-inspired robots. However, identifying relevant concepts is
non-trivial since what an information-processor is doing cannot in
general be determined by using the standard observational techniques of
the physical sciences. Having a general framework for describing and
comparing agent architectures may help.
Keywords:
Architecture, biology, evolution,
information-processing, ontology, ontological blindness, robotics,
virtual machines
Filename: sloman-chrisley-jcs.pdf
Title: Virtual Machines and Consciousness
Original version submitted for publication in 2002. Now out of date. Final version is in http://www.cs.bham.ac.uk/research/cogaff/03.html#03-02
Abstract:
See new version.
Filename: sloman.scruffy.ai.ps
Filename: sloman.scruffy.ai.pdf
Title: Must Intelligent Systems Be Scruffy?
Presented at the Evolving Knowledge Conference, Reading University, Sept 1989
In Evolving Knowledge in Natural Science and Artificial Intelligence, Eds J.E.Tiles, G.T.McKee, G.C.Dean, London: Pitman, 1990
Comments to: A.Sloman@cs.bham.ac.uk
Abstract:
o Introduction: Neats vs Scruffies
o The scope of AI
o Bow to the inevitable: why scruffiness is unavoidable
o Non-explosive domains
o The physical (biological, social) world is even harder to deal with
o Limits of consistency in intelligent systems
o Scruffy semantics
o So various kinds of scruffiness are inevitable
o What should AI do about this?
o Conclusion
Filename: kennedy.sloman.CSR-02-01.ps
Filename: kennedy.sloman.CSR-02-01.pdf
Title: Reflective Architectures for Damage Tolerant Autonomous Systems.
Also technical report number CSR-02-1, School of Computer Science, The University of Birmingham.
Comments to: C.M.Kennedy@cs.bham.ac.uk
Abstract:
Most existing literature on reflective architectures is
concerned with language interpreters and object-oriented
programming methods. In contrast, there is little work on
reflective architectures which enable an autonomous
system to have these types of access to its own operation
for the purpose of survival in a hostile environment.
Using the principles of natural immune systems, we present
an autonomous system architecture which first acquires a
model of its own normal operation and then uses this model
to detect and repair faults and intrusions (self/nonself
discrimination in immune systems). To enable the system to
repair damage in any part of its operation,
including its monitoring and repair mechanisms, the
architecture is distributed so that all components are
monitored by some other component within the system. We
have distributed the system in the form of mutually
protecting agents which monitor and repair each other's
self-protection mechanisms. This paper presents the first
version of a prototype implementation in which only
omission failures occur.
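The distribution idea, every component monitored and repaired by some other component so that nothing is left unprotected, can be illustrated with a toy monitoring ring. The class, names and trace contents below are invented for the sketch; the real system compares execution traces against an acquired model of normal operation.

```python
# Hypothetical sketch: N agents in a ring, each holding a model of its
# neighbour's normal behaviour and repairing it on an anomaly, so no
# agent is left unmonitored.

class Agent:
    def __init__(self, name, normal_trace):
        self.name = name
        self.model = list(normal_trace)   # acquired model of normal operation
        self.trace = list(normal_trace)   # current (possibly corrupted) trace

    def check_and_repair(self, other):
        """Compare the neighbour's trace with its model; restore on mismatch."""
        if other.trace != other.model:
            other.trace = list(other.model)   # repair: reinstate normal state
            return True                        # anomaly detected and repaired
        return False

agents = [Agent(f"A{i}", ["sense", "decide", "act"]) for i in range(3)]
agents[1].trace.append("rogue-op")             # simulate a fault or insertion

# Closed ring: each agent monitors its successor, so every agent is covered.
repairs = [a.check_and_repair(agents[(i + 1) % 3]) for i, a in enumerate(agents)]
print(repairs)  # → [True, False, False]: A0 detects and repairs A1
```

The closure of the ring is the point: the agents that do the monitoring are themselves monitored, avoiding an unprotected "top level".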
Filename: kennedy.sloman.CSR-02-02.ps
Filename: kennedy.sloman.CSR-02-02.pdf
Title: Autonomous Recovery from Hostile Code Insertion using Distributed Reflection
Also technical report number CSR-02-2, School of Computer Science, The University of Birmingham
Comments to: C.M.Kennedy@cs.bham.ac.uk
Abstract:
In a hostile environment, an autonomous system requires a
reflective capability to detect problems in its own
operation and recover from them without external
intervention. We present an architecture in which
reflection is distributed so that components mutually
observe and protect each other, and where the system has a
distributed model of all its components, including those
concerned with the reflection itself. Some reflective (or
"meta-level") components enable the system to monitor
its execution traces and detect anomalies by comparing
them with a model of normal activity. Other components
monitor "quality" of performance in the application
domain. Implementation in a simple virtual world shows
that the system can recover from certain kinds of hostile
code attacks that cause it to make wrong decisions in its
application domain, even if some of its self-monitoring
components are also disabled.
Filename: kennedy.sloman.CSR-02-03.ps
Filename: kennedy.sloman.CSR-02-03.pdf
Title: Closed Reflective Networks: a Conceptual Framework for Intrusion-Resistant Autonomous Systems
Also technical report number CSR-02-3, School of Computer Science, The University of Birmingham
Comments to: C.M.Kennedy@cs.bham.ac.uk
Abstract:
Intrusions may sometimes involve the insertion of hostile
code in an intrusion-detection system, causing it to
"lie", for example by giving a flood of false-positives.
To address this problem we consider an intrusion detection
system as a reflective layer in an autonomous system which
is able to observe the whole system's internal behaviour
and take corrective action as necessary. To protect the
reflective layer itself, several mutually reflective
components (agents) are used within the layer. Each agent
acquires a model of the normal behaviour of a group of
other agents under its protection and uses this model to
detect anomalies. The ideal situation is a "closed
reflective network" where all components are monitored
and protected by other components within the same
autonomous system, so that no component is left
unprotected.
Using informal rapid-prototyping we implemented a closed reflective network based on three agents, where the agents use majority voting to determine if an intrusion has occurred and whether a response is required. The main conclusion is that such a network may be better implemented on multiple hardware processors connected together as a simple neural network.
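The three-agent majority vote described above is easy to sketch; a single compromised agent that "lies" is simply outvoted by the honest pair. This is an illustrative fragment, not the authors' implementation.

```python
from collections import Counter

def majority_vote(votes):
    """Decide whether an intrusion has occurred from the agents' verdicts;
    a compromised (lying) minority agent is outvoted by the other two."""
    verdict, count = Counter(votes).most_common(1)[0]
    return verdict if count >= 2 else None   # None: no majority reached

# One agent floods false positives, but the honest majority prevails.
assert majority_vote([False, False, True]) is False   # lying agent outvoted
assert majority_vote([True, True, False]) is True     # real intrusion confirmed
```

With three voters a boolean ballot always has a majority, which is why three is the minimum group size for tolerating one compromised member.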
Entry for The Computer Revolution in Philosophy (1978) now moved to
http://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html#crp
Filename: sloman.turing.irrelevant.pdf
Filename: sloman.turing.irrelevant.html
Filename: sloman.turing.irrelevant.ps.gz
Title: The Irrelevance of Turing Machines to AI
In Computationalism: New Directions, Ed Matthias Scheutz pages 87--127, Cambridge, MA, MIT Press, 2002. http://www.nd.edu/~mscheutz/publications/scheutz02mitbook.html
Abstract:
The common view that the notion of a Turing machine is directly relevant
to AI is criticised. It is argued that computers are the
result of a convergence of two strands of development with a long
history: development of machines for automating various physical
processes and machines for performing abstract operations on abstract
entities, e.g. doing numerical calculations. Various aspects of these
developments are analysed, along with their relevance to AI, and the
similarities between computers viewed in this way and animal brains.
This comparison depends on a number of distinctions: between energy
requirements and information requirements of machines, between ballistic
and online control, between internal and external operations, and
between various kinds of autonomy and self-awareness. The ideas are all
intuitively familiar to software engineers, though rarely made fully
explicit. Most of this has nothing to do with Turing machines or most of
the mathematical theory of computation. But it has everything to do with
both the scientific task of understanding, modelling or replicating
human or animal intelligence and the engineering applications of AI, as
well as other applications of computers.
Filename: sloman.bmvc01.ps
Filename: sloman.bmvc01.pdf
Original version:
http://www.bmva.org/bmvc/2001/papers/120/index.html
Title: Evolvable Biologically Plausible Visual Architectures
in Proceedings of British Machine Vision Conference, Manchester, Sept 2001.
Conference web site: http://www.bmva.org/bmvc/2001/index.html
Abstract:
Much work in AI is fragmented, partly because the subject is so huge
that it is difficult for anyone to think about all of it. Even within
sub-fields, such as language, reasoning, and vision, there is
fragmentation, as the sub-sub-fields are rich enough to keep people
busy all their lives. However, there is a risk that results of isolated
research will be unsuitable for future integration, e.g. in models of
complete organisms, or human-like robots. This paper offers a framework
for thinking about the many components of visual systems and how they
relate to the whole organism or machine. The viewpoint is biologically
inspired, using conjectured evolutionary history as a guide to some of
the features of the architecture. It may also be useful both for
modelling animal vision and designing robots with similar capabilities.
Filename: kennedy.ethics.pdf
Filename: kennedy.ethics.ps
Title: Reducing Indifference: Steps towards Autonomous Agents with Human Concerns
in Proceedings of the Symposium "AI, Ethics and (Quasi-) Human Rights" at the 2000 Convention of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB'00), Birmingham, April 2000.
Abstract:
In this paper, we consider a hypothetical software agent that informs users of
possible human rights violations by scanning relevant news reports. Such an
agent suffers from the "indifference" problem if it allows the definition of
human rights in its knowledge base to be arbitrarily modified. We do not
believe that embodiment in the human world is necessary to overcome this
problem. Instead, we propose that a reflective architecture is required
so that the agent can protect the integrity of its knowledge base and
underlying software mechanisms. Furthermore, the monitoring coverage must be
sufficient so that the reflective mechanisms themselves are also
monitored and protected. To avoid the problem of infinite regress, we are
exploring a biologically inspired form of distributed reflection, where
the agent's functionality is distributed over several "micro-level" agents.
These agents mutually acquire models of each other and subsequently use their
models to observe and repair each other; in particular, they look for
deviations from normal execution patterns (anomalies). We present a working
architecture which solves a restricted version of the indifference problem in
a simple virtual world. Finally, we give a conceptual outline of how this
architecture can be applied in the human rights scenario.
Filename: sloman.iqcs01.pdf
Filename: sloman.iqcs01.ps
Title: Beyond Shallow Models of Emotion
In Cognitive Processing, Vol 2, No 1, 2001, pp 177-198 (Summer 2001).
Author: Aaron Sloman
This is an extended version of the paper with the same name presented at the I3 Spring Days Workshop on Behavior Planning for Life-like Characters and Avatars, Sitges, Spain, March 1999.
Abstract:
There is a huge diversity of definitions of "emotion", some of which are associated with relatively shallow behavioural or measurable criteria or introspectable experiences: for instance, use of facial expression, physiological measures, activity of specific regions of the brain, or the experience of bodily changes or desires, such as wanting to run away, or to hurt someone. There are also deeper theories that link emotional states to a variety of mechanisms within an information-processing architecture that are not easily observable or measurable, not least because they are components of virtual machines rather than physical or physiological mechanisms.
We can compare this with "shallow" definitions of chemical compounds such as salt, sugar, or water, in terms of their appearance and observed behaviours in various test situations, and their definitions in the context of a theory of the architecture of matter, which is mostly concerned with postulated sub-atomic entities and a web of relationships between them which cannot easily be observed, so that theories about them are not easily confirmed or refuted.
This paper outlines an approach to the search for deeper explanatory theories of emotions and many other kinds of mental phenomena, which includes an attempt to define the concepts in terms of the underlying information-processing architectures and the classes of states and processes that they can support. A serious problem with this programme is the difficulty of finding good constraints on theories, since in general observable facts are consistent with infinitely many explanatory mechanisms. This "position paper" offers as a partial solution the requirement that proposed architectures be capable of having been produced by biological evolution, in addition to being subject to constraints such as implementability in known biological mechanisms, various resource limits (time, memory, energy, etc.) and being able to account for a wide range of human functionality.
Within such an architecture-based theory we can distinguish (at least) primary emotions, secondary emotions, and tertiary emotions, and produce a coherent theory which explains a wide range of phenomena and also partly explains the diversity of theories: most theorists focus on only a subset of types of emotions, like the proverbial blind men trying to say what an elephant is on the basis of feeling only a leg, an ear, a tusk, the trunk, etc.
Title: Varieties of Affect and the CogAff Architecture Schema
A paper for
the Symposium on Emotion, Cognition, and Affective Computing
at
the AISB'01
Convention, 21st - 24th March 2001.
Author: Aaron Sloman
Date installed: 2 Mar 2001
Abstract:
In the last decade and a half, the study of affect in general and
emotion in particular has become fashionable in scientific psychology,
cognitive science and AI, both for scientific purposes and for the
purpose of designing synthetic characters in games and entertainments.
Such work understandably starts from concepts of ordinary language (e.g.
"emotion", "feeling", "mood", etc.). However, these concepts can
be deceptive: they appear to have clear meanings but are used in very
imprecise and systematically ambiguous ways. This is often because of
explicit or implicit theories about mental states and processes. In the
Cognition and Affect project we have been attempting to explore the
benefits of developing architecture-based concepts, i.e. starting
with specifications of architectures for complete agents and then
finding out what sorts of states and processes are supported by those
architectures. So, instead of presupposing one theory of the
architecture and explicitly or implicitly basing concepts on that, we
define a space of architectures generated by the CogAff architecture
schema, where each theory supports different collections of concepts. In
that space we focus on one architecture H-Cogaff, a particularly rich
instance of the CogAff architecture schema, conjectured as a theory of
human information processing. The architecture-based concepts that it
supports provide a framework for defining with greater precision than
previously a host of mental concepts, including affective concepts. We
then find that these map more or less loosely onto various
pre-theoretical concepts, such as "emotion", etc. We indicate some
of the variety of emotion concepts generated by the H-Cogaff
architecture. A different architecture might be appropriate for exploring
affective states of insects, or reptiles, or other mammals, or even
young children.
Filename: scheutz-logan-aisb01.pdf
Filename: scheutz-logan-aisb01.ps
Title: Affective vs. Deliberative Agent Control
A paper for
the Symposium on Emotion, Cognition, and Affective Computing
at
the AISB'01
Convention, 21st - 24th March 2001, extending the
"GAME-ON 2000" paper below.
Authors: Matthias Scheutz (Birmingham) and Brian Logan (Nottingham)
Date installed: 2 Mar 2001
Abstract:
In this paper, we outline a research strategy for analysing
the properties of different agent architectures, in particular the
cognitive and affective states/processes they can support. We
demonstrate this architecture-based research strategy, which
effectively views cognitive and affective states as
architecture-dependent, with an example of a simulated multi-agent
environment, where agents with different architectures have to compete
for survival. We show that agents with "affective" and
"deliberative" capabilities do best in different kinds of
environments and briefly discuss the implications of combining
affective and deliberative capabilities in a single architecture. We
argue that such explorations of the trade-offs of alternative
architectures will help us understand the role of affective processes
in agent control and reasoning, and may lead to important new insights
in the attempt to understand natural intelligence and evolutionary
trajectories.
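The kind of trade-off these experiments explore, reactive ("affective") control winning in fast environments while deliberation pays off when there is time to think, can be caricatured in a few lines. Everything below (distances, budgets, the one-step thinking cost) is invented purely for illustration.

```python
import random

# Toy comparison: agents try to reach food within a per-episode step budget.
# The deliberative agent "plans", paying one step of thinking time but
# reaching farther food; the affective agent only reacts to nearby food.

def affective(dist):
    return dist if dist <= 2 else None      # react only when food is close

def deliberative(dist):
    return dist + 1 if dist <= 4 else None  # plan a route: +1 step to think

def run(policy, budget, episodes=300, seed=42):
    """Count episodes in which the policy reaches food within the budget."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(episodes):
        dist = rng.randint(1, 6)            # food appears at a random distance
        cost = policy(dist)
        if cost is not None and cost <= budget:
            wins += 1
    return wins

# Slow world (generous budget): deliberation pays off.
print(run(deliberative, budget=5), run(affective, budget=5))
# Fast world (tight budget): the thinking cost makes reaction win.
print(run(deliberative, budget=2), run(affective, budget=2))
```

Crude as it is, the sketch reproduces the paper's headline point: neither control style dominates; which wins depends on the environment.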
Title: Experiencing Computation: A tribute to Max Clowes
With biography and bibliography added 2014
THIS HAS NOW MOVED TO
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#61
Title: Did Searle attack strong strong or weak strong AI?
Moved to new location
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#54
Filename: sloman.scheutz.framework.ps
Filename: sloman.scheutz.framework.pdf
Title: DRAFT: A Framework for Comparing Agent Architectures
NB: now superseded by
UKCI'02 paper
Authors: Aaron Sloman and Matthias Scheutz
Date: 9 Jan 2001 (Revised on 1 Jul 2002)
Abstract:
Research on algorithms and representations once dominated AI.
Recently the importance of architectures has been
acknowledged, but researchers have different objectives,
presuppositions and conceptual frameworks, and this can lead to
confused terminology, argumentation at cross purposes,
re-invention of wheels and fragmentation of the research. We
propose a methodological framework: develop a representation of a
general class of architectures within which different architectures can
be compared and contrasted. This should facilitate communication
and integration across sub-fields of and approaches to AI, as well as
providing a framework for evaluating alternative architectures.
As a first-draft example we present the CogAff architecture schema,
and show how it provides a draft framework.
But there is much still to be
done.
Filename: scheutz.sloman.affect.control.ps
Filename: scheutz.sloman.affect.control.pdf
Title: Affect and Agent Control: Experiments with Simple Affective States
In Ning Zhong et al. (Eds.) Intelligent Agent Technology: Research and
Development. World Scientific Publisher: New Jersey, pp. 200-209.
Presented at
IAT01
International Conference on Intelligent Agent Technology,
Japan, October 2001.
Authors: Matthias Scheutz and Aaron Sloman
Date: 3rd July 2001
Abstract:
In this paper we analyze functional roles of affective states in agent
control in relatively simple agents in a variety of environments.
The analysis is complemented by various simulation experiments
in a competitive multi-agent environment, which show that simple affective
states (like "hunger") can be very effective in agent control and are
likely to evolve even in competitive environments.
This illustrates the methodology of exploring neighbourhoods in
"design space" in order to understand tradeoffs in the development of
different kinds of agent architectures, whether natural or artificial.
Keywords: Artificial life, AI architectures,
multiagent systems, philosophical foundations.
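A state like "hunger" acting directly on action selection, as in the experiments above, can be sketched minimally. The threshold and update rule below are invented; the point is only that the affective variable, not deliberation, drives behaviour.

```python
# Illustrative sketch: a minimal agent whose "hunger" level selects its
# behaviour, in the spirit of the simple affective states in the paper.

HUNGER_THRESHOLD = 7          # above this, the drive dominates behaviour

def select_action(hunger, food_visible):
    if hunger > HUNGER_THRESHOLD:
        return "eat" if food_visible else "seek_food"
    return "explore"          # low hunger: default exploratory behaviour

def step(hunger, food_visible):
    action = select_action(hunger, food_visible)
    if action == "eat":
        return 0, action                     # eating resets the drive
    return min(10, hunger + 1), action       # hunger grows each time step

hunger, history = 5, []
for food_visible in [False, False, False, True, False]:
    hunger, action = step(hunger, food_visible)
    history.append(action)
print(history)  # → ['explore', 'explore', 'explore', 'eat', 'explore']
```

Such a state is cheap to compute and, as the abstract notes, can be surprisingly effective compared with explicit deliberation.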
Filename: sloman.primacy.inner.language.pdf
Filename: sloman.primacy.inner.language.ps
Filename: sloman.primacy.inner.language.txt (Plain text)
Title: The primacy of non-communicative language
Author: Aaron Sloman
In The Analysis of Meaning, Proceedings 5
(Invited talk for ASLIB Informatics Conference, Oxford, March 1979),
ASLIB and British Computer Society, London, 1979.
Eds M. MacCafferty and K. Gray, pages 1--15.
Abstract:
How is it possible for symbols to be used to refer to or describe things? I
shall approach this question indirectly by criticising a collection of widely
held views of which the central one is that meaning is essentially concerned
with communication. A consequence of this view is that anything which could be
reasonably described as a language is essentially concerned with
communication. I shall try to show that widely known facts, for instance facts
about the behaviour of animals, and facts about human language learning and
use, suggest that this belief, and closely related assumptions (see A1 to A3
in the paper), are false. Support for an alternative framework of
assumptions is beginning to emerge from work in Artificial Intelligence,
work concerned not only with language but also with perception,
learning, problem-solving and other mental processes. The subject has
not yet matured sufficiently for the new paradigm to be clearly
articulated. The aim of this paper is to help to formulate a new
framework of assumptions, synthesising ideas from Artificial
Intelligence and Philosophy of Science and Mathematics.
Filename: http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.097
Title: AI as a method? Commentary on Green on AI-Cognitive-Science
In Psycoloquy: 11(097)
Author: Matthias Scheutz
Date: 26 Oct 2000
Abstract:
In his target article "Is AI the right method for cognitive science?"
Green (2000) wants to establish that results in AI have
little or no explanatory value for psychology and cognitive science
as AI attempts to "simulate something that is not, at present, at
all well understood". While Green is right that the foundations of
psychology are still insufficiently worked out, there is no reason
for his pessimism, which rests on a misconception of AI. AI
properly understood can be seen to contribute to the clarification
of foundational issues in psychology and cognitive science.
REF
Green, C.D. (2000) Is AI the Right Method for Cognitive Science?
PSYCOLOQUY 11(061)
ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green
http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061
Filename: sloman.virtual.slides.pdf
Filename: sloman.virtual.slides.ps
Title: What are virtual machines? Are they real?
Slides for seminar presented on 12th Sept., 2000.
Author: Aaron Sloman
Date: 16 Oct 2000 (Updated 17 Feb 2001 - but still a draft)
Abstract:
Philosophers have long discussed the relationship between mental
phenomena and physical phenomena. Theories about this include various
types of dualism (there are two kinds of stuff), various types of monism
(there's only one kind of stuff), pluralism/polyism (there are many
kinds of stuff), each of which has its own variants. E.g. according
to one sort of dualism, epiphenomenalism, causal traffic is one-way:
physical events can cause mental events and processes but not vice
versa, because the physical realm is "causally closed." There seem to be
only two options: the only "true reality" is the physical world, and
either everything else is just an interpretation of it, or else it's just
a collection of "powerless shadows". Both views are hard to square
with common sense.
Computer scientists and software engineers can now help philosophers sort out this mess. We can make progress because there is a new type of non-physical realm which we understand, because we have created it: the realm of virtual machines in computers. However, much of this know-how is still at the stage of a "craft", i.e. it is mostly intuitions and practical know-how of engineers and designers, though theory is in hot pursuit.
Virtual machines have components which interact causally and change over time. When they run they produce many sorts of physical effects, e.g. changes on the screen and in the computer's memory, or movements of a robot's limbs. How is that possible if the underlying physical circuitry is causally closed?
Maybe our notions of causation become deeply confused when we address questions about causal closure. My conjecture is that by understanding more clearly what we mean by "X caused Y" in the context of these "simple" computational virtual machines we may begin to get a deeper understanding of all sorts of older, vastly more complex and subtle, biological, social, mental, virtual machines and how their reality, and their causal powers, do not contradict anything in physics. They are not an illusion, not just an arbitrary interpretation of the physical world, not ghostly powerless shadows. Philosophy needs help from software engineers in order to understand all this.
Filename: nick.hawes.gameon2000.ps
Filename: nick.hawes.gameon2000.pdf
Title: Real-Time Goal-Orientated Behaviour for Computer Game Agents
Author: Nick Hawes
Date: 29 Sep 2000
Abstract: To increase the depth and appeal of computer games, the intelligence of the characters they contain needs to be increased. These characters should be played by intelligent agents that are aware of how goals can be achieved and reasoned about. Existing AI methods struggle in the computer game domain because of the real-time response required from the algorithms and restrictive processor availability. This paper discusses the CogAff architecture as the basis for an agent that can display goal orientated behaviour under real-time constraints. To aid performance in real-time domains (e.g. computer games) it is proposed that both the processes encapsulated by the architecture, and the information it must operate on should be structured in a way that encourages a fast yet flexible response from the agent. In addition, anytime algorithms are discussed as a method for planning in real-time.
Filename: scheutz-sloman-logan-gameon.pdf
Filename: scheutz-sloman-logan-gameon.ps
Filename: scheutz-sloman-logan-gameon.doc
Title: Emotional States and Realistic Agent Behaviour
Author: Matthias Scheutz, Aaron Sloman, Brian Logan
Published/Presented:
In Proceedings GAME-ON 2000, Imperial College London, 11-12 Nov 2000,
http://hobbes.rug.ac.be/~scs/conf/gameon2000
Date: 26 Sep 2000
Abstract:
In this paper we discuss some of the relations between cognition and
emotion as exemplified by a particular type of agent architecture, the
CogAff agent architecture. We outline a strategy for analysing
cognitive and emotional states of agents along with the processes they
can support, which effectively views cognitive and emotional states as
architecture-dependent. We demonstrate this architecture-based
research strategy with an example of a simulated multi-agent
environment, where agents with different architectures have to compete
for survival and show that simple affective states can be surprisingly
effective in agent control under certain conditions. We conclude by
proposing that such investigations will not only help us improve
computer entertainments, but that explorations of alternative
architectures in the context of computer games may also lead to
important new insights in the attempt to understand natural
intelligence and evolutionary trajectories.
Title: Interacting Trajectories in Design Space and Niche Space:
A philosopher speculates about evolution
Author: Aaron Sloman
Invited keynote talk, PPSN2000, Paris, Sept 2000,
http://www.inria.fr/ppsn2000
in
Parallel Problem Solving from Nature -- PPSN VI
Eds: Marc Schoenauer, Kalyanmoy Deb, Günter Rudolph, Xin Yao,
Evelyne Lutton, Juan Julian Merelo, Hans-Paul Schwefel,
Springer: Lecture Notes in Computer Science, No 1917, 2000
pp. 3--16
Date: 26 Sep 2000
Abstract:
There are evolutionary trajectories in two different but related spaces,
"design space" and "niche space". Co-evolution occurs in
parallel trajectories in both spaces, with complex feedback loops
linking them. As the design of one species evolves, that changes the
niche for others and vice versa. In general there will never be a unique
answer to the question: does this change lead to higher fitness? Rather
there will be tradeoffs: the new variant is better in some respects and
worse in others. Where large numbers of mutually interdependent species
(designs) are co-evolving, understanding the dynamics can be very
difficult. If intelligent organisms manipulate some of the mechanisms,
e.g. by mate selection or by breeding other animals or their own kind,
the situation gets even more complicated. It may be possible to show how
some aspects of the evolution of human minds are explained by all these
mechanisms.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/turing-relevant.html
Title: Are Turing machines relevant to AI? (superseded)
Author: Aaron Sloman
Date: 27 May 2000
Now superseded by version dated
13 Jul 2001
Abstract:
It is often assumed, especially by people who attack AI, that the
concept of a Turing machine and the concept of computation defined in
terms of Turing computability (or mathematically equivalent notions) are
crucial to the role of computers in AI and Cognitive Science, especially
so-called "good old fashioned AI" (or GOFAI, a term which is often used
by people who have read only incomplete and biased accounts of the
history of AI).
It is also often assumed that the notion of computation is inherently linked with Turing machines, with a collection of mathematically equivalent concepts (e.g. a class of recursive functions) and with logic.
In this paper I shall try to show that these assumptions are incorrect, at least as regards the most common ways of thinking about and using computers. I shall try to clarify what it was about computers in the early days (e.g. by around 1960, or earlier) that made them eminently suitable, unlike previous physical man-made machines, for use as a basis for cognitive modelling and for building thinking machines, and also as a catalyst for new theoretical ideas about how minds work.
I think it had little to do with Turing machines, or with predicate logic, but was a result of natural developments of two pre-existing threads in the history of technology. The first thread is concerned with the production and use of calculating machines to perform arithmetical operations. The second, probably more important thread, was the development of mechanisms to control the behaviour of physical machines, such as textile weaving machines.
NOTE: this is a draft discussion note which will probably be re-written extensively in the light of comments and criticisms. It is made accessible here in order to invite criticisms.
Title: How many separately evolved emotional beasties live within us?
Published/Presented:
Revised version of Invited Talk: at workshop on
To appear in
Author: Aaron Sloman
Date: 27 May 2000 (Revised: 8 Sep 2006)
The version installed here on 8th September 2006 has a few minor
changes, including using the word 'CogAff' as a label for an
architecture schema not an architecture, using the label 'H-cogaff'
for the special case of the proposed human-like architecture, using
'ecosystem' instead of 'ecology', and an improved version of figure 11.
Abstract:
A problem which bedevils the study of emotions, and the study of
consciousness, is that we assume a shared understanding of many everyday
concepts, such as 'emotion', 'feeling', 'pleasure', 'pain', 'desire',
'awareness', etc. Unfortunately, these concepts are inherently very
complex, ill-defined, and used with different meanings
by different people. Moreover, this goes unnoticed, so that people think
they understand what they are referring to even when their understanding
is very unclear. Consequently there is much discussion that is
inherently vague, often at cross-purposes, and with apparent
disagreements that arise out of people unwittingly talking about
different things. We need a framework which explains how there can be
all the diverse phenomena that different people refer to when they talk
about emotions and other affective states and processes. The conjecture
on which this paper is based is that adult humans have a type of
information-processing architecture, with components which evolved at
different times, including a rich and varied collection of components
whose interactions can generate all the sorts of phenomena that
different researchers have labelled "emotions". Within this framework
we can provide rational reconstructions of many everyday concepts of
mind. We can also allow a variety of different architectures,
found in children, brain damaged adults, other animals, robots, software
agents, etc., where different architectures support different classes of
states and processes, and therefore different mental ontologies. Thus
concepts like 'emotion', 'awareness', etc. will need to be interpreted
differently when referring to different architectures. We need to limit
the class of architectures under consideration, since for any class of
behaviours there are indefinitely many architectures which can produce
those behaviours. One important constraint is to consider architectures
which might have been produced by biological evolution. This leads to
the notion of a human architecture composed of many components which
evolved under the influence of the other components as well as
environmental needs and pressures. From this viewpoint, a mind is a kind
of 'ecosystem' (previously described as an 'ecology') of
co-evolved sub-organisms acquiring and using
different kinds of information and processing it in different ways,
sometimes cooperating with one another and sometimes competing. Within
this framework we can hope to study not only mechanisms underlying
affective states and processes, but also other mechanisms which are
often studied in isolation, e.g. vision, action mechanisms, learning
mechanisms, 'alarm' mechanisms, etc. We can also explain why some
models, and corresponding conceptions of emotion, are shallow whereas
others are deeper. Shallow models may be of practical use, e.g. in
entertainment and interface design. Deeper models are required if we are
to understand what we are, how we can go wrong, etc. This paper is
a snapshot of a long term project addressing all these issues.
Title: Code and Documentation for PhD Thesis:
Author: Steve Allen
Date: 27 May 2000
Abstract:
The directory above gives pointers to the code for Steve Allen's Abbott
system, and links to his PhD thesis
Concern Processing in Autonomous
Agents,
submitted in February 2000.
Title: Diagrams in the mind?
Published/Presented:
Revised version of Invited Talk: Thinking With Diagrams Conference,
Aberystwyth, 1998,
In
Author: Aaron Sloman
Date Added: 11 Apr 2000
Abstract:
Clearly we can solve problems by thinking about them. Sometimes we have
the impression that in doing so we use words, at other times diagrams or
images. Often we use both. What is going on when we use mental diagrams
or images? This question is addressed in relation
to the more general multi-pronged question: what are representations,
what are they for, how many different types are there, in how many
different ways can they be used, and what difference does it make
whether they are in the mind or on paper? The question is related to
deep problems about how vision and spatial manipulation work. It is
suggested that we are far from understanding what is going on. In
particular we need to explain how people understand spatial structure
and motion, and how we can think about objects in terms of a basic
topological structure with more or less additional metrical information.
I shall try to explain why this is a problem with hidden
depths, since our grasp of spatial structure is inherently a grasp of a
complex range of possibilities and their implications. Two
classes of examples discussed at length illustrate requirements for
human visualisation capabilities. One is the problem of removing
undergarments without removing outer garments. The other is thinking
about infinite discrete mathematical structures, such as infinite
ordinals. More questions are asked than answered.
Norman Foo enjoyed this paper.
Search for 'Deductive Reasoning' in his 'Jokes' web site: http://www.cse.unsw.edu.au/~norman/JOKES.html
Title: Architecture-Based Conceptions of Mind (Final version)
(Invited talk at
11th International Congress of Logic, Methodology and Philosophy of Science,
Krakow, Poland,
August 20-26, 1999. Published in:
P. Gardenfors and K. Kijania-Placek and J. Wolenski, Eds.,
In
the Scope of Logic, Methodology, and Philosophy of Science (Vol II),
(Synthese Library Vol. 316),
Kluwer,
Dordrecht,
pp. 403--427, 2002.
Author: Aaron Sloman
Date: 1 Apr 2000
Slide presentation (2-up):
Sloman.cracow.slides.2page.pdf
(PDF)
Abstract:
It is argued that our ordinary concepts of mind are both implicitly
based on architectural presuppositions and also cluster concepts. By
showing that different information processing architectures support
different classes of possible concepts, and that cluster concepts have
inherent indeterminacy that can be reduced in different ways for
different purposes, we point the way to a research programme that
promises important conceptual clarification in disciplines concerned
with what minds are, how they evolved, how they can go wrong, and how
new types can be made, e.g. philosophy, neuroscience, psychology,
biology and artificial intelligence.
Title: Models of models of mind
Programme Chair's introduction to booklet of papers for the Symposium on
How to Design a Functioning Mind,
at the
AISB'00 convention,
at Birmingham University, April 17-20, 2000.
Author: Aaron Sloman
Date: 24 Mar 2000
Abstract:
Many people are working on architectures of various kinds for
intelligent agents. However different objectives, presuppositions,
techniques and conceptual frameworks (ontologies) are used by different
researchers. These differences together with the fact that many of the
words and phrases of ordinary language used to refer to mental phenomena
are radically ambiguous, or worse, indeterminate in meaning, lead to
much argumentation at cross purposes, misunderstanding, re-invention of
wheels (round and square) and fragmentation of the research community.
It was hoped that this symposium would bring together many different
sorts of researchers, along with a well known novelist with ideas about
consciousness, who might, together, achieve something that would not
happen while they continued their separate ways. This introduction sets
out a conceptual framework which it is hoped will help that
communication and integration to occur. That includes explaining some of
the existing diversity and conceptual confusion and offering some
dimensions for comparing architectures.
Title: Evolvable architectures for human-like minds
In Affective Minds, Ed. Giyoo Hatano,
Elsevier, October 2000
Invited talk at 13th Toyota Conference, on
"Affective Minds" Nagoya Japan, Nov-Dec 1999
Authors: Aaron Sloman and Brian Logan
Date: 2 Feb 2000
Abstract:
There are many approaches to the study of mind, and much ambiguity in
the use of words like `emotion' and `consciousness'. This paper adopts
the design stance, in an attempt to understand human minds as information
processing virtual machines with a complex multi-level architecture
whose components evolved at different times and perform different sorts
of functions. A multi-disciplinary perspective combining ideas from
engineering as well as several sciences helps to constrain the proposed
architecture. Variations in the architecture should
accommodate infants and adults, normal and pathological cases, and
also animals. An analysis of states and processes that each
architecture supports provides a new framework for systematically
generating concepts of various kinds of mental phenomena.
This framework can be
used to refine and extend familiar concepts of mind, providing a new,
richer, more precise theory-based collection of concepts. Within this
unifying framework we hope to explain the diversity of definitions and
theories and move towards deeper explanatory theories and more powerful
and realistic artificial models, for use in many applications, including
education and entertainment.
See also the School of Computer Science Web page.
This file is maintained by
Aaron Sloman, and designed to be
lynx-friendly,
and
viewable with any browser.
Email A.Sloman@cs.bham.ac.uk